The present disclosure relates to a sensor device and a robot.
An existing sensor device is allowed to perform so-called multimodal sensing, which allows for acquiring a plurality of pieces of physical information (modals) as sensor information (for example, see PTLs 1 and 2).
PTL 1: International Publication No. WO 2009/144767
PTL 2: Japanese Unexamined Patent Application Publication No. 2018-9792
It is desired to perform highly accurate multimodal sensing.
It is desirable to provide a sensor device and a robot allowed to perform highly accurate multimodal sensing.
A sensor device according to an embodiment of the present disclosure includes: a flexible layer having at least one hole; and a sensor structure to which the flexible layer is attached, the sensor structure including an imaging device, the imaging device being configured to observe the flexible layer and observe an object in an outside world through the hole of the flexible layer.
A robot according to an embodiment of the present disclosure includes: a sensor device; and a control device that performs a robot control based on sensor information from the sensor device, in which the sensor device includes: a flexible layer having at least one hole; and a sensor structure to which the flexible layer is attached, the sensor structure including an imaging device, the imaging device being configured to observe the flexible layer and observe an object in an outside world through the hole of the flexible layer.
The sensor device or the robot according to the embodiment of the present disclosure allows for observation of the flexible layer attached to the sensor structure and observation of an object in the outside world through the hole of the flexible layer by virtue of the imaging device installed in the sensor structure.
In the following, some embodiments of the present disclosure are described in detail with reference to the drawings. It is to be noted that description is made in the following order.
For a stable manipulation task, it is effective to use a fingertip sensor allowed to perform so-called multimodal sensing (having a plurality of senses). The multimodal sensing allows for acquiring a plurality of pieces of physical information (modals) as sensor information. Examples include a tactile sense (for example, slip detection, material recognition, and contact recognition) and a proximity sense (for example, Depth (distance) recognition and object recognition). However, it is difficult to develop a sensor allowed to acquire all desired tactile information and proximity information.
The fingertip sensor is allowed to acquire tactile information by, for example, observing the deformation of a contact surface of a fingertip. Proximity information is acquirable by, for example, observing an environment of the outside world. However, it is difficult to develop an effective means of simultaneously observing both the deformation of the contact surface and the environment of the outside world.
Meanwhile, a proposed sensor device is allowed to not only observe the outside world through a transparent flexible layer (gel) with no hole but also observe the deformation of the gel at the same time. However, in a case where the transparent gel with no hole is used, the dirtiness or wear of the gel makes it difficult to observe the outside world. In addition, in a case where the transparent gel with no hole is used, the deformation of the gel in a normal direction of a contact surface with an object that is an observation target is unlikely to be detected, and thus it is difficult to detect contact with the object with good sensitivity.
Accordingly, for a sensor device according to the embodiment, a technique enabling multimodal sensing to be performed with use of a compliant gel with a hole is proposed. The sensor device according to the embodiment allows for observing the outside of a contact surface through the hole made in the compliant gel. In addition, the formation of the hole in the gel facilitates the deformation of the gel, which makes it possible to detect contact with good sensitivity.
The sensor device according to the embodiment is usable in robots in a variety of forms that are likely to come into contact with an environment, such as a manipulation robot, a legged robot, and a drone. Description will be made below by taking for example a manipulation robot having a finger serving as a manipulator.
For a manipulation robot, it is important to feed back multimodal sensor information in order to stably execute a task that necessitates contact. For the manipulation robot, a necessary modal varies depending on the task. For example, necessary modals for a task involving a pressing action and a task involving a holding action are different. It is demanded to perform a manipulation action while selecting an appropriate modal in accordance with a task. However, it is not realistic to replace a finger serving as a manipulator for each task. Thus, a multimodal sensor device allowed to measure a variety of physical quantities is demanded as a fingertip sensor. In particular, a proximity sense and a tactile sense are absolutely essential for manipulation. Accordingly, it is desired to develop a sensor device allowed to simultaneously acquire both a proximity sense and a tactile sense.
In a case where the outside world is observed through a transparent gel and, simultaneously, the deformation of the gel is observed, there are two concerns. First, a contact strength is unlikely to be detected due to poor sensitivity in the normal direction. Secondly, breakage of a front surface due to repeated use makes object detection difficult. Accordingly, an improvement in the sensitivity in the normal direction and a structure allowing for object detection irrespective of breakage of the front surface are demanded.
The sensor device 3 according to the embodiment is usable in the robot 5 having, for example, a hand 1. The hand 1 has a finger 2 serving as a manipulator. The sensor device 3 is provided in, for example, the finger 2. The robot 5 includes a control device that performs a robot control based on sensor information from the sensor device 3.
The sensor device 3 includes the gel 10 serving as the flexible layer, a sensor structure 20 to which the gel 10 is attached, and a sensor information processor 40.
The gel 10 includes a transparent compliant material. At least one hole 11 is made in the gel 10. The gel 10 may have a meshed structure having a plurality of holes 11. The gel 10 may have, for example, a grid structure or a honeycomb structure having the plurality of holes 11. The plurality of holes 11 may be made at regular intervals.
An imaging device 30 is installed in the sensor structure 20. The imaging device 30 allows for observation of the gel 10 and observation of an object 4 in the outside world through the hole 11.
The sensor device 3 has a function as a tactile sensor that acquires tactile information on the basis of deformation information regarding the gel 10 observed via the imaging device 30 and a function as a proximity sensor that acquires proximity information on the basis of observation information regarding the object 4 observed through the hole 11 of the gel 10.
The sensor information processor 40 includes an information processor that acquires, as information regarding a plurality of modals, the tactile information and the proximity information on the basis of the sensor information from the imaging device 30.
The sensor information processor 40 may acquire, as the proximity information, information (modal) including at least one of information regarding object recognition or information regarding a distance to the object 4 as illustrated in
The imaging device 30 may include at least one color image sensor 31 allowed to acquire color images serving as an observation image of the outside world and an observation image of the deformation (a deformation image) of the gel 10. For example, an RGB camera allowed to acquire an RGB image may be included as the color image sensor 31. In addition, at least one distance sensor 32 allowed to acquire distance information may be included as the imaging device 30. For example, a Depth sensor allowed to acquire a Depth image may be included as the distance sensor 32. In addition, at least one color image sensor 31 allowed to acquire a color image and distance information may be included as the imaging device 30. For example, an RGB-D camera 34 (
The mesh structure of the gel 10 may include a honeycomb structure having the hole 11 in a hexagonal shape as illustrated in
In the gel 10, a width Gw of the gel 10 (a width of the partition 12) and a size Gs of the hole 11 serve as parameters that affect a recognition rate of the object 4. A gel occupancy rate (a ratio of occupancy of the region other than the hole 11) affects the recognition rate of the object 4 as described later. The width Gw of the gel 10 and the size Gs of the hole 11 may be determined in accordance with a desired performance for object recognition.
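As an illustrative sketch (not part of the disclosure; the function name and the square-grid assumption are hypothetical), the gel occupancy rate for a square grid of square holes follows directly from the width Gw of the partition and the size Gs of the hole: each repeating cell has pitch Gw + Gs, so the hole occupies (Gs / (Gw + Gs))² of the surface.

```python
def gel_occupancy(gw: float, gs: float) -> float:
    """Fraction of the contact surface occupied by gel (not hole),
    assuming a square grid whose unit cell is one square hole of
    size gs plus a partition of width gw per pitch."""
    pitch = gw + gs                      # repeating cell size
    hole_fraction = (gs / pitch) ** 2    # hole area per unit cell
    return 1.0 - hole_fraction

# A wider partition (larger Gw) raises the occupancy rate, which tends
# to raise the object recognition rate at the cost of deformability.
occupancy = gel_occupancy(1.0, 1.0)      # equal width and hole size
```

Equal Gw and Gs give an occupancy of 0.75 under this assumption; the actual relation for a honeycomb structure would differ.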
In addition, in the gel 10, the shape of the hole 11 serves as a parameter that determines the deformability of the gel 10. The shape of the hole 11 may be determined in accordance with a desired detection performance for the contact position or a slip detection performance. For example, the hole 11 in a hexagonal shape (the honeycomb structure) lowers the deformability, which reduces the detection performance for the contact position or the slip detection performance as compared with in a case where the hole 11 is in a quadrangular (rectangular) shape.
In addition, the front surface of the gel 10 in the form of a curved surface makes it possible to stably detect slip. As illustrated in
In the sensor device 3, the width Gw of the gel 10 and the size Gs of the hole 11 affect the object recognition rate. As illustrated in
Ideally, the sensor device 3 first learns, for example, an RGB image in a state with no gel 10 through an object recognition network as illustrated in
In the sensor device 3, the width Gw of the gel 10 and the size Gs of the hole 11 affect the object recognition rate. In
In
In the sensor device 3, the shape of the hole 11 determines the deformability of the gel 10. The gel 10 is caused to have the grid structure (a lattice-shaped structure) as illustrated in
In the sensor device 3, different structures may be applied to the gel 10 in accordance with the magnitude of an assumable application load. For example, a grid structure (see, for example,
For example, a height Gh of the gel 10 is subject to a limitation depending on an angle of view of the color image sensor 31. In particular, unless the height Gh of the gel 10 is reduced at an end portion, the field of vision is blocked, which makes it difficult to observe the outside world. The curvature radius of the front surface of the gel 10 (the envelope surfaces 14 and 15, see
In order to favorably observe the deformation of the gel 10 through the color image sensor 31, an illumination light source 16 such as an LED (Light Emitting Diode) may be attached to the sensor device 3. As for a position of the illumination light source 16, light may be applied from the side of the gel 10 or may be applied from an imaging device 30 side (a rear surface side of the gel 10). A plurality of illumination light sources 16 may be provided.
As illustrated in
In addition, only a portion of the front surface of the gel 10 may be provided with a semispherical protrusion 22 as illustrated in
In addition, the shape of the hole 11 in the gel 10 is not limited to a quadrangle and a hexagon and a circular hole 23 may be made as illustrated in
In addition, a rectangular fine protrusion 24 may be arranged at a portion of the front surface of the gel 10 as illustrated in
As illustrated in
As illustrated in
The whole of the gel 10 may be in a shape allowing the gel 10 to be used directly as the finger 2. In other words, the shape of the whole of the gel 10 may be in a shape comparable to the finger 2. This allows for detecting proximity and contact relative to the object 4 in all directions.
The configuration of the gel 10 may be changed in accordance with a location where the sensor device 3 is to be provided. For example, a density of the grid of the gel 10 may be increased at a location where a high contact position accuracy or slip detection accuracy is required. For example, the density of the grid may be increased at a fingertip (a distal joint) 2A of the finger 2 as illustrated in
In the single sensor device 3, the configuration of the gel 10 may be changed in accordance with location. For example, in the single sensor device 3, different friction coefficients may be distributed in the front surface of the gel 10 in accordance with location to make an initial slip likely to be detected.
Possible methods of increasing the friction coefficient include causing a portion serving as a contact surface (the front surface of the gel 10) with the object 4 to be a flat surface to increase a contact area with the object 4, forming a fine unevenness on the front surface of the gel 10, using a sticky material, and using a material with a friction coefficient that is increased by heat.
Possible methods of decreasing the friction coefficient include causing the portion serving as the contact surface with the object 4 to be a curved surface to reduce the contact area with the object 4 or using a material having properties opposite to those in a case where the friction coefficient is increased. For example, in a case where the sensor device 3 is to be provided in the fingertip 2A, a structure causing the friction coefficient to become higher as approaching a distal end of the fingertip 2A may be employed.
For example, only an outer peripheral portion 41 of the gel 10 may be bonded as a bonded portion 41 to the sensor structure 20 of the sensor device 3 as illustrated in
In addition, for example, a transparent plate-shaped substance 42 such as a gel sheet may be provided to stick the gel 10 to the sensor structure 20 as illustrated in
The color image sensor 31 in the sensor device 3 may include an RGB camera, a pinhole camera, an IR (infrared) sensor, an event camera, or the like. A microlens array or the like may be disposed in the color image sensor 31.
In order to clearly observe each of the deformation of the gel 10 and the outside world, the sensor device 3 may have a depth of field or a focal length that is changed in accordance with whether the deformation of the gel 10 or the outside world is to be observed. In this case, the plurality of color image sensors 31 may be used or the depth of field or the focal length of the single color image sensor 31 may be automatically changed. In addition, the depth of field or the focal length may be changed in accordance with the distance information from the Depth sensor serving as the distance sensor 32. Adjusting the depth of field to be shallow while focusing on distant objects causes the gel 10 to blur, making it suitable for observation of the outside world, and thus the object recognition rate is improved. In contrast, adjusting the depth of field to be shallow while focusing on nearby objects causes the background to blur as the gel 10 is centrally focused on, and thus the recognition rate of the gel 10 is improved.
The distance sensor 32 in the sensor device 3 may include an RGB-D camera, a ToF (Time of Flight) sensor, a dToF (Direct Time of Flight) sensor, a LiDAR (Light Detection and Ranging), a stereo vision system, or the like. In addition, the distance sensor 32 may include a sensor with patterned irradiation, a sensor that estimates a distance from a blurred image, an ultrasonic sensor, or the like.
As the gel 10 is not deformed before coming into contact with the object 4, a process to ignore a portion corresponding to the gel 10 may be performed in the sensor device 3 in a case where the outside world is to be observed. In this case, for example, a location of the gel 10 in a captured image may be stored in advance to perform the process to ignore the portion corresponding to the gel 10 in the captured image.
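The masking process described above can be sketched as follows (an illustrative sketch, not the disclosed implementation; the function name and mask layout are hypothetical): the locations of the undeformed gel in the frame are stored in advance as a boolean mask and zeroed out before outside-world processing.

```python
import numpy as np

def mask_gel_pixels(image: np.ndarray, gel_mask: np.ndarray) -> np.ndarray:
    """Zero out pixels known in advance to belong to the gel so that
    outside-world processing (object recognition, distance estimation)
    ignores them. gel_mask is True where the undeformed gel appears."""
    out = image.copy()
    out[gel_mask] = 0
    return out

frame = np.full((4, 4), 200, dtype=np.uint8)   # captured image (toy data)
mask = np.zeros((4, 4), dtype=bool)
mask[0, :] = True                              # top row covered by a gel partition
clean = mask_gel_pixels(frame, mask)
```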
For example, it is also possible to estimate a distance from an appearance of a pattern of shadow created by the gel 10 when irradiation with light from the illumination light source 16 such as an LED is performed as illustrated in
In the sensor device 3, the portion corresponding to the gel 10 in a captured image may be caused to become unnoticeable by disposing the plurality of color image sensors 31 to cause shooting angles to differ and combining a plurality of images captured from the different angles as illustrated in
In addition, in the sensor device 3, a mirror 33 may be provided in the sensor structure 20 to cause the color image sensor 31 to capture an image via the mirror 33 as illustrated in
In addition, in a case where it is possible to simultaneously measure an RGB image and a distance as illustrated in
In addition, in the sensor device 3, the sensor structure 20 and the gel 10 may have a finger-shaped structure as a whole with a plurality of imaging devices 30 disposed with respect to the single sensor device 3 as illustrated in
In addition, it is also possible to combine a plurality of sensor devices 3 to form the finger 2 of the robot 5. For example, the plurality of sensor devices 3 may be disposed with a knuckle 2C in between as illustrated in
Ideally, the sensor device 3 may first learn an RGB image in the state with no gel 10 through an object recognition network as described above. Then, while a learning result through the object recognition network in the state with no gel 10 is taken into account, an RGB image obtained in the state with the gel 10 may be learnt through the object recognition network (
In a case where, for example, a dToF sensor, a dToF LiDAR, or the like is used as the distance sensor 32 for distance measurement, the sensor device 3 may ignore data regarding a portion covered by the gel 10 (data regarding a reflected wave L2 from the gel 10) in obtained sensor data. This makes it possible to accurately acquire data regarding a reflected wave L1 from the object 4 to perform distance measurement.
In a case where, for example, a dToF sensor, a dToF LiDAR, or the like is used as the distance sensor 32 in the sensor device 3, a threshold of time to acquire sensor data is set, which makes it possible to separate the reflected wave L1 from the object 4 from the reflected wave L2 from the gel 10 in a histogram in the sensor data as illustrated in
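The time-threshold separation described above can be sketched as follows (an illustrative sketch under simplified assumptions; the function name, bin layout, and toy histogram are hypothetical, not from the disclosure): bins earlier than the threshold are attributed to the reflected wave L2 from the gel 10, and the peak among the later bins gives the time of flight of the reflected wave L1 from the object 4.

```python
import numpy as np

def split_returns(histogram: np.ndarray, bin_width_ns: float, threshold_ns: float):
    """Split a dToF histogram into the early return (gel, wave L2) and the
    late return (object, wave L1) by a time threshold, and return the
    object's time of flight taken from the peak of the late part."""
    cut = int(threshold_ns / bin_width_ns)
    gel_part, object_part = histogram[:cut], histogram[cut:]
    tof_ns = (cut + int(np.argmax(object_part))) * bin_width_ns
    return gel_part, object_part, tof_ns

hist = np.zeros(100)
hist[5] = 50      # strong early return from the gel surface
hist[40] = 30     # later return from the object
_, _, tof = split_returns(hist, bin_width_ns=0.1, threshold_ns=1.0)
distance_m = 3e8 * tof * 1e-9 / 2   # distance = c * tof / 2
```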
An increase in the proximity of the object 4 to the front surface of the gel 10 makes it difficult to separate the reflected wave L1 from the object 4 from the reflected wave L2 from the gel 10. In this case, an insensitive zone may be provided so that the gel 10 is considered to be almost in contact as illustrated in, for example,
In a case where an RGB camera is used as the color image sensor 31 in the sensor device 3, variations in the amount of blur with distance may be used to estimate a distance from a blurred image as illustrated in, for example,
In addition, in a case where a distance is to be estimated from the amount of blur of an image using the color image sensor 31 in the sensor device 3, a distance to a point measured by, for example, a ToF sensor serving as the distance sensor 32 is further used as a reference as illustrated in, for example,
In addition, in a case where a distance is to be estimated from the amount of blur using the color image sensor 31 in the sensor device 3, distances to points measured by, for example, a plurality of ToF sensors serving as the distance sensor 32 are further used as references as illustrated in, for example,
In addition, the sensor devices 3 may be attached to a plurality of fingers 2 of the robot 5 to estimate the distance to the object 4 by the principle of triangulation on the basis of sensor data obtained from the plurality of sensor devices 3 as illustrated in, for example,
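The principle of triangulation mentioned above can be sketched as follows (an illustrative sketch; the function name, the bearing-angle convention, and the numerical values are hypothetical assumptions, not from the disclosure): with two fingertip sensors a known baseline apart, the distance follows from the two bearing angles to the object.

```python
import math

def depth_from_two_bearings(baseline_m: float,
                            angle_left_rad: float,
                            angle_right_rad: float) -> float:
    """Triangulate the distance to a point seen from two sensor devices
    separated by baseline_m. Each angle is measured from that sensor's
    forward (boresight) axis toward the other sensor's side."""
    return baseline_m / (math.tan(angle_left_rad) + math.tan(angle_right_rad))

# Symmetric case: object centred between two fingertips 4 cm apart,
# each sensor seeing it 10 degrees off its own axis.
z = depth_from_two_bearings(0.04, math.radians(10), math.radians(10))
```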
In addition, the distance to the object 4 may be estimated from, for example, sensor information from a head sensor 51 (for example, an image sensor) provided on a head of the robot 5 and sensor information from the sensor device 3 provided in the finger 2 of the hand 1 of the robot 5 as illustrated in, for example
The sensor device 3 generates tactile information on the basis of a deformation image of the gel 10 as illustrated in
The sensor device 3 is allowed to detect a movement in the tangential direction of a deformed portion of the gel 10 on the basis of a deformation image of the gel 10 and estimate the contact position. For example, an RGB image at 0 g is considered as a reference. First, the RGB image is subjected to grayscale transformation. Subsequently, a deformed portion of the gel 10 is detected by differential information calculation. In addition, a center position of pixel values is obtained using an optical flow, which makes it possible to estimate a center (a centroid) of a contact point. This makes it possible to estimate the contact position.
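The contact-position estimation steps above can be sketched as follows (an illustrative sketch of the difference-and-centroid idea only; the function name and threshold are hypothetical, and a real pipeline would additionally use the optical flow as described):

```python
import numpy as np

def estimate_contact_position(reference: np.ndarray, current: np.ndarray,
                              thresh: int = 20):
    """Estimate the contact centroid from a reference (unloaded, 0 g)
    grayscale image and the current grayscale image of the gel, using
    the per-pixel difference as the deformation indicator."""
    diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
    ys, xs = np.nonzero(diff > thresh)          # deformed pixels
    if len(xs) == 0:
        return None                             # no contact detected
    return float(xs.mean()), float(ys.mean())   # (x, y) centroid

ref = np.zeros((10, 10), dtype=np.uint8)        # image at 0 g (reference)
cur = ref.copy()
cur[4:7, 4:7] = 100                             # local deformation of the gel
pos = estimate_contact_position(ref, cur)
```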
An initial slip is a phenomenon where a partial slip of the contact surface with the object 4 begins from an end thereof and is also referred to as a premonitory phenomenon of slip. An initial slip region gradually expands to spread all over the contact region, which results in transition to a generally so-called “slip” (also referred to as whole slip) and, consequently, occurrence of a motion relative to the object 4 being in contact with the gel 10. Here, “fixation” refers to a state in which static friction occurs, for example, all over the contact surface between the gel 10 and a held object (the object 4), with no relative motion therebetween. Meanwhile, a “slip (whole slip)” refers to a state with a relative motion between two objects that are in contact with each other with occurrence of kinetic friction. Here, it refers to a slip with a relative motion between the gel 10 and a held object due to occurrence of kinetic friction all over the contact surface therebetween.
The “initial slip” is also referred to as a premonitory phenomenon of occurrence of the above-described slip (whole slip) and refers to a phenomenon where kinetic friction occurs at, for example, a portion of the contact surface between the gel 10 and a held object. Such an initial slip state is supposed to exist during transition from a “fixation” state to a “slip” state. In the initial slip state, no relative motion between the gel 10 and the held object occurs.
The contact region is divided into a “fixation region” where no initial slip occurs (i.e., a partial region where static friction occurs within the contact surface between the gel 10 and the held object) and a “slip region” where an initial slip occurs (i.e., a partial region where kinetic friction occurs within the contact surface between the gel 10 and the held object). The degree of slip may be indicated by a ratio between the two regions. Here, a ratio of the fixation region relative to the contact region is defined as a “fixation rate.” At a fixation rate of 1 (=100%), the contact region is in a state of being fully fixed with no slip region. Inversely, at a fixation rate of 0, the entirety of the contact region becomes the slip region, resulting in a state of suffering occurrence of a slip (a whole slip).
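The fixation-rate definition above can be sketched as follows (an illustrative sketch; the function name and mask representation are hypothetical assumptions): given boolean masks of the contact region and of the slip region within it, the fixation rate is one minus the slipping fraction.

```python
import numpy as np

def fixation_rate(contact_mask: np.ndarray, slip_mask: np.ndarray) -> float:
    """Ratio of the fixation region to the whole contact region.
    1.0 = fully fixed (no slip region), 0.0 = whole slip."""
    contact = int(contact_mask.sum())
    if contact == 0:
        return 1.0                                  # nothing in contact
    slipping = int((contact_mask & slip_mask).sum())
    return 1.0 - slipping / contact

contact = np.ones((4, 4), dtype=bool)               # whole patch in contact
slip = np.zeros((4, 4), dtype=bool)
slip[:, :1] = True                                  # initial slip starts at one edge
rate = fixation_rate(contact, slip)
```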
In the slip region, a phenomenon where the gel 10 deformed in the shearing direction is restored is seen as illustrated in
The sensor device 3 detects an initial slip using, for example, an optical flow. Although a slip is unlikely to be detected merely by watching an RGB image, the optical flow makes the amount of shearing clear. A difference in vector direction of the optical flow makes it possible to detect an initial slip (a partial slip). In an image of the optical flow seen on the left side in the bottom tier in
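The direction-based classification described above can be sketched as follows (an illustrative sketch under simplified assumptions; the function name, thresholds, and toy flow field are hypothetical): pixels whose flow vector points against the dominant shear direction are labeled as the slip region, where the sheared gel is springing back, while the fixation region still moves with the object.

```python
import numpy as np

def slip_region_from_flow(flow: np.ndarray, mag_thresh: float = 0.1) -> np.ndarray:
    """Label pixels whose optical-flow vector opposes the dominant shear
    direction as the (initial) slip region. flow has shape (H, W, 2)."""
    mean_dir = flow.reshape(-1, 2).mean(axis=0)   # dominant shear vector
    dots = (flow * mean_dir).sum(axis=-1)         # alignment with the shear
    mags = np.linalg.norm(flow, axis=-1)
    return (mags > mag_thresh) & (dots < 0)       # moving against the shear

flow = np.tile(np.array([1.0, 0.0]), (8, 8, 1))   # gel sheared toward +x
flow[:, :2] = [-0.5, 0.0]                         # edge columns springing back
slip = slip_region_from_flow(flow)
```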
Referring to
In the sensor device 3, the front surface of the gel 10 is partially colored to provide the colored part 25 as illustrated in above-mentioned
In addition, tracking of each pattern is stabilized by changing the color of the colored part 25 or changing the shape of the pattern of the colored part 25 as illustrated in
The sensor device 3 may detect a texture of the object 4 from an RGB image. For example, the sensor device 3 first detects an edge or a feature amount of the object 4 from an RGB image and performs tracking. Although the gel 10 is also simultaneously detected at this time, the position of the gel 10 may be stored in advance to add a process to ignore it or a process in which, for example, detection of an edge in a horizontal direction and an edge in a vertical direction is skipped may be performed. For example, when a movement of the texture occurs, the movement amount of the texture at which a whole slip (for example, a relative movement between the finger 2 and the object 4) is considered to occur may be defined as a whole slip amount.
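The whole-slip-amount idea above can be sketched as follows (an illustrative sketch; the function name, the feature-point representation, and the threshold are hypothetical assumptions): tracked texture features of the object 4 are compared between frames, and a mean displacement above a threshold is treated as a whole slip.

```python
import numpy as np

def whole_slip_amount(track_prev: np.ndarray, track_curr: np.ndarray,
                      slip_thresh_px: float = 2.0):
    """Mean displacement of tracked texture features of the object between
    two frames. A displacement above the threshold is taken as a whole
    slip (relative motion between the finger and the object)."""
    disp = float(np.linalg.norm(track_curr - track_prev, axis=-1).mean())
    return disp, disp > slip_thresh_px

prev_pts = np.array([[10.0, 10.0], [20.0, 15.0]])   # tracked features, frame t-1
curr_pts = prev_pts + np.array([3.0, 0.0])          # texture shifted 3 px in x
amount, slipping = whole_slip_amount(prev_pts, curr_pts)
```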
The sensor device 3 may estimate the contact force from a magnitude of the deformation of the gel 10. For example, the contact force may be estimated from an area of a deformed region as illustrated in
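The area-based force estimate above can be sketched as follows (an illustrative sketch; the function name and the linear area-to-force gain are hypothetical assumptions, and the gain would have to be calibrated against a reference force sensor):

```python
import numpy as np

def estimate_contact_force(deformed_mask: np.ndarray,
                           gain_n_per_px: float) -> float:
    """Rough contact-force estimate proportional to the area of the
    deformed region of the gel. gain_n_per_px is a calibration constant."""
    area_px = int(deformed_mask.sum())
    return area_px * gain_n_per_px

mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 3:7] = True                       # 16 deformed pixels
force_n = estimate_contact_force(mask, gain_n_per_px=0.05)
```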
[1.6 Control of Robot]
The robot 5 includes a control device that performs an action control of each unit of the robot 5. The control device of the robot 5 performs the action control of each unit of the robot 5 on the basis of the sensor information from the sensor device 3 to cause the robot 5 to execute a task. The control device of the robot 5 uses an appropriate modal at an appropriate timing while switching modals of the sensor device 3 with the execution of the task.
Subsequently, the control device of the robot 5 uses, as the modals of the sensor device 3 provided in the hand 1, TEXTURE, INITIAL SLIP (START-TO-SLIP DETECTION), WHOLE SLIP, and CONTACT POSITION to perform the task of opening the lid of the object 4 (
The pairing illustrated in
The control device of the robot 5 is allowed to execute a task by lining up the paired skills in sequence.
For the object holding task, the control device of the robot 5 first uses, as the modal of the sensor device 3, OBJECT RECOGNITION to approach the object 4 using the approach-to-object skill (Step S101) as illustrated in
For the button pressing task, the control device of the robot 5 first uses, as the modal of the sensor device 3, OBJECT RECOGNITION to approach the object 4 using the approach-to-object skill (Step S201) as illustrated in
Registration of the skills in a tree-shaped structure makes it possible for the control device of the robot 5 to execute a moderately complicated task with a branch as illustrated in
The control device of the robot 5 first uses, as the modal of the sensor device 3, OBJECT RECOGNITION to approach the object 4 using the approach-to-object skill (Step S301). The control device of the robot 5 subsequently uses, as the modal of the sensor device 3, DISTANCE to approach the object 4 using the approach-to-object skill (Step S302). Here, in response to a movement of the object 4 away from the robot 5, the control device of the robot 5 returns to the process in Step S301.
In response to the detection of contact with the object 4, the control device of the robot 5 subsequently uses, as the modal of the sensor device 3, CONTACT POSITION to come into contact with the object 4 using the contact position control skill (Step S303). Here, in a case where the object 4 is nearby though the contact becomes undetected, the control device of the robot 5 returns to the process in Step S302. Meanwhile, in a case where the contact becomes undetected and the object 4 is not nearby, the control device of the robot 5 returns to the process in Step S301.
In response to the contact position being appropriate, the control device of the robot 5 subsequently uses, as the modal of the sensor device 3, INITIAL SLIP to perform the slip avoidance control using the slip avoidance control skill until termination is required (Step S304). Here, in a case where the object 4 slips down as failing to be held, the control device of the robot 5 returns to the process in Step S301. In response to the requirement of the termination, the control device of the robot 5 terminates the object holding task.
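The branching flow of Steps S301 to S304 can be sketched as a small state machine (an illustrative sketch only; the sensor predicate names and the stand-in sensor are hypothetical, not from the disclosure):

```python
def object_holding_task(sensor):
    """State-machine sketch of the object holding task (Steps S301-S304).
    sensor is a hypothetical object exposing the modal readouts used
    for the branch conditions."""
    step = "S301"
    while True:
        if step == "S301":                    # approach using OBJECT RECOGNITION
            if sensor.object_recognized():
                step = "S302"
        elif step == "S302":                  # approach using DISTANCE
            if sensor.object_moved_away():
                step = "S301"
            elif sensor.contact_detected():
                step = "S303"
        elif step == "S303":                  # CONTACT POSITION control
            if not sensor.contact_detected():
                step = "S302" if sensor.object_nearby() else "S301"
            elif sensor.contact_position_ok():
                step = "S304"
        elif step == "S304":                  # slip avoidance using INITIAL SLIP
            if sensor.object_dropped():
                step = "S301"
            elif sensor.termination_required():
                return "done"

class _HappyPathSensor:
    """Hypothetical stand-in that drives the task straight to completion."""
    def object_recognized(self): return True
    def object_moved_away(self): return False
    def contact_detected(self): return True
    def object_nearby(self): return True
    def contact_position_ok(self): return True
    def object_dropped(self): return False
    def termination_required(self): return True

result = object_holding_task(_HappyPathSensor())
```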
The control device of the robot 5 may simultaneously perform a plurality of the skills in parallel as illustrated in
The control device of the robot 5 first uses, as the modal of the sensor device 3, OBJECT RECOGNITION to approach the object 4 using the approach-to-object skill (Step S401). In response to the robot 5 approaching the object 4 at the predetermined distance (for example, ** cm) or less, the control device of the robot 5 subsequently uses, as the modal of the sensor device 3, DISTANCE to approach the object 4 using the approach-to-object skill (Step S402). In response to the detection of contact with the object 4, the control device of the robot 5 subsequently uses, as the modal of the sensor device 3, CONTACT POSITION to come into contact with the object 4 using the contact position control skill. In addition, the control device of the robot 5 uses, as the modal of the sensor device 3, WHOLE SLIP to perform a slip reduction/allowance control using the slip reduction/allowance control skill in parallel (Step S403). The control device of the robot 5 repeats the process in Step S403 until termination is required. In response to the requirement of the termination, the control device of the robot 5 terminates the task.
The control device of the robot 5 may be caused to learn the respective termination conditions and branch conditions for the skills through a neural network. For example, information regarding each modal of the sensor device 3 and the skill number of each skill of the robot 5 may be inputted to a neural network to determine the termination conditions and the branch conditions for the skills.
The control device of the robot 5 may set a priority for each of the skills. The control device of the robot 5 may more preferentially perform a higher-priority skill.
The control device of the robot 5 may determine the priorities by learning through a neural network. In determining the priorities, they may be determined from human demonstration data. For example, information regarding each modal of the sensor device 3 and the skill number of each skill of the robot 5 may be inputted to a neural network on the basis of the human demonstration data to determine the priorities of the skills.
The control device of the robot 5 may create a single skill by combining a plurality of modals. For example, the contact position control skill may be created by combining, as modals, WHOLE SLIP, CONTACT POSITION, and CONTACT FORCE.
The control device of the robot 5 does not necessarily pair a modal with a skill. For example, the control device of the robot 5 may output the control value of the robot 5 in accordance with a predetermined control algorithm on the basis of the modal of the sensor device 3. The predetermined control algorithm may include, for example, a mathematical expression base (a model base), a neural network, an if-then rule base, or the like.
The control device of the robot 5 includes a signal acquirer 700, an object recognizer 100, a distance measurer 101, an initial slip detector 102, a whole slip detector 103, a contact position detector 104, and a contact force detector 105. The control device of the robot 5 also includes an approach-to-object controller (object recognition) 200, an approach-to-object controller (distance) 201, a slip reduction controller 202, a slip allowance controller 203, a contact position controller 204, and a contact force controller 205. The control device of the robot 5 also includes a control switching processor 300, a plurality of finger controllers 400, a hand controller 500, and a robot controller 600.
The signal acquirer 700, the object recognizer 100, the distance measurer 101, the initial slip detector 102, the whole slip detector 103, the contact position detector 104, and the contact force detector 105 may be implemented by the sensor information processor 40 of the sensor device 3.
The signal acquirer 700 acquires, as the sensor information from the sensor device 3, data such as an RGB image, an RGB-D image, Point Cloud (point cloud), a Depth image, event camera data, image change information, or a marker motion vector.
The object recognizer 100 outputs data such as an object classification result, a Bounding box position, or Point Cloud on the basis of a signal acquired by the signal acquirer 700.
The distance measurer 101 outputs, for example, data such as a distance and Point Cloud. The initial slip detector 102 outputs data such as a slip flag, a fixation rate, and slip region information. The whole slip detector 103 outputs data such as a slip flag and a slip amount. The contact position detector 104 outputs data such as the contact position. The contact force detector 105 outputs data such as the contact force.
The approach-to-object controller (object recognition) 200, the approach-to-object controller (distance) 201, the slip reduction controller 202, the slip allowance controller 203, the contact position controller 204, and the contact force controller 205 each output data such as joint angle position, speed, acceleration, and force.
Each of the plurality of finger controllers 400 outputs data such as joint angle position, speed, acceleration, and force.
The hand controller 500 outputs data such as joint angle position, speed, acceleration, and force.
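The control switching processor 300 in the architecture above can be sketched as selecting, among the commands proposed by the modal controllers, the one from the highest-priority active skill (cf. the skill priorities described earlier). The skill names, priority values, and command dictionaries here are illustrative assumptions, not taken from the embodiment.

```python
def switch_control(proposals, priorities):
    """Select a command by priority.

    proposals:  {skill_name: command_dict or None if the skill is inactive}
    priorities: {skill_name: int}, larger = more preferentially performed
    Returns the command of the highest-priority skill that proposed one,
    or None when no skill is active.
    """
    active = [(priorities[name], name)
              for name, cmd in proposals.items() if cmd is not None]
    if not active:
        return None
    _, best = max(active)          # pick the highest-priority active skill
    return proposals[best]

# Hypothetical skills: slip reduction outranks contact force, which
# outranks approaching the object.
priorities = {"slip_reduction": 3, "contact_force": 2, "approach": 1}
proposals = {"slip_reduction": None,              # no slip detected now
             "contact_force": {"force": 1.0},
             "approach": {"speed": 0.1}}
cmd = switch_control(proposals, priorities)
```

Because the control block is divided on a modal-by-modal basis, disabling a malfunctioning modal amounts to setting its proposal to None, leaving the remaining controllers unaffected.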
The contact position detector 104 includes an image acquirer 800, an image preprocessor 801, a reference image storage 802, an image differential detector 803, a feature amount tracker 804, and a centroid-of-deformation calculator 805.
The image acquirer 800 outputs, for example, image-related data. The image preprocessor 801 outputs, for example, data regarding a reference image and image-related data. The reference image storage 802 stores, for example, the data regarding the reference image from the image preprocessor 801.
The image differential detector 803 outputs, for example, image-related data obtained from a differential between the data regarding the reference image stored in the reference image storage 802 and the image-related data from the image preprocessor 801. The feature amount tracker 804 outputs, for example, tracking data. The centroid-of-deformation calculator 805 outputs, for example, contact position data.
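The contact position pipeline above (reference image, image differential, centroid of deformation) can be sketched as follows, assuming grayscale images held as numpy arrays. The differential threshold is an illustrative assumption; the feature amount tracking stage is omitted for brevity.

```python
import numpy as np

def contact_position(reference, current, threshold=10.0):
    """Return the (row, col) centroid of pixels that differ from the stored
    reference image by more than `threshold`, or None when no deformation
    of the flexible layer is detected."""
    diff = np.abs(current.astype(float) - reference.astype(float))
    mask = diff > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# A small synthetic example: a deformed patch appears in the current image.
ref = np.zeros((8, 8))
cur = ref.copy()
cur[2:4, 5:7] = 50.0
pos = contact_position(ref, cur)   # centroid of the deformed patch
```

The reference image plays the role of the reference image storage 802, the thresholded difference that of the image differential detector 803, and the mean of the changed pixel coordinates that of the centroid-of-deformation calculator 805.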
The initial slip detector 102 includes an image acquirer 900, an image preprocessor 901, a reference image storage 902, an image differential detector 903, a feature amount tracker 904, a deformation vector magnitude detector 905, a deformation vector angle detector 906, and an initial slip detector 907.
The image acquirer 900 outputs, for example, image-related data. The image preprocessor 901 outputs, for example, data regarding a reference image and image-related data. The reference image storage 902 stores, for example, the data regarding the reference image from the image preprocessor 901.
The image differential detector 903 outputs, for example, image-related data obtained from a differential between the data regarding the reference image stored in the reference image storage 902 and the image-related data from the image preprocessor 901. The feature amount tracker 904 outputs, for example, tracking data and vector data. The deformation vector magnitude detector 905 outputs, for example, data regarding a magnitude of a deformation vector. The deformation vector angle detector 906 outputs, for example, data regarding an angle of the deformation vector. The initial slip detector 907 outputs, for example, data regarding a slip flag and data regarding a fixation rate.
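The final stages of the initial slip pipeline, the deformation vector magnitude detection and the slip flag/fixation rate output, can be sketched as follows. The fixation rate is taken here as the fraction of tracked markers whose deformation vector magnitude stays below a slip threshold; that definition, the threshold, and the flag criterion are assumptions for the sketch, not details given in the disclosure.

```python
import numpy as np

def detect_initial_slip(vectors, slip_threshold=1.0, fixation_min=0.8):
    """Detect initial (partial) slip from per-marker deformation vectors.

    vectors: (N, 2) array of deformation vectors of tracked markers (pixels).
    Returns (slip_flag, fixation_rate), where the fixation rate is the
    fraction of markers still fixed to the object surface.
    """
    magnitudes = np.linalg.norm(vectors, axis=1)
    fixation_rate = float(np.mean(magnitudes < slip_threshold))
    slip_flag = fixation_rate < fixation_min   # part of the surface has slipped
    return slip_flag, fixation_rate

# Two of four hypothetical markers have started to move: partial slip.
vecs = np.array([[0.1, 0.0], [0.2, 0.1], [1.5, 0.3], [2.0, 0.0]])
flag, rate = detect_initial_slip(vecs)
```

The deformation vector angle (detector 906) is not used in this minimal sketch; it would additionally distinguish translational slip from rotational slip about the contact.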
As described hereinabove, the sensor device 3 and the robot 5 according to the embodiment allow for observation of a flexible layer, or the gel 10, attached to the sensor structure 20 and observation of the object 4 in the outside world through the hole 11 of the flexible layer by virtue of the imaging device 30 installed in the sensor structure 20. This makes it possible to perform a highly accurate multimodal sensing.
The sensor device 3 and the robot 5 according to the embodiment allow for implementation of a function as a tactile sensor that acquires tactile information on the basis of deformation information regarding the flexible layer observed via the imaging device 30 and a function as a proximity sensor that acquires proximity information on the basis of observation information regarding the object 4 observed through the hole 11 of the flexible layer. In the sensor device 3 according to the embodiment, the hole 11 is made in the gel 10, which facilitates the deformation of the gel 10. In the sensor device 3 according to the embodiment, a plurality of pieces of information is acquirable merely by the single sensor device 3, which makes it possible to save an installation space in the robot 5. In addition, a highly sensitive tactile sensor and a high-resolution proximity sensor are allowed to be simultaneously implemented merely by the single sensor device 3, which makes it possible to achieve a more stable and accurate manipulation action.
In the sensor device 3 and the robot 5 according to the embodiment, the single sensor device 3 is allowed to acquire a plurality of modals. As the plurality of modals is acquirable, a complicated robot action becomes possible. In addition, as the plurality of modals is acquirable, detection of a failure and recovery therefrom become possible, which allows for an action in a highly uncertain environment. In addition, as the plurality of modals is acquirable, the necessity of installing an additional sensor is eliminated and space efficiency is improved.
In addition, in the sensor device 3 according to the embodiment, the use of an image-based sensor causes space resolution to be high, which makes it possible to raise sensitivities to contact, slip, and the like. In addition, the sensor device 3 according to the embodiment is allowed to exhibit both proximity sense and tactile sense without the necessity of sacrificing the respective detection accuracies. In addition, in the sensor device 3 according to the embodiment, wear of the contact surface of the sensor device 3 (the front surface of the gel 10) has little influence on the accuracies in proximity sense and tactile sense. In addition, in the sensor device 3 according to the embodiment, separation between an imaging system and the front surface of the gel 10 is possible, so that replacement of the imaging system and the gel 10 is easy and maintainability and expandability are high. In addition, in the sensor device 3 according to the embodiment, it is possible to change the characteristics of the sensor as a whole by changing the shape of the gel 10, so that the characteristics of the sensor as a whole are easily changeable in accordance with the purpose of use.
In addition, in the robot 5 according to the embodiment, a control block is divided (paired) on a modal-by-modal basis, which facilitates adjustment of a control parameter. The modal-based division of the control block facilitates disablement of a control of a malfunctioned modal, which makes the malfunction unlikely to have an influence on the entirety of the control. The modal-based division of the control block makes it possible to modularize the control block, which allows for a versatile use for various purposes of use.
Comparison with Related Art
A technique according to PTL 1 (International Publication No. WO 2009/144767) relates to a sensor including a pressure-sensitive sheet and a proximity sensor installed in a hole penetrating the pressure-sensitive sheet, and the sensor allows for both detection of a contact pressure and proximity sensing, i.e., distance measurement. In the technique according to PTL 1, the hole is made in the pressure-sensitive sheet, so that the pressure-sensitive region is reduced and the detection accuracy decreases as the number of holes increases. In addition, the space resolution in proximity sense and the accuracy of contact detection have a trade-off relationship due to a balance with the hole size. In addition, the pressure-sensitive sheet and the proximity sensor are integrated and difficult to separate. This makes it difficult to maintain the sensor, replace the pressure-sensitive sheet, and change a shape design of the pressure-sensitive sheet.
In contrast to the above, in the sensor device 3 according to the embodiment, the employment of the meshed structure of the compliant material (the gel 10) eliminates the trade-off between the region of the proximity sense and the region of the tactile sense, which makes it possible to simultaneously raise the accuracies of the plurality of modals. By virtue of the large structure of the hole 11, object recognition using an image also becomes possible. In addition, the separation between the compliant material and the imaging system makes replacement easy, so that maintainability and expandability are high.
A technique according to PTL 2 (Japanese Unexamined Patent Application Publication No. 2018-9792) relates to a sensor having both a proximity function to detect a distance to an object in a non-contact manner on the basis of a change in capacitance and a tactile function to detect a change in magnetism attributed to a displacement of a magnetic body responsive to an external force. In the technique according to PTL 2, detection of a proximity sense is based on a change in capacitance, which makes recognition based on image information, such as object recognition, difficult. In addition, the proximity sensor is not allowed to be disassembled and replaced because it is embedded in a compliant object, so that maintainability and expandability are low. In addition, the accuracy in proximity sense is greatly influenced by a deterioration or a change in characteristics of the compliant object due to prolonged use.
In contrast to the above, in the sensor device 3 according to the embodiment, the employment of the meshed structure of the compliant material (the gel 10) allows for both a proximity sense function, or object recognition, and distance measurement. In addition, the separation between the compliant material and the imaging system makes replacement easy, so that maintainability and expandability are high. The separation between the compliant material and the imaging system also reduces a direct influence of a deterioration of the compliant material on the imaging system.
It is to be noted that the effects described herein are merely by way of example and not of limitation and any other effects are possible. The same applies to effects of other embodiments hereinbelow.
A technique of the present disclosure is not limited to the above description of the embodiment and may be modified in a variety of manners.
For example, the present technology may have the following configuration.
The present technology with the following configuration allows for observation of a flexible layer attached to a sensor structure and observation of an object in the outside world through a hole of the flexible layer by virtue of an imaging device installed in the sensor structure. This makes it possible to perform a highly accurate multimodal sensing.
(1)
A sensor device including:
a flexible layer having at least one hole; and
a sensor structure attached with the flexible layer, the sensor structure including an imaging device, the imaging device being configured to observe the flexible layer and observe an object in an outside world through the hole of the flexible layer.
(2)
The sensor device according to (1), in which
the sensor device has a function as a tactile sensor that acquires tactile information on the basis of deformation information regarding the flexible layer observed via the imaging device and a function as a proximity sensor that acquires proximity information on the basis of observation information regarding the object observed through the hole of the flexible layer.
(3)
The sensor device according to (2), further including
an information processor that acquires, as information regarding a plurality of modals, the tactile information and the proximity information on the basis of sensor information from the imaging device.
(4)
The sensor device according to (2) or (3), in which
the proximity information includes at least one of information regarding object recognition or information regarding a distance to the object.
(5)
The sensor device according to any one of (2) to (4), in which
the tactile information includes:
information regarding an initial slip and a whole slip of the flexible layer relative to the object; and
at least one of information regarding a contact position with the object or information regarding a contact force relative to the object.
(6)
The sensor device according to any one of (1) to (5), including
as the imaging device, at least one color image sensor configured to acquire a color image.
(7)
The sensor device according to (6), further including
as the imaging device, at least one distance sensor configured to acquire distance information.
(8)
The sensor device according to any one of (1) to (5), including
as the imaging device, at least one color image sensor configured to acquire a color image and distance information.
(9)
The sensor device according to any one of (1) to (8), in which
the flexible layer has a grid structure or a honeycomb structure, the grid structure or the honeycomb structure having a plurality of the holes.
(10)
The sensor device according to any one of (1) to (9), in which
a front surface of the flexible layer has a curved shape.
(11)
The sensor device according to any one of (1) to (10), in which
a ratio of occupancy of a region other than the hole in the flexible layer is 10% or less.
(12)
The sensor device according to any one of (1) to (11), in which
the flexible layer includes a transparent compliant material.
(13)
The sensor device according to any one of (1) to (12), in which
a front surface of the flexible layer partially has a slit.
(14)
The sensor device according to any one of (1) to (13), in which
a front surface of the flexible layer partially has a protrusion.
(15)
The sensor device according to any one of (1) to (14), in which
a front surface of the flexible layer partially includes a colored part.
(16)
The sensor device according to (15), in which
the colored part includes a plurality of colored parts different in shape or color.
(17)
The sensor device according to any one of (1) to (16), in which
the flexible layer has, as the hole, a plurality of holes, and
the plurality of holes is in respective shapes in which, as the flexible layer is seen in a lateral direction, respective directions of the plurality of holes are oriented toward the imaging device in approaching a bottom surface from a front surface.
(18)
The sensor device according to any one of (1) to (17), in which
the flexible layer and the sensor structure form a whole or a part of a manipulator of a robot as a whole.
(19)
A robot including:
a sensor device; and
a control device that performs a robot control based on sensor information from the sensor device,
in which the sensor device includes:
a flexible layer having at least one hole; and
a sensor structure attached with the flexible layer, the sensor structure including an imaging device, the imaging device being configured to observe the flexible layer and observe an object in an outside world through the hole of the flexible layer.
(20)
The robot according to (19), further including
a manipulator,
in which the sensor device as a whole forms a whole or a part of the manipulator.
The present application claims the benefit of Japanese Priority Patent Application JP2021-214428 filed with the Japan Patent Office on Dec. 28, 2021, the entire contents of which are incorporated herein by reference.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Number | Date | Country | Kind
---|---|---|---
2021-214428 | Dec 2021 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2022/040997 | 11/2/2022 | WO |