DISINFECTION ROBOTS

Abstract
A UV-based surface disinfection system consists of a UV light source, a robot arm, and an omni-directional mobile base. The mobile robot can be programmed to operate autonomously and to bring the UV light source to within centimeters of surfaces to achieve effective and efficient surface disinfection. The mobile robot can navigate autonomously in a complicated environment to perform disinfection operations over a large area.
Description
FIELD OF THE INVENTION

The present invention relates to robots for cleaning areas and, more particularly, to robots for cleaning using ultraviolet light.


BACKGROUND OF THE INVENTION

Current surface sanitizing methods include spraying sanitizing chemicals or using UV light. There is no convincing evidence that spraying chemicals in the air is effective in sanitizing a space, since it is difficult to ensure that enough residual chemical settles down on the surfaces after spraying in the air. Furthermore, it is also difficult to directly spray a liquid form of chemical on surfaces due to the negative environmental impact and possible damage to the surfaces of items such as books.


Currently, ultraviolet (UV) based disinfection lights are either fixed on walls or placed on mobile bases. They are usually meters away from the surfaces that need disinfection. Studies have shown that, in order to achieve effective disinfection within a short period of time, the UV light has to be only a few centimeters away from the surface to be disinfected, since UV with a wavelength in the range of 250 nanometers decays significantly while being transmitted through the air. Therefore, hours of irradiation time are usually required to achieve effective disinfection using presently available UV-based disinfection systems. Currently there is no product available in the market, nor a prototype, that can irradiate UV at a distance of a few centimeters from surfaces to achieve disinfection within seconds.


The major difficulties are that the surfaces to be disinfected are large and have complicated shapes. Thus, it is not feasible to use current robot programming methods to generate a robot motion plan that sweeps the robot over the surfaces and irradiates them with UV from a few centimeters away.


SUMMARY OF THE INVENTION

According to the present invention, a UV-based surface disinfection system utilizes a UV light source that is mounted on a mobile robotic arm. The robot can be programmed to act autonomously. Thus, the robot can move to a selected place along the surface to be disinfected, and the robot's arm is programmed to sweep over the surface at that place in order to bring the UV light source to within a few centimeters of the surface, whereby effective and efficient surface disinfection is achieved.


Autonomous control is achieved with a framework that uses global localization as an initial condition. The control then mainly compares a reference point cloud and a present point cloud in the end effector frame (E.E.F.) of the mobile manipulator based on the Wasserstein distance. It then outputs rigid transformation information between them at low frequency (e.g., 10 Hz) and desired velocity information along the optimal path of the end effector at high frequency (e.g., 50 Hz).





BRIEF DESCRIPTION OF THE DRAWINGS

This patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.


The foregoing and other objects and advantages of the present invention will become more apparent when considered in connection with the following detailed description and appended drawings in which like designations denote like elements in the various views, and wherein:



FIG. 1 is a perspective view of a disinfection robot with an Omni-wheel mobile base according to the present invention;



FIG. 2 is a block diagram of the electronic system of the disinfection robot of the present invention;



FIG. 3 is a diagram of the disinfection robot of the present invention showing a mobile manipulator and mobile robot coordinate systems;



FIG. 4 shows an uncertain pose of an object in a world reference frame (W.R.F);



FIGS. 5A and 5B show images of the mobile manipulator of the present invention in different unstructured environments;



FIG. 6 is a schematic diagram of an “Object Perceptive Local Planner” with point cloud acquisition and point cloud registration-based vector space/sliced Wasserstein distance based non-vector space controllers in the end effector (UV light) frame according to the present invention;



FIG. 7 is an image of a Light Detection and Ranging (LiDAR) module on an arm of the robot of the present invention and a time of flight (ToF) camera at the end effector (E.E) of the mobile manipulator;



FIG. 8 shows the detailed framework of the “Object Perceptive Local Planner” during a complete disinfection process on a movable object;



FIG. 9 is a perspective view of a UV-C light design for use with the present invention;



FIG. 10 is a block diagram of a control system for the UV-C light of FIG. 9;



FIG. 11 is a schematic of the electronic system for the UV-C light of FIG. 9;



FIG. 12 is an image showing the results of disinfection of E. coli with the present invention for different distances and times;



FIGS. 13A to 13D show Alpha HCoV-229E in normal human lung MRC5 fibroblasts after exposure to continuous far UV-C in occupied public locations at the current regulatory exposure limit for 0 mJ/cm², 0.5 mJ/cm², 1 mJ/cm² and 2 mJ/cm², respectively;



FIG. 14A shows exposures of four cultures to a UVC LED at 270 nm for 3000/0.5 min, 6000/1 min, 12000/2 min and 24000/3 min, and FIG. 14B shows a graph of the number of colonies versus dosage in µJ/cm²;



FIG. 15 is an illustration of reference frames and transforms for a movable object disinfection mobile manipulation task;



FIG. 16 is an illustration of point cloud registration according to an embodiment of the invention;



FIG. 17 is an illustration of path transformation according to an embodiment of the invention;



FIG. 18 illustrates a planned trajectory in vector space as two 3 dimensional points;



FIG. 19 shows a reference tube in a non-vector space as a 3-dimensional point set;



FIG. 20 is a block diagram of a single-tube-following control framework in Wasserstein non-vector space;



FIG. 21A shows three connected reference tubes in Wasserstein non-vector space as three 3-dimensional point sets and FIG. 21B is a block diagram of a multiple-connected-tubes-following control framework in Wasserstein non-vector space;



FIG. 22 is a series of photographs illustrating a non-vector space motion plan and control process with unexpected object movements;



FIG. 23 shows photographs of an occupied object (sofa) and an empty container (shelves); and



FIG. 24 illustrates an object detection result on a chair according to the present invention.





DETAILED DESCRIPTION OF THE INVENTION

As shown in FIG. 1, the design of the disinfection robot 10 of the present invention has two main parts, a mobile base 12 and a universally positionable manipulator 14 with a UV lamp 16 at its end effector 15. The base 12 has three omni-directional wheels 11 that provide the advantage of being able to translate in any direction without turning. This enables the robot to move in narrow spaces.


The manipulator includes a LiDAR laser ranging module in the UVC lamp 16 at the end of the manipulator arm. It is used to measure the distance between the UVC lamp and the surface being disinfected. In this way, the lamp is able to get much closer to the surface so as to improve the sterilization efficiency.


An electronic control system 20 for the robot is mounted on the base 12 and mainly consists of one processing board, e.g., an NVIDIA Jetson, which can run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing. This processing board is used in the present invention for sensor information processing and task planning. The system also includes a microprocessor for robot control, e.g., an STMicroelectronics STM32. The on-board sensors include a multi-line laser scanner and a 360 degree camera. The system communicates using the Internet as well as cellular service. The architecture of the electronic system is illustrated in FIG. 2.


As shown in FIG. 2, a user can control the electronic system of the robot from a remote PC 22 over a WiFi or other transmission link. The remote signal is received by a WiFi interface or transceiver 23 located on a processing board 24. The signal is placed on an internal Advanced RISC Machine (ARM) bus. Further, a USB camera 52 has its output directed to the ARM bus through a USB host, and a 16-line laser 58 has its output directed to the ARM bus through an Ethernet interface. The bus is connected to both a 6-core NVIDIA Carmel ARM processor 20A and a 384-core NVIDIA Volta GPU 20B. The inputs are processed in these units to develop the planned trajectory, and the output of the processing board 24 that holds these units is passed through a USB host to a USB slave on a control board 25.


An ARM Cortex processor 28 receives the processed signals and converts them into drive signals for the wheels 11 and the movement of the joints on the manipulator 14. In particular, the wheel drive signals pass to a universal asynchronous receiver-transmitter (UART) 26, which in turn divides the input into output signals for the motor drivers and motors for omni wheels 11A, 11B and 11C on the robot base 12. The signals for the joints are passed through a pulse width modulator circuit 27 and then to the motor drivers 29 and arm motors for joints 1, 2 and 3.


As shown in FIG. 3, the universally positionable manipulator 14 of robot 10 has three rotational joints, which are located in three XYZ joint frames, i.e., joint frame no. 3 at 30, joint frame no. 4 at 32 and joint frame no. 5 at 34, respectively. The mobile base is located in frame no. 1 at 31. Each joint has coordinates in xyz space as indicated, e.g., for joint 30 there are x3, y3 and z3 axes. Further, mobile base 12 can move in the x1, y1 and z1 axes. The equations of motion are as follows:










1
0

T

=

[




-

C
z





S
z



0



x
01






-

S
z





-

C
z




0



y
01





0


0


1



z
01





0


0


0


1



]


,




2
1

T

=

[



1


0


0



x
12





0


1


0



y
12





0


0


1



z
12





0


0


0


1



]


,










3
2

T

=

[




C
1



0



S
1




x
23





0


1


0



y
23






-

S
1




0



C
1




z
23





0


0


0


1



]


,




4
3

T

=

[




C
2



0



S
2



0




0


1


0


0





-

S
2




0



C
2




L
1





0


0


0


1



]


,










5
4

T

=

[




C
3



0



S
3



0




0


1


0


0





-

S
3




0



C
3




L
2





0


0


0


1



]


,




6
5

T

=

[



1


0


0


0




0


1


0


0




0


0


1



L
3





0


0


0


1



]












6
0

T

=





1
0

T





2
1

T





3
2

T





4
3

T





5
4

T





6
5

T


=

[




-

C
z



C
123





S
z




-

S
123



C
z




A





-

S
z



C
123





-

C
z





-

S
123



S
z




B





-

S
123




0



C
123



C




0


0


0


1



]



,







A
=


x
01

+


S
z

(


y
12

+

y
23


)

-


C
z

(


x
12

+

x
23


)

-


C
z

(



L
3



S
123


+


L
2



S
12


+


L
1



S
1



)



,







B
=


y
01

-


C
z

(


y
12

+

y
23


)

-


S
z

(


x
12

+

x
23


)

-


S
z

(



L
3



S
123


+


L
2



S
12


+


L

1





S
1



)



,






C
=


z
01

+

z
12

+

z
23

+


L
3



C
123


+


L
2



C
12


+


L
1



C
1







where Sz and Cz equal sin(θz) and cos(θz), respectively; θz is the current orientation of the mobile platform; S1 and C1 equal sin(q1) and cos(q1), respectively; q1 is the angle of joint 1; S12 and C12 equal sin(q1+q2) and cos(q1+q2), respectively; q2 is the angle of joint 2; S123 and C123 equal sin(q1+q2+q3) and cos(q1+q2+q3), respectively; q3 is the angle of joint 3; and L1, L2 and L3 are the lengths of the links of the robot arm.
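For illustration only, the chain of homogeneous transforms above can be composed numerically as in the following Python sketch. The frame offsets and link lengths are placeholders to be replaced by the actual robot dimensions, and the helper names are illustrative rather than part of the robot's software.

```python
import numpy as np

def homogeneous(Rmat, p):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    H = np.eye(4)
    H[:3, :3] = Rmat
    H[:3, 3] = p
    return H

def forward_kinematics(theta_z, q1, q2, q3, off, L1, L2, L3):
    """Compose 0T6 = 0T1 1T2 2T3 3T4 4T5 5T6 exactly as written above.
    `off` holds the fixed offsets x01 ... z23 (placeholder values)."""
    Cz, Sz = np.cos(theta_z), np.sin(theta_z)
    C1, S1 = np.cos(q1), np.sin(q1)
    C2, S2 = np.cos(q2), np.sin(q2)
    C3, S3 = np.cos(q3), np.sin(q3)
    T01 = homogeneous([[-Cz, Sz, 0], [-Sz, -Cz, 0], [0, 0, 1]],
                      [off['x01'], off['y01'], off['z01']])
    T12 = homogeneous(np.eye(3), [off['x12'], off['y12'], off['z12']])
    T23 = homogeneous([[C1, 0, S1], [0, 1, 0], [-S1, 0, C1]],
                      [off['x23'], off['y23'], off['z23']])
    T34 = homogeneous([[C2, 0, S2], [0, 1, 0], [-S2, 0, C2]], [0, 0, L1])
    T45 = homogeneous([[C3, 0, S3], [0, 1, 0], [-S3, 0, C3]], [0, 0, L2])
    T56 = homogeneous(np.eye(3), [0, 0, L3])
    return T01 @ T12 @ T23 @ T34 @ T45 @ T56   # end-effector pose in the world frame
```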


The inverse kinematics are determined as follows:







$$q_2=\operatorname{Atan2}\!\left(\pm\sqrt{1-\left(\frac{x_{ee}^2+y_{ee}^2+z_{ee}^2-L_1^2-L_2^2}{2L_1L_2}\right)^{\!2}},\;\frac{x_{ee}^2+y_{ee}^2+z_{ee}^2-L_1^2-L_2^2}{2L_1L_2}\right)$$

$$q_1=\operatorname{Atan2}\!\left(z_{ee},\,\sqrt{x_{ee}^2+y_{ee}^2}\right)-\operatorname{Atan2}\!\left(\pm L_2\sqrt{1-\left(\frac{x_{ee}^2+y_{ee}^2+z_{ee}^2-L_1^2-L_2^2}{2L_1L_2}\right)^{\!2}},\;L_1+L_2\,\frac{x_{ee}^2+y_{ee}^2+z_{ee}^2-L_1^2-L_2^2}{2L_1L_2}\right)$$

$$q_3=\operatorname{Atan2}(n_y,\,n_x)-q_1-q_2$$

$$P_x=x_0-{}_{2}^{0}T\,x_{ee},\qquad P_y=y_0-{}_{2}^{0}T\,y_{ee},\qquad \theta_z=\operatorname{Atan2}(y_{ee},\,x_{ee})$$

$$\theta_{front}=\frac{-P_y-L\,\theta_z}{R},\qquad \theta_{left}=\frac{P_x\cos(\pi/6)+P_y\sin(\pi/6)-L\,\theta_z}{R},\qquad \theta_{right}=\frac{-P_x\cos(\pi/6)+P_y\sin(\pi/6)-L\,\theta_z}{R}$$






where xee is the distance from the end-effector 15 to the arm platform frame along the X axis of the arm platform frame; yee is the distance from the end-effector to the arm platform along the Y axis of the arm platform frame; zee is the distance from the end-effector to the arm platform along the Z axis of the arm platform frame; nx and ny belong to the rotation matrix; x0, y0 are the position of the end-effector in the world reference frame (W.R.F.); Px, Py and θz are the pose of the mobile platform; θfront, θleft and θright are the angles of the wheels 11 of the mobile platform 12; L is the distance between the center of the mobile platform and a wheel 11; R is the radius of a wheel 11; and ${}_{2}^{0}T$ is the transformation matrix from the no. 2 frame to the no. 0 frame.
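A minimal Python sketch of these closed-form expressions follows, assuming the same two-link geometry; the `elbow` flag selects the ± branch, and the function and variable names are illustrative, not the robot's actual software.

```python
import numpy as np

def arm_inverse_kinematics(x_ee, y_ee, z_ee, n_x, n_y, L1, L2, elbow=+1):
    """Closed-form IK following the Atan2 expressions above; elbow = +1/-1 picks the branch."""
    D = (x_ee**2 + y_ee**2 + z_ee**2 - L1**2 - L2**2) / (2.0 * L1 * L2)
    D = np.clip(D, -1.0, 1.0)                      # guard against numerical overshoot
    q2 = np.arctan2(elbow * np.sqrt(1.0 - D**2), D)
    q1 = (np.arctan2(z_ee, np.hypot(x_ee, y_ee))
          - np.arctan2(elbow * L2 * np.sqrt(1.0 - D**2), L1 + L2 * D))
    q3 = np.arctan2(n_y, n_x) - q1 - q2            # orientation about the tool axis
    return q1, q2, q3

def base_wheel_angles(P_x, P_y, theta_z, L, R):
    """Wheel angles of the three omni wheels for a base pose (P_x, P_y, theta_z)."""
    th_front = (-P_y - L * theta_z) / R
    th_left  = ( P_x * np.cos(np.pi/6) + P_y * np.sin(np.pi/6) - L * theta_z) / R
    th_right = (-P_x * np.cos(np.pi/6) + P_y * np.sin(np.pi/6) - L * theta_z) / R
    return th_front, th_left, th_right
```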


The Jacobian matrix is defined as:











$${}_{6}^{0}\!\begin{pmatrix}v\\ \omega\end{pmatrix}=\begin{bmatrix}J_{arm} & J_{base}\end{bmatrix}\begin{bmatrix}\dot{\theta}_a\\ \dot{\theta}_b\end{bmatrix}$$

$$J_{arm}=\begin{bmatrix}C_z(L_3C_{123}+L_2C_{12}+L_1C_1) & C_z(L_3C_{123}+L_2C_{12}) & C_zL_3C_{123}\\ -S_z(L_3C_{123}+L_2C_{12}+L_1C_1) & -S_z(L_3C_{123}+L_2C_{12}) & -S_zL_3C_{123}\\ -L_3S_{123}-L_2S_{12}-L_1S_1 & -L_3S_{123}-L_2S_{12} & -L_3S_{123}\\ S_z & S_z & S_z\\ -C_z & -C_z & -C_z\\ 0 & 0 & 0\end{bmatrix},$$

$$J_{base}=\begin{bmatrix}-\tfrac{2}{3}R\cos\tfrac{\pi}{6} & 0 & \tfrac{2}{3}R\cos\tfrac{\pi}{6}\\ \tfrac{R}{3} & -\tfrac{2R}{3} & \tfrac{R}{3}\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\\ -\tfrac{R}{3L} & -\tfrac{R}{3L} & -\tfrac{R}{3L}\end{bmatrix}$$






where $\dot{\theta}_a$ is the vector of joint velocities and $\dot{\theta}_b$ is the vector of wheel velocities.
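For illustration, the two Jacobian blocks above can be assembled and used to map joint rates and wheel rates to the end-effector twist as in the sketch below; the symbols mirror the matrix entries above and the names are illustrative only.

```python
import numpy as np

def mobile_manipulator_jacobians(theta_z, q1, q2, q3, L1, L2, L3, L, R):
    """Assemble J_arm (6x3) and J_base (6x3) from the closed-form entries above."""
    Cz, Sz = np.cos(theta_z), np.sin(theta_z)
    S1, S12, S123 = np.sin(q1), np.sin(q1 + q2), np.sin(q1 + q2 + q3)
    C1, C12, C123 = np.cos(q1), np.cos(q1 + q2), np.cos(q1 + q2 + q3)
    a1 = L3 * C123 + L2 * C12 + L1 * C1
    a2 = L3 * C123 + L2 * C12
    a3 = L3 * C123
    J_arm = np.array([
        [ Cz * a1,  Cz * a2,  Cz * a3],
        [-Sz * a1, -Sz * a2, -Sz * a3],
        [-L3*S123 - L2*S12 - L1*S1, -L3*S123 - L2*S12, -L3*S123],
        [ Sz,  Sz,  Sz],
        [-Cz, -Cz, -Cz],
        [0.0, 0.0, 0.0]])
    k = 2.0/3.0 * R * np.cos(np.pi/6)
    J_base = np.array([
        [-k, 0.0, k],
        [R/3.0, -2.0*R/3.0, R/3.0],
        [0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0],
        [-R/(3.0*L), -R/(3.0*L), -R/(3.0*L)]])
    return J_arm, J_base

def end_effector_twist(J_arm, J_base, dtheta_a, dtheta_b):
    """(v, w) of the end effector from joint rates dtheta_a and wheel rates dtheta_b."""
    return J_arm @ np.asarray(dtheta_a) + J_base @ np.asarray(dtheta_b)
```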


Dynamics modeling is achieved by the following equations:









$$D(q)\ddot{q}+C(q,\dot{q})\dot{q}+g(q)=\tau$$

$$D(q)=\sum_{i=1}^{n}\Big\{m_i\,J_{v_i}(q)^{T}J_{v_i}(q)+J_{\omega_i}(q)^{T}R_i(q)\,I_i\,R_i(q)^{T}J_{\omega_i}(q)\Big\},$$

$$C(q,\dot{q})=\sum_{i=1}^{n}c_{ijk}(q)\,\dot{q}=\sum_{i=1}^{n}\frac{1}{2}\Big\{\frac{\partial d_{kj}}{\partial q_i}+\frac{\partial d_{ki}}{\partial q_j}-\frac{\partial d_{ij}}{\partial q_k}\Big\}\dot{q},$$

$$g(q)=\begin{bmatrix}g_1(q), & g_2(q), & g_3(q), & g_4(q)\end{bmatrix}^{T},$$

$$J_{v_{c1}}(q)=\begin{bmatrix}-C_zL_{c1}C_1 & 0 & 0 & -\tfrac{2}{3}R\cos\tfrac{\pi}{6} & 0 & \tfrac{2}{3}R\cos\tfrac{\pi}{6}\\ S_zL_{c1}C_1 & 0 & 0 & \tfrac{R}{3} & -\tfrac{2R}{3} & \tfrac{R}{3}\\ -L_{c1}S_1 & 0 & 0 & 0 & 0 & 0\end{bmatrix},$$

$$J_{v_{c2}}(q)=\begin{bmatrix}-C_z(L_{c2}C_{12}+L_1C_1) & -C_zL_{c2}C_{12} & 0 & -\tfrac{2}{3}R\cos\tfrac{\pi}{6} & 0 & \tfrac{2}{3}R\cos\tfrac{\pi}{6}\\ S_z(L_{c2}C_{12}+L_1C_1) & S_zL_{c2}C_{12} & 0 & \tfrac{R}{3} & -\tfrac{2R}{3} & \tfrac{R}{3}\\ -L_{c2}S_{12}-L_1S_1 & -L_{c2}S_{12} & 0 & 0 & 0 & 0\end{bmatrix},$$

$$J_{v_{c3}}(q)=\begin{bmatrix}-C_z(L_{c3}C_{123}+L_2C_{12}+L_1C_1) & -C_z(L_{c3}C_{123}+L_2C_{12}) & -C_zL_{c3}C_{123} & -\tfrac{2}{3}R\cos\tfrac{\pi}{6} & 0 & \tfrac{2}{3}R\cos\tfrac{\pi}{6}\\ S_z(L_{c3}C_{123}+L_2C_{12}+L_1C_1) & S_z(L_{c3}C_{123}+L_2C_{12}) & S_zL_{c3}C_{123} & \tfrac{R}{3} & -\tfrac{2R}{3} & \tfrac{R}{3}\\ -L_{c3}S_{123}-L_2S_{12}-L_1S_1 & -L_{c3}S_{123}-L_2S_{12} & -L_{c3}S_{123} & 0 & 0 & 0\end{bmatrix}$$

$$J_{v_{c4}}(q)=\begin{bmatrix}0 & 0 & 0 & -\tfrac{2}{3}R\cos\tfrac{\pi}{6} & 0 & \tfrac{2}{3}R\cos\tfrac{\pi}{6}\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$$

$$J_{v_{c5}}(q)=\begin{bmatrix}0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \tfrac{R}{3} & -\tfrac{2R}{3} & \tfrac{R}{3}\\ 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$$

$$J_{\omega 1}(q)=\begin{bmatrix}S_z & 0 & 0 & 0 & 0 & 0\\ -C_z & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -\tfrac{R}{3L} & -\tfrac{R}{3L} & -\tfrac{R}{3L}\end{bmatrix},\qquad J_{\omega 2}(q)=\begin{bmatrix}S_z & S_z & 0 & 0 & 0 & 0\\ -C_z & -C_z & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -\tfrac{R}{3L} & -\tfrac{R}{3L} & -\tfrac{R}{3L}\end{bmatrix},$$

$$J_{\omega 3}(q)=\begin{bmatrix}S_z & S_z & S_z & 0 & 0 & 0\\ -C_z & -C_z & -C_z & 0 & 0 & 0\\ 0 & 0 & 0 & -\tfrac{R}{3L} & -\tfrac{R}{3L} & -\tfrac{R}{3L}\end{bmatrix},\qquad J_{\omega 6}(q)=\begin{bmatrix}0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & -\tfrac{R}{3L} & -\tfrac{R}{3L} & -\tfrac{R}{3L}\end{bmatrix},$$

$$P=m_1L_{c1}S_1+m_2\left(L_1S_1+L_{c2}S_{12}\right)+m_3\left(L_1S_1+L_2S_{12}+L_{c3}S_{123}\right)+m_4,$$

$$g_i(q)=\frac{\partial P}{\partial q_i}$$







where D(q) is the inertia matrix; C(q, q̇) is the Coriolis matrix; g(q) is the gravity vector derived from the potential energy P; Jvi(q) is the linear velocity Jacobian matrix of the i-th joint; Jωi(q) is the angular velocity Jacobian matrix of the i-th joint; Ri(q) is the rotation matrix; Ii is the inertia tensor; dij is an element of the inertia matrix; Lci is the distance between the i-th joint and the center of the i-th link; m1, m2 and m3 are the masses of Link 1, Link 2 and Link 3, respectively; and m4 is the mass of the mobile platform.
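For illustration, the inertia matrix D(q) can be assembled from the per-link Jacobians exactly as in the summation above, and the Coriolis matrix can be built from the standard Christoffel-symbol construction (assumed here) by differentiating D(q). The sketch below uses numerical differentiation and placeholder inputs; it is not the robot's actual dynamics code.

```python
import numpy as np

def inertia_matrix(masses, inertia_tensors, Jv_list, Jw_list, R_list):
    """D(q) = sum_i { m_i Jv_i^T Jv_i + Jw_i^T R_i I_i R_i^T Jw_i } over links and base."""
    n_dof = Jv_list[0].shape[1]
    D = np.zeros((n_dof, n_dof))
    for m, I, Jv, Jw, Rm in zip(masses, inertia_tensors, Jv_list, Jw_list, R_list):
        D += m * Jv.T @ Jv + Jw.T @ Rm @ I @ Rm.T @ Jw
    return D

def coriolis_matrix(D_func, q, dq, eps=1e-6):
    """Christoffel-symbol construction of C(q, dq) by numerically differentiating D(q)."""
    n = len(q)
    dD = np.zeros((n, n, n))                      # dD[k] = dD/dq_k
    for k in range(n):
        q_p, q_m = np.array(q, float), np.array(q, float)
        q_p[k] += eps
        q_m[k] -= eps
        dD[k] = (D_func(q_p) - D_func(q_m)) / (2 * eps)
    C = np.zeros((n, n))
    for k in range(n):
        for j in range(n):
            C[k, j] = 0.5 * sum((dD[i][k, j] + dD[j][k, i] - dD[k][i, j]) * dq[i]
                                for i in range(n))
    return C
```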


During motion planning, the velocity, acceleration, position and time are acquired according to the known initial and end points. Based on this information, a plan is generated to control the end-effector so that it follows a desired trajectory. When planning this trajectory, it is divided into three sections: a third-order, a fifth-order and a third-order trajectory, as shown in the following formulas.






$$t_0\rightarrow t_1:\quad a_{13}t^{3}+a_{12}t^{2}+a_{11}t+a_{10}=x(t)$$

$$t_1\rightarrow t_2:\quad a_{25}t^{5}+a_{24}t^{4}+a_{23}t^{3}+a_{22}t^{2}+a_{21}t+a_{20}=x(t)$$

$$t_2\rightarrow t_f:\quad a_{33}t^{3}+a_{32}t^{2}+a_{31}t+a_{30}=x(t)$$


After performing the 3-5-3 order trajectories in the planner, the pose of each point in task space can be transformed into joint space to control the robot's joints through the inverse kinematics and Jacobian matrix shown above. The pose of a robot provides information on the location of the robot in either two or three dimensions and also its orientation. For mobile robots, the pose can be a simple three-element vector, but for arm-based robots, a matrix is often used to describe the pose. Therefore, in joint space, the joint controller can publish commands to control each joint based on position and velocity, where the velocity and position correspond to a specific time.
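A minimal sketch of evaluating the 3-5-3 piecewise trajectory above is given below, assuming the segment coefficients have already been solved from the boundary conditions; the coefficient ordering follows NumPy's polyval convention (highest power first), and the names are illustrative.

```python
import numpy as np

def eval_353(t, t1, t2, a1, a2, a3):
    """Evaluate position and velocity of the piecewise 3-5-3 trajectory x(t).
    a1, a3 are cubic coefficient arrays [a3, a2, a1, a0]; a2 is the quintic
    segment [a5, ..., a0]. Segment boundary times t1 < t2 are assumed known."""
    coeffs = a1 if t < t1 else (a2 if t < t2 else a3)
    x = np.polyval(coeffs, t)
    v = np.polyval(np.polyder(coeffs), t)
    return x, v
```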


In the controller system, the input contains two parts: feedforward and feedback. The first part (feedforward) is acquired through the Dynamics model indicated above to compute the torque in the controller, and the formula of torque is set forth below. The second part (feedback) is the position and velocity. Based on the position and velocity error, the position loop and velocity loop are constructed. Then the feedforward of torque is combined with the position loop and velocity loop to calculate the decoupling output. The controller output is torque. This torque value satisfies the requirement of the entire robot system in order to realize a target, including mutual influence between each joint. Finally, the torque output is provided to the driver for the manipulator, and the driver generates current to make the manipulator motors operate according to the corresponding torque. The formulas for torque are







$$\tau_1=d_{11}\ddot{q}_1+d_{12}\ddot{q}_2+d_{13}\ddot{q}_3+d_{14}\ddot{q}_4+d_{15}\ddot{q}_5+d_{16}\ddot{q}_6+\sum_{i=1}^{6}\sum_{j=1}^{6}c_{ij1}(q)\,\dot{q}_i\dot{q}_j+g_1(q)$$

$$\tau_2=d_{21}\ddot{q}_1+d_{22}\ddot{q}_2+d_{23}\ddot{q}_3+d_{24}\ddot{q}_4+d_{25}\ddot{q}_5+d_{26}\ddot{q}_6+\sum_{i=1}^{6}\sum_{j=1}^{6}c_{ij2}(q)\,\dot{q}_i\dot{q}_j+g_2(q)$$

$$\tau_3=d_{31}\ddot{q}_1+d_{32}\ddot{q}_2+d_{33}\ddot{q}_3+d_{34}\ddot{q}_4+d_{35}\ddot{q}_5+d_{36}\ddot{q}_6+\sum_{i=1}^{6}\sum_{j=1}^{6}c_{ij3}(q)\,\dot{q}_i\dot{q}_j+g_3(q)$$

$$F_x=d_{41}\ddot{q}_1+d_{42}\ddot{q}_2+d_{43}\ddot{q}_3+d_{44}\ddot{q}_4+d_{45}\ddot{q}_5+d_{46}\ddot{q}_6+\sum_{i=1}^{6}\sum_{j=1}^{6}c_{ij4}(q)\,\dot{q}_i\dot{q}_j+g_4(q)$$

$$F_y=d_{51}\ddot{q}_1+d_{52}\ddot{q}_2+d_{53}\ddot{q}_3+d_{54}\ddot{q}_4+d_{55}\ddot{q}_5+d_{56}\ddot{q}_6+\sum_{i=1}^{6}\sum_{j=1}^{6}c_{ij5}(q)\,\dot{q}_i\dot{q}_j+g_5(q)$$

$$M_z=d_{61}\ddot{q}_1+d_{62}\ddot{q}_2+d_{63}\ddot{q}_3+d_{64}\ddot{q}_4+d_{65}\ddot{q}_5+d_{66}\ddot{q}_6+\sum_{i=1}^{6}\sum_{j=1}^{6}c_{ij6}(q)\,\dot{q}_i\dot{q}_j+g_6(q)$$

$$\begin{bmatrix}\tau_{right}\\ \tau_{front}\\ \tau_{left}\end{bmatrix}=\begin{bmatrix}-\tfrac{2}{3}R\cos\tfrac{\pi}{6} & \tfrac{R}{3} & -\tfrac{R}{3L}\\ 0 & -\tfrac{2R}{3} & -\tfrac{R}{3L}\\ \tfrac{2}{3}R\cos\tfrac{\pi}{6} & \tfrac{R}{3} & -\tfrac{R}{3L}\end{bmatrix}\begin{bmatrix}F_x\\ F_y\\ M_z\end{bmatrix}$$






where τ1, τ2 and τ3 are the torques of joint 1, joint 2 and joint 3, respectively; τright, τfront and τleft are the torques of the wheels of the mobile platform; dij is an element of the inertia matrix; cijk is the Coriolis coefficient of the k-th joint; g(q) is the gravitational term; and Fx, Fy and Mz are the driving forces and moment applied to the mobile platform.
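The control law described above (dynamics feedforward combined with position and velocity feedback, followed by the wheel mapping) can be sketched as follows. This is one common computed-torque formulation given for illustration; it assumes D, C and g have already been evaluated from the dynamics model, and the gain matrices Kp and Kd and the function names are assumptions rather than the patented controller.

```python
import numpy as np

def computed_torque(D, C, g, q, dq, q_des, dq_des, ddq_des, Kp, Kd):
    """Dynamics feedforward plus position/velocity feedback loops (decoupling output)."""
    e, de = q_des - q, dq_des - dq
    ddq_cmd = ddq_des + Kd @ de + Kp @ e          # acceleration command
    return D @ ddq_cmd + C @ dq + g               # joint torques and base force/moment

def wheel_torques(F_x, F_y, M_z, L, R):
    """Map the planar base wrench (Fx, Fy, Mz) to the three omni-wheel torques."""
    k = 2.0/3.0 * R * np.cos(np.pi/6)
    B = np.array([[-k,      R/3.0,      -R/(3.0*L)],
                  [ 0.0,   -2.0*R/3.0,  -R/(3.0*L)],
                  [ k,      R/3.0,      -R/(3.0*L)]])
    tau_right, tau_front, tau_left = B @ np.array([F_x, F_y, M_z])
    return tau_right, tau_front, tau_left
```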


It is vital for the robot to have robustness and flexibility when executing mobile manipulation tasks, such as indoor disinfection. However, there are difficulties in mobile manipulation tasks in unstructured environments. To be more specific, there are three main challenges in indoor mobile manipulation.


As shown in FIG. 4, the object to be disinfected, e.g., a white board, can be movable, so TWL can vary from time to time. It is then almost impossible to finish the task with a path described in the world reference frame (W.R.F.). In FIG. 4 the rigid object 40 is a movable white board. It can be located in W.R.F. coordinates 42. The robot base frame also has a set of coordinates 44, and the end effector frame (E.E.F.) has a limited range (red circle) which only partially engages the white board and could not by itself plot the desired path 46, which zigzags across the surface of the white board as shown.



FIGS. 5A and 5B illustrate two unstructured environments, such as an office, a library and so on, that may have unstable and inaccurate localization based on a global point cloud map due to occlusion (blocking) of the LiDAR's path to the desired object, changes of the environment (movement of objects), etc. Also, the robot can be interrupted in many ways (e.g., a person standing in the way of the movable base or manipulator), so the robot is expected to respond smartly and safely to those interruptions. The robot should then recover and go on to complete the unfinished task smoothly and quickly.


The “Object Perceptive Local Planner” method of the present invention, as shown in FIG. 6, is designed to realize this objective. As shown in FIG. 6 at 60, X(t) segments are segmented from a real-time point cloud. At 62, Xref(s) segments are segmented from a global map as the end effector's desired path. A point cloud sliced Wasserstein based perceptive motion local planner 64 uses the selected segments X(t) from step 60 to compare the desired UV path with the actual UV path. The comparison is updated at a rate of 10 Hz. At step 64 the speed profile of the end effector is updated along the geodesics path at 50 Hz. Thus, the optimal transport path from X(t) to Xref(s) is established.
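For illustration, the sliced Wasserstein distance between the segmented point cloud X(t) and the reference segments Xref(s) can be approximated with random 1-D projections as in the following Python sketch; the projection count and subsampling are illustrative choices, not the exact metric computation used by planner 64.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=64, rng=None):
    """Approximate sliced Wasserstein-1 distance between 3D point clouds X, Y (N x 3).
    Random directions on the sphere; each 1-D projection is compared by sorted samples."""
    rng = np.random.default_rng(rng)
    n = min(len(X), len(Y))
    Xs = X[rng.choice(len(X), n, replace=False)]
    Ys = Y[rng.choice(len(Y), n, replace=False)]
    dirs = rng.normal(size=(n_projections, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    total = 0.0
    for d in dirs:
        total += np.mean(np.abs(np.sort(Xs @ d) - np.sort(Ys @ d)))
    return total / n_projections
```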


One key to this process is that the motion planning and controlling steps utilize both spatial and shape information about an object of interest in the global frame to adapt to unexpected disturbances in the robot local frame. The “Object Perceptive Local Planner” takes advantage of a time-of-flight (ToF) camera 54 attached to the UV light 16 at the end-effector (E.E.) and the Light Detection and Ranging (LiDAR) device 17 on the robot base as shown in FIG. 7. A ToF camera is a range imaging camera system employing time-of-flight techniques to resolve the distance between the camera and the subject for each point of an image, by measuring the round-trip time of an artificial light signal provided by a laser. Laser-based time-of-flight cameras are part of a broader class of scannerless LiDAR, in which the entire scene is captured with each laser pulse, as opposed to point-by-point with a laser beam such as in scanning LiDAR systems.


The overall framework of the algorithm is shown in FIG. 8. As can be seen in FIG. 8, a present point cloud segmentation 82 receives a present image I(t) and a present point cloud X(t) from the robot. Also input to the circuit of FIG. 8 are the signals camera_intrinsics.yaml & camera_tof_If.launch, whose use is described below. Unit 82 provides a visual cone in the local frame and produces Xseg(t), which is delivered to the Object Perceptive Tracker (point cloud registration-based vector space/sliced Wasserstein distance based non-vector space controllers) unit 88. Reference point cloud generator 84 receives UV_desired.txt, Map.ply and p_num(s), from which it generates X′seg(s), which is applied to unit 88. Unit 88 also receives the E.E. state (velocity, acceleration) and E.E. constraints. The E.E. constraints come from joint dynamic constraints and environmental information (e.g., friction) after passing through dynamics module 87.


The sliced perceptive object tracker 88 engages in trajectory planning with dynamic constraints on the geodesics path and the Wasserstein barycentre as intersections from X to X′. Tracker 88 produces as an output the point cloud registration (based on the sliced Wasserstein distance) at a rate of 10 Hz, which is passed to the point cloud sliced Wasserstein based perceptive motion local planner 85 (which is equivalent to unit 64 of FIG. 6). Planner 85 has its output tf: from/UV to/UV desired directed to generator 84 as an input along with X′seg(s) and LiDAR_FoV.yaml. A second output of planner 85 is applied to a calculation block 86 which performs e_calculation (known/map to/UV). This block 86 provides the input P_num(s) to generator 84. Finally, an output p(T) at 50 Hz from tracker 88 is applied to kinematics unit 83 to create q(t), which is directed to the robot 89, which in turn provides the present image and present point cloud applied to unit 82.


In brief, the framework uses global localization as an initial condition; it then mainly compares the reference point cloud and the present point cloud in the end effector frame (E.E.F.) of the mobile manipulator based on the Wasserstein distance. It then outputs rigid transformation information between them at low frequency and desired velocity information along the optimal path of the end effector at high frequency.


In the E.E.F., the Wasserstein distance (X(t), X_ref(s)) leads to a rigid transform and an optimal transport plan with a geodesics path between them. Directly estimating the rigid transform of two point clouds is time consuming. However, with the Wasserstein barycenter, as the intersection of the two point clouds, the transform can be rapidly obtained and is guaranteed to be on the shortest path in terms of optimal transport from the source point cloud to the destination. The main contribution of this part is to integrate the perception and control modules together to provide a “spatial perceptive ability” of the motion in real time.


The UV-C light module, as shown in FIG. 9, is used to disinfect the target area. The module 16 is mounted on the robot manipulator arm 14. The robot arm brings the UV-C light module to within a few centimeters of the target surface. The module moves along the target surface at a fixed distance from it to ensure uniform disinfection performance. Users may remotely control the robot through remote PC 22, which is connected to the control 20 by WiFi, Bluetooth or some other communications link 21 as shown in FIG. 2. To assist the user's control, a USB camera 52 and ToF 3D cameras 54 are installed to capture real-time images of the working environment. IR sensors are installed to protect people while the disinfection process is going on.


As shown in FIG. 9, there are four IR sensors 50, with one installed on each side of the UV light module 16, to measure the distance between the UV light 56 and the object. When the distance becomes too close, a warning signal is sent and the operation of the robot is stopped. Four LED signal lights 58 are installed, one at each corner of the light module 16, to remind the user that the device is in operation. The 360 degree USB camera 52 is installed at the front of the module to record images. The images and video are sent while the robot is operating. The two ToF 3D cameras 54 are also installed at the front of the module 16 to collect 3D point clouds of the landscape and objects in front of the UV light module. A 100 W UV light 56 is installed on the front of the UV module to disinfect the area.


A block diagram of the control system of the UVC is shown in FIG. 10, where a power control board 90 is provided with 24-volt power. Board 90 is in communication with a UV-C light module controller 92. In addition, 12-volt power is delivered to controller 92. The controller activates the UV light 56 and the LED signal lights 58. Controller 92 also activates the IR sensors 50 and receives the output from these sensors to control the distance of the UV light module 16 from the surface to be cleaned.


A schematic of the control system of the UVC is shown in FIG. 11. At the heart of the control system is a microcontroller unit (MCU) 94. It is programmed to drive the 4 IR sensors 50A, 50B, 50C and 50D and receive their outputs from connector 95. Further, it operates the UV lamp or light 56 and the LED signal lights 58 through driver circuits 96. The output of USB camera 52 is provided to a USB connector.


Tests were conducted with the disinfection robot of the present invention. The photograph of FIG. 12 shows the effect of 222 nm UV light on E. coli. In the control sample (upper left) no illumination was provided and the bacteria proliferated. Where the exposure time was 1 minute at a distance of 10 cm from the sample (upper center), there was a large reduction in the number of bacteria. When the exposure time was increased to 5 minutes (upper right) there was a drastic reduction in the number of bacteria compared to when the exposure was only 1 minute. Increasing the exposure to 3 minutes and 10 minutes (lower left and lower right, respectively) caused corresponding reductions in the bacteria.



FIG. 13 shows that, for beta-HCoV-OC43, continuous far-UVC exposure in occupied public locations at the current regulatory exposure limit (~3 mJ/cm²/hour) results in 90% viral inactivation in 8 minutes, 95% in 11 minutes, 99% in 16 minutes and 99.9% inactivation in 25 minutes. Thus, while staying within current regulatory dose limits, a low dose rate of far-UVC exposure can safely provide a major reduction in the ambient level of airborne coronaviruses in occupied public locations. The photographs of FIG. 13 show alpha HCoV-229E in normal human lung MRC5 fibroblasts. FIG. 13A is for anti-human coronavirus spike glycoprotein without UV exposure. FIGS. 13B, 13C and 13D show DAPI fluorescent stain at exposures of 0.5 mJ/cm², 1 mJ/cm² and 2 mJ/cm², respectively.



FIG. 14A shows exposures of four cultures to a UVC LED at 270 nm for 3000/0.5 min, 6000/1 min, 12000/2 min and 24000/3 min. FIG. 14B shows a graph of the number of colonies versus dosage in µJ/cm². This test shows that 270 nm UVC for 1 minute at 6 mJ/cm² results in a disinfection efficiency of more than 90%. By comparison, exposure with a LUMENIZER 222 nm lamp for 5 minutes produced a disinfection efficiency of more than 90%. Similarly, a test with a David J. Brenner 222 nm UVC source for 8 minutes at 2 mJ/cm² resulted in a disinfection efficiency of more than 90%. Thus, the present invention produces the same disinfection efficiency as the prior art, but in less time.


As noted above, a disinfection motion plan, such as the trajectory of the robot end-effector (E.E.) UVC light 16, can be described in the World Reference Frame (R_(W.R.F)) established in the global point cloud map. Such a disinfection plan is valid based on the assumption that the pose of the object being disinfected is static in the Global Reference Frame (G.R.F.), where TWL is static. Static objects include buttons on the walls, fixed bookshelves and desks. Then, with pose estimation of the robot end effector in the global map (TWEE), the pose control error can be calculated and gradually eliminated. However, this planning and control framework suffers in two respects.


First, when encountering movable objects like chairs, the static object pose assumption is not valid since their pose (TWL) can vary from time to time in the W.R.F. See FIG. 15, which illustrates the reference frames and transforms for a movable object disinfection mobile manipulation task. Therefore, the disinfection motion planning and control scheme in the G.R.F. can cause collisions and/or unsatisfactory disinfection performance on movable objects. Second, the control error depends on the pose estimation of the robot end effector in the global map (TWEE), which can be unreliable when the environment changes, such as a new indoor furniture layout, compared with the one that existed when the global map was established.


When considering the disinfection task with respect to a movable object, it becomes clear that the relative spatial relationship from the end effector (R_(E.E.)) to the latest object pose (R_(L.R.F)) is vital. Therefore, to meet the challenges of disinfecting movable objects, it is proposed as part of the present invention that the disinfection motion planning include a Local Reference Frame (R_(L.R.F)) which is attached to the movable object. The advantage of this is that it avoids motion re-planning each time the object pose is changed.


Additionally, the pose estimation reference is no longer the whole environment, just the object, which can reject the unreliable pose estimation caused by the environment changes. Two frameworks, one in vector space and one in non-vector space, are considered for accomplishing these innovative ideas, including reference frame conversion of the motion plan, error estimation and control in the local reference frame (R_(L.R.F)).


For the movable object, the vector space method uses a point cloud registration algorithm to output the transformation matrix in the vector space based on two different point clouds: one point cloud is the reference point cloud and the other is the currently observed point cloud, called the “target point cloud.” Both the reference and target point clouds are described in the Local Reference Frame (R_(L.R.F)). The transformation matrix is acquired for converting from the reference point cloud to the target point cloud. The robot can then follow the given path and trajectory to finish the disinfection task and does not need to perform the motion planning again.


The vector space method needs to obtain the reference point cloud in advance. A relatively complete reference point cloud is usually collected, and it is described in the R_(L.R.F). The end effector of the robot is provided with a 3D camera, which collects the target point cloud. The target point cloud is also described in the same coordinate frame as the reference point cloud. Moreover, when there are a reference point cloud and a target point cloud, the point cloud registration algorithm is used to calculate the transform matrix. First, the Point Feature Histogram (PFH) of the reference point cloud and the target point cloud needs to be calculated. PFH features are not only related to the three-dimensional data of the coordinate axes, but are also related to the surface normal. Then, the Iterative Closest Point (ICP) algorithm is used to calculate the transformation relationship between these two different point clouds with their PFH features. The ICP algorithm outputs a result with a score value; a lower score means a more accurate and confident result. FIG. 16 shows the reference point cloud, the target point cloud and the output result. In FIG. 16 the white cloud represents the reference point cloud, and the blue cloud represents the target point cloud. The orange points represent the result of the reference point cloud being transformed by the output transformation matrix.
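A minimal point-to-point ICP sketch in Python is given below for illustration. It omits the PFH feature-matching stage described above and relies only on nearest-neighbour correspondences, so it is a simplified stand-in for the registration pipeline, not the actual implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(reference, target, iterations=50):
    """Estimate the rigid transform (R, t) mapping `reference` onto `target` (both N x 3)."""
    src = reference.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                 # nearest neighbours in the target cloud
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # avoid a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    score = np.mean(np.linalg.norm(src - target[tree.query(src)[1]], axis=1))
    return R_total, t_total, score               # lower score = better fit, as in the text
```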


After obtaining the transform relationship, the original motion plan that is attached to the movable object can be transferred to the target object by the output transform relationship. Therefore, the robot just needs to follow the given path and trajectory to disinfect the movable object without re-planning the entire path and trajectory for the movable object. The path of the motion planning is shown in FIG. 17. The green line represents the original path with the reference point cloud and the red line represents the target path with the target point cloud.


The idea of the non-vector space planning and control framework involves using the movable object point set observed at the R_(E.E.) directly as the system state, which directly ensures the relative spatial relationship from the end effector to the object and largely omits the feature extraction and vectorization computation cost of vector-space algorithms, such as localization.


In motion plan conversion, the original vector space motion plan is a series of UVC light (R_(E.E.)) trajectories (time-varying pose, velocity, and acceleration vectors) in the R_(W.R.F). On the other hand, the motion plan is described by a series of continuously evolving sets, called a “tube,” in non-vector space. By having a virtual end effector ideally travel along the mentioned trajectory in a simulation environment (or the real end effector move in the real world), the 3D scanner on the end effector simultaneously collects a series of point clouds of the object to be disinfected. Therefore the motion plan is converted from vectors described in the R_(W.R.F) to sets observed in the R_(E.E.). A motion plan conversion example from a vector-space line trajectory in the R_(W.R.F) (FIG. 18) to the non-vector space tube (first and last set in FIG. 19) is shown. The planned trajectory (Yd(t)/Yd(s)) in the vector space is shown in FIG. 18, where p1, p2 are two 3-dimensional points with a normal. In FIG. 19 the reference tube (K̂(t)/K̂(s)) in the non-vector space is a 3-dimensional point set.


As for the control process, the 3D scanner at the robot end effector obtains a segmented point set K(t) of the movable object in real time. A designed non-vector space controller then converges the Wasserstein distance, used as the control error from K(t) to K̂, by calculating the appropriate velocity of the end effector. During the control process, the movable object can still be moved, and the robot end effector will robustly adapt to such unexpected changes with the system shown in FIG. 20.
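As an illustration of the control idea only, the sketch below commands an end-effector velocity that shrinks the set error between K(t) and K̂. Here the mean nearest-neighbour displacement is used as a stand-in for the gradient of the Wasserstein distance driven by the actual non-vector space controller; the gain and speed limit are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def set_error_velocity(K_t, K_ref, gain=1.0, max_speed=0.05):
    """Hypothetical velocity command reducing the set error between the current point
    set K_t and the reference set K_ref (both N x 3 arrays)."""
    tree = cKDTree(K_ref)
    _, idx = tree.query(K_t)
    displacement = (K_ref[idx] - K_t).mean(axis=0)   # average transport direction
    v = gain * displacement
    speed = np.linalg.norm(v)
    if speed > max_speed:
        v *= max_speed / speed                        # clamp for safety near the surface
    return v
```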


In order to cover a larger object surface, a more complicated disinfection path can be expressed by more than one single tube in non-vector space. The snapshots in FIG. 21A show the start and end sets of three connected reference tubes in the R_(E.E.) during a complete chair disinfection in Wasserstein non-vector space. The directions of the three reference tubes are shown by arrows on one side, the front, the other side and the top. FIG. 21B shows a multiple-connected-tubes-following control framework in Wasserstein non-vector space. In particular, the current point set K(t) is received from the end effector. This set is combined with the desired point set K̂(s*) to derive the set error that provides Wasserstein space control for the end effector. The current point set is also provided to a distance calculator along with the perceptive reference tube set K̂(s) to generate the desired point set K̂(s*) used in the error calculation. In addition, the desired point set K̂(s*) and the set error are applied to the connected reference tube K̂i(s), which in turn is used to derive the perceptive reference tube set K̂(s) that is used in the distance calculator to derive the desired point set K̂(s*).



FIG. 22 shows the control process with unexpected object movements. In particular, the series of images shows the robot approaching a chair, the chair being moved, the robot finding the chair in its new location and the robot disinfecting the chair at the new location.


The vector space framework realizes high position control accuracy because it estimates the Euclidean transformation from the previously scanned object to the presently observed object once by point cloud registration algorithms. During the disinfection period, the end effector can approach the object closely with both an updated motion plan in R_(G.R.F) and closed-loop control in R_(G.R.F). However, if the shape of the observed object is different from the reference object's shape, the point cloud registration will fail. Also, if the object is moved during the control process, the point cloud registration must be performed one more time.


The non-vector space framework shows more robust performance when there are only partial observations of an object of interest, because these observations do not rely on the vector space transform precision of point cloud registration, and it handles unexpected movements of the object due to its closed-loop control in R_(E.E.) throughout the whole disinfection process without a feature extraction process. Also, a disinfection plan can be used on multiple objects with similar shapes because the set error is represented by the Wasserstein distance, which can describe not only an isometric transform but also a shape difference. The drawback of the non-vector space approach is that both the storage size and the computation cost of creating the sets are large. This is overcome by a point cloud voxel down-sampling filter and could be solved more efficiently by tools like compressive sensing.
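A voxel down-sampling filter of the kind mentioned above can be sketched as follows: one centroid is kept per occupied voxel, which bounds both the storage size and the per-cycle computation. The 2 cm voxel size is an illustrative default, not a parameter taken from the actual system.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.02):
    """Reduce a point cloud (N x 3) by keeping one centroid per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    out = np.zeros((inverse.max() + 1, points.shape[1]))
    counts = np.bincount(inverse)
    for dim in range(points.shape[1]):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```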


At the disinfection action level, there are still many situations, such as an object being occupied, an occupied object becoming free, an empty container and a full container, as shown in FIG. 23. If the robot uses the same strategy and action for all these situations, it will waste a lot of time and perform some meaningless operations. It may even sometimes do harm to the people who are using the object. Thus, when the robot faces different situations, it should deal with them using different strategies and actions. For example, if there is a sofa and a person is sitting on it, the robot should perform a skip action when the end effector is close to the person. If the robot disinfects a bookshelf and meets some empty space along the bookshelf, the robot should also perform a skip action to avoid disinfecting an empty space.


In facing the challenge of performing actions depending on the situation, the robot needs to detect the object state while it is performing disinfection tasks. After the occupied or empty states are detected, the system can rearrange its actions to skip disinfection or postpone it. Because the system uses event-based detection in its framework, and the event-based motion planning allows rearrangement of the actions without redoing the entire motion plan, its operation can be very efficient.


The most important thing in this embodiment is to detect and classify different situations, which is realized by an algorithm called an “action online planner”. According to the information from the RGB camera and the 3D camera, the situations can be classified. Through the RGB camera, the You Only Look Once (YOLO) algorithm can find different objects, such as chairs, sofas and humans, as shown in FIG. 24. In addition, the 3D camera, which is located at the front of the end effector, can help to detect an empty or full container because the point cloud information shows different distances when the container is full and when it is empty. The action online planner collects all this sensor information in real time and classifies the different situations based on this information. After that, the action online planner rearranges the action sequence, skipping the empty containers and occupied objects, and performing the disinfection action just on the free objects and full containers.
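A skeleton of such an action online planner is sketched below for illustration. The detection labels, the depth threshold used to flag empty shelf sections, and all function names are hypothetical; the actual planner fuses the YOLO detections and 3D camera data as described above.

```python
def plan_actions(waypoints, detections, depth_at, occupied_labels=("person",),
                 empty_threshold=1.0):
    """Rearrange the disinfection sequence: skip occupied objects (RGB detections)
    and empty container sections (large measured depth); names are illustrative.
    `detections` maps a waypoint index to detected labels; `depth_at(i)` gives the
    measured distance to the surface at waypoint i."""
    actions = []
    for i, wp in enumerate(waypoints):
        labels = detections.get(i, [])
        if any(lbl in occupied_labels for lbl in labels):
            actions.append((wp, "skip_occupied"))
        elif depth_at(i) > empty_threshold:          # far surface => empty shelf section
            actions.append((wp, "skip_empty"))
        else:
            actions.append((wp, "disinfect"))
    return actions
```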


While the invention is explained in relation to certain embodiments, it is to be understood that various modifications thereof will become apparent to those skilled in the art upon reading the specification. Therefore, it is to be understood that the invention disclosed herein is intended to cover such modifications as fall within the scope of the appended claims.

Claims
  • 1. A disinfection robot comprising: a mobile base with omni-directional wheels;a universally positionable manipulator with an end effector (E.E.);a UV lamp module attached to the end effector of the manipulator that includes a UV disinfection lamp and a ranging module using sensors to detect the distance of the lamp from a surface to be disinfected; andan electronic control system for planning a path for the UV lamp to be moved over the surface based on information from the sensors by controlling the directional wheels of the base and the movement of the manipulator.
  • 2. The disinfection robot according to claim 1 wherein the ranging module includes a multi-line laser scanner and a 360 degrees camera.
  • 3. The disinfection robot according to claim 2 wherein the ranging module is a LiDAR system.
  • 4. The disinfection robot according to claim 1 further including a remote processor with a transceiver and a transceiver connected to the electronic control circuit so the remote processor and electronic control circuit can communicate over a transmission link so that a user can control the robot remotely.
  • 5. The disinfection robot according to claim 4 wherein the remote processor and the electronic control circuit communicate over a WiFi transmission link.
  • 6. The disinfection robot according to claim 1 wherein the universally positionable manipulator has 3 rotational joints, which are located in three XYZ joint frames, and the mobile base is located in a separate XYZ frame.
  • 7. The disinfection robot according to claim 1 wherein the electronic control system acquires in task space from the sensors the velocity, acceleration, position and time according to the known initial and end points, and generates a plan for a trajectory to control the end-effector so it follows this trajectory.
  • 8. The disinfection robot according to claim 7 wherein the trajectory is divided into three sections: third-order, fifth-order and third-order trajectories; and after performing the third-order, fifth-order and third-order trajectories in the planner, the pose of each point in task space is transformed into joint space to control the robot's joint through an inverse kinematics and a Jacobian matrix so that commands can be published to control the joints based on the position and velocity, where the velocity and position correspond to a specific time.
  • 9. The disinfection robot according to claim 1 wherein the electronic control system contains a feedforward part and a feedback part and a controller for the manipulator, wherein a first part (the feedforward) is acquired through a Dynamics model to compute torque in the controller and the second part (feedback) is the position and velocity,wherein based on the position and velocity error, the position loop and velocity loop are constructed,wherein the feedforward of torque is combined with the position loop and velocity loop to calculate the decoupling output, the controller output is a torque whose value satisfies the requirement of the entire robot system in order to realize a target, including mutual influence between each joint; andwherein the torque output is provided to a driver for the manipulator, and the driver generates current to make manipulator motors operate according to the corresponding torque.
  • 10. The disinfection robot according to claim 1 wherein the electronic control system plans a path according to an object perceptive local planner method comprising the steps of: segmenting X(t) segments from a real-time point cloud;segmenting Xref(s) segments from a global map as the end effector's desired path;using the selected segments X(t) with a point cloud sliced Wasserstein based perceptive motion local planner to map the UV desired path compared to the actual UV path;updating the comparison; andupdating the speed profile of the end effector along the geodesics path, whereby the optimal transport path from X(t) to Xref (s) is established.
  • 11. The disinfection robot according to claim 10 wherein the comparisons is updated at a rate of 10 Hz and the speed profile is updated at a rate of 50 Hz.
  • 12. The disinfection robot according to claim 1 further including a time-of-flight (ToF) camera attached to the UV light module at the end-effector and a Light Detection and Ranging (LiDAR) device on the robot base, and wherein the method of planning the path comprises the steps of: receiving a present image, I(t), a present point cloud X(t) and camera_intrinsics.yaml & camera_tof_If.launch signals from the robot at a present point cloud segmentation processor;producing at the present point cloud segmentation processor a visual cone in the local frame and Xseg(t), which is delivered to a sliced perceptive object tracker;receiving UV_desired.txt, Map.ply and p_num(s) at a.reference point cloud generator, which in turn generates X′seg(s), which is applied to the sliced perceptive object tracker along with an EE State (velocity or acceleration) and EE constraints, wherein the EE constraints come from joint dynamic constraints and environmental information (e.g., friction) after passing through a dynamics module;causing the sliced perceptive object tracker to engage in trajectory planning with dynamic constraints on a geodesics path and Wasserstein barycentre as intersections from X to X′, wherein the tracker produces as an output the point cloud registration (based on sliced Wasserstein distance);passing the point cloud registration to a point cloud sliced Wasserstein based perceptive motion local planner, wherein the planner has its output tf: from/UV to/UV desired directed to the reference point cloud generator as an input along with X′seg(s) and LiDAR_FoV.yami;applying a second output of the planner to a calculation block which engages in e_calculation (known/map to/UV), which provides the input P_num(s) to the reference point cloud generator; andapplying an output p(T) from the tracker to a kinematics unit to create q(t), which is directed to the robot, and which in turn provides the present image and present point cloud applied to the present point cloud segmentation unit, whereby global localization is used as an initial condition, then it mainly compares reference point cloud and present point cloud in the end effector frame (E.E.F.) of the mobile manipulator based on Wasserstein distance, and then outputs rigid transformation between them at a low frequency and desired velocity along the optimal path of the end effector at a high frequency.
  • 13. The disinfection robot according to claim 1 for conducting a disinfection task with respect to a movable object, further including a 3D camera on the end effector of the robot, and wherein the electronic control system includes a disinfection motion planning module with a Local Reference Frame (R_(L.R.F)) attached to the movable object that carries out the steps of: obtaining a reference point cloud in advance;collecting a target point cloud with the 3D camera;utilizing a vector space with a point cloud registration algorithm to output a transformation matrix in the vector space based on the reference point cloud and the target point cloud, wherein the transformation matrix is acquired for converting from the reference point cloud to the target point cloud;transferring the original motion planning that is attached to the movable object to the target object according to the output transformation matrix; andcausing the robot end effector to follow a given path and trajectory to finish the disinfection of the movable object without being required to perform additional motion planning for the entire path and trajectory.
  • 14. The disinfection robot according to claim 13 wherein the point cloud registration algorithm comprises the steps of: calculating a Point Feature Histogram (PFH) of the reference point cloud and the target point cloud, wherein the. PFH features are related to the three-dimensional data of the coordinate axis and the surface normal; andcalculating the transformation relationship between the two different point clouds with their PFH features using the algorithm of Iterative Closest Point (ICP), which outputs a result with a score value where lower score means a more accurate and confident result.
  • 15. The disinfection robot according to claim 1 for conducting a disinfection task with respect to a movable object, further including a 3D camera on the end effector of the robot, and wherein the electronic control system includes a disinfection motion planning module using a movable object points set observed at E.E. directly as the system state, which directly ensures the relative spatial relationship from the end effector to the object;wherein the motion plan is a series of continuously evolving sets or “tube” in non-vector space generated by (a) expert demonstration in E.E. by joystick E.E. control or (b) conversion from a vector space trajectory in W.R.F., and the module comprises the steps of:having the 3D scanner on the end effector simultaneously collect a series of cloud points of the object to be disinfected while the E.E. is controlled by an expert in a disinfection demonstration process (a);converting the motion plan from vectors described in W.R.F. to sets observed in E.E. to the non-vector space tube while a virtual end effector with ToF LiDAR FoV culling point cloud collector travels in a described trajectory in a simulation environment (b);having the 3D scanner at the robot end effector obtain segmented points set K(t) of the movable object in real-time;using a designed non-vector space controller to converge the Wasserstein distance as the control error from K(t) to K̂ with theoretically and experimentally proven exponential stability by calculating the appropriate velocity of the end effector; andcausing the robot end effector to maintain a certain relative spatial relationship to the movable object and to impose effective UVC dosage without being required to perform additional motion planning for the entire path and trajectory.
CROSS-REFERENCE TO RELATED APPLICATION

The present application claims the benefit of priority to U.S. provisional patent application Ser. No. 63/270,246, filed Oct. 21, 2021, which is hereby incorporated by reference in its entirety.

Provisional Applications (1)
Number Date Country
63270246 Oct 2021 US