The invention comprises a robotic device that automates certain functions of the laboratory workbench, such as drawing liquid from one or more reservoirs, depositing the liquid in one or more wells, discarding a used pipette tip, and attaching a new pipette tip. The device is equipped with cameras and a machine vision module that enable it to identify and categorize all objects on a workbench, to determine whether a foreign or unknown object has entered the workbench during operation, and to issue an alert. The invention further comprises a computing device that receives natural language instructions from a user, translates the instructions into a middleware language, and then compiles them into device-specific control instructions, which it provides to the robotic device.
Biotechnology is a burgeoning field. A substantial amount of research and development is conducted in the laboratory through experiments. Experiments often require the execution of mundane but exacting actions, such as filling dozens of test tubes with exact quantities of various liquids. A reliable experiment requires consistency and accuracy in these actions. When such actions are performed by hand, it is very difficult to reproduce the same experiment multiple times or to scale the experiment to include additional material or steps.
In the prior art, these tasks often would be performed by a person, a tedious and error-prone endeavor.
The prior art also includes certain automated devices that can perform the measuring and mixing of liquids, such as a robot currently offered by manufacturer Opentrons. These prior art devices leverage the technology of 3D printers.
However, these prior art devices, such as robot 200, are difficult to program and require the user to understand a programming language or an arcane set of instructions or control signals specific to the device. Operation is difficult and tedious because a person either must manually input the location of each object or is limited to using equipment designed specifically for the device, such as a rack with specific types and numbers of wells and reservoirs. Moreover, these prior art devices cannot detect a foreign or unknown object (such as a user's hand or a fallen pipette), nor can they determine whether the material to be transported is absent, whether the material to be delivered is present in insufficient quantity, or whether the quantity of material to be delivered is incorrect. Such devices would keep operating even if a new object appeared on the workbench, which might result in an injury or broken materials, either of which could compromise the underlying experiment.
What is needed is an improved automated, robotic device for use in the laboratory that is easier to program, that can accommodate a typical laboratory workbench and a range of different materials, that can reproduce the same experiment any number of times with complete accuracy and consistency, that can scale to include additional materials or steps, and that can detect the introduction of a foreign or unknown object onto the workbench or other situations requiring user attention.
The invention comprises an automated robotic device that can draw liquid from one or more reservoirs and deposit the liquid into one or more wells. The device can discard a used pipette tip and attach a new pipette tip. The device is equipped with machine vision that allows it to identify and categorize all objects on a workbench, to determine whether a foreign or unknown object has entered the workbench during operation, and to issue an alert. The device is equipped with additional optical sensors (including basic cameras) and/or pressure touch sensors on pipettes that allow material levels in wells to be monitored. The device can be programmed using natural language instructions, which are translated into a middleware language and then compiled into device-specific control instructions. The device can reproduce the same experiment any number of times, and it can scale to include additional materials or steps.
The features and advantages described in this summary and the following detailed description are not all-inclusive. Many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings and specifications.
Server 510 operates translator 550 and compiler 560 (discussed below), as well as machine vision module 570. Machine vision module 570 obtains image and video data captured by stationary camera 330 and mobile camera 340 and executes image recognition algorithms on that data. Client device 520 provides user interface 580.
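As a non-limiting illustration, the components hosted by server 510 and client device 520 might be organized as in the following Python sketch (the class and method names are hypothetical and are chosen only to mirror the reference numerals above):

    # Hypothetical sketch only; names mirror the reference numerals
    # above and are not part of any claim.
    class MachineVisionModule:  # machine vision module 570
        def __init__(self, stationary_camera, mobile_camera):
            # Cameras 330 and 340 supply the image and video data.
            self.cameras = [stationary_camera, mobile_camera]

        def capture_frames(self):
            # Obtain one frame per camera; a read() method on each
            # camera object is assumed for this sketch.
            return [camera.read() for camera in self.cameras]

    class Server:  # server 510
        def __init__(self, translator, compiler, vision_module):
            self.translator = translator        # translator 550
            self.compiler = compiler            # compiler 560
            self.vision_module = vision_module  # machine vision module 570

    class ClientDevice:  # client device 520
        def __init__(self, user_interface):
            self.user_interface = user_interface  # user interface 580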
Server 510 generates a computing object for each physical object. The physical objects on workbench 400 correspond to workbench computing objects 620. In one embodiment, each computing object has an object type 621, such as reservoir, well, pipette tip, liquid, and other. Each computing object also can be assigned an Object ID 622, which is a unique identifier for the object. Coordinates 623 can be captured for the boundaries and/or the center of the physical object, and a content manifest 624 can record the presence and characteristics, such as depth, of any liquid in the object, ascertained, for example, by using a laser, an infrared sensor, or another sensor.
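By way of a non-limiting illustration, a workbench computing object 620 might be represented as the following Python record (the field names are hypothetical and simply track object type 621, Object ID 622, coordinates 623, and content manifest 624):

    from dataclasses import dataclass, field

    # Hypothetical representation of a workbench computing object 620.
    @dataclass
    class ComputingObject:
        object_type: str       # object type 621: "reservoir", "well",
                               # "pipette tip", "liquid", or "other"
        object_id: str         # Object ID 622: unique identifier
        coordinates: tuple     # coordinates 623: center (x, y) of the object
        content_manifest: dict = field(default_factory=dict)  # content manifest 624

    # Example: a well holding liquid to a depth of 4.2 mm, as
    # ascertained by a laser, infrared, or other depth sensor.
    well_a1 = ComputingObject("well", "WELL-A1", (12.0, 30.5),
                              {"liquid_present": True, "depth_mm": 4.2})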
A very skilled and trained bioinformatics programmer who wishes not to use natural language instructions 810 can instead provide instructions in this intermediate language, which can be directly translated into device-specific control language 830. Device-specific control language instructions 830 are in the language understood by controller 270. This language might be specific to controller 270, much like a device driver on a PC might be specific to a certain brand and type of peripheral. Notably, if a different controller 270 or automated robotic device 300 is used, the same intermediate language instructions 820 can be utilized, and compiler 560 can compile those instructions into a device-specific control language that is suitable for the different controller or automated robotic device.
Thus, intermediate language 730 and intermediate language instructions 820 are device-independent and therefore can be viewed as middleware. Natural language 710 also is device-independent.
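A minimal Python sketch of this two-stage pipeline follows. The toy sentence grammar and instruction formats are assumptions made only for illustration; the actual grammars of intermediate language 730 and device-specific control language 830 are not limited to this form:

    # Hypothetical pipeline: translator 550 maps natural language
    # instructions 810 to intermediate language instructions 820,
    # and compiler 560 maps those to device-specific control
    # language instructions 830.
    def translate(natural_language: str) -> list:
        # Toy parse of a sentence of the form:
        # "move 10 ul from reservoir 1 to well A1"
        tokens = natural_language.lower().split()
        volume = tokens[1]
        src = f"{tokens[4]}_{tokens[5]}"
        dst = f"{tokens[7]}_{tokens[8]}"
        return [f"TRANSFER {volume} {src} {dst}"]  # device-independent

    def compile_for_device(intermediate: list, device: str) -> list:
        # Only this stage is device-specific; the same intermediate
        # instructions can be compiled for a different controller.
        compiled = []
        for instruction in intermediate:
            op, volume, src, dst = instruction.split()
            if device == "controller_270":
                compiled.append(f"ASPIRATE {volume} @{src}")
                compiled.append(f"DISPENSE {volume} @{dst}")
            else:
                compiled.append(f"{device}:{op}:{volume}:{src}:{dst}")
        return compiled

    control = compile_for_device(
        translate("move 10 ul from reservoir 1 to well A1"),
        "controller_270")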
In this example, a new physical object 1020 has appeared on workbench 400. Physical object 1020 might be a user's hand, a piece of equipment that has broken or fallen (such as a pipette tip), or another physical object altogether. Server 510 will detect physical object 1020 and will determine that its coordinates do not match any known object. Server 510 then will generate alert 1030. Alert 1030 can include audio (e.g., a loud beep), light (e.g., a blinking red light), an email to the user, a text message (e.g., SMS or MMS message) to the user, other output on a user interface device (such as a text alert on the display), or other means of obtaining the user's attention. The user optionally can then stop automated robotic device 300 to remove physical object 1020.
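A non-limiting sketch of this check follows, reusing the hypothetical ComputingObject record above; the tolerance value and the coordinate-matching test are assumptions, and any suitable matching criterion could be substituted:

    # Hypothetical foreign-object check performed by server 510: a
    # detected object whose coordinates match no known computing
    # object is treated as foreign, and alert 1030 is generated.
    def matches(known, detected, tolerance=5.0):
        # Treat objects within `tolerance` units of a known object's
        # center coordinates as the same object.
        kx, ky = known.coordinates
        dx, dy = detected
        return abs(kx - dx) <= tolerance and abs(ky - dy) <= tolerance

    def check_workbench(known_objects, detected_coordinates, alert):
        for coords in detected_coordinates:
            if not any(matches(obj, coords) for obj in known_objects):
                # Foreign or unknown object, e.g., physical object 1020.
                alert(f"Unknown object detected at {coords}")
                return False  # caller may stop automated robotic device 300
        return True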
Other events that require user attention also can be identified and an alert generated.
One of ordinary skill in the art will appreciate that other exception handling mechanisms can be implemented by server 510.
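For example, as a minimal sketch under the same assumptions as above (the threshold test and field names are illustrative only), server 510 might verify before each transfer that sufficient material is present:

    # Hypothetical exception check: before a transfer, confirm that
    # the source object holds enough liquid per its content manifest 624.
    def check_sufficient_material(source, required_depth_mm, alert):
        if not source.content_manifest.get("liquid_present", False):
            alert(f"Material absent in {source.object_id}")
            return False
        depth = source.content_manifest.get("depth_mm", 0.0)
        if depth < required_depth_mm:
            alert(f"Insufficient material in {source.object_id}: "
                  f"{depth} mm available, {required_depth_mm} mm required")
            return False
        return True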
References to the present invention herein are not intended to limit the scope of any claim or claim term, but instead merely make reference to one or more features that may be covered by one or more of the claims. Materials, processes and numerical examples described above are exemplary only, and should not be deemed to limit the claims.
This application claims the benefit of U.S. Provisional Application No. 62/468,514, filed on Mar. 8, 2017, and titled “Robotic Device with Machine Vision and Natural Language Interface for Automating a Laboratory Workbench,” which is incorporated herein by reference.