The present disclosure relates to work safety equipment and, more specifically, to work safety equipment used for inspection and maintenance of confined work environments.
Some work environments, such as, for example, confined spaces, include areas with limited or restricted ingress or egress that are not designed for continuous occupancy. Work in confined work environments is typically regulated by the owner and/or operator of the confined work environment. Example confined work environments include, but are not limited to, manufacturing plants, coal mines, tanks, vessels, silos, storage bins, hoppers, vaults, pits, manholes, tunnels, equipment housings, ductwork, and pipelines.
In some situations, a confined space entry by one or more workers (e.g., entrants) may present inherent health or safety risks associated with a confined space, such as potential exposure to a hazardous atmosphere or material that may injure or kill entrants, material within the confined space that has the potential to trap or even engulf an entrant, walls or floors that have shifted or converged into a smaller area that may trap or asphyxiate an entrant, or unguarded machinery or potential stored energy (e.g., electrical, mechanical, or thermal) within equipment. Moreover, the occurrence of a safety event, e.g., outbreak of a fire or chemical spill within the confined space, may further put the entrant at risk. To help ensure safety of entrants, confined space entry procedures may include lockout-tagout of pipes, electrical lines, and moving parts associated with the confined space, purging the environment of the confined space, testing the atmosphere at or near entrances of the confined space, and monitoring of the confined space entry by an attendant (e.g., a worker designated as hole-watch).
The systems and techniques of this disclosure relate to improving work safety in work environments, such as confined spaces, by using machine vision to analyze location marking labels in a work environment to control an unmanned aerial vehicle (UAV) within the work environment. Although techniques of this disclosure are described with respect to confined spaces for example purposes, the techniques may be applied to any designated or defined region of a work environment. In some examples, the designated or defined region of the work environment may be delineated using geofencing, beacons, optical fiducials, RFID tags, or any other suitable technology for delineating a region or boundary of a work environment.
In some examples, an imaging device is mounted on a UAV to capture one or more images of a location marking label in a confined space. A processor communicatively coupled to the imaging device is configured to receive the one or more images of the location marking label. The processor also is configured to process the one or more images to decode data embedded on the location marking label. For example, the decodable data may include a location of the location marking label in the confined space or a command readable by the processor. Based on the data decoded from the location marking label, the processor is configured to control the UAV. For example, the processor may control navigation of the UAV or command the UAV to perform a task, such as observing hazards (e.g., gas monitoring) in the confined space or performing work in the confined space. In some examples, the imaging device may further capture one or more images of an entrant, e.g., in a man-down situation, and the processor may determine an approximate location of the entrant and/or observe hazards near the entrant, e.g., to relay to a rescue response team. In this way, the disclosed systems and techniques may improve work safety in confined spaces by enabling a UAV to navigate through the confined space to observe hazards in the confined space and/or perform work in the confined space. By observing hazards in the confined space and/or performing work in the confined space, the disclosed systems and techniques may reduce the number of entrants required for a confined space entry or entry-required rescue and/or reduce the duration of a confined space entry or entry-required rescue response time, thereby reducing entrant exposure to potential hazards in the confined space.
In some examples, the disclosure describes a system including a UAV that includes an imaging device and a processor communicatively coupled to the imaging device. The processor may be configured to receive, from the imaging device, an image of a confined space, detect a location marking label within the image, process the image to decode data embedded on the location marking label, and control navigation of the UAV within the confined space based on the data decoded from the location marking label.
In some examples, the disclosure describes a system including a confined space entry device that includes an imaging device and a processor communicatively coupled to the imaging device. The processor may be configured to receive, from the imaging device, an image of a confined space, detect a location marking label within the image, process the image to decode data embedded within the location marking label, and control navigation of the confined space entry device within the confined space based on the data decoded from the location marking label.
In some examples, the disclosure describes a method including deploying, into a confined space, an unmanned aerial vehicle (UAV), the UAV including an imaging device. The method also includes receiving, by a processor communicatively coupled to the imaging device, an image of the confined space captured by the imaging device. The method also includes detecting a location marking label within the image. The method also includes processing, by the processor, the image to decode data embedded on the location marking label. The method also includes controlling, by the processor, navigation of the UAV within the confined space based on the data decoded from the location marking label.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
The systems and techniques of this disclosure relate to improving work safety in work environments by using machine vision to analyze location marking labels in a work environment to control a work environment analysis device, such as an unmanned aerial vehicle (UAV), within the work environment. Although techniques of this disclosure are described with respect to confined space work environments for example purposes, the techniques may be applied to any designated or defined region of a work environment. For example, the designated or defined region of the work environment may be delineated by physical boundaries, such as a confined space vessel, or using, for example, geofencing, beacons, optical fiducials, RFID tags, or any other suitable technology for delineating a region or boundary of a work environment.
In some examples, an imaging device is mounted on a UAV and configured to capture one or more images of a confined space. In other examples, the imaging device may be mounted on a different vehicle or a device wearable by an entrant or attendant. A processor communicatively coupled to the imaging device is configured to receive the one or more images of the confined space. The processor may be mounted on-board the UAV (or other vehicle or wearable device), such that the imaging device and processor are components of the same confined space entry device, or remotely-located from the confined space entry device (e.g., a remote server or control station). The processor also is configured to detect a location marking label within the received image and process the one or more images to decode data embedded on the location marking label. For example, the data may include a location of the location marking label in the confined space or a command readable by the processor. Based on the data decoded from the location marking label, the processor is configured to control the UAV. For example, the processor may control navigation of the UAV or command the UAV to perform a task, such as observing hazards in the confined space (e.g., gas monitoring) or performing work in the confined space. In this way, the disclosed systems and techniques may improve work safety in confined spaces by enabling a UAV to navigate through the confined space to observe hazards in the confined space and/or perform work in the confined space. By observing hazards in the confined space and/or performing work in the confined space, the disclosed systems and techniques may reduce the number of entrants required for a confined space entry or entry-required rescue and/or reduce the duration of a confined space entry or entry-required rescue response time, thereby reducing entrant exposure to potential hazards in the confined space.
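The decode-and-control flow described above may be sketched as follows. The label payload format (JSON), the function names, and the navigation and command strings are illustrative assumptions for this sketch, not part of any particular label encoding described in this disclosure:

```python
import json

# Hypothetical payload format for a location marking label; the actual
# machine-readable encoding is an assumption made for illustration only.
def decode_label_payload(payload: str) -> dict:
    """Decode data embedded on a location marking label."""
    return json.loads(payload)

def handle_label(payload: str) -> str:
    """Dispatch on decoded label data: navigate to a location or run a command."""
    data = decode_label_payload(payload)
    if "location" in data:
        # Label encodes its own location in the confined space.
        x, y, z = data["location"]
        return f"navigate_to({x}, {y}, {z})"
    if "command" in data:
        # Label encodes a command readable by the processor.
        return f"execute({data['command']})"
    return "hover"  # no actionable data: hold position

# Example: a label encoding a location, and a label encoding a task command.
print(handle_label('{"location": [2.0, 1.5, 3.0]}'))
print(handle_label('{"command": "gas_monitoring"}'))
```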
Confined space 106 includes a confined work environment, such as areas with limited or restricted ingress or egress and not designed for continuous occupancy by humans. Confined space 106 has particularized boundaries delineating a volume, region, or area defined by physical characteristics. For example, confined space 106 may include a column having manholes 108 and 110, trays 112, 114, and 116, and circumferential wall 118. In other examples, confined space 106 may include, but is not limited to, a manufacturing plant, a coal mine, a tank, a vessel, a silo, a storage bin, a hopper, a vault, a pit, a manhole, a tunnel, an equipment housing, ductwork, and a pipeline. In some examples, confined space 106 includes internal structures, such as agitators, baffles, ladders, manways, passageways, or any other physical delineations. The particularized boundaries and internal structures define the interior space 120 of confined space 106. In some examples, confined space 106 may hold liquids, gases, or other substances that may be hazardous to the health or safety of an entrant, e.g., pose a risk of asphyxiation, toxicity, engulfment, or other injury. Confined space 106 may require specialized ventilation and evacuation systems for facilitating a temporarily habitable work environment, e.g., for a confined space entry. Although described with respect to confined space 106, the systems and techniques of the disclosure may be applied to any designated or defined region of a work environment. For example, the designated or defined region of the work environment may be delineated using, for example, geofencing, beacons, optical fiducials, RFID tags, or any other suitable technology for delineating a region or boundary of a work environment.
As shown in
UAV 102 is configured to enter confined space 106. For example, UAV 102 may be designed to fit within interior space 120, such as, for example, through manholes 108 or 110 and between wall 118 and trays 112, 114, or 116. In examples in which confined space 106 holds a particular liquid or gas, UAV 102 may be designed to operate in environments having the particular liquid or gas, such as, for example, in environments containing flammable and/or corrosive liquids and/or gases.
Confined space 106 includes one or more location marking labels 122A, 122B, 122C, 122D, 122E, 122F, and 122G (collectively, “location marking labels 122”). Location marking labels 122 may be located on an interior surface or an exterior surface of confined space 106. Each respective location marking label of location marking labels 122 is associated with a respective location in confined space 106. Each respective location marking label of location marking labels 122 includes at least one respective optical pattern embodied therein. The at least one optical pattern includes a machine-readable code (e.g., decodable data). In some examples, location marking labels 122 (e.g., the optical pattern embodied thereon) may include a retroreflective material layer. In some examples, the machine-readable code may be printed with infrared absorbing ink to enable an infrared camera to obtain images that can be readily processed to identify the machine-readable code. In some examples, location marking labels 122 include an adhesive layer for adhering location marking labels 122 to a surface of confined space 106. In some examples, location marking labels 122 include an additional mirror film layer that is laminated over the machine-readable code. The mirror film may be infrared transparent such that the machine-readable code is not visible in ambient light but readily detectable within images obtained by an infrared camera (e.g., with some instances of imaging device 104). Additional description of a mirror film is found in PCT Appl. No. PCT/US2017/014031, filed Jan. 19, 2017, which is incorporated by reference herein in its entirety. The machine-readable code is unique to a respective location marking label of location marking labels 122, e.g., a unique identifier, unique location data, and/or unique command data. In this way, system 100 may use the machine-readable code to identify a location of UAV 102 inside confined space 106 or command system 100 to perform an operation.
Location marking labels 122 are embodied on a surface of confined space 106 so as to be visible such that imaging device 104 may obtain images of location marking labels 122 when UAV 102 is inside confined space 106. Location marking labels 122 may be any suitable size and shape. In some examples, location marking labels 122 include a rectangular shape between approximately 1 centimeter by 1 centimeter and approximately 1 meter by 1 meter, such as approximately 15 centimeters by 15 centimeters. In some examples, each location marking label of location marking labels 122 may be embodied on a label or tag affixed to a variety of types of surfaces of interior 120 of confined space 106, such as, for example, floors, walls (e.g., wall 118), ceilings, or other internal structures (e.g., trays 112, 114, or 116), using an adhesive, clip, or other fastening means to be substantially immobile with respect to interior 120 of confined space 106. In such examples, location marking labels 122 may be referred to as “optical tags” or “optical labels.” By being affixed to a surface of interior 120 of confined space 106, location marking labels 122 may be associated with a specific location within confined space 106.
In some examples, a respective location marking label of location marking labels 122 may be embodied on a label or tag affixed to a variety of types of exterior surfaces of confined space 106. By being affixed to an exterior surface of confined space 106, location marking labels 122 (e.g., location marking label 122G) may be associated with a specific exterior feature of confined space 106, such as manhole 110 or other ingress to confined space 106.
In some examples, confined space 106 is manufactured with location marking labels 122 embodied thereon. In some examples, location marking labels 122 may be printed, stamped, engraved, or otherwise embodied directly on a surface of interior 120 of confined space 106. In some examples, location marking labels 122 may include a protective material layer, such as a thermal or chemical resistant film. In some examples, a mix of types of embodiments of location marking labels 122 may be present in confined space 106. For example, a first respective location marking label of location marking labels 122 may be printed on a surface of interior 120 of confined space 106, while a second respective location marking label of location marking labels 122 is printed on a label affixed to a surface of interior 120 of confined space 106. In this way, location marking labels 122 may be configured to withstand conditions within confined space 106 during operation of the confined space, such as, for example, non-ambient temperatures, pressures, and/or pH, fluid and/or material flow, presence of solvents or corrosive chemicals, or the like.
Each respective location marking label of location marking labels 122 may have a relative spatial relation with respect to each different location marking label of location marking labels 122. The relative spatial relation of location marking labels 122 may be recorded in a repository of system 100 configured to store a model of confined space 106. The model may include a location of each respective location marking label of location marking labels 122 within confined space 106. For example, location marking label 122D is a specific distance and trajectory from location marking labels 122E and 122F. In some examples, imaging device 104 may view each of 122D and 122E and/or 122F from a location of UAV 102 within confined space 106. By viewing each of 122D and 122E and/or 122F, system 100 may determine the relative location of UAV 102 within confined space 106. In some examples, an anomaly in the relative spatial relation (e.g., an altered or displaced relative spatial relation) with respect to location marking labels 122 may indicate damage to interior 120 of confined space 106. For example, by viewing each of 122B and 122A and/or 122C, system 100 may determine that location marking label 122B is displaced, e.g., that portion 124 of tray 112 is displaced or otherwise damaged such that location marking label 122B is displaced from a location of location marking label 122B in the model. In this way, system 100 may determine a relative location of UAV 102 within confined space 106 and/or determine a condition present in confined space 106, such as a displaced surface of interior 120 of confined space 106. By determining a relative location of UAV 102 within confined space 106 and/or determining a condition present in confined space 106, system 100 may determine a path of travel of UAV 102 (e.g., at least one distance vector and at least one trajectory) to a second location within confined space 106, or determine that repair to interior 120 is required.
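The model-based anomaly check described above may be sketched as follows. The label coordinates, the displacement tolerance, and the data layout are illustrative assumptions, not values from this disclosure:

```python
import math

# Stored model of confined space 106: label id -> expected (x, y, z) position
# in meters. The coordinates are illustrative assumptions.
MODEL = {
    "122A": (0.0, 0.0, 2.0),
    "122B": (1.0, 0.0, 2.0),
    "122C": (2.0, 0.0, 2.0),
}

def displaced_labels(observed: dict, tolerance: float = 0.05) -> list:
    """Return ids of labels whose observed position deviates from the model
    by more than `tolerance` meters, indicating possible damage to the
    surface the label is affixed to."""
    anomalies = []
    for label_id, expected in MODEL.items():
        if label_id not in observed:
            continue  # label not seen in the current images
        if math.dist(observed[label_id], expected) > tolerance:
            anomalies.append(label_id)
    return anomalies

# Label 122B observed 0.3 m below its modeled position, e.g., a displaced tray.
observed = {"122A": (0.0, 0.0, 2.0), "122B": (1.0, 0.0, 1.7), "122C": (2.0, 0.0, 2.0)}
print(displaced_labels(observed))  # ['122B']
```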
In this way, system 100 may control navigation of UAV 102 within confined space 106 based on the data decoded from a respective location marking label of location marking labels 122.
Imaging device 104 obtains and stores, at least temporarily, images 126D, 126E, and 126F (collectively, “images 126”) of interior 120 of confined space 106. Each respective image of images 126 may include a respective location marking label of location marking labels 122. In some examples, computing device 103, communicatively coupled to imaging device 104, receives images 126 from imaging device 104 in near real-time for near real-time processing. Imaging device 104 may capture multiple images 126 at a given frequency at a position and orientation of imaging device 104. For instance, imaging device 104 may capture an instance of images 126 once every second.
Imaging device 104 may be an optical camera, video camera, infrared or other non-human-visible spectrum camera, or a combination thereof. Imaging device 104 may be mounted by a fixed mount or an actuatable mount, e.g., moveable along one or more degrees of freedom, on UAV 102. Imaging device 104 includes a wired or wireless communication link with computing device 103. For instance, imaging device 104 may transmit images 126 to computing device 103 or to a storage system communicatively coupled to computing device 103 (not shown in
Computing device 103 includes a processor configured to process one or more images of images 126 to decode data embedded on location marking labels 122. Computing device 103 may detect a respective location marking label of location marking labels 122 within a respective image of images 126. In some examples, computing device 103 may detect location marking labels 122 based at least in part on a general boundary, optical pattern, color, reflectivity (e.g., reflectivity of a selected wavelength of radiation, such as infrared radiation), or the like of location marking labels 122. Computing device 103 also may process one or more images of images 126 to identify the machine-readable codes of location marking labels 122. For example, in examples in which confined space 106 holds a material hazardous to UAV 102 (e.g., dust, liquids, or gas that may damage UAV 102), a respective location marking label of location marking labels 122 (e.g., location marking label 122G) may enable UAV 102 to determine that UAV 102 should not enter confined space 106. Additionally, or alternatively, a processor of computing device 103 may process one or more images of images 126 to determine a spatial relation between one or more location marking labels 122 and UAV 102. To determine the spatial relation between one or more location marking labels 122 and UAV 102, computing device 103 may determine, from one or more images of images 126 and, optionally, a model of location marking labels 122 within confined space 106, a position of each respective location marking label of the one or more location marking labels 122 and/or an orientation of each respective location marking label of the one or more location marking labels 122 with respect to a coordinate system relative to UAV 102.
For example, computing device 103 may process one image of images 126 to determine the spatial relation between UAV 102 and a respective location marking label of location marking labels 122, such as a distance of UAV 102 from the respective location marking label of location marking labels 122 and/or an orientation of UAV 102 relative to the respective location marking label of location marking labels 122. The spatial relation may indicate that UAV 102 (or imaging device 104) is a distance from a respective location marking label of location marking labels 122, e.g., 3 meters. The spatial relation may indicate that UAV 102 (or imaging device 104) has a relative orientation to a respective location marking label of location marking labels 122, e.g., 90 degrees. The spatial relation may indicate that a different respective location marking label of location marking labels 122 is located a distance and direction vector from a current location of UAV 102 (e.g., UAV 102 may locate a second respective location marking label of location marking labels 122 based on the spatial relation between UAV 102 and a first respective location marking label of location marking labels 122).
In some examples, computing device 103 may process at least one image of images 126 to determine the distance of UAV 102 from the respective location marking label of location marking labels 122 by determining a resolution of the respective location marking label of location marking labels 122 in the at least one image of images 126. For example, a first resolution of the respective location marking label of location marking labels 122 may include decodable data indicating that imaging device 104 is a first distance from the respective location marking label of location marking labels 122 during acquisition of a first image of images 126. Similarly, a second resolution of the respective location marking label of location marking labels 122 may include second decodable data indicating that imaging device 104 is a second distance from the respective location marking label of location marking labels 122 during acquisition of a second image of images 126.
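The resolution-based distance estimate described above may be sketched with a pinhole-camera model. The focal length is an illustrative assumption; the 15-centimeter label size follows the example dimensions given earlier:

```python
# Pinhole-camera sketch: estimate distance from the apparent size (in pixels)
# of a label of known physical size. FOCAL_LENGTH_PX is an assumed camera
# parameter, not a value from the disclosure.
FOCAL_LENGTH_PX = 800.0   # focal length expressed in pixels
LABEL_SIZE_M = 0.15       # 15 cm x 15 cm label, per the example sizes above

def distance_from_apparent_size(apparent_px: float) -> float:
    """Distance (m) at which a LABEL_SIZE_M-wide label spans apparent_px pixels."""
    return FOCAL_LENGTH_PX * LABEL_SIZE_M / apparent_px

# The label spans 60 px in a first image and 120 px in a second image:
# the UAV has halved its distance to the label between acquisitions.
d1 = distance_from_apparent_size(60.0)   # 2.0 m
d2 = distance_from_apparent_size(120.0)  # 1.0 m
print(d1, d2)
```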
Additionally, or alternatively, computing device 103 may process at least one image of images 126 to determine an orientation of UAV 102 (e.g., based on a known orientation of imaging device 104 relative to UAV 102) relative to the respective location marking label of location marking labels 122. For example, a respective location marking label of location marking labels 122 may include decodable data indicating an orientation of the respective location marking label of location marking labels 122 relative to confined space 106, e.g., the at least one image of images 126 may indicate an orientation of a coordinate system relative to interior 120 of confined space 106. In this way, computing device 103 may determine a location and/or an orientation of UAV 102 within confined space 106 based on data decoded from at least one image of images 126 of at least one location marking label of location marking labels 122.
Additionally, or alternatively, computing device 103 may process at least one image of images 126 to determine an orientation of a respective location marking label of location marking labels 122 relative to confined space 106 (e.g., based on a known orientation of imaging device 104 relative to UAV 102 or relative to other location marking labels of location marking labels 122 having a known orientation). For example, a respective location marking label of location marking labels 122 may include decodable data indicating an orientation of the respective location marking label of location marking labels 122. Computing device 103 may associate an orientation of UAV 102 (e.g., based on a known orientation of imaging device 104 relative to UAV 102 or relative to other location marking labels of location marking labels 122 having a known orientation) with the determined orientation of the respective location marking label of location marking labels 122 to determine an orientation of the respective location marking label of location marking labels 122 relative to confined space 106. In this way, computing device 103 may determine a location and/or an orientation of a respective location marking label of location marking labels 122 within confined space 106 based on data decoded from at least one image of images 126 of the respective location marking label of location marking labels 122.
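The orientation reasoning above may be sketched as a simple two-dimensional yaw composition. The sign convention (the label's apparent yaw as seen from the UAV equals the label's yaw in the confined space minus the UAV's yaw) is an illustrative assumption:

```python
# 2D yaw-composition sketch of relating a label's decoded orientation in the
# confined space to the UAV's orientation. The angle convention is an assumption.
def uav_yaw_in_space(label_yaw_in_space_deg: float,
                     label_yaw_rel_uav_deg: float) -> float:
    """UAV yaw in the confined-space frame, from the label's decoded yaw and
    the label's apparent yaw in the captured image."""
    return (label_yaw_in_space_deg - label_yaw_rel_uav_deg) % 360.0

# A label decoded as facing 90 degrees in the confined space, observed at
# 30 degrees relative to the UAV: the UAV faces 60 degrees in the space.
print(uav_yaw_in_space(90.0, 30.0))  # 60.0
```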
Additionally or alternatively, computing device 103 may use one or more algorithms, such as simultaneous localization and mapping (SLAM) algorithms, to process at least one image of images 126 to determine the spatial relation between UAV 102 and at least one respective location marking label of location marking labels 122, such as a distance of UAV 102 from at least one respective location marking label of location marking labels 122 and/or an orientation of UAV 102 relative to at least one respective location marking label of location marking labels 122. Identifiable key points in SLAM processing may include at least one respective location marking label of location marking labels 122. Computing device 103 may determine, e.g., by SLAM processing, a three-dimensional point cloud or mesh including a model of confined space 106 based on at least one respective location marking label of location marking labels 122. Computing device 103 may be configured to record in a repository of system 100 the three-dimensional point cloud or mesh as a model of confined space 106. The three-dimensional point cloud or mesh may provide a relatively higher definition model of confined space 106 that may be used by computing device 103 to improve an ability of computing device 103 to process relatively lower resolution images 126. For example, in examples in which images 126 include relatively lower resolution images (e.g., images obtained in conditions, such as smoke, debris, or low light, inside confined space 106 that obscure or otherwise reduce the resolution of the images), computing device 103 may use the three-dimensional point cloud or mesh determined by SLAM processing to improve the usability of the relatively lower resolution images (e.g., by registering at least a portion of the relatively lower resolution images 126 to the relatively higher resolution three-dimensional point cloud or mesh).
Additionally, or alternatively, system 100 may include an environmental sensor communicatively coupled to computing device 103 and mounted on UAV 102. The environmental sensor may include, but is not limited to, a multi-gas detector for testing flammable gas levels (e.g., a percentage of the lower explosive limit (LEL)), toxic gases (e.g., hydrogen sulfide, carbon monoxide, etc.), and/or oxygen levels (e.g., oxygen depletion), a temperature sensor, a pressure sensor, or the like. Computing device 103 may, based on a command decoded from at least one image of images 126, cause the environmental sensor to collect environmental information in confined space 106. In this way, computing device 103 may determine an environmental condition, such as presence of harmful gases, dangerously low or high oxygen levels, or hazardous temperature or pressure, within confined space 106.
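Evaluating one multi-gas reading against alarm setpoints may be sketched as follows. The numeric limits are typical industry alarm values used here as assumptions, not values from this disclosure:

```python
# Illustrative atmosphere check against common confined-space alarm setpoints.
# The limits below are typical industry values, assumed for this sketch.
LIMITS = {
    "lel_pct": 10.0,      # flammable gas, % of lower explosive limit
    "h2s_ppm": 10.0,      # hydrogen sulfide
    "co_ppm": 35.0,       # carbon monoxide
    "o2_min_pct": 19.5,   # oxygen depletion threshold
    "o2_max_pct": 23.5,   # oxygen enrichment threshold
}

def atmosphere_hazards(reading: dict) -> list:
    """Return a list of hazards detected in one multi-gas sensor reading."""
    hazards = []
    if reading["lel_pct"] >= LIMITS["lel_pct"]:
        hazards.append("flammable")
    if reading["h2s_ppm"] >= LIMITS["h2s_ppm"]:
        hazards.append("h2s")
    if reading["co_ppm"] >= LIMITS["co_ppm"]:
        hazards.append("co")
    if not LIMITS["o2_min_pct"] <= reading["o2_pct"] <= LIMITS["o2_max_pct"]:
        hazards.append("oxygen")
    return hazards

# Elevated hydrogen sulfide and depleted oxygen in one reading:
reading = {"lel_pct": 0.0, "h2s_ppm": 12.0, "co_ppm": 5.0, "o2_pct": 18.9}
print(atmosphere_hazards(reading))  # ['h2s', 'oxygen']
```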
As another example, computing device 103 may process a plurality of images 126 (e.g., two or more images of images 126) to determine the spatial relation between UAV 102 and a plurality of location marking labels 122 (e.g., two or more location marking labels of location marking labels 122). For example, computing device 103 may process images 126D, 126E, and 126F of, respectively, location marking labels 122D, 122E, and 122F to determine a location and/or orientation of UAV 102 within confined space 106. In some examples, computing device 103 may process each respective image (e.g., images 126D, 126E, and 126F), as discussed above, to determine and compare locations and/or orientations of UAV 102 relative to the respective location marking label (e.g., location marking labels 122D, 122E, and 122F). For example, computing device 103 may use a plurality of distances of UAV 102 from location marking labels 122D, 122E, and 122F determined from images 126D, 126E, and 126F to triangulate the location of UAV 102 within confined space 106. In this way, computing device 103 may determine a location and/or an orientation of UAV 102 within confined space 106 based on data decoded from a plurality of images 126 of a plurality of location marking labels 122. Using a plurality of images 126 of a plurality of location marking labels 122 may allow system 100 to more accurately determine a location and/or an orientation of UAV 102 within confined space 106.
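The triangulation described above may be sketched as planar trilateration from distances to three labels at known model positions. The label coordinates and the two-dimensional simplification are illustrative assumptions:

```python
import math

# 2D trilateration sketch: recover a UAV position from distances to three
# labels at known model positions by linearizing the circle equations.
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve for (x, y) given three non-collinear anchors and distances."""
    # Subtracting the first circle equation from the others gives two linear
    # equations: 2*(pi - p1) . x = (|pi|^2 - di^2) - (|p1|^2 - d1^2)
    ax, ay = 2 * (p2[0] - p1[0]), 2 * (p2[1] - p1[1])
    bx, by = 2 * (p3[0] - p1[0]), 2 * (p3[1] - p1[1])
    c1 = (p2[0] ** 2 + p2[1] ** 2 - d2 ** 2) - (p1[0] ** 2 + p1[1] ** 2 - d1 ** 2)
    c2 = (p3[0] ** 2 + p3[1] ** 2 - d3 ** 2) - (p1[0] ** 2 + p1[1] ** 2 - d1 ** 2)
    det = ax * by - ay * bx  # zero if the anchors are collinear
    return (c1 * by - c2 * ay) / det, (ax * c2 - bx * c1) / det

# Labels 122D, 122E, 122F at assumed positions; the UAV is truly at (1.0, 1.0).
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
dists = [math.dist((1.0, 1.0), a) for a in anchors]
x, y = trilaterate(*anchors, *dists)
print(round(x, 6), round(y, 6))  # 1.0 1.0
```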
In some examples, system 100 includes additional components, such as, for example, a remotely-located control station 128 communicatively coupled to computing device 103 and/or imaging device 104. For example, remotely-located control station 128 may be communicatively coupled to computing device 103 and/or imaging device 104 by any suitable wireless connection, including, for example, via a network 130, such as a local area network. Remotely-located control station 128 may include an interface operable by a user, such as a human operator or a machine.
In some examples, system 100 may be configured to respond to an entry-required rescue situation in confined space 106, e.g., when an entrant is disabled and unable to be retrieved by non-entry means. For example, UAV 102 may be deployed in confined space 106 to search for a disabled entrant. Imaging device 104 may be configured to capture images 126 of interior 120, as discussed above. Computing device 103 may obtain images 126 from imaging device 104 to determine if images 126 include the disabled entrant. For example, computing device 103 may include image recognition software to identify characteristics of optical images of the disabled entrant such as a shape of an entrant, an optical tag associated with (e.g., attached to PPE worn by) the disabled entrant, or an anomaly in interior 120 caused by the presence of the disabled entrant. As another example, computing device 103 may include image recognition software to identify infrared characteristics of the disabled entrant such as infrared radiation emitted by the disabled entrant. In some examples, system 100 may both determine a location of UAV 102 within confined space 106, as discussed above, and identify a man-down within confined space 106. For example, in response to identifying the disabled entrant, system 100 may then determine a location of UAV 102, as discussed above. In this way, computing device 103 may identify the disabled entrant and determine the approximate location of the disabled entrant within confined space 106. In response to identifying a man-down, system 100 may optionally determine an environmental condition within confined space 106. In some examples, system 100 may provide environmental condition information to a rescue response team, e.g., via a remotely-located control station 128, and/or determine whether environmental conditions allow for safe rescue of the disabled entrant. 
In this way, system 100 may reduce the number of entrants required for an entry-required rescue of the disabled entrant, reduce the duration of the entry-required rescue, and/or reduce exposure of rescuers to environmental conditions within confined space 106 that may injure potential rescuers.
Other examples involving other types of confined space 106, other internal structures within confined space 106, and/or local environmental conditions within confined space 106 are contemplated.
UAV 200 is a rotorcraft, typically referred to as a multicopter. The example design shown in
UAV 200 includes one or more supporting struts 206A, 206B, 206C, and 206D (collectively, “supporting struts 206”) that connect each rotor of rotors 202 to at least one other rotor of rotors 202 (e.g. that connect each rotor/shroud assembly to at least one other rotor/shroud assembly). Supporting struts 206 provide overall structural rigidity to UAV 200.
UAV 200 includes computing device 210. Computing device 210 includes a power source for powering the UAV and a processor for controlling the operation of UAV 200. Computing device 210 may include additional components configured to operate UAV 200 such as, for example, communication units, data storage modules, gyroscopes, servos, and the like. Computing device 210 may be mounted on one or more supporting struts 206. In some examples, computing device 210 may include firmware and/or software that include a flight control system. The flight control system may generate flight control instructions. For example, flight control instructions may be sent to rotors 202 to control operation of rotors 202. In some examples, flight control instructions may be based on flight-control parameters autonomously calculated by computing device 210 (e.g., an on-board guidance system or an on-board homing system) and/or based at least partially on input received from a remotely-located control station. In some examples, computing device 210 may include an on-board autonomous navigation system (e.g. a GPS-based navigation system). In some examples, as discussed above with respect to
In some examples, UAV 200 may include one or more wireless transceivers 208. Wireless transceivers 208 may send signals to and receive signals from a remotely-located control station, such as, for example, a remote controller operated by a user. Wireless transceiver 208 may be communicatively coupled to computing device 210 to, for example, relay signals from wireless transceiver 208 to computing device 210, and vice versa.
UAV 200 includes one or more imaging devices 212. As discussed above, computing device 210 may receive images from imaging device 212. In some examples, imaging device 212 may wirelessly transmit real-time images (e.g. as a continuous or quasi-continuous video stream, or as a succession of still images) via wireless transceiver 208 to a remotely-located control station operated by a user. This can allow the user to guide UAV 200 over at least a portion of the aerial flight path by operation of flight controls of the remotely-located control station, with reference to real-time images displayed on a display screen of the control station. In some examples, two or more such real-time image acquisition devices may be present; one capable of scanning at least in a downward direction, and one capable of scanning at least in an upward direction. In some examples, such a real-time image acquisition device may be mounted on a gimbal or swivel mount 214 so that the device can scan upwards and downwards, and e.g. in different horizontal directions.
Any of the components mentioned above (e.g. computing device 210, wireless transceiver 208, imaging device 212) may be located at any suitable position on UAV 200, e.g., along supporting struts 206. Such components may be relatively exposed or one or more such components may be located partially or completely within a protective housing (with a portion, or all, of the housing being transparent if it is desired e.g. to use an image acquisition device that is located within the housing). In some examples, UAV 200 may include additional components such as environmental sensors and payload carriers.
Imaging device 302 may be the same as or substantially similar to imaging device 104 of
Environmental sensor 324 is communicatively coupled to computing device 304. Environmental sensor 324 may include any suitable environmental sensor for mounting to confined space entry device 300, e.g., UAV 102 or UAV 200. For example, environmental sensor 324 may include a multi-gas sensor, a thermocouple, a pressure transducer, or the like. In this way, environmental sensor 324 may be configured to detect gases (e.g., flammable gas lower explosive limit, oxygen level, hydrogen sulfide, and/or carbon monoxide), temperature, pressure, or the like to enable confined space entry device 300 to monitor and/or provide alerts of environmental conditions that pose health and/or safety hazards to entrants.
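As one illustration of how readings from such a sensor might be screened for alerts, the sketch below compares gas measurements against alarm set points. The threshold values shown are common industry guidance for confined-space atmospheres, not values taken from this disclosure, and the function names are hypothetical.

```python
# Illustrative alarm set points for the readings named above.
# Values reflect typical industry guidance, not this disclosure.
THRESHOLDS = {
    "lel_pct": 10.0,      # flammable gas, % of lower explosive limit
    "o2_min_pct": 19.5,   # minimum safe oxygen concentration
    "o2_max_pct": 23.5,   # maximum safe oxygen concentration
    "h2s_ppm": 10.0,      # hydrogen sulfide
    "co_ppm": 35.0,       # carbon monoxide
}

def check_atmosphere(lel_pct, o2_pct, h2s_ppm, co_ppm):
    """Return a list of alert names for any reading outside limits."""
    alerts = []
    if lel_pct >= THRESHOLDS["lel_pct"]:
        alerts.append("flammable gas")
    if not THRESHOLDS["o2_min_pct"] <= o2_pct <= THRESHOLDS["o2_max_pct"]:
        alerts.append("oxygen")
    if h2s_ppm >= THRESHOLDS["h2s_ppm"]:
        alerts.append("hydrogen sulfide")
    if co_ppm >= THRESHOLDS["co_ppm"]:
        alerts.append("carbon monoxide")
    return alerts
```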
Computing device 304 may include one or more processors 306, one or more communication units 308, one or more input devices 310, one or more output devices 312, power source 314, and one or more storage devices 316. One or more storage devices 316 may store image processing module 318, navigation module 320, and command module 322. One or more of the devices, modules, storage areas, or other components of confined space entry device 300 may be interconnected to enable inter-component communications (physically, communicatively, and/or operatively). In some examples, such connectivity may be provided by a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data.
Power source 314 may provide power to one or more components of confined space entry device 300. In some examples, power source 314 may be a battery. In some examples, power source 314 may receive power from a primary alternating current (AC) power supply. In some examples, confined space entry device 300 and/or power source 314 may receive power from another source.
One or more input devices 310 of confined space entry device 300 may generate, receive, or process input. Such input may include input from a keyboard, pointing device, voice responsive system, environmental detection system, biometric detection/response system, button, sensor, mobile device, control pad, microphone, presence-sensitive screen, network, or any other type of device for detecting input from a human or a machine. One or more output devices 312 of confined space entry device 300 may generate, transmit, or process output. Examples of output are tactile, audio, visual, and/or video output. Output devices 312 may include a display, sound card, video graphics adapter card, speaker, presence-sensitive screen, one or more USB interfaces, video and/or audio output interfaces, or any other type of device capable of generating tactile, audio, video, or other output. Output devices 312 may include a display device, which may function as an output device using technologies including liquid crystal displays (LCD), quantum dot display, dot matrix displays, light emitting diode (LED) displays, organic light-emitting diode (OLED) displays, cathode ray tube (CRT) displays, e-ink, or monochrome, color, or any other type of device for generating tactile, audio, and/or visual output. In some examples, confined space entry device 300 may include a presence-sensitive display that may serve as a user interface device that operates both as one or more input devices 310 and one or more output devices 312.
One or more communication units 308 of computing device 304 may communicate with devices external to confined space entry device 300 by transmitting and/or receiving data, and may operate, in some respects, as both an input device and an output device. In some examples, communication units 308 may communicate with other devices over a network, e.g., imaging device 302, external computing devices, hubs, and/or remotely-located control stations. In other examples, one or more communication units 308 may send and/or receive radio signals on a radio network such as a cellular radio network. In other examples, one or more communication units 308 may transmit and/or receive satellite signals on a satellite network such as a Global Positioning System (GPS) network. Examples of one or more communication units 308 may include a network interface card (e.g. such as an Ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that can send and/or receive information. Other examples of one or more communication units 308 may include Bluetooth®, GPS, 3G, 4G, and Wi-Fi® radios found in mobile devices as well as Universal Serial Bus (USB) controllers and the like.
One or more processors 306 of confined space entry device 300 may implement functionality and/or execute instructions associated with confined space entry device 300. Examples of one or more processors 306 may include microprocessors, application processors, display controllers, auxiliary processors, one or more sensor hubs, and any other hardware configured to function as a processor, a processing unit, or a processing device. Confined space entry device 300 may use one or more processors 306 to perform operations in accordance with one or more aspects of the present disclosure using software, hardware, firmware, or a mixture of hardware, software, and firmware residing in and/or executing at confined space entry device 300.
One or more storage devices 316 within computing device 304 may store information for processing during operation of confined space entry device 300. In some examples, one or more storage devices 316 are temporary memories, meaning that a primary purpose of the one or more storage devices is not long-term storage. One or more storage devices 316 within computing device 304 may be configured for short-term storage of information as volatile memory and therefore not retain stored contents if deactivated. Examples of volatile memories may include random access memories (RAM), dynamic random-access memories (DRAM), static random-access memories (SRAM), and other forms of volatile memories known in the art. One or more storage devices 316, in some examples, also include one or more computer-readable storage media. One or more storage devices 316 may be configured to store larger amounts of information than volatile memory. One or more storage devices 316 may further be configured for long-term storage of information as non-volatile memory space and retain information across power on/off cycles. Examples of non-volatile memories may include magnetic hard disks, optical discs, floppy disks, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. One or more storage devices 316 may store program instructions and/or data associated with one or more of the modules described in accordance with one or more aspects of this disclosure.
One or more processors 306 and one or more storage devices 316 may provide an operating environment or platform for one or more modules, which may be implemented as software, but may in some examples include any combination of hardware, firmware, and software. One or more processors 306 may execute instructions and one or more storage devices 316 may store instructions and/or data of one or more modules. The combination of one or more processors 306 and one or more storage devices 316 may retrieve, store, and/or execute the instructions and/or data of one or more applications, modules, or software. One or more processors 306 and/or one or more storage devices 316 may also be operably coupled to one or more other software and/or hardware components, including, but not limited to, one or more of the components illustrated in
One or more modules illustrated in
One or more storage devices 316 stores image processing module 318. In some examples, image processing module 318 includes a data structure that maps optical pattern codes embodied on a location marking label to a unique identifier and/or location information. In some examples, image processing module 318 may include an associative data structure (e.g., a repository) including a model that includes locations of each respective location marking label within a confined space. Image processing module 318 may use the model to map a unique identifier to a location within a confined space.
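A minimal sketch of the associative data structure described above, assuming decoded optical pattern codes are strings and the confined-space model stores three-dimensional coordinates. All codes, identifiers, and coordinates below are hypothetical placeholders.

```python
# Hypothetical repository: a decoded optical pattern code maps to a
# unique label identifier and that label's location in the model of
# the confined space. All values are illustrative.
LABEL_MAP = {
    "code-0xA1": {"uid": "label-001", "location": (2.0, 0.5, 1.2)},
    "code-0xA2": {"uid": "label-002", "location": (6.5, 0.5, 1.2)},
}

def resolve_label(pattern_code):
    """Map a decoded optical pattern code to (uid, location), or None."""
    entry = LABEL_MAP.get(pattern_code)
    if entry is None:
        return None
    return entry["uid"], entry["location"]
```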
One or more storage devices 316 stores navigation module 320. Navigation module 320 may include a list of rules defining possible paths of travel and/or maneuvers within a confined space. For example, navigation module 320 may use a database, a list, a file, or other structure to map optical pattern codes on a location marking label to distance vector and/or trajectory information defining a path of travel between one or more location marking labels and/or a maneuver to be performed at or near the location marking label (e.g., landing in a predetermined location). Additionally, or alternatively, navigation module 320 may use data embodied on a respective location marking label to determine distance vector and/or trajectory information defining a path of travel between the respective location marking label and one or more different location marking labels. In examples in which confined space entry device 300 includes a wearable device, navigation module 320 may output, e.g., via output devices 312, a navigational message that includes one or more of an audible message and a visual message. By associating paths of travel with a respective location marking label, navigation module 320 may enable confined space entry device 300 to determine, and optionally execute, navigation through a confined space.
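The rule structure described above might be sketched as a table mapping each label's unique identifier to a distance vector toward the next label and an optional maneuver. All identifiers, vectors, and maneuver names below are hypothetical.

```python
# Hypothetical navigation rules: each label maps to a distance vector
# (in meters, label-relative) toward the next label, plus an optional
# maneuver to perform at or near the label.
NAV_RULES = {
    "label-001": {"next": "label-002", "vector": (4.5, 0.0, 0.0), "maneuver": None},
    "label-002": {"next": None, "vector": (0.0, 0.0, -1.2), "maneuver": "land"},
}

def plan_step(current_uid):
    """Return (distance vector, maneuver) for the label just decoded."""
    rule = NAV_RULES[current_uid]
    return rule["vector"], rule["maneuver"]
```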
One or more storage devices 316 stores command module 322. Command module 322 may include a list of commands defining possible operations to be performed by confined space entry device 300. For example, command module 322 may use a database, a list, a file, or other structure to map optical pattern codes on a location marking label to data defining an operation to be performed by confined space entry device 300 at or near a location marking label. Additionally, or alternatively, command module 322 may use data embodied on a respective location marking label to determine distance vector and/or trajectory information defining a path of travel to a location where an operation is to be performed by confined space entry device 300. Example tasks include, but are not limited to, sampling (e.g., sampling gases, temperature, or the like in the local environment, or retrieving a product sample), performing a maneuver (e.g., landing in a predetermined location), imaging (e.g., an area within the confined space), cleaning (e.g., cleaning a component such as a sensor within the confined space), performing work (e.g., repairing a component such as a sensor within the confined space), or retrieving data from a remote server. By associating one or more tasks with a respective location marking label, command module 322 may enable confined space entry device 300 to conserve resources such as, for example, battery, processing power, sampling capability, or the like.
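The mapping from pattern codes to operations can be sketched as a dispatch table. The codes and operations below are placeholders for illustration, not tasks defined in this disclosure.

```python
# Hypothetical operations that might be bound to location marking
# labels; each returns a stand-in result for illustration.
def sample_gas():
    return "gas-sample"

def capture_area_image():
    return "image"

def land_here():
    return "landed"

# Dispatch table: decoded pattern code -> operation to perform.
COMMANDS = {
    "code-0xA1": sample_gas,
    "code-0xA2": capture_area_image,
    "code-0xA3": land_here,
}

def execute_command(pattern_code):
    """Look up and run the operation bound to a decoded pattern code."""
    op = COMMANDS.get(pattern_code)
    return op() if op is not None else None
```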
In some examples, confined space entry device 300 may include a user interface module for display of processor 306 outputs via output devices 312 or to enable an operator to configure image processing module 318, navigation module 320, and/or command module 322. In some examples, output devices 312 receive from navigation module 320, via processor 306, audio, visual, or tactile instructions understandable by a human or machine to navigate through a confined space. In some examples, input devices 310 may receive user input including configuration data for image processing module 318, e.g., optical patterns associated with a respective location marking label, navigation module 320, e.g., a model including locations of each location marking label within a confined space, and command module 322, e.g., possible tasks to be performed at each respective location marking label. The user interface module may process the configuration data and update the image processing module 318, navigation module 320, and/or command module 322 using the configuration data.
In some cases, the code operates as a more generalized version of a code in which a full rectangular retroreflective substrate is available and the error correction code is left fully intact for recovery and verification. The location finder uses all corners of the code, and an alternating white/black pattern along the top edge allows a single system to differentiate and decode multiple code sizes.
In some examples, location marking label 400 is printed onto 3M High Definition License Plate Sheeting Series 6700 with a black ink using an ultraviolet (UV) inkjet printer, such as MIMAKI UJF-3042HG or 3M™ Precision Plate System to produce an optical tag. The ink may contain carbon black as the pigment and be infrared absorptive (i.e., appears black when viewed by an infrared camera). The sheeting may include a pressure-sensitive adhesive layer that allows the printed tag to be laminated onto surfaces within a confined space. In some examples, the location marking label 400 is visible to the user. In some examples, an additional layer of mirror film can be laminated over the sheeting with the printed location marking label 400, thereby hiding the printed location marking label 400 from the unaided eye. As the mirror film is transparent to infrared light, an infrared camera can still detect the location marking label 400 behind the mirror film, which may also improve image processing precision. The mirror film can also be printed with an ink that is infrared transparent without interfering with the ability for an infrared camera to detect the location marking label 400. In some examples, location marking label 400 may include one or more additional protective layers, such as, for example, a protective film configured to resist deterioration in environments within a confined space (e.g., temperature or chemical resistance protective films).
In some examples, location marking label 400 may be generated to include one or more layers that avoid the high reflectivity of a mirror film but are infrared transparent, such that the machine-readable code is not visible in ambient light but readily detectable within images obtained by an infrared camera. This construction may be less distracting to workers or other users. For example, location marking label 400 may include a white mirror film, such as those disclosed in PCT/US2017/014031, incorporated herein by reference in its entirety, on top of a retroreflective material. The radiometric properties of the retroreflective light of a location marking label may be measured with an Ocean Optics Spectrometer (model number FLAME-S-VIS-NIR), light source (model HL-2000-FHSA), and reflectance probe (model QR400-7-VIS-BX) over a geometry of 0.2-degree observation angle and 0-degree entrance angle, as shown by percent of reflectivity (R%) over a wavelength range of 400-1000 nanometers.
In general, any material that prevents the conforming layer material from contacting cube corner elements 512 or flowing or creeping into low refractive index area 538 can be used to form the barrier layer. Exemplary materials for use in barrier layer 534 include resins, polymeric materials, dyes, inks (including color-shifting inks), vinyl, inorganic materials, UV-curable polymers, multi-layer optical films (including, for example, color-shifting multi-layer optical films), pigments, particles, and beads. The size and spacing of the one or more barrier layers 534 can be varied. In some examples, one or more barrier layers 534 may form a pattern on the retroreflective sheet. In some examples, one may wish to reduce the visibility of the pattern on the sheeting. In general, any desired pattern can be generated by combinations of the described techniques, including, for example, indicia such as letters, words, alphanumerics, symbols, graphics, logos, or pictures. The patterns can also be continuous, discontinuous, monotonic, dotted, serpentine, any smoothly varying function, stripes, varying in the machine direction, the transverse direction, or both; the pattern can form an image, logo, or text, and the pattern can include patterned coatings and/or perforations. The pattern can include, for example, an irregular pattern, a regular pattern, a grid, words, graphics, images, lines, and intersecting zones that form cells.
The low refractive index area 538 is positioned between (1) one or both of barrier layer 534 and conforming layer 532 and (2) cube corner elements 512. The low refractive index area 538 facilitates total internal reflection such that light that is incident on cube corner elements 512 adjacent to a low refractive index area 538 is retroreflected. As is shown in
Low refractive index area 538 includes a material that has a refractive index that is less than about 1.30, less than about 1.25, less than about 1.2, less than about 1.15, less than about 1.10, or less than about 1.05. In general, any material that prevents the conforming layer material from contacting cube corner elements 512 or flowing or creeping into low refractive index area 538 can be used as the low refractive index material. In some examples, barrier layer 534 has sufficient structural integrity to prevent conforming layer 532 from flowing into a low refractive index area 538. In such examples, low refractive index area may include, for example, a gas (e.g., air, nitrogen, argon, and the like). In other examples, low refractive index area includes a solid or liquid substance that can flow into or be pressed into or onto cube corner elements 512. Example materials include, for example, ultra-low index coatings (those described in PCT Patent Application No. PCT/US2010/031290), and gels.
The portions of conforming layer 532 that are adjacent to or in contact with cube corner elements 512 form non-optically active (e.g., non-retroreflective) areas or cells. In some examples, conforming layer 532 is optically opaque. In some examples, conforming layer 532 has a white color.
In some examples, conforming layer 532 is an adhesive. Example adhesives include those described in PCT Patent Application No. PCT/US2010/031290. Where the conforming layer is an adhesive, the conforming layer may assist in holding the entire retroreflective construction together and/or the viscoelastic nature of barrier layers 534 may prevent wetting of cube tips or surfaces either initially during fabrication of the retroreflective article or over time.
In the example of
Additional example implementations of a retroreflective article for embodying an optical pattern are described in U.S. patent application Ser. No. 14/388,082, filed Mar. 29, 2013, which is incorporated by reference herein in its entirety. Additional description is found in U.S. Provisional Appl. No. 62/400,865, filed Sep. 28, 2016; 62/485,449, filed Apr. 14, 2017; 62/400,874, filed Sep. 28, 2016; 62/485,426, filed Apr. 14, 2017; 62/400,879, filed Sep. 28, 2016; 62/485,471, filed Apr. 14, 2017; and 62/461,177, filed Feb. 20, 2017; each of which is incorporated herein by reference in its entirety.
The technique of
The technique of
The technique of
The technique of
In some examples, processing the image to decode data may include processing, by computing device 103, e.g., processor 306, a plurality of resolutions of the image. For example, a first resolution of the image may include a first data set and a second resolution of the image may include a second data set. The first (e.g., lower) resolution of a respective image may include decodable data indicative of a unique identifier of the respective location marking label of location marking labels. The second (e.g., higher) resolution of the respective image may include decodable data indicative of the position of UAV 102 within confined space 106.
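A sketch of this two-resolution approach follows, using a stand-in decoder whose recoverable data grows with pixel count. The downsampling method, detail threshold, and decoded values are all hypothetical; a real decoder would read the label's optical pattern cells.

```python
def downsample(image, factor=2):
    """Nearest-neighbor downsample: keep every `factor`-th pixel."""
    return [row[::factor] for row in image[::factor]]

def decode(image):
    """Stand-in decoder: the amount of recoverable data scales with
    resolution. The detail threshold and values are hypothetical."""
    detail = len(image) * len(image[0])
    data = {"uid": "label-001"}
    if detail >= 16:  # only the finer resolution yields position data
        data["position"] = (1.0, 2.0, 0.5)
    return data

def process_label(image):
    """Decode the identifier at coarse resolution, position at fine."""
    coarse = decode(downsample(image))  # first (lower) resolution
    fine = decode(image)                # second (higher) resolution
    return coarse["uid"], fine.get("position")
```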
In some examples, as discussed above, processing may include determining, by the processor, an anomaly in the confined space based on the data decoded from the (first) location marking label of location marking labels 122 and the data decoded from the second location marking label of location marking labels 122.
The technique of
In some examples, the technique optionally includes determining, by computing device 103, e.g., processor 306, a landing location for UAV 102 based on the data decoded from the location marking label 122. For example, the data decoded from the location marking label 122 may include a landing location. The landing location may be remote from the location of the location marking label 122.
In some examples, the technique optionally includes controlling, by computing device 103, e.g., processor 306, communicatively coupled to environmental sensor 324 mounted on UAV 102, environmental sensor 324 to collect local environment information. For example, environmental sensor 324 may be configured to detect gases (e.g., flammable gas lower explosive limit, oxygen level, hydrogen sulfide, and/or carbon monoxide), temperature, pressure, or the like. By controlling environmental sensor 324 to collect local environmental information, the technique of
In some examples, the technique optionally includes repeating capturing, by imaging device 104, an image; receiving, by computing device 103, e.g., processor 306, the image; processing, by computing device 103, e.g., processor 306, the image; and controlling, by computing device 103, e.g., processor 306, UAV 102. For example, the technique may include capturing, by imaging device 104, a second image of images 126 of a second location marking label of location marking labels 122 in confined space 106. The technique also may include receiving, by computing device 103, e.g., processor 306, the second image of images 126 of the second location marking label of location marking labels 122. The technique also may include processing, by computing device 103, e.g., processor 306, the second image of images 126 to decode data embedded within the second location marking label of location marking labels 122. The technique also may include controlling, by computing device 103, e.g., processor 306, UAV 102 based on the data decoded from the second location marking label of location marking labels 122. In some examples, as discussed above, processing may include determining, by computing device 103, e.g., processor 306, a position and/or an orientation of UAV 102 within confined space 106 based on the data decoded from the (first) location marking label of location marking labels 122 and the data decoded from the second location marking label of location marking labels 122. In this way, the technique may include using a plurality of images of a plurality of location marking labels to control navigation or an operation of a confined space entry device.
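The repeated capture, receive, process, and control sequence can be sketched as a loop. Here `capture`, `decode_label`, and `apply_control` are hypothetical stand-ins for imaging device 104, the decoding step, and the flight-control step, respectively; the stop condition and step limit are illustrative assumptions.

```python
def run_label_loop(capture, decode_label, apply_control, max_steps=10):
    """Repeat capture -> decode -> control until a decoded label
    signals a stop or max_steps is reached. Returns the sequence of
    label identifiers visited."""
    history = []
    for _ in range(max_steps):
        image = capture()             # imaging device captures an image
        data = decode_label(image)    # processor decodes the label data
        history.append(data["uid"])
        apply_control(data)           # control the UAV from decoded data
        if data.get("stop"):          # hypothetical terminal label flag
            break
    return history
```

A usage sketch with simulated labels: `run_label_loop(lambda: None, lambda img: next(labels), lambda d: None)` walks the device from one label to the next until a terminal label is decoded.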
Various examples have been described. These and other examples are within the scope of the following claims.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/IB2019/053780 | 5/8/2019 | WO | 00
Number | Date | Country
---|---|---
62671042 | May 2018 | US