This disclosure generally relates to identifying deficiencies in objects, and more specifically to systems and methods for identifying potential deficiencies in railway environment objects.
Traditionally, railroad inspectors inspect railroads for unsafe conditions and recommend actions to correct the unsafe conditions. For example, a railroad inspector may encounter a buckled railroad track and report the buckled railroad track to a railroad company. In response to receiving the report, the railroad company may take action to repair the buckled railroad track. However, the corrective action may not be performed in time to prevent the occurrence of an accident such as a train derailment.
According to an embodiment, a method includes capturing, by a machine vision device, an image of an object in a railway environment. The machine vision device is attached to a first train car that is moving in a first direction along a first railroad track of the railway environment. The method also includes analyzing, by the machine vision device, the image of the object using one or more machine vision algorithms to determine a value associated with the object. The method further includes determining, by the machine vision device, that the value associated with the object indicates a potential deficiency of the object and communicating, by the machine vision device, an alert to a component external to the first train car. The alert comprises an indication of the potential deficiency of the object.
In certain embodiments, the potential deficiency of the object is one of the following: a misalignment of a second railroad track; a malfunction of a crossing warning device; an obstructed view of a second railroad track; damage to the object; or a misplacement of the object. In some embodiments, the first railroad track of the railway environment is adjacent to a second railroad track of the railway environment, the component external to the first train car is attached to a second train car that is moving in a second direction along the second railroad track, and the alert instructs the second train car to perform an action. In certain embodiments, the component external to the first train car is a device located within a network operations center.
In some embodiments, the alert includes at least one of the following: a description of the object; a description of the potential deficiency; the image of the object; a location of the object; a time when the object was captured by the machine vision device of the first train car; a date when the object was captured by the machine vision device of the first train car; an identification of the first train car; an indication of the first direction of the first train car; and an indication of one or more train cars that are scheduled to pass through the railway environment within a predetermined amount of time. In certain embodiments, the machine vision device captures the image of the object and communicates the alert to the component external to the first train car in less than ten seconds. The machine vision device may be mounted to a front windshield of the first train car.
According to another embodiment, a system includes one or more processors and a memory storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations including capturing, by a machine vision device, an image of an object in a railway environment. The machine vision device is attached to a first train car that is moving in a first direction along a first railroad track of the railway environment. The operations also include analyzing the image of the object using one or more machine vision algorithms to determine a value associated with the object. The operations further include determining that the value associated with the object indicates a potential deficiency of the object and communicating an alert to a component external to the first train car. The alert comprises an indication of the potential deficiency of the object.
According to yet another embodiment, one or more computer-readable storage media embody instructions that, when executed by a processor, cause the processor to perform operations including capturing, by a machine vision device, an image of an object in a railway environment. The machine vision device is attached to a first train car that is moving in a first direction along a first railroad track of the railway environment. The operations also include analyzing the image of the object using one or more machine vision algorithms to determine a value associated with the object. The operations further include determining that the value associated with the object indicates a potential deficiency of the object and communicating an alert to a component external to the first train car. The alert comprises an indication of the potential deficiency of the object.
Technical advantages of certain embodiments of this disclosure may include one or more of the following. Certain systems and methods described herein include a machine vision device that analyzes railway environments for safety critical aspects such as track misalignments, malfunctioning warning devices, obstructed views of railroad tracks, pedestrians near railroad tracks, and washouts. In certain embodiments, the machine vision device detects and reports potential deficiencies in railway environments in real-time, which may lead to immediate corrective action and the reduction/prevention of accidents. In some embodiments, the machine vision device automatically detects deficiencies in railway environments, which may reduce costs and/or safety hazards associated with on-site inspectors.
Other technical advantages will be readily apparent to one skilled in the art from the following figures, descriptions, and claims. Moreover, while specific advantages have been enumerated above, various embodiments may include all, some, or none of the enumerated advantages.
To assist in understanding the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 illustrates an example system for identifying potential deficiencies in railway environment objects;
FIG. 2 illustrates an example image that may be captured by a machine vision device of the system of FIG. 1;
FIG. 3 illustrates another example image that may be captured by a machine vision device of the system of FIG. 1;
FIG. 4 illustrates an example method for identifying potential deficiencies in railway environment objects; and
FIG. 5 illustrates an example computer system that may be used by the systems and methods described herein.
FIG. 1 illustrates an example system 100 for identifying potential deficiencies in railway environment objects. In the illustrated embodiment, system 100 includes network 110, railway environment 120, railroad tracks 130, train cars 140, machine vision devices 150, network operations center 180, and user equipment (UE) 190. Network 110 of system 100 may be any type of network that facilitates communication between components of system 100. For example, network 110 may connect machine vision device 150a to machine vision device 150b of system 100. As another example, network 110 may connect machine vision devices 150 to UE 190 of network operations center 180 of system 100. One or more portions of network 110 may include an ad-hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a cellular telephone network, a 3G network, a 4G network, a 5G network, a Long Term Evolution (LTE) cellular network, a combination of two or more of these, or other suitable types of networks. One or more portions of network 110 may include one or more access (e.g., mobile access), core, and/or edge networks. Network 110 may be any communications network, such as a private network, a public network, a connection through the Internet, a mobile network, a WI-FI network, a Bluetooth network, etc. Network 110 may include cloud computing capabilities. One or more components of system 100 may communicate over network 110. For example, machine vision devices 150 may communicate over network 110, including transmitting information (e.g., potential deficiencies) to UE 190 of network operations center 180 and/or receiving information (e.g., confirmed deficiencies) from UE 190 of network operations center 180.
Railway environment 120 of system 100 is an area that includes one or more railroad tracks 130. Railway environment 120 may be associated with a division and/or a subdivision. The division is the portion of the railroad under the supervision of a superintendent. The subdivision is a smaller portion of the division. The subdivision may be a crew district and/or a branch line. In the illustrated embodiment of FIG. 1, railway environment 120 includes railroad track 130a and railroad track 130b.
Railroad tracks 130 of system 100 are structures that allow train cars 140 to move by providing a surface for the wheels of train cars 140 to roll upon. In certain embodiments, railroad tracks 130 include rails, fasteners, railroad ties, ballast, etc. Train cars 140 are vehicles that carry cargo and/or passengers on a rail transport system. In certain embodiments, train cars 140 are coupled together to form trains. Train cars 140 may include locomotives, passenger cars, freight cars, boxcars, flatcars, tank cars, and the like.
In the illustrated embodiment of FIG. 1, train car 140a travels along railroad track 130a in direction of travel 160a, and train car 140b travels along adjacent railroad track 130b in direction of travel 160b.
Machine vision devices 150 of system 100 are components that automatically capture, inspect, evaluate, and/or process still or moving images. Machine vision devices 150 may include one or more cameras, lenses, sensors, optics, lighting elements, etc. In certain embodiments, machine vision devices 150 perform one or more actions in real-time or near real-time. For example, machine vision device 150a of train car 140a may capture an image of an object (e.g., railroad track 130b) of railway environment 120 and communicate an alert indicating a potential deficiency (e.g., track misalignment 170) to a component (e.g., machine vision device 150b or UE 190 of network operations center 180) external to train car 140a in less than a predetermined amount of time (e.g., one, five, or ten seconds).
In certain embodiments, machine vision devices 150 include one or more cameras that automatically capture images of railway environment 120 of system 100. Machine vision devices 150 may automatically capture still or moving images while train cars 140 are moving along railroad tracks 130. Machine vision devices 150 may automatically capture any suitable number of still or moving images. For example, machine vision devices 150 may automatically capture a predetermined number of images per second, per minute, per hour, etc. In certain embodiments, machine vision devices 150 automatically capture a sufficient number of images to capture the entire lengths of railroad tracks 130 within a predetermined area (e.g., a division or subdivision).
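As a non-limiting illustration of how a capture rate might be chosen, the following sketch estimates the images per second needed for consecutive frames to cover the track with overlap; the speed, frame length, and overlap values are assumptions for illustration rather than parameters specified by this disclosure.

```python
# Hypothetical sketch (assumed numbers, not from the disclosure): estimate
# how many images per second a machine vision device must capture so that
# consecutive frames image the track continuously with some overlap.

def required_capture_rate(train_speed_mps: float,
                          frame_length_m: float,
                          overlap_fraction: float = 0.2) -> float:
    """Return frames per second needed to image every point of track.

    train_speed_mps: train speed in meters per second.
    frame_length_m: length of track visible in a single frame, in meters.
    overlap_fraction: fraction of each frame that should overlap the next.
    """
    effective_advance = frame_length_m * (1.0 - overlap_fraction)
    return train_speed_mps / effective_advance

# Example: a train at 25 m/s (~90 km/h) with 10 m of track per frame and
# 20% overlap needs about 3.1 images per second.
if __name__ == "__main__":
    print(round(required_capture_rate(25.0, 10.0), 1))
```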
Machine vision device 150a of system 100 is attached to train car 140a. Machine vision device 150a may be attached to train car 140a in any suitable location that provides a clear view of railroad track 130a. For example, machine vision device 150a may be attached to a front end (e.g., front windshield) of train car 140a to provide a forward-facing view of railroad track 130a. As another example, machine vision device 150a may be attached to a back end (e.g., a back windshield) of train car 140a to provide a rear-facing view of railroad track 130a. In certain embodiments, machine vision device 150a captures images of railway environment 120 as train car 140a moves along railroad track 130a in direction of travel 160a.
Machine vision device 150b of system 100 is attached to train car 140b. Machine vision device 150b may be attached to train car 140b in any suitable location that provides a clear view of railroad track 130b. For example, machine vision device 150b may be attached to a front end (e.g., front windshield) of train car 140b to provide a forward-facing view of railroad track 130b. As another example, machine vision device 150b may be attached to a back end (e.g., a back windshield) of train car 140b to provide a rear-facing view of railroad track 130b. In certain embodiments, machine vision device 150b captures images of railway environment 120 as train car 140b moves along railroad track 130b in direction of travel 160b.
Machine vision devices 150 may inspect the captured images for objects. The objects may include railroad tracks 130, debris 172 (e.g., rubble, wreckage, ruins, litter, trash, brush, etc.), pedestrians 174 (e.g., trespassers), animals, vegetation, ballast, and the like. In some embodiments, machine vision devices 150 may use machine vision algorithms to analyze the objects in the images. Machine vision algorithms may recognize objects in the images and classify the objects using image processing techniques and/or pattern recognition techniques.
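The following minimal sketch illustrates this inspect-and-classify flow; the detector interface, class names, and confidence cutoff are assumptions rather than details of this disclosure, which does not name a specific model or algorithm.

```python
# Minimal sketch of inspecting a captured image for objects. The `detector`
# object and its detect() method are hypothetical placeholders.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    label: str                       # e.g., "railroad_track", "debris", "pedestrian"
    confidence: float                # classification score in [0, 1]
    bbox: Tuple[int, int, int, int]  # (x, y, width, height) in pixels

def inspect_image(image, detector, min_confidence: float = 0.5) -> List[DetectedObject]:
    """Run an assumed object detector and keep only confident detections."""
    detections = detector.detect(image)  # hypothetical detector API
    return [d for d in detections if d.confidence >= min_confidence]
```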
In certain embodiments, machine vision devices 150 use machine vision algorithms to analyze the objects in the images for exceptions. Exceptions are deviations in the object as compared to an accepted standard. Exceptions may include track misalignment (e.g., a curved, warped, twisted, or offset track) of one or more railroad tracks 130 (e.g., track misalignment 170 of railroad track 130b), debris 172 exceeding a predetermined size that is located on one or more railroad tracks 130 or within a predetermined distance of one or more railroad tracks 130, a pedestrian 174 (e.g., a trespasser) located on or within a predetermined distance of railroad tracks 130, a malfunction of a crossing warning device, an obstructed view of railroad tracks 130, damage to the object (e.g., a washout of the support surface of one or more railroad tracks 130), misplacement of the object, and the like.
In some embodiments, machine vision devices 150 may determine a value associated with the object and compare the value with a predetermined threshold (e.g., a predetermined acceptable value) to determine whether the object presents an exception. For example, machine vision device 150 may determine that track misalignment 170 of railroad track 130b of FIG. 1 presents an exception because a curvature value determined for railroad track 130b exceeds a predetermined acceptable curvature value.
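A value-versus-threshold comparison of this kind might be sketched as follows; the measurement names and acceptable values below are illustrative assumptions, not values from this disclosure.

```python
# Illustrative sketch of comparing a measured value against a predetermined
# acceptable value; the thresholds below are assumptions.
ACCEPTABLE_VALUES = {
    "track_curvature": 0.05,  # assumed maximum acceptable curvature value
    "debris_size_m": 0.3,     # assumed maximum acceptable debris size (meters)
}

def presents_exception(measurement: str, value: float) -> bool:
    """Return True when a measured value exceeds its acceptable value."""
    return value > ACCEPTABLE_VALUES[measurement]

# Example: presents_exception("track_curvature", 0.08) returns True,
# indicating a potential track misalignment.
```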
Machine vision devices 150 may communicate one or more alerts to one or more components of system 100. The alerts may include indications of the exceptions (e.g., deficiencies) determined by machine vision devices 150. In certain embodiments, machine vision device 150a of FIG. 1 communicates an alert indicating a potential deficiency (e.g., track misalignment 170 of railroad track 130b) to machine vision device 150b of train car 140b and/or to UE 190 of network operations center 180.
In certain embodiments, alerts generated by machine vision devices 150 may include one or more of the following: a description of the object (e.g., railroad track 130b); a description of the potential deficiency (e.g., track misalignment 170); the image of the object; a location of the object (e.g., a Global Positioning System (GPS) location of track misalignment 170 of railroad track 130b); a time when the object was captured by machine vision device 150 of train car 140; a date when the object was captured by machine vision device 150 of train car 140; an identification of train car 140 (e.g., train car 140a or train car 140b); an indication of direction of travel 160 of train car 140; an indication of one or more train cars that are scheduled to pass through railway environment 120 within a predetermined amount of time; and the like. In some embodiments, machine vision device 150a of FIG. 1 captures the image of the object and communicates the alert to the component external to train car 140a in less than a predetermined amount of time (e.g., one, five, or ten seconds).
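One possible shape for such an alert is sketched below; the field names and types are assumptions chosen to mirror the contents enumerated above, not a format defined by this disclosure.

```python
# A sketch of an alert payload mirroring the fields listed above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DeficiencyAlert:
    object_description: str      # e.g., "railroad track 130b"
    deficiency_description: str  # e.g., "track misalignment"
    image_ref: Optional[str]     # reference to the captured image
    latitude: float              # GPS location of the object
    longitude: float
    captured_time: str           # time the image was captured
    captured_date: str           # date the image was captured
    train_car_id: str            # identification of the reporting train car
    direction_of_travel: str     # e.g., "southbound"
    # train cars scheduled to pass through within a predetermined time
    scheduled_train_cars: List[str] = field(default_factory=list)
```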
Network operations center 180 of system 100 is a facility with one or more locations that houses support staff who manage transportation-related traffic. For example, network operations center 180 may monitor, manage, and/or control the movement of trains across states, provinces, and the like. Network operations center 180 may include transportation planning technology to facilitate collaboration between employees associated with network operations center 180. The employees may include dispatchers (e.g., train dispatchers), support staff, crew members, engineers (e.g., train engineers), team members (e.g., security team members), maintenance planners, superintendents (e.g., corridor superintendents), field inspectors, and the like. In certain embodiments, network operations center 180 includes meeting rooms, televisions, workstations, and the like. Each workstation may include UE 190.
UE 190 of system 100 includes any device that can receive, create, process, store, and/or communicate information. For example, UE 190 of system 100 may receive information (e.g., a potential deficiency) from machine vision device 150 and/or communicate information (e.g., a confirmed deficiency) to machine vision device 150. UE 190 may be a desktop computer, a laptop computer, a mobile phone (e.g., a smart phone), a tablet, a personal digital assistant, a wearable computer, and the like. UE 190 may include a liquid crystal display (LCD), an organic light-emitting diode (OLED) flat screen interface, digital buttons, a digital keyboard, physical buttons, a physical keyboard, one or more touch screen components, a graphical user interface (GUI), and the like. While UE 190 is located within network operations center 180 in the illustrated embodiment of FIG. 1, UE 190 may be located in any suitable location of system 100.
In operation, machine vision device 150a is attached to train car 140a and machine vision device 150b is attached to train car 140b. Train car 140a is moving along railroad track 130a in southbound direction of travel 160a. Train car 140b is moving along railroad track 130b in northbound direction of travel 160b. Train car 140a enters railway environment 120 at time T1, and train car 140b is scheduled to enter railway environment 120 at a later time T2 (e.g., ten minutes after time T1). Machine vision device 150a captures an image of railway environment 120 at time T1 that includes railroad track 130b. Machine vision device 150a analyzes the image of railroad track 130b using one or more machine vision algorithms to determine a value associated with an alignment of railroad track 130b. Machine vision device 150a compares the alignment value to a predetermined acceptable alignment value and determines that the alignment value exceeds the predetermined acceptable alignment value. Machine vision device 150a determines, based on the comparison, that railroad track 130b includes a potential deficiency. Machine vision device 150a communicates an alert that includes an identification and a location of the potential deficiency to UE 190 of network operations center 180. A user of UE 190 confirms that the potential deficiency is an actual deficiency and communicates the identification and location of track misalignment 170 to machine vision device 150b of train car 140b prior to train car 140b encountering track misalignment 170. As such, system 100 may be used to alert a train of a dangerous condition in an upcoming railway environment, which may allow the train enough time to initiate an action that avoids the dangerous condition.
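The timing reasoning in this scenario can be sketched as follows; the values mirror the ten-minute gap and the ten-second capture-to-alert example given earlier, and the function name is an assumption for illustration.

```python
# Sketch of the timing check implicit in the scenario above: the alert is
# useful only if it reaches train car 140b before time T2.
def alert_arrives_in_time(t1_seconds: float,
                          alert_latency_seconds: float,
                          t2_seconds: float) -> bool:
    """t1: when the deficiency is imaged; latency: capture-to-alert delay;
    t2: when the second train reaches the deficiency location."""
    return (t1_seconds + alert_latency_seconds) < t2_seconds

# With T2 ten minutes (600 s) after T1 and a ten-second capture-to-alert
# path, the alert arrives with roughly 590 seconds to spare.
assert alert_arrives_in_time(0.0, 10.0, 600.0)
```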
Although FIG. 1 illustrates a particular number of networks 110, railway environments 120, railroad tracks 130, train cars 140, machine vision devices 150, network operations centers 180, and UEs 190, this disclosure contemplates any suitable number of networks 110, railway environments 120, railroad tracks 130, train cars 140, machine vision devices 150, network operations centers 180, and UEs 190.
Although FIG. 1 illustrates a particular arrangement of network 110, railway environment 120, railroad tracks 130, train cars 140, machine vision devices 150, network operations center 180, and UE 190, this disclosure contemplates any suitable arrangement of network 110, railway environment 120, railroad tracks 130, train cars 140, machine vision devices 150, network operations center 180, and UE 190.
In certain embodiments, machine vision device 150b of FIG. 1 captures image 200 of railway environment 120 as train car 140b moves along railroad track 130b in direction of travel 160b.
In some embodiments, machine vision device 150b automatically processes image 200 to identify one or more objects in image 200. Machine vision device 150b may use machine learning algorithms and/or machine vision algorithms to process image 200. In certain embodiments, machine vision device 150b automatically processes image 200 in real-time or in near real-time. In the illustrated embodiment of FIG. 2, the objects identified in image 200 include railroad track 130a, railroad track 130b, and debris 172.
In certain embodiments, machine vision device 150b automatically identifies one or more exceptions in image 200. For example, machine vision device 150b may capture image 200 of railroad track 130b, identify an exception (e.g., a curvature) in railroad track 130b of image 200, and use one or more algorithms to classify the exception as a potential deficiency (e.g., track misalignment 170). As another example, machine vision device 150b may capture image 200 of debris 172, identify an exception (e.g., debris 172 located too close to railroad track 130a, debris 172 obstructing a view of railroad track 130a, etc.) for debris 172 of image 200, and use one or more algorithms to classify the exception as a deficiency (e.g., a potential hazard to an oncoming train).
In some embodiments, machine vision device 150b generates one or more labels for image 200. The labels represent information associated with image 200. For example, machine vision device 150b may generate one or more labels for image 200 that identify one or more objects (e.g., railroad track 130b, debris 172, etc.). As another example, machine vision device 150b may generate one or more labels for image 200 that identify one or more potential deficiencies within image 200 (e.g., track misalignment 170, change in ballast profile 210, etc.). As still another example, machine vision device 150b may generate one or more labels for image 200 that provide additional information for image 200 (e.g., direction of travel 160a, end of vegetation growth 220, etc.). In some embodiments, machine vision device 150b superimposes one or more labels on image 200.
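As a non-limiting sketch, labels might be superimposed on an image with a library such as OpenCV; the (text, bounding box) label structure below is an assumed representation, not one defined by this disclosure.

```python
# Sketch of superimposing labels on a captured image using OpenCV
# (cv2.rectangle and cv2.putText).
import cv2

def superimpose_labels(image, labels):
    """Draw each label's bounding box and text onto the image in place.

    labels: iterable of (text, (x, y, w, h)) tuples in pixel coordinates.
    """
    for text, (x, y, w, h) in labels:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(image, text, (x, max(y - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    return image

# Example: superimpose_labels(frame, [("track misalignment 170", (420, 310, 90, 60))])
```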
In certain embodiments, machine vision device 150b communicates image 200 to one or more external components (e.g., UE 190 of network operations center 180 of FIG. 1).
Although FIG. 2 illustrates a particular number of railroad tracks 130, track misalignments 170, debris 172, ballast profiles 210, and vegetation growths 220, this disclosure contemplates any suitable number of railroad tracks 130, track misalignments 170, debris 172, ballast profiles 210, and vegetation growths 220.
Although FIG. 2 illustrates a particular arrangement of railroad tracks 130, track misalignment 170, debris 172, ballast profile 210, and vegetation growth 220, this disclosure contemplates any suitable arrangement of railroad tracks 130, track misalignment 170, debris 172, ballast profile 210, and vegetation growth 220.
In certain embodiments, machine vision device 150a of FIG. 1 captures image 300 of railway environment 120 as train car 140a moves along railroad track 130a in direction of travel 160a.
In some embodiments, machine vision device 150a automatically processes image 300 to identify one or more objects in image 300. Machine vision device 150a may use machine learning algorithms and/or machine vision algorithms to process image 300. In certain embodiments, machine vision device 150a automatically processes image 300 in real-time or in near real-time. In the illustrated embodiment of FIG. 3, the objects identified in image 300 include railroad track 130a, railroad track 130b, and debris 172.
In certain embodiments, machine vision device 150a automatically identifies one or more exceptions in image 300. For example, machine vision device 150a may capture image 300 of railroad track 130b, identify an exception (e.g., a curved, buckled, warped, and/or twisted rail) in railroad track 130b of image 300, and use one or more algorithms to classify the exception as a deficiency (e.g., track misalignment 170). As another example, machine vision device 150a may capture image 300 of debris 172, identify an exception (e.g., debris 172 located too close to railroad track 130b, debris 172 obstructing a view of railroad track 130b, etc.) for debris 172 of image 300, and use one or more algorithms to classify the exception as a deficiency (e.g., a potential hazard to an oncoming train).
In some embodiments, machine vision device 150a generates one or more labels for image 300. For example, machine vision device 150a may generate one or more labels for image 300 that identify one or more objects (e.g., railroad track 130a, railroad track 130b, debris 172, etc.). As another example, machine vision device 150a may generate one or more labels for image 300 that identify one or more potential deficiencies within image 300 (e.g., track misalignment 170, change in ballast profile 210, etc.). As still another example, machine vision device 150a may generate one or more labels for image 300 that provide additional information for image 300 (e.g., direction of travel 160b, end of vegetation growth 220, etc.). In some embodiments, machine vision device 150a superimposes one or more labels on image 300.
In certain embodiments, machine vision device 150a communicates image 300 to one or more components (e.g., UE 190 of network operations center 180 of FIG. 1).
Although FIG. 3 illustrates a particular number of railroad tracks 130, track misalignments 170, debris 172, ballast profiles 210, and vegetation growths 220, this disclosure contemplates any suitable number of railroad tracks 130, track misalignments 170, debris 172, ballast profiles 210, and vegetation growths 220.
Although FIG. 3 illustrates a particular arrangement of railroad tracks 130, track misalignment 170, debris 172, ballast profile 210, and vegetation growth 220, this disclosure contemplates any suitable arrangement of railroad tracks 130, track misalignment 170, debris 172, ballast profile 210, and vegetation growth 220. FIG. 4 illustrates an example method 400 for identifying potential deficiencies in railway environment objects. Method 400 begins at step 410, where a machine vision device (e.g., machine vision device 150a of FIG. 1) attached to a train car (e.g., train car 140a) moves in a direction of travel along a railroad track of a railway environment. Method 400 then moves from step 410 to step 420.
At step 420 of method 400, the machine vision device captures an image (e.g., image 300 of FIG. 3) of an object in the railway environment. The object may be, for example, an adjacent railroad track (e.g., railroad track 130b of FIG. 1), debris (e.g., debris 172) on or near the adjacent railroad track, or a pedestrian (e.g., pedestrian 174). Method 400 then moves from step 420 to step 430.
At step 430 of method 400, the machine vision device analyzes the image of the object using one or more machine vision algorithms to determine a value associated with the object. For example, the machine vision device may analyze the image of the adjacent railroad track to determine a curvature value associated with the adjacent railroad track. As another example, the machine vision device may analyze the image of the debris to determine a size and/or shape value associated with the debris. As still another example, the machine vision device may analyze the image to determine a distance between the pedestrian and the adjacent railroad track. Method 400 then moves from step 430 to step 440.
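One assumed way to derive such a curvature value (this disclosure does not specify the algorithm) is to fit a straight line to detected rail points and measure the worst deviation, as sketched below.

```python
# An assumed approach to computing a curvature value for a detected rail:
# fit a least-squares line to rail centerline points and report the
# maximum deviation from that line.
import numpy as np

def curvature_value(xs: np.ndarray, ys: np.ndarray) -> float:
    """xs, ys: pixel coordinates of points along a detected rail.
    Returns the maximum deviation (in pixels) from the best-fit line."""
    slope, intercept = np.polyfit(xs, ys, 1)   # least-squares line fit
    residuals = ys - (slope * xs + intercept)  # deviation at each point
    return float(np.max(np.abs(residuals)))

# A straight rail yields a value near zero; a buckled or warped rail
# yields a larger value that can be compared against a threshold.
```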
At step 440 of method 400, the machine vision device compares the value associated with the object to a predetermined threshold. For example, the machine vision device may compare the curvature value associated with the adjacent railroad track to a predetermined curvature threshold. As another example, the machine vision device may compare the size and/or shape value associated with the debris to a predetermined size and/or shape threshold. As still another example, the machine vision device may compare the distance between the pedestrian and the adjacent railroad track to a predetermined distance threshold. Method 400 then moves from step 440 to step 450.
At step 450 of method 400, the machine vision device determines whether the comparison of the value associated with the object to the predetermined threshold indicates a potential deficiency of the object. In certain embodiments, the machine vision device may determine that the value associated with the object exceeds the predetermined threshold. For example, the machine vision device may determine that the curvature value associated with the adjacent railroad track exceeds the predetermined curvature threshold. As another example, the machine vision device may determine that the size and/or shape value associated with the debris exceeds the predetermined size and/or shape threshold. In certain embodiments, the machine vision device may determine that the value associated with the object is less than the predetermined threshold. For example, the machine vision device may determine that the distance (e.g., two feet) between the pedestrian and the adjacent railroad track is less than a predetermined threshold distance (e.g., five feet).
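The decision at step 450 thus depends on the direction of the comparison, as the examples above show; a minimal sketch follows. The five-foot pedestrian distance comes from the example above, while the other limits are assumptions for illustration.

```python
# Sketch of step 450: some values indicate a potential deficiency by
# exceeding a threshold, others by falling below one.
def indicates_potential_deficiency(kind: str, value: float) -> bool:
    if kind == "curvature":                # deficient when too large
        return value > 0.05
    if kind == "debris_size_ft":           # deficient when too large
        return value > 1.0
    if kind == "pedestrian_distance_ft":   # deficient when too small
        return value < 5.0
    raise ValueError(f"unknown measurement kind: {kind}")

# Example: indicates_potential_deficiency("pedestrian_distance_ft", 2.0) -> True
```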
If, at step 450, the machine vision device determines that the comparison of the value associated with the object to the predetermined threshold does not indicate a potential deficiency of the object, method 400 advances from step 450 to step 465, where method 400 ends. If, at step 450, the machine vision device determines that the comparison of the value associated with the object to the predetermined threshold indicates a potential deficiency of the object, method 400 moves from step 450 to step 460, where the machine vision device communicates an alert to a component external to the train car. The alert may include one or more of the following: a description of the object; a description of the potential deficiency; the image of the object; a location of the object; a time when the object was captured by the machine vision device; a date when the object was captured by the machine vision device; an identification of the train car; an indication of the direction of travel of the train car; an indication of one or more train cars that are scheduled to pass through the railway environment within a predetermined amount of time, etc.
In certain embodiments, the machine vision device may communicate the alert to UE (e.g., UE 190) associated with a network operations center (e.g., network operations center 180 of FIG. 1). In some embodiments, the machine vision device may communicate the alert to a machine vision device (e.g., machine vision device 150b) attached to a different train car. Method 400 then moves from step 460 to step 465, where method 400 ends.
Modifications, additions, or omissions may be made to method 400 depicted in FIG. 4. For example, method 400 may include more, fewer, or other steps.
Method 400 may be associated with any suitable transportation system (e.g., vehicles/roadways, vessels/waterways, and the like). Steps of method 400 may be performed in parallel or in any suitable order. While discussed as specific components completing the steps of method 400, any suitable component may perform any step of method 400. For example, one or more steps of method 400 may be automated using one or more components of the computer system of FIG. 5.
The components described herein (e.g., machine vision devices 150 and UE 190 of FIG. 1) may include a computer system such as that of FIG. 5, which includes processing circuitry 520 and memory 530. Processing circuitry 520 performs or manages the operations of the component. Processing circuitry 520 may include hardware and/or software. Examples of processing circuitry include one or more computers, one or more microprocessors, one or more applications, etc. In certain embodiments, processing circuitry 520 executes logic (e.g., instructions) to perform actions (e.g., operations), such as generating output from input. The logic executed by processing circuitry 520 may be encoded in one or more tangible, non-transitory computer readable media (such as memory 530). For example, the logic may comprise a computer program, software, computer executable instructions, and/or instructions capable of being executed by a computer. In particular embodiments, the operations of the embodiments may be performed by one or more computer readable media storing, embodied with, and/or encoded with a computer program and/or having a stored and/or an encoded computer program.
Memory 530 (or memory unit) stores information. Memory 530 (e.g., memory 124 of FIG. 1) may store information that processing circuitry 520 operates on. Memory 530 may comprise any one or a combination of volatile or non-volatile local or remote devices suitable for storing information, such as the computer-readable non-transitory storage media described below.
Herein, a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such as field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate. A computer-readable non-transitory storage medium may be volatile, non-volatile, or a combination of volatile and non-volatile, where appropriate.
Herein, “or” is inclusive and not exclusive, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A or B” means “A, B, or both,” unless expressly indicated otherwise or indicated otherwise by context. Moreover, “and” is both joint and several, unless expressly indicated otherwise or indicated otherwise by context. Therefore, herein, “A and B” means “A and B, jointly or severally,” unless expressly indicated otherwise or indicated otherwise by context.
The scope of this disclosure encompasses all changes, substitutions, variations, alterations, and modifications to the example embodiments described or illustrated herein that a person having ordinary skill in the art would comprehend. The scope of this disclosure is not limited to the example embodiments described or illustrated herein. Moreover, although this disclosure describes and illustrates respective embodiments herein as including particular components, elements, features, functions, operations, or steps, any of these embodiments may include any combination or permutation of any of the components, elements, features, functions, operations, or steps described or illustrated anywhere herein that a person having ordinary skill in the art would comprehend. Furthermore, reference in the appended claims to an apparatus or system or a component of an apparatus or system being adapted to, arranged to, capable of, configured to, enabled to, operable to, or operative to perform a particular function encompasses that apparatus, system, component, whether or not it or that particular function is activated, turned on, or unlocked, as long as that apparatus, system, or component is so adapted, arranged, capable, configured, enabled, operable, or operative. Additionally, although this disclosure describes or illustrates particular embodiments as providing particular advantages, particular embodiments may provide none, some, or all of these advantages.