This application includes embodiments that relate to robotic systems and methods of control and manipulation.
A variety of tasks may be performed by a robotic system that involve motion of an arm or a portion thereof. For example, a robot arm may be moved to contact or otherwise approach a target. As one example, a lever may be contacted by a robot arm. For instance, in a rail yard, a robot may be used to contact one or more brake levers on one or more rail vehicle systems within the yard. For example, between missions performed by a rail vehicle, various systems, such as braking systems, of the units of a rail vehicle may be inspected and/or tested. As one example, a brake bleeding task may be performed on one or more units of a rail vehicle system. In a rail yard, there may be a large number of rail cars in a relatively confined area, resulting in a large number of inspection and/or maintenance tasks. Conventional manipulation techniques may not provide a desired speed or accuracy in manipulation of a robot arm toward a target.
The robotic system may begin a task outside of the range of the articulable arm. It may be necessary to move a base that is supporting the arm to be proximate to the target so as to allow the arm to contact it. It may be desirable to have a system and method that differs from those that are currently available.
In one embodiment, a robotic system is provided that includes a base, an articulable arm, a visual acquisition unit, and a controller. The articulable arm extends from the base and is movable toward a target. The visual acquisition unit can be mounted to the arm or the base and is configured to acquire image data. The controller is operably coupled to the arm and the visual acquisition unit, and can derive from the image data environmental information corresponding to at least one of the arm or the target. The controller further can generate at least one planning scheme using the environmental information to translate the arm toward the target, select at least one planning scheme for implementation, and control movement of the arm toward the target using the at least one selected planning scheme.
In one embodiment, a method is provided that includes acquiring image data; deriving environmental information corresponding to at least one of an articulable arm or a target from the image data; generating a planning scheme using the acquired environmental information; and controlling movement of an arm toward a target using the planning scheme.
Optionally, the method may include acquiring additional image data during movement of the arm; generating additional environmental information from the additional image data; re-planning movement of the arm based at least in part on the additional environmental information; and moving a body supporting the arm towards the target based at least in part on the environmental information, the additional environmental information, or both.
In one embodiment, a robotic system is provided that includes an articulable arm extending from a base and configured to be movable toward a target, a visual acquisition unit configured to be mounted to the arm or the base and to acquire image data, and a controller operably coupled to the arm and the visual acquisition unit. The controller can derive from the image data environmental information corresponding to at least one of the arm or the target, generate at least one planning scheme using the environmental information to translate the arm toward the target, wherein each planning scheme is defined by at least one of path shape or path type, select at least one planning scheme for implementation based at least in part on the planning scheme providing movement of the arm in a determined time frame or at a determined speed, and control movement of the arm toward the target using the at least one selected planning scheme.
This application includes embodiments that relate to robotic systems and methods of control and manipulation. Various embodiments provide methods and systems for control of robotic systems, including unmanned or remotely controlled vehicles. Unmanned vehicles may be autonomously controlled. For example, various embodiments provide for control of a robotic vehicle to approach and/or contact a target. In some embodiments, the robotic systems may be controlled to contact an object and thereby manipulate that object. In various embodiments, one or more planning schemes may be selected to control motion of an articulable arm using acquired information describing or corresponding to the environment surrounding the articulable arm and/or the target.
At least one technical effect of various embodiments includes improving control (e.g., continuous servo control) reliability, accuracy, and/or precision for robotic systems. At least one technical effect of various embodiments is the improvement of robotic control to account for changes in the environment (e.g., motion of a target, or introduction of an obstacle after an initial movement plan is generated). The robotic system or vehicle may cooperatively engage the articulable arm and the position of the vehicle (via its propulsion system) in order to make contact with the target object.
The depicted base, which may be referred to sometimes as a body or platform, may provide a foundation from which the arm extends, and provide a structure for mounting or housing other components, such as the visual acquisition unit (or aspects thereof), the processing unit (or aspects thereof), or communication equipment (not shown).
The depicted arm is articulable and can move toward the target (e.g., based upon instructions or control signals from the processing unit). In some embodiments, the arm may be configured only to contact the target or otherwise approach the target (e.g., a camera or sensing device at the end of the arm may be positioned proximate the target for inspection of the target), while in other embodiments the arm may include a manipulation unit (not shown) configured to grasp or otherwise manipulate the target.
As discussed, the visual acquisition unit can acquire environmental information corresponding to at least one of the arm, the target, or the route from a point at the present location to a point proximate or adjacent to the target. For example, the environmental information may include information describing, depicting, or corresponding to the environment surrounding the arm, such as a volume sufficient to describe the environment within reach of the arm. In various embodiments, the visual acquisition unit may include one or more of a camera, stereo camera, or laser sensor. For example, the visual acquisition unit may include one or more motion sensors, such as a Kinect motion sensor. The visual acquisition unit in various embodiments includes an infrared projector and a camera.
More than one individual device or sensor may be included in the depicted visual acquisition unit. For example, in the illustrated embodiment, the robotic system includes an arm-mounted visual acquisition unit 132 and a base-mounted visual acquisition unit 134. In some embodiments, the base-mounted visual acquisition unit 134 may be used to acquire initial environmental information (e.g., with the robotic system en route to the target, and/or when the arm is in a retracted position), and the arm-mounted visual acquisition unit 132 may obtain additional environmental information (e.g., during motion of the arm and/or when the arm is near the target) which may be used by the processing unit to dynamically re-plan movement of the arm, for example, to account for any motion by the target, or, as another example, to account for any obstacles that have moved into the path between the arm and the target.
The processing unit can generate an environmental model using the environmental information. The environmental information includes information describing, depicting, or corresponding to the environment surrounding the arm and/or the target, which may be used to determine or plan a path from the arm to the target that may be followed by the arm. In some embodiments, the desired movement is a movement of the arm (e.g., a distal portion of the arm) toward a target such as a brake lever, or other motion in which the arm is moving toward the target. In some embodiments, a grid-based algorithm may be utilized to model an environment (e.g., where the arm will move through at least a portion of the environment to touch the target 102). The environmental information may identify the target (e.g., based on a known size, shape, and/or other feature distinguishing the target from other aspects of the environment).
In some embodiments, the environmental information may be collected using Kinect or the like. In various embodiments, point cloud data points may be collected and grouped into a grid, such as an OctoMap grid or a grid formed using another three-dimensional (3D) mapping framework. The particular size and resolution of the grid is selected in various embodiments based on the size of the target, the nearness of the target to the arm, and/or the available computational resources, for example. For example, a larger grid volume may be used for an arm that has a relatively long reach or range, and a smaller grid volume may be used for an arm that has a relatively short reach or range, or when the arm is near the target. As another example, smaller grid cubes may be used for improved resolution, and larger grid cubes used for reduced computational requirements. In an example embodiment, where the robotic system can touch a brake lever with the arm, and where the arm has a range of 2.5 meters, the environment may be modeled as a sphere with a radius of 2.5 meters with cubes sized 10 centimeters×10 centimeters×10 centimeters. The sphere defining the volume of the environmental model may in various embodiments be centered around the target, around a distal end of the arm, around a visual acquisition unit (e.g., arm-mounted visual acquisition unit 132, base-mounted visual acquisition unit 134), and/or an intermediate point, for example, between a distal end of the arm and the target.
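By way of illustration only, the grouping of point cloud data into a spherical grid of cubes as described above may be sketched roughly as follows. This is a minimal Python sketch, not any particular embodiment; the function name build_voxel_grid and its parameters are hypothetical, and a practical system would typically rely on a 3D mapping framework such as OctoMap.

import numpy as np

def build_voxel_grid(points, center, radius=2.5, cell=0.10):
    """Group point-cloud samples into occupied voxels inside a sphere.

    points: (N, 3) array of x, y, z samples (e.g., from a depth sensor)
    center: (3,) array, center of the modeled volume (e.g., the target)
    radius: reach of the arm, in meters
    cell:   edge length of each cube, in meters
    Returns a set of integer (i, j, k) indices of occupied cells.
    """
    pts = np.asarray(points, dtype=float)
    # Keep only samples inside the spherical workspace of the arm.
    inside = np.linalg.norm(pts - center, axis=1) <= radius
    # Quantize each remaining point to its containing cube.
    idx = np.floor((pts[inside] - center) / cell).astype(int)
    return {tuple(i) for i in idx}

# Example: a synthetic cloud of 1,000 points around the target.
cloud = np.random.uniform(-3.0, 3.0, size=(1000, 3))
occupied = build_voxel_grid(cloud, center=np.array([0.0, 0.0, 0.0]))
print(len(occupied), "occupied 10 cm cells inside the 2.5 m sphere")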
The depicted processing unit is also configured to select, from a plurality of planning schemes, at least one planning scheme to translate the arm toward the target. The processing unit uses the environmental model to select the at least one planning scheme. For example, using the relative location of the target and the arm (i.e., a portion of the arm configured to touch the target), as well as the location of any identified obstacles between the arm and the target, a path may be selected between the arm contact portion and the target. Depending on the shape of the path and/or complexity (e.g., the number and/or location of obstacles to be avoided), a planning scheme may be selected. As used herein, a planning scheme is a plan that sets forth a trajectory or path of the arm along a shape (or shapes) of a path as defined by a determined coordinate system. Accordingly, in various embodiments, each planning scheme of the plurality of schemes is defined by path shape or type and a coordinate system. In various embodiments, the at least one planning scheme may be selected to reduce or minimize time of motion and/or computational requirements while providing sufficient complexity to avoid any obstacles between the arm and the target. Generally, a motion planning scheme or algorithm is selected in various embodiments to provide for movement of the arm within a desired time frame or at a desired speed.
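As a rough illustration of how a planning scheme might be selected from such a group, the following Python sketch chooses among three schemes based on coarse environmental features. The scheme names, the selection thresholds, and the select_scheme function are hypothetical and intended only to make the selection idea concrete.

from enum import Enum

class Scheme(Enum):
    LINEAR_JOINT_SPACE = 1   # first planning scheme discussed below
    LINEAR_CARTESIAN   = 2   # second planning scheme discussed below
    POINT_TO_POINT     = 3   # third planning scheme discussed below

def select_scheme(num_obstacles, distance_to_target, homing=False):
    """Pick a planning scheme from coarse environmental features."""
    if homing:
        return Scheme.POINT_TO_POINT          # fastest return to a home pose
    if num_obstacles == 0 and distance_to_target > 1.0:
        return Scheme.LINEAR_JOINT_SPACE      # cheap planning in open space
    return Scheme.LINEAR_CARTESIAN            # predictable straight-line approach

print(select_scheme(num_obstacles=0, distance_to_target=2.0))
print(select_scheme(num_obstacles=2, distance_to_target=0.4))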
In various embodiments, the processing unit may select among a group of planning schemes that include at least one planning scheme that uses a first coordinate system and at least one other planning scheme that uses a second coordinate system (where the second coordinate system is different than the first coordinate system). For example, at least one planning scheme may utilize a Cartesian coordinate system, while at least one other planning scheme may utilize a joint space coordinate system.
As one example, the group of planning schemes may include a first planning scheme that utilizes linear trajectory planning in a joint space coordinate system. For example, a starting position and a target position for a motion may be defined. Then, using an artificial potential field algorithm, way points on the desired motion may be found. In this planning scheme, the motion is linear in the joint space (e.g., in 6 degrees of freedom of a robot arm), but non-linear in Cartesian space. After the way points are determined, velocities may be assigned to each way point depending on the task requirements. In some embodiments, the arm may be directed toward a lever that is defined in terms of 6D poses in Cartesian space. After the 6D pose in Cartesian space is obtained, it may be converted to 6 joint angles in joint space using inverse kinematics. With the joint angles determined, the desired joint angles on the motion trajectory may be determined. The first planning scheme as discussed herein (utilizing linear trajectory planning in a joint space coordinate system) may be particularly useful in various embodiments for motion in an open space, providing for relatively fast and/or easy planning in open space.
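A simplified sketch of linear trajectory planning in joint space is shown below, assuming the start and goal joint angles are already known (e.g., obtained from inverse kinematics). The artificial potential field step is omitted, and the linear_joint_trajectory function and its parameters are hypothetical.

import numpy as np

def linear_joint_trajectory(start_angles, goal_angles, steps=10, move_time=5.0):
    """Way points that are linear in joint space (but not in Cartesian space).

    start_angles, goal_angles: arrays of joint angles in radians
    steps: number of way points generated along the motion
    move_time: total motion time in seconds, used to assign velocities
    """
    start = np.asarray(start_angles, dtype=float)
    goal = np.asarray(goal_angles, dtype=float)
    dt = move_time / steps
    waypoints = []
    for k in range(1, steps + 1):
        q = start + (goal - start) * k / steps       # linear in joint space
        qdot = (goal - start) / move_time            # constant joint velocities
        waypoints.append({"angles": q, "velocity": qdot, "time": k * dt})
    return waypoints

# Six-joint example: move from a retracted pose toward a lever pose.
traj = linear_joint_trajectory([0, 0, 0, 0, 0, 0], [0.5, -0.3, 0.8, 0.0, 0.4, -0.2])
print(len(traj), "way points; last arrives at t =", traj[-1]["time"], "s")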
As another example, the group of planning schemes may include a second planning scheme that utilizes linear trajectory planning in a Cartesian coordinate system. For example, an artificial potential field algorithm may be used to find way points on a desired motion trajectory in Cartesian space. Then, using inverse kinematics, corresponding way points in joint space may be found, with velocities to the way points assigned to implement the control. The second planning scheme as discussed herein (utilizing linear trajectory planning in a Cartesian coordinate system) may be particularly useful in various embodiments for motion in less open space, and/or for providing motion that may be more intuitive for a human operator working in conjunction with the robotic system.
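The corresponding Cartesian-space variant may be sketched as follows, with the arm-specific inverse kinematics solver represented by a placeholder callable; the function and parameter names are hypothetical.

import numpy as np

def linear_cartesian_trajectory(start_pose, goal_pose, inverse_kinematics, steps=10):
    """Way points that are linear in Cartesian space, converted to joint space.

    start_pose, goal_pose: 6D poses (x, y, z, roll, pitch, yaw)
    inverse_kinematics: callable mapping a 6D pose to joint angles
                        (a stand-in for the arm's real IK solver)
    """
    start = np.asarray(start_pose, dtype=float)
    goal = np.asarray(goal_pose, dtype=float)
    joint_waypoints = []
    for k in range(1, steps + 1):
        pose = start + (goal - start) * k / steps     # straight line in Cartesian space
        joint_waypoints.append(inverse_kinematics(pose))
    return joint_waypoints

# Toy IK stand-in: pretend joint angles equal the pose components.
fake_ik = lambda pose: np.asarray(pose)
wps = linear_cartesian_trajectory([0, 0, 0, 0, 0, 0], [0.6, 0.2, 0.4, 0, 0, 1.0], fake_ik)
print(len(wps), "joint-space way points along a straight Cartesian path")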
As yet another example, the group of planning schemes may include a third planning scheme that utilizes point-to-point trajectory planning in a joint space coordinate system. In this planning scheme, target joint angles (e.g., joint angles of the portions of the arm at a beginning and end of a movement) may be defined without any internal way points. The third planning scheme as discussed herein (utilizing point-to-point trajectory planning in a joint space coordinate system) may be particularly useful in various embodiments for homing or re-setting, or to bring the arm to a target position (e.g., to a retracted or home position, or to the target) as quickly as possible.
Paths other than linear or point-to-point may be used. For example, a circular path (e.g., a path following a half-circle or other portion of a circle in Cartesian space) may be specified or utilized. As another example, a curved path may be employed. As another example, for instance to closely track a known surface profile, a path corresponding to a polygon or portion thereof may be employed, such as triangular or diamond shape.
More than one planning scheme may be employed for the movement from an initial position to the target. For example, a first planning scheme may be used to plan motion for a first portion of a motion, and a different planning scheme may be used to plan motion for a second portion of the motion. Accordingly, the processing unit in various embodiments controls movement of the arm in a series of stages, with a first planning scheme used for at least one of the stages and a second, different planning scheme used for at least one other stage. In one example scenario, the first planning scheme described above may be used for an initial portion of the motion toward the target, for example in an open space or over a volume where precision may not be required. Then, for a portion of the motion closer to the target, the second planning scheme described above may be used for the motion toward the target. Finally, the third planning scheme described above may be used to retract the arm from the target and to a retracted or home position. Other combinations or arrangements of sequences of planning schemes used for a combined overall movement may be employed in various embodiments. Accordingly, the processing unit may select not only particular planning schemes to be used, but also sequences of planning schemes and transition points between the sequences of planning schemes for planning an overall motion.
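One possible way to express such a sequence of planning schemes and stages, purely as an illustrative sketch with hypothetical names, is shown below; each stand-in planner would be replaced by the trajectory generator of the scheme selected for that stage.

# A hypothetical staged plan: one scheme per motion segment.
stages = [
    {"segment": "open-space approach",  "scheme": "LINEAR_JOINT_SPACE"},
    {"segment": "final approach",       "scheme": "LINEAR_CARTESIAN"},
    {"segment": "retract to home pose", "scheme": "POINT_TO_POINT"},
]

def plan_overall_motion(stages, planners):
    """Concatenate per-stage trajectories planned with different schemes."""
    trajectory = []
    for stage in stages:
        planner = planners[stage["scheme"]]           # scheme chosen for this stage
        trajectory.extend(planner(stage["segment"]))  # planner returns its way points
    return trajectory

# Stand-in planners that just label their way points.
planners = {name: (lambda seg, n=name: [f"{n} way point for {seg}"])
            for name in ("LINEAR_JOINT_SPACE", "LINEAR_CARTESIAN", "POINT_TO_POINT")}
print(plan_overall_motion(stages, planners))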
Also, the processing unit of the illustrated example can plan movement of the arm toward the target using the selected at least one planning scheme. After the planning scheme (or sequence of planning schemes) has been selected, the depicted processing unit plans the motion. For example, a series of commands to control the motion of the arm (e.g., to move the joints of the arm through a series of predetermined angular changes at predetermined corresponding velocities) may be prepared. For example, for an arm that has multiple portions, the generated motion trajectories may be defined as a sequence of way points in joint space. Each way point in some embodiments includes information for 7 joint angles, velocity, and a time stamp. The joint angles, time stamp, and velocity may be put in a vector of points, and a command sent to drive the arm along the desired motion trajectory. For example, a program such as MotoROS may be run on the robotic system to implement the planned motion and commanded movement. Accordingly, the depicted processing unit controls movement of the arm toward the target using the at least one selected planning scheme.
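A way point record of the kind described above, and its packing into a vector of points for the arm driver, might look roughly like the following Python sketch; the WayPoint class and build_command_vector function are hypothetical, and the hand-off to a driver program such as MotoROS is not shown.

from dataclasses import dataclass
from typing import List

@dataclass
class WayPoint:
    """One point on the planned trajectory for a 7-joint arm."""
    joint_angles: List[float]   # 7 joint angles, in radians
    velocity: float             # commanded speed at this way point
    time_stamp: float           # seconds from the start of the motion

def build_command_vector(waypoints: List[WayPoint]):
    """Pack way points into the vector of points sent to the arm driver."""
    return [(wp.joint_angles, wp.velocity, wp.time_stamp) for wp in waypoints]

trajectory = [
    WayPoint([0.0] * 7, velocity=0.2, time_stamp=0.0),
    WayPoint([0.1, -0.05, 0.2, 0.0, 0.1, 0.0, 0.0], velocity=0.2, time_stamp=1.0),
]
# A real system would hand this vector to the arm driver to execute the motion.
command = build_command_vector(trajectory)
print(len(command), "points in the command vector")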
The planning and motion of the arm may be adjusted in various embodiments. For example, the processing unit may control the visual acquisition unit or portion thereof (e.g., arm-mounted visual acquisition unit 132) to acquire additional environmental information during movement of the arm (e.g., during movement of the arm toward the target). The processing unit may then dynamically re-plan movement of the arm (e.g., during movement of the arm) using the additional environmental information. For example, due to motion of the target during movement of the arm, a previously used motion plan and/or planning scheme used to generate the motion plan may no longer be appropriate, or a better planning scheme may be available to address the new position of the target. Accordingly, the processing unit in various embodiments uses the additional environmental information obtained during motion of the arm to re-plan the motion using an initially utilized planning scheme and/or re-plans the motion using a different planning scheme.
For example, the processing unit may use a first planning scheme for an initial planned movement using the environmental information (e.g., originally or initially obtained environmental information acquired before motion of the arm), and use a different, second planning scheme for revised planned movement using additional environmental information (e.g., environmental information obtained during movement of the arm or after an initial movement of the arm). For example, a first planning scheme may plan a motion to an intermediate point short of the target at which the arm stops, additional environmental information is acquired, and the remaining motion toward the target may be planned using a second planning scheme. As another example, a first planning scheme may be used to plan an original motion; however, an obstacle may be discovered during movement, or the target may be determined to move during the motion of the arm, and a second planning scheme may be used to re-plan the motion. For instance, in one example scenario, an initial motion is planned using a point-to-point in joint space planning scheme. However, an obstacle may be discovered while the arm is in motion, and the motion may be re-planned using linear trajectory planning in Cartesian space to avoid the obstacle. In some embodiments, the re-planned motion in Cartesian space may be displayed to an operator for approval or modification.
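The dynamic re-planning behavior described above can be illustrated by the following simplified control loop; the sense and replan callables are placeholders for the perception and planning modules, and the names and structure are hypothetical.

def execute_with_replanning(plan, sense, replan, max_replans=3):
    """Run a planned motion, re-planning when new perception data demands it.

    plan:   list of way points from the initial planning scheme
    sense:  callable returning updated environmental information
    replan: callable(observation, remaining_plan) -> new list of way points
    """
    replans = 0
    i = 0
    while i < len(plan):
        observation = sense()
        if observation.get("obstacle") or observation.get("target_moved"):
            if replans >= max_replans:
                break                      # give up and hold position
            plan = replan(observation, plan[i:])
            i, replans = 0, replans + 1    # restart on the revised plan
            continue
        # ... command the arm to the way point plan[i] here ...
        i += 1
    return plan

# Toy run: an obstacle appears once, forcing a single re-plan.
events = iter([{}, {"obstacle": True}, {}, {}, {}, {}])
final = execute_with_replanning(
    plan=["wp1", "wp2", "wp3"],
    sense=lambda: next(events, {}),
    replan=lambda obs, remaining: ["detour"] + remaining,
)
print(final)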
As discussed herein, the depicted processing unit is operably coupled to the arm and the visual acquisition unit. For example, the processing unit may provide control signals to and receive feedback signals from the arm, and may receive information (e.g., environmental information regarding the positioning of the target, the arm, and/or other aspects of an environment proximate to the arm and/or target) from the visual acquisition unit. In the illustrated embodiment, the processing unit is disposed onboard the robotic system (e.g., on-board the base); however, in some embodiments the processing unit or a portion thereof may be located off-board. For example, all or a portion of the robotic system may be controlled wirelessly by a remotely located processor (or processors). The processing unit may be operably coupled to an input unit (not shown) configured to allow an operator to provide information to the robotic system, for example to identify or describe a task to be performed.
The depicted processing unit includes a control module 142, a perception module 144, a planning module 146, and a memory 148. Other arrangements of units or sub-units of the processing unit may be employed in various embodiments; other types, numbers, or combinations of modules may be employed in alternate embodiments; and/or various aspects of modules described herein may be utilized in connection with different modules additionally or alternatively based at least in part on application-specific criteria. The various aspects of the processing unit act individually or cooperatively with other aspects to perform one or more aspects of the methods, steps, or processes discussed herein. The processing unit may include processing circuitry configured to perform one or more tasks, functions, or steps discussed herein. The term processing unit is not intended to necessarily be limited to a single processor or computer. For example, the processing unit may include multiple processors and/or computers, which may be integrated in a common housing or unit, or which may be distributed among various units or housings.
The depicted control module may use inputs from the planning module to control movement of the arm. For example, the control module can provide control signals to the arm (e.g., to one or more motors or other actuators associated with one or more portions of the arm). The depicted perception module can acquire environmental information from the visual acquisition unit, and generate an environmental model using the environmental information as discussed herein. The perception module in the illustrated embodiment provides information to the planning module for use in planning motion of the arm. The depicted planning module can select one or more planning schemes for planning motion of the arm as discussed herein. After selection of one or more planning schemes, the depicted planning module plans the motion of the arm using the one or more planning schemes and provides the planned motion to the control module for implementation.
The memory may include one or more tangible and non-transitory computer readable storage media. The memory, for example, may be used to store information corresponding to a task to be performed, a target, control information (e.g., planned motions), or the like. Also, the memory may store the various planning schemes from which the planning module develops a motion plan. Further, the process flows and/or flowcharts discussed herein (or aspects thereof) may represent one or more sets of instructions that are stored in the memory for direction of operations of the robotic system.
At 202, a robot (e.g., autonomous vehicle or robotic system 300) may be positioned near a target (e.g., the target). The target, for example, may be a switch to be contacted by the robot. The robot may be configured to manipulate the target or a portion thereof after being placed in contact with or proximate the target. The robot may include an arm configured to extend toward the target. In the illustrated embodiment, the robot at 202 is positioned within a range of the target defined by the reach of the arm of the robot.
At 204, environmental information is acquired. In various embodiments, the environmental information is acquired with a visual acquisition unit (e.g., visual acquisition unit, arm-mounted visual acquisition unit 132, base-mounted visual acquisition unit 134). The environmental information corresponds to at least one of the arm or the target toward which the arm can be moved. For example, the environmental information may describe or correspond to a volume that includes the target and the arm (or a portion thereof, such as a distal end) as well as any objects interposed between the arm and the target or otherwise potentially contacted by a motion of the arm toward the target.
At 206, an environmental model may be generated using the environmental information acquired at 204. The environmental model, for example, may be composed of a grid of uniform cubes forming a sphere-like volume.
At 208, at least one planning scheme is selected. The at least one planning scheme can be used to plan a motion of the arm toward the target and may be selected in the illustrated embodiment using the environmental model. A planning scheme may be defined by a path type or shape (e.g., linear, point-to-point) and a coordinate system (e.g., Cartesian, joint space). In various embodiments the at least one planning scheme is selected from among a group of planning schemes including a first planning scheme that utilizes a first coordinate system (e.g., a Cartesian coordinate system) and a second planning scheme that utilizes a different coordinate system (e.g., a joint space coordinate system). In various embodiments, the selected at least one planning scheme includes a sequence of planning schemes, with each planning scheme in the sequence used to plan movement for a particular portion or segment of the motion toward the target.
At 210, movement of the arm is planned. The movement of the arm is planned using the planning scheme (or sequence of planning schemes) selected at 208. At 212, the arm is controlled to move toward the target. The arm is controlled using the plan developed at 210 using the at least one scheme selected at 208.
In the illustrated embodiment, at 214, the arm is moved in a series of stages. In some embodiments, the selected planning schemes include a first planning scheme that is used for at least one of the stages and a different, second planning scheme that is used for at least one other of the stages.
In the depicted embodiment, as the arm is moved toward the target (e.g., during actual motion of the arm and/or during a pause in motion after an initial movement toward the target), at 216, additional environmental information is acquired. For example, a visual acquisition unit (e.g., arm-mounted visual acquisition unit 132) is controlled to acquire additional environmental information. The additional environmental information may, for example, confirm a previously used position of the target, correct an error in a previous estimate of position of the target, or provide additional information regarding movement of the target.
At 218, movement of the arm is dynamically re-planned using the additional information. As one example, if the target has moved, the movement of the arm may be re-planned to account for the change in target location. In some embodiments, the same planning scheme used for an initial or previous motion plan may be used for the re-plan, while in other embodiments a different planning scheme may be used. For example, a first planning scheme may be used for an initial planned movement using environmental information acquired at 204, and a second, different planning scheme may be used for revised planned movement using the additional environmental information acquired at 216. At 220, the arm is moved toward the target using the re-planned movement. While only one re-plan is shown in the illustrated embodiment, additional re-plans may be performed in various embodiments. Re-plans may be performed at planned or regular intervals, and/or responsive to detection of movement of the target and/or detection of a previously unidentified obstacle in or near the path between the arm and the target.
In the illustrated embodiment, the robotic system includes a base-mounted visual acquisition unit 320 mounted to the body 310. The depicted articulated arm 330 includes plural jointed sections 331, 333 interposed between a distal end 332 and the body 310. The distal end 332 is configured for contact with a target. In some embodiments, a gripper or other manipulator (not shown) may be disposed at the distal end 332 for engaging the target.
The robotic system or vehicle includes wheels 340 that can be driven by a motor and/or steered to move the robotic system about an area (e.g., a rail yard) when the robotic system is in a navigation mode. Additionally or alternatively, tracks, legs, or other mechanisms may be utilized to propel or move the robotic system. In the illustrated embodiment, the antenna 350 may be used to communicate with a base, other robots, or the like.
The particular arrangement of components (e.g., the number, types, placement, or the like) of the illustrated embodiments may be modified in various alternate embodiments. For example, in various embodiments, different numbers of a given module or unit may be employed, a different type or types of a given module or unit may be employed, a number of modules or units (or aspects thereof) may be combined, a given module or unit may be divided into plural modules (or sub-modules) or units (or sub-units), one or more aspects of one or more modules may be shared between modules, a given module or unit may be added, or a given module or unit may be omitted.
In one embodiment, the robotic system may include a propulsion unit that can move the robotic system between different locations, and/or a communication unit configured to allow the robotic system to communicate with a remote user, a central scheduling or dispatching system, or other robotic systems, among others. A suitable arm may be a manipulation device or other tool in one embodiment. In another embodiment, the arm may be a scoop on a backhoe or excavation equipment.
During travel of a robotic system along a route towards a target, in one embodiment, the visual acquisition device can generate image data representative of images and/or video of the field of view of the visual acquisition device(s). For example, the image data may be used to inspect the health of the route, status of wayside devices along the route being traveled on by the robotic system, or the like. The field of view of the visual acquisition device can encompass at least some of the route and/or wayside devices disposed ahead of the robotic system along a direction of travel of the robotic system. During movement of the robotic system along the route, the visual acquisition device can obtain image data representative of the route and/or the wayside devices for examination to determine if the route and/or wayside devices are functioning properly, are in the proper operational state, or have been damaged and need repair, and/or need manipulation or further examination.
The image data created by the visual acquisition device can be referred to as machine vision, as the image data represents what is seen by the system in the field of view of the visual acquisition device. The image data may constitute environmental information. One or more analysis processors 1404 of the system may examine the image data to identify conditions of the robotic system, the route, the target, and/or wayside devices. Optionally, the analysis processor can examine the terrain at, near, or surrounding the route and/or wayside devices to determine if the terrain has changed such that maintenance of the route, wayside devices, and/or terrain is needed. For example, the analysis processor can examine the image data to determine if vegetation (e.g., trees, vines, bushes, and the like) is growing over the route or a wayside device (such as a signal) such that travel over the route may be impeded and/or view of the wayside device may be obscured from an operator of the robotic system. The analysis processor can represent hardware circuits and/or circuitry that include and/or are connected with one or more processors, such as one or more computer microprocessors, controllers, or the like.
As another example, the analysis processor can examine the image data to determine if the terrain has eroded away from, onto, or toward the route and/or wayside device such that the eroded terrain is interfering with travel over the route, is interfering with operations of the wayside device, or poses a risk of interfering with operation of the route and/or wayside device. Thus, the terrain “near” the route and/or wayside device may include the terrain that is within the field of view of the visual acquisition device when the route and/or wayside device is within the field of view of the visual acquisition device, the terrain that encroaches onto or is disposed beneath the route and/or wayside device, and/or the terrain that is within a designated distance from the route and/or wayside device (e.g., two meters, five meters, ten meters, or another distance).
Acquisition of image data from the visual acquisition device can allow for the analysis processor 1404 to have access to sufficient information to examine individual video frames, individual still images, several video frames, or the like, and determine the condition of the route, the wayside devices, and/or terrain at or near the wayside device. The image data optionally can allow for the analysis processor to have access to sufficient information to examine individual video frames, individual still images, several video frames, or the like, and determine the condition of the route. The condition of the route can represent the health of the route, such as a state of damage to one or more rails of a track, the presence of foreign objects on the route, overgrowth of vegetation onto the route, and the like. As used herein, the term “damage” can include physical damage to the route (e.g., a break in the route, pitting of the route, or the like), movement of the route from a prior or designated location, growth of vegetation toward and/or onto the route, deterioration in the supporting material (e.g., ballast material) beneath the route, or the like. For example, the analysis processor may examine the image data to determine if one or more rails are bent, twisted, broken, or otherwise damaged. Optionally, the analysis processor can measure distances between the rails to determine if the spacing between the rails differs from a designated distance (e.g., a gauge or other measurement of the route). The analysis of the image data by the analysis processor can be performed using one or more image and/or video processing algorithms, such as edge detection, pixel metrics, comparisons to benchmark images, object detection, gradient determination, or the like.
A communication system 1406 of the system represents hardware circuits or circuitry that include and/or are connected with one or more processors (e.g., microprocessors, controllers, or the like) and communication devices (e.g., wireless antenna 1408 and/or wired connections 1410) that operate as transmitters and/or transceivers for communicating signals with one or more locations. For example, the communication system may wirelessly communicate signals via the antenna and/or communicate the signals over the wired connection (e.g., a cable, bus, or wire such as a multiple unit cable, train line, or the like) to a facility and/or another vehicle system, or the like.
The image analysis system optionally may examine the image data obtained by the visual acquisition device to identify features of interest and/or designated targets or objects in the image data. By way of example, the features of interest can include gauge distances between two or more portions of the route. With respect to automobiles, the features of interest may include roadway markings. With respect to mining equipment, the features of interest may be ruts or hardscrabble pathways. With respect to rail vehicles, the features of interest that are identified from the image data can include gauge distances between rails of the route. The designated objects can include wayside assets, such as safety equipment, signs, signals, switches, inspection equipment, or the like. The image data can be inspected automatically by the route examination systems to determine changes in the features of interest, designated objects that are missing, designated objects that are damaged or malfunctioning, and/or to determine locations of the designated objects. This automatic inspection may be performed without operator intervention. Alternatively, the automatic inspection may be performed with the aid and/or at the request of an operator.
The image analysis system can use analysis of the image data to detect the route and obstacles on the route. The robotic system can be alerted to implement one or more responsive actions for obstacles, such as by slowing down and/or stopping the robotic system. When an obstacle is identified, one or more other responsive actions may be initiated. For example, a warning signal may be communicated (e.g., transmitted or broadcast) to one or more other robotic systems to warn the other robotic systems, a warning signal may be communicated to one or more wayside devices disposed at or near the route so that the wayside devices can communicate the warning signals to one or more other robotic systems, a warning signal can be communicated to an off-board facility that can arrange for the repair and/or further examination of the route, or the like.
In another embodiment, the image analysis system can examine the image data to identify text, signs, or the like, along the route. For example, information printed or displayed on signs, display devices, indicators from other robotic systems, and the like, may indicate speed limits, locations, warnings, upcoming obstacles, identities of other robotic systems, or the like, and may be autonomously read by the image analysis system. The image analysis system can identify information by the detection and reading of information on signs. In one aspect, the image analysis processor can detect information (e.g., text, images, or the like) based on intensities of pixels in the image data, based on wireframe model data generated based on the image data, or the like. The image analysis processor can identify the information and store the information in the memory device. The image analysis processor can examine the information, such as by using optical character recognition to identify the letters, numbers, symbols, or the like, that are included in the image data. This information may be used to autonomously and/or remotely control the robotic system, such as by communicating a warning signal to the control unit of a robotic system, which can slow the robotic system in response to reading a sign that indicates a speed limit that is slower than a current actual speed of the robotic system. As another example, this information may be used to identify the robotic system and/or cargo carried by the robotic system by reading the information printed or displayed on the robotic system.
In another example, the image analysis system can examine the image data to ensure that safety equipment on the route is functioning as intended or designed. For example, the image analysis processor can analyze image data that shows crossing equipment. The image analysis processor can examine this data to determine if the crossing equipment is functioning to notify other robotic systems at a crossing (e.g., an intersection between the route and another route, such as a road for automobiles) of the passage of the robotic system through the crossing.
In another example, the image analysis system can examine the image data to predict when repair or maintenance of one or more objects shown in the image data is needed. For example, a history of the image data can be inspected to determine if the object exhibits a pattern of degradation over time. Based on this pattern, a services team (e.g., a group of one or more personnel and/or equipment) can identify which portions of the object are trending toward a bad condition or already are in bad condition, and then may proactively perform repair and/or maintenance on those portions of the object. The image data from multiple different visual acquisition devices acquired at different times of the same objects can be examined to determine changes in the condition of the object. The image data obtained at different times of the same object can be examined in order to filter out external factors or conditions, such as the impact of precipitation (e.g., rain, snow, ice, or the like) on the appearance of the object, from examination of the object. This can be performed by converting the image data into wireframe model data.
In one aspect, the analysis processor of the image analysis system can examine and compare image data acquired by visual acquisition devices to detect hazards and obstacles ahead of the robotic system, such as obstacles in front of the robotic system along the route, detect damaged segments of the route, identify the target, identify a path to the target, and the like. For example, the robotic system can include a forward-facing visual acquisition device that generates image data representative of a field of view ahead of the robotic system along the direction of travel 1600, a sideways-facing visual acquisition device that generates image data representative of a field of view around the robotic system, and a rearward-facing camera that generates image data representative of a field of view behind the robotic system (e.g., opposite to the direction of travel of the robotic system). The robotic system optionally may include two or more visual acquisition devices, such as forward-facing, downward-facing, and/or rearward-facing visual acquisition devices that generate image data. Multi-camera systems may be useful in generating depth-sensitive imagery and 3D models.
In one embodiment, the image data from the various visual acquisition devices can be compared to benchmark visual profiles of the route by the image analysis processor to detect obstacles on the route, damage to the route (e.g., breaks and/or bending in rails of the route), or other hazards.
The image analysis processor can select one or more benchmark visual profiles from among several such profiles stored in a computer readable memory, such as the memory device. The memory device can include or represent one or more memory devices, such as a computer hard drive, a CD-ROM, DVD ROM, a removable flash memory card, a magnetic tape, or the like. The memory device can store the image data obtained by the visual acquisition devices and the benchmark visual profiles associated with a trip of the robotic system.
The benchmark visual profiles represent designated layouts of the route that the route is to have at different locations. For example, the benchmark visual profiles can represent the positions, arrangements, or relative locations of rails or opposite edges of the route when the rails or route were installed, repaired, last passed an inspection, or otherwise.
In one aspect, a benchmark visual profile is a designated gauge (e.g., distance between rails of a track, width of a road, or the like) of the route. Alternatively, a benchmark visual profile can be a previous image of the route at a selected location. In another example, a benchmark visual profile can be a definition of where the route is expected to be located in an image of the route. For example, different benchmark visual profiles can represent different shapes of the rails or edges of a road at different locations along a trip of the robotic system from one location to another.
The processor can determine which benchmark visual profile to select in the memory device based on a location of the robotic system when the image data is obtained by visual acquisition devices disposed onboard the robotic system. The processor can select the benchmark visual profile from the memory device that is associated with and represents a designated layout or arrangement of the route at the location of the robotic system when the image data is obtained. This designated layout or arrangement can represent the shape, spacing, arrangement, or the like, that the route is to have for safe travel of the robotic system. For example, the benchmark visual profile can represent the gauge and alignment of the rails of the track when the track was installed or last inspected.
In one aspect, the image analysis processor can measure a gauge of the segment of the route shown in the image data to determine if the route is misaligned.
The image analysis processor can measure a straight line or linear distance between one or more pixels in the image data that are identified as representing one rail, side, edge, or other component of the route to one or more other pixels identified as representing another rail, side, edge, or other component of the route, as shown in
The measured gauge distance can be compared to a designated gauge distance stored in the memory device onboard the robotic system (or elsewhere) for the imaged section of the route. The designated gauge distance can be a benchmark visual profile of the route, as this distance represents a designated arrangement or spacing of the rails, sides, edges, or the like, of the route. If the measured gauge distance differs from the designated gauge distance by more than a designated threshold or tolerance, then the image analysis processor can determine that the segment of the route that is shown in the image data is misaligned. For example, the designated gauge distance can represent the distance or gauge of the route when the rails of a track were installed or last passed an inspection. If the measured gauge distance deviates too much from this designated gauge distance, then this deviation can represent a changing or modified gauge distance of the route.
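As a simplified illustration, the gauge measurement and comparison described above might be sketched as follows; the meters-per-pixel scale, the 1.435-meter designated gauge, and the tolerance value are example assumptions, and the function names are hypothetical.

import math

def measure_gauge(px_rail_a, px_rail_b, meters_per_pixel):
    """Straight-line distance between pixels on opposite rails, in meters."""
    dist_px = math.dist(px_rail_a, px_rail_b)
    return dist_px * meters_per_pixel

def gauge_misaligned(measured_m, designated_m, tolerance_m=0.01):
    """Flag a segment whose measured gauge deviates beyond the tolerance."""
    return abs(measured_m - designated_m) > tolerance_m

gauge = measure_gauge((120, 400), (410, 402), meters_per_pixel=0.005)
print(round(gauge, 3), "m; misaligned:", gauge_misaligned(gauge, designated_m=1.435))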
Optionally, the image analysis processor may determine the gauge distance several times as the robotic system travels over the route, and monitor the measured gauge distances for changes. If the gauge distances change by more than a designated amount, then the image analysis processor can identify the upcoming segment of the route as being potentially misaligned. As described below, however, the change in the measured gauge distance alternatively may represent a switch in the route that the robotic system is traveling toward.
Measuring the gauge distances of the route can allow the image analysis processor to determine when one or more of the rails in the route are misaligned, even when the segment of the route includes a curve. Because the gauge distance should be constant or substantially constant (e.g., within manufacturing tolerances, such as where the gauge distances do not vary by more than 1%, 3%, 5%, or another value), the gauge distance should not significantly change in curved or straight sections of the route, unless the route is misaligned.
In one embodiment, the image analysis processor can track the gauge distances to determine if the gauge distances exhibit designated trends within a designated distance and/or amount of time. For example, if the gauge distances increase over at least a first designated time period or distance and then decrease over at least a second designated time period, or decrease over at least the first designated time period or distance and then increase over a least the second designated time period, then the image analysis processor may determine that the rails are misaligned. Optionally, the image analysis processor may determine that the rails are misaligned responsive to the gauge distances increasing then decreasing, or decreasing then increasing, as described above, within a designated detection time or distance limit.
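A rough sketch of detecting such an increase-then-decrease (or decrease-then-increase) trend over a sliding window of gauge measurements is shown below; the window length, the minimum change treated as significant, and the function name are hypothetical.

def trend_misalignment(gauges, window=10, delta=0.005):
    """Flag a rise-then-fall (or fall-then-rise) in gauge within a sliding window.

    gauges: sequence of gauge measurements taken as the vehicle moves
    window: number of consecutive measurements examined together
    delta:  minimum change treated as a real increase or decrease
    """
    for start in range(len(gauges) - window + 1):
        w = gauges[start:start + window]
        peak, trough = max(w), min(w)
        rose_then_fell = w.index(peak) < window - 1 and peak - w[0] > delta and peak - w[-1] > delta
        fell_then_rose = w.index(trough) < window - 1 and w[0] - trough > delta and w[-1] - trough > delta
        if rose_then_fell or fell_then_rose:
            return True
    return False

readings = [1.435, 1.436, 1.441, 1.447, 1.449, 1.444, 1.438, 1.436, 1.435, 1.435]
print(trend_misalignment(readings))   # True: gauge rose and then fell back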
Optionally, the benchmark visual profile may represent a former image of the route obtained by a visual acquisition device on the same or a different robotic system. For example, the benchmark visual profile may be an image or image data obtained from a visual acquisition device onboard the robotic system, and environmental information acquired by a visual acquisition device disposed off-board the robotic system can be compared to the benchmark visual profile. The designated areas can represent the locations of the pixels in the former image that have been identified as representing components of the route (e.g., rails, edges, sides, or the like, of the route).
In one aspect, the image analysis processor can map the pixels representative of components of the route to the benchmark visual profile or can map the designated areas of the benchmark visual profile to the pixels representative of the route. This mapping may include determining if the locations of the pixels representative of the components of the route in the image are in the same locations as the designated areas of the benchmark visual profile.
If the image analysis processor determines that at least a designated amount of the pixels representing one or more components of the route are outside of the designated areas in the benchmark visual profile, then the image analysis processor can identify the segment of the route that is shown in the image data as being misaligned. For example, the image analysis processor can identify groups 1902, 1904, 1906 of the pixels 1702 that represent one or more components of the route as being outside of the designated areas. If the number, fraction, percentage, or other measurement of the pixels that are representative of the components of the route and that are outside the designated areas exceeds a designated threshold (e.g., 10%, 20%, 30%, or another amount), then the segment of the route shown in the image data is identified as representing a hazard or obstacle (e.g., the route is misaligned, bent, or otherwise damaged). On the other hand, if the number, fraction, percentage, or other measurement of the pixels that are representative of components of the route and that are outside the designated areas does not exceed the threshold, then the segment of the route shown in the image data is not identified as representing a hazard or obstacle.
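The threshold comparison described above may be illustrated by the following sketch, which counts the fraction of route pixels falling outside the designated areas of the benchmark visual profile; the pixel representation and the 20% threshold are example assumptions, and the function names are hypothetical.

def fraction_outside(route_pixels, designated_area):
    """Fraction of route pixels that fall outside the benchmark's designated area.

    route_pixels: iterable of (row, col) pixels classified as route components
    designated_area: set of (row, col) pixels allowed by the benchmark profile
    """
    route_pixels = list(route_pixels)
    outside = sum(1 for p in route_pixels if p not in designated_area)
    return outside / len(route_pixels) if route_pixels else 0.0

def segment_hazardous(route_pixels, designated_area, threshold=0.20):
    """Apply a designated threshold (e.g., 20%) to the benchmark comparison."""
    return fraction_outside(route_pixels, designated_area) > threshold

benchmark = {(r, c) for r in range(100) for c in range(48, 53)}               # expected rail band
observed = [(r, 50) for r in range(70)] + [(r, 60) for r in range(70, 100)]  # last 30% shifted
print(segment_hazardous(observed, benchmark))   # True: 30% of pixels lie outside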
The image analysis processor then determines a relationship between these pixels. For example, the image analysis processor may identify a line between the pixels in the image for each rail, side, edge, or other component of the route. These lines can represent the benchmark visual profiles shown in
In one aspect, the image analysis processor can use a combination of techniques described herein for examining the route. For example, if both rails of the route are bent or misaligned from previous positions, but are still parallel or substantially parallel to each other, then the gauge distance between the rails may remain the same or substantially the same, and/or may not substantially differ from the designated gauge distance of the route. As a result, only looking at the gauge distance in the image data may result in the image analysis processor failing to identify damage (e.g., bending) to the rails. In order to avoid this situation, the image analysis processor additionally or alternatively can generate the benchmark visual profiles using the image data and compare these profiles to the image data of the rails, as described above. Bending or other misalignment of the rails may then be identified when the bending in the rails deviates from the benchmark visual profile created from the image data.
In one embodiment, responsive to the image analysis processor determining that the image data represents an upcoming obstacle on the route, the image analysis processor may generate a warning signal to notify the operator of the robotic system of the upcoming obstacle. For example, the image analysis processor can direct the control unit of the robotic system to display a warning message and/or display the image data. The robotic system then may move through the safe braking distance described above to make a decision as to whether to ignore the warning or to stop movement of the robotic system. If the obstacle is detected within the safe braking distance based on the image data obtained from one or more visual acquisition devices disposed onboard the robotic system, then the robotic system may be notified by the image analysis processor of the obstacle, thereby allowing reaction time to try to mitigate the obstacle, such as by stopping or slowing movement of the robotic system.
The image analysis system can receive image data from one or more visual acquisition devices disposed onboard one or more robotic systems, convert the image data into wireframe model data, and examine changes in the wireframe model data over time and/or compare wireframe model data from image data obtained by different visual acquisition devices to identify obstacles in the route, predict when the route will need maintenance and/or repair, etc. The image data can be converted into the wireframe model data by identifying pixels or other locations in the image data that are representative of the same or common edges, surfaces, or the like, of objects in the image data. The pixels or other locations in the image data that represent the same objects, surfaces, edges, or the like, may be identified by the image analysis system by determining which pixels or other locations in the image data have similar image characteristics and associating those pixels or other locations having the same or similar image characteristics with each other.
The image characteristics can include the colors, intensities, luminance, locations, or other information of the pixels or locations in the image data. Those pixels or locations in the image data having colors (e.g., wavelengths), intensities, and/or luminance that are within a designated range of each other and/or that are within a designated distance from each other in the image data may be associated with each other by the image analysis system. The image analysis system can group these pixels or locations with each other because the pixels or locations in the image data likely represent the same object (e.g., a rail of a track being traveled by a rail vehicle, sides of a road, or the like).
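A very simplified sketch of grouping pixels by similar intensity and nearby location, as a precursor to building a wireframe model, is shown below; the grouping rule, tolerances, and function name are hypothetical, and a practical implementation would use more robust image processing.

def group_pixels(pixels, intensity_tol=10, distance_tol=2.0):
    """Group pixels with similar intensity and nearby locations.

    pixels: list of (row, col, intensity) samples from the image data
    Returns a list of groups; each group likely traces one edge or rail.
    """
    groups = []
    for row, col, val in pixels:
        for g in groups:
            r0, c0, v0 = g[-1]                      # compare against the group's last member
            close = abs(row - r0) <= distance_tol and abs(col - c0) <= distance_tol
            similar = abs(val - v0) <= intensity_tol
            if close and similar:
                g.append((row, col, val))
                break
        else:
            groups.append([(row, col, val)])        # start a new group (new edge)
    return groups

# Two synthetic edges: a bright rail and a darker shadow line.
samples = [(r, 10, 200) for r in range(5)] + [(r, 40, 80) for r in range(5)]
print(len(group_pixels(samples)), "pixel groups -> candidate wireframe lines")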
The pixels or other locations that are associated with each other can be used to create a wireframe model of the image data, such as an image that represents the associated pixels or locations with lines of the same or similar colors, and other pixels or location with a different color. The image analysis system can generate different wireframe models of the same segment of a route from different sets of image data acquired by different visual acquisition devices and/or at different times. The image analysis system can compare these different wireframe models and, depending on the differences between the wireframe models that are identified, identify and/or predict obstacles and whether the articulable arm may be needed to manipulate the target.
In one aspect, the image analysis system may associate different predicted amounts of difficulty in surmounting an obstacle on the route with different changes in the wireframe data. For example, detection of a bend or other misalignment in the route based on changes in the wireframe model data may be associated with more damage to the route than other types of changes in the wireframe model data. As another example, the changing of a solid line in earlier wireframe model data to a segmented line in later wireframe model data can be associated with different degrees of damage to the route based on the number of segments in the segmented line, the size of the segments and/or gaps between the segments in the segmented line, the frequency of the segments and/or gaps, or the like. Based on the degree of damage identified from changes in the wireframe model data, the image analysis system may automatically re-route or stop the robotic system.
At 2204, the image data may be communicated to the robotic system from the off-board device. For example, the image data may be communicated to a transportation system receiver on the robotic system. The image data can be wirelessly communicated. The image data can be communicated as the image data is obtained, or may be communicated responsive to the robotic system entering into or leaving a designated area, such as a geofence. For example, the visual acquisition device on the wayside device may communicate image data to the robotic system upon the robotic system entering a communication range of a communication device of the visual acquisition device and/or upon the wayside device receiving a data transmission request from the robotic system. In an embodiment in which the aerial device leads or trails the robotic system, the image data from the visual acquisition device may be communicated continuously or at least periodically as the image data is obtained and the robotic system moves along the route.
At 2206, the image data is examined for one or more purposes. These purposes may be to control or limit control of the robotic system, to control operation of the visual acquisition device, to identify damage to the robotic system, to assess the route ahead of the robotic system, to assess the space between the arm and the target, and the like, and/or to identify obstacles in the way of the robotic system. The image data may be used to generate environmental information, and optionally an environmental model. This may be useful in selecting a movement plan for the arm to contact the target.
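For the plan-selection step, a minimal sketch is shown below, assuming each candidate movement plan carries an estimated completion time; the field names and the time limit are illustrative.

```python
def select_plan(plans, time_limit_s):
    """Pick the candidate arm-movement plan with the shortest estimated
    duration that still fits inside the allowed time frame.
    Each plan is a dict with an 'estimated_time_s' entry (illustrative)."""
    feasible = [p for p in plans if p["estimated_time_s"] <= time_limit_s]
    if not feasible:
        return None
    return min(feasible, key=lambda p: p["estimated_time_s"])

plans = [
    {"name": "straight-line path",      "estimated_time_s": 4.2},
    {"name": "joint-space path",        "estimated_time_s": 3.1},
    {"name": "detour around obstacle",  "estimated_time_s": 7.8},
]
print(select_plan(plans, time_limit_s=5.0))  # joint-space path
```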
Further, in one embodiment, if the visual acquisition device is disposed onboard an aerial device flying ahead of the robotic system, then the image data can be analyzed to determine whether an obstacle exists ahead of the robotic system along the direction of travel of the robotic system and/or between the arm and the target. The image data may be examined using one or more image analysis processors onboard the robotic system and/or onboard the aerial device. For example, in an embodiment, the aerial device includes the one or more image analysis processors, and, responsive to identifying an obstacle in an upcoming segment of the route, the aerial device can communicate a warning signal and/or a control signal to the robotic system in the form of environmental information. The warning signal may notify an operator or controller of the robotic system of the obstacle. The control signal can interact with a vehicle control system, such as a Positive Train Control (PTC) system to automatically or autonomously slow the movement of the robotic system or even bring it to a stop. The robotic system's propulsion and navigation systems may maneuver the robotic system around the obstacle to arrive near enough to the target that the arm can be moved into contact therewith.
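The message format below is a hypothetical sketch of the warning or control signal an aerial device might return after analyzing the image data; the distance threshold separating a warning from a stop command is an assumed value.

```python
def aerial_report(obstacle_detected, distance_to_obstacle_m, stop_distance_m=50.0):
    """Decide what the aerial device sends back to the robotic system.
    Message fields and thresholds are illustrative, not from the disclosure."""
    if not obstacle_detected:
        return {"type": "clear"}
    if distance_to_obstacle_m <= stop_distance_m:
        # Close obstacle: control signal asking the vehicle controller to stop.
        return {"type": "control", "command": "stop",
                "distance_m": distance_to_obstacle_m}
    # Farther obstacle: warning so the operator or controller can slow or re-plan.
    return {"type": "warning", "command": "slow",
            "distance_m": distance_to_obstacle_m}

print(aerial_report(True, 30.0))    # control signal: stop
print(aerial_report(True, 120.0))   # warning signal: slow
```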
An image analysis system can examine the image data and, if it is determined that one or more obstacles are disposed ahead of the robotic system, then the image analysis system can generate a warning or control signal that is communicated to the control unit of the robotic system. This signal can be received by the control unit and, responsive to receipt of this control signal, the control unit can slow or prevent movement of the robotic system. For example, the control unit may disregard movement of controls by an onboard operator attempting to move the robotic system, and/or the control unit may engage brakes and/or disengage a propulsion system of the robotic system (e.g., turn off or otherwise deactivate an engine, motor, or other propulsion-generating component of the robotic system). In one aspect, the image analysis system can examine the image data to determine if the route is damaged (e.g., the rails on which a robotic system is traveling are broken, bent, or otherwise damaged), if obstacles are on the route ahead of the robotic system (e.g., there is another robotic system or object on the route), if the switches or signals at an intersection are operating properly, and the like.
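On the receiving side, a control unit reacting to such a signal might look roughly like the sketch below; the brake, propulsion, and speed-limit methods are stand-ins for whatever interfaces the actual control unit exposes.

```python
class ControlUnit:
    """Hypothetical onboard control unit reacting to signals from the
    image analysis system; the actuator methods below are stand-ins."""

    def __init__(self):
        self.ignore_operator_input = False

    def on_signal(self, signal):
        """Handle a warning or control signal received from the analysis system."""
        if signal.get("type") == "control" and signal.get("command") == "stop":
            self.ignore_operator_input = True   # disregard manual movement commands
            self.engage_brakes()
            self.deactivate_propulsion()
        elif signal.get("type") == "warning":
            self.limit_speed(signal.get("max_speed_mps", 0.5))

    def engage_brakes(self):
        print("brakes engaged")

    def deactivate_propulsion(self):
        print("propulsion deactivated")

    def limit_speed(self, max_speed_mps):
        print(f"speed limited to {max_speed_mps} m/s")

unit = ControlUnit()
unit.on_signal({"type": "control", "command": "stop", "distance_m": 30.0})
```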
Optionally, the method may include controlling the aerial device to fly relative to the robotic system. For example, the aerial device may be controlled to fly a designated distance from the robotic system along a path of the route such that the aerial device maintains the designated distance from the robotic system as the robotic system moves along the route. In another example, the aerial device may be controlled to fly to a designated location along the route ahead of the robotic system and to remain stationary in the air at the designated location for a period of time as the robotic system approaches the designated location.
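A minimal sketch of the first example, maintaining a designated distance along the route, is shown below as a one-dimensional proportional controller; the lead distance and gain are illustrative values.

```python
def follow_distance_command(robot_pos_m, aerial_pos_m, lead_distance_m=30.0, gain=0.5):
    """Simple proportional controller for the aerial device's along-route
    position: command a velocity that drives the separation toward the
    designated lead distance (1-D along the route; values illustrative)."""
    separation = aerial_pos_m - robot_pos_m
    error = lead_distance_m - separation
    return gain * error    # commanded along-route velocity, m/s

print(follow_distance_command(robot_pos_m=100.0, aerial_pos_m=120.0))  # +5.0: move farther ahead
print(follow_distance_command(robot_pos_m=100.0, aerial_pos_m=140.0))  # -5.0: fall back toward the lead distance
```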
One or more embodiments herein are directed to providing image data of a route to a robotic system on the route from a mobile platform or a fixed platform remote from the robotic system to enhance the awareness and information available to the operator of the robotic system. The mobile platform may be an aerial device that flies above the route, such as a quad rotor robot that is assigned to the robotic system. The aerial device may be controlled by the crew or controller of the robotic system, such as to maintain a specified distance ahead of the robotic system, to travel to a specified location ahead of the robotic system, to maintain a specified height above the route, to use a specified sensor (e.g., an infrared camera versus a camera in the visible wavelength spectrum) to capture the image data, and the like. The aerial device may be able to follow a path of the route based on known features of the route in order to provide idealized viewing points (which could be modified by the controller or the crew during the flight of the aerial device). The aerial device may dock on the robotic system, and the aerial device may return to the robotic system for recharging automatically in response to a battery level falling below a designated threshold.
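The battery-triggered return-to-dock behavior could be expressed as simply as the sketch below; the battery threshold is an assumed value.

```python
def aerial_mission_step(battery_fraction, docked, low_battery_threshold=0.25):
    """Return the aerial device's next action; the threshold is a placeholder."""
    if docked:
        return "recharge" if battery_fraction < 0.95 else "idle"
    if battery_fraction < low_battery_threshold:
        return "return_to_dock"   # fly back to the robotic system automatically
    return "continue_survey"

print(aerial_mission_step(0.60, docked=False))  # continue_survey
print(aerial_mission_step(0.20, docked=False))  # return_to_dock
```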
A suitable fixed platform may include permanent wayside equipment. For example, each grade crossing and/or other designated locations along the route could include the visual acquisition device that captures image data of the route used to detect obstacles. The image data from each visual acquisition device could become automatically accessible to the crew of a robotic system on the route as the robotic system enters a determined or predefined range of the wayside equipment (e.g., within directional Wi-Fi range, within stopping distance plus a margin of error, etc.).
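A hypothetical version of that range check is sketched below, combining a directional Wi-Fi range with a stopping-distance-plus-margin test; all of the parameter values are placeholders.

```python
def wayside_feed_accessible(distance_to_crossing_m, speed_mps,
                            max_decel_mps2=1.0, margin_m=50.0, wifi_range_m=300.0):
    """Hypothetical check for when a robotic system may pull image data from
    wayside equipment: within directional Wi-Fi range, or within its
    stopping distance plus a margin (all parameters illustrative)."""
    stopping_distance_m = speed_mps ** 2 / (2.0 * max_decel_mps2)
    return (distance_to_crossing_m <= wifi_range_m
            or distance_to_crossing_m <= stopping_distance_m + margin_m)

print(wayside_feed_accessible(distance_to_crossing_m=250.0, speed_mps=10.0))  # True: inside Wi-Fi range
print(wayside_feed_accessible(distance_to_crossing_m=400.0, speed_mps=30.0))  # True: inside stopping distance plus margin
```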
Optionally, the aerial devices and/or wayside devices that hold the visual acquisition devices may include onboard processing capability (e.g., one or more processors) that may be configured to detect anomalies in the captured segments of the route. The aerial devices and/or wayside devices may be configured to send notifications to the associated robotic system, to other nearby, non-associated robotic systems on the route, to a dispatch location, or the like. Furthermore, the aerial devices and/or wayside devices may include two-way audio capability such that the devices may provide audible warnings of the approaching robotic system on the route, such as at a route crossing. In addition, the aerial devices and/or wayside devices may allow the operator and/or crew of the associated robotic system to communicate, via the aerial device or wayside device, potential obstacles to other nearby robotic systems on the route in the form of still images, video, audio messages, text messages, or the like.
In one embodiment, a system (e.g., an off-board camera system) includes a camera and a communication device. The camera is configured to be disposed on an off-board device remotely located from a robotic system as the robotic system moves along a route. The camera is configured to generate image data representative of an upcoming segment of the route relative to a direction of travel of the robotic system. The communication device is configured to be disposed on the off-board device and to wirelessly communicate the image data to the robotic system during movement of the robotic system along the route.
A processing unit, processor, or computer that is “configured to” perform a task or operation may be particularly structured or programmed to perform the task or operation (e.g., having one or more programs or instructions stored thereon or used in conjunction therewith tailored or intended to perform the task or operation, and/or having an arrangement of processing circuitry tailored or intended to perform the task or operation).
As used herein, the terms “computer,” “controller,” and “module” may each include any processor-based or microprocessor-based system including systems using microcontrollers, reduced instruction set computers (RISC), application specific integrated circuits (ASICs), logic circuits, GPUs, FPGAs, and any other circuit or processor capable of executing the functions described herein. The above examples are exemplary only, and are thus not intended to limit in any way the definition and/or meaning of the term “module” or “computer.” Embodiments may be implemented in hardware, software, or a combination thereof. The various embodiments and/or components, for example, the modules, or components and controllers therein, also may be implemented as part of one or more computers or processors. The computer or processor may include a computing device, an input device, a display unit and an interface, for example, for accessing the Internet. The computer or processor may include a microprocessor. The microprocessor may be connected to a communication bus. The computer or processor may also include a memory. The memory may include Random Access Memory (RAM) and Read Only Memory (ROM). The computer or processor further may include a storage device, which may be a hard disk drive or a removable storage drive such as a solid state drive, optical drive, and the like. The storage device may be other similar means for loading computer programs or other instructions into the computer or processor. The computer, module, or processor executes a set of instructions that are stored in one or more storage elements, in order to process input data. The storage elements may also store data or other information as desired or needed. The storage element may be in the form of an information source or a physical memory element within a processing machine.
Various embodiments will be better understood when read in conjunction with the appended drawings. To the extent that the figures illustrate diagrams of the functional blocks of various embodiments, the functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (e.g., processors, controllers or memories) may be implemented in a single piece of hardware (e.g., a general purpose signal processor or random access memory, hard disk, or the like) or multiple pieces of hardware. Similarly, any programs may be stand-alone programs, may be incorporated as subroutines in an operating system, may be functions in an installed software package, and the like. The various embodiments are not limited to the arrangements and instrumentality shown in the drawings.
As used herein, the terms “system,” “unit,” or “module” may include a hardware and/or software system that operates to perform one or more functions. For example, a module, unit, or system may include a computer processor, controller, or other logic-based device that performs operations based on instructions stored on a tangible and non-transitory computer readable storage medium, such as a computer memory. Alternatively, a module, unit, or system may include a hard-wired device that performs operations based on hard-wired logic of the device. The modules or units shown in the attached figures may represent the hardware that operates based on software or hardwired instructions, the software that directs hardware to perform the operations, or a combination thereof. The hardware may include electronic circuits that include and/or are connected to one or more logic-based devices, such as microprocessors, processors, controllers, or the like. These devices may be off-the-shelf devices that are appropriately programmed or instructed to perform operations described herein from the instructions described above. Additionally or alternatively, one or more of these devices may be hard-wired with logic circuits to perform these operations.
As used herein, an element or step recited in the singular and preceded by the word “a” or “an” should be understood as not excluding plural of said elements or steps, unless such exclusion is explicitly stated. Furthermore, references to “one embodiment” are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments “comprising” or “having” an element or a plurality of elements having a particular property may include additional such elements not having that property.
The set of instructions may include various commands that instruct the computer, module, or processor as a processing machine to perform specific operations such as the methods and processes of the various embodiments described and/or illustrated herein. The set of instructions may be in the form of a software program. The software may be in various forms such as system software or application software and which may be embodied as a tangible and non-transitory computer readable medium. Further, the software may be in the form of a collection of separate programs or modules, a program module within a larger program or a portion of a program module. The software also may include modular programming in the form of object-oriented programming. The processing of input data by the processing machine may be in response to operator commands, or in response to results of previous processing, or in response to a request made by another processing machine.
As used herein, the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by a computer, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory. The above memory types are exemplary only, and are thus not limiting as to the types of memory usable for storage of a computer program. The individual components of the various embodiments may be virtualized and hosted by a cloud-type computational environment, for example to allow for dynamic allocation of computational power, without requiring the user to be concerned with the location, configuration, and/or specific hardware of the computer system.
The above description is illustrative and not restrictive. For example, the above-described embodiments (and/or aspects thereof) may be used in combination with each other. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. Dimensions, types of materials, orientations of the various components, and the number and positions of the various components described herein are intended to define the parameters of certain embodiments, are by no means limiting, and are merely exemplary embodiments. Many other embodiments and modifications within the spirit and scope of the claims will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. Further, the limitations of the following claims are not written in means-plus-function format and are not intended to be interpreted based on 35 U.S.C. § 112(f) unless and until such claim limitations expressly use the phrase “means for” followed by a statement of function void of further structure.
This written description uses examples to disclose the various embodiments, and also to enable a person having ordinary skill in the art to practice the various embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the various embodiments is defined by the claims, and may include other examples that occur to those of ordinary skill in the relevant art. Such other examples are intended to be within the scope of the claims if the examples have structural elements that do not differ from the literal language of the claims, or the examples include equivalent structural elements with insubstantial differences from the literal language of the claims.
This application is a continuation-in-part of and claims priority to U.S. patent application Ser. No. 15/293,905, filed 14 Oct. 2016, which claims priority to U.S. Provisional Application No. 62/343,375, filed 31 May 2016. This application also is a continuation-in-part of and claims priority to U.S. patent application Ser. No. 15/061,129, filed 3 Mar. 2016. The entire contents of the foregoing applications are incorporated herein by reference.
Related U.S. Application Data
Provisional application: No. 62/343,375, filed May 2016 (US).
Parent application: Ser. No. 15/293,905, filed Oct. 2016 (US); child application: Ser. No. 16/596,679 (US).
Parent application: Ser. No. 15/061,129, filed Mar. 2016 (US); child application: Ser. No. 15/293,905 (US).