This relates to a system that facilitates positioning a tool in a work space or at a worksite, such as for example a construction site. When the interior of a building is being finished, connectors, anchors and the like are attached to the floors, ceilings and other structures in the building, and cuts are made and holes drilled using power saws and drills. All of this must be accomplished using special power tools at predetermined locations, requiring that the tools be operated at numerous precisely defined positions in the building.

For example, nail guns, power saws, powder anchor tools, and the like are used to nail, cut, install fasteners, and perform other operations at predetermined points within the building with little error. In any building, a large number of electrical, plumbing, and HVAC components must be properly sited and installed, usually with power tools. Additionally, finishing a building interior also requires that a number of different tools that are not powered be operated at precisely defined positions, such as for example reinforcement bar scanners. Positioning both power tools and non-power tools must be accomplished quickly and with some precision with respect to the surrounding walls, ceilings and floors as they are roughed in.

Typically, it has required a significant amount of labor to lay out various construction points at such a construction site. Teams of workers have been needed to measure and mark predetermined locations. It will be appreciated that this process has been subject to errors, resulting from measurement mistakes and from accumulated errors. Further, the cost of this layout process and the time needed to perform the layout process have both been significant.
Various location determining systems, including systems that incorporate one or more robotic total stations, have been used for building layout. The total station in such a system, positioned at a fixed, known location, directs a beam of laser light at a retroreflective target. As the target moves, robotics in the total station cause the beam of light to track the target. By measuring the time of travel of the beam from the total station to the retroreflective target and then back to the total station, the distance to the target is determined. The directional orientation of the beam to the target is also measured. Since the dimensional coordinates of the total station are known, the dimensional coordinates of the retroreflective target can easily be determined. Based on the measured position of the retroreflective target, and the desired position of some building feature, such as a drilled hole, or a fastener, the operator can move the reflector to the desired position, and mark the position.
Although position determination systems, such as ranging radio systems and robotic total station systems, can facilitate and speed the layout process, nevertheless the layout process has continued to be lengthy, tedious, and expensive.
A system for assisting an operator in positioning an operating element of any of a plurality of tools at a desired location at a worksite includes a plurality of fixed position video imaging devices located at known positions at the worksite. Each of the imaging devices has a known field of view. The tool has an operating element. A processor is responsive to the plurality of fixed position video imaging devices for determining the tool being viewed, and the position and orientation of the tool and the operating element of the tool. A radio transmitter is responsive to the processor for transmitting the position and orientation of the tool and the operating element and the desired position and orientation of the tool and the operating element to a receiver with the tool operator. A radio receiver and a display, responsive to the radio receiver, are carried by the tool operator, such that the operator is assisted in moving the operating element and the tool to a desired position. A memory may have a database of the digital image and dimensions of each of the plurality of tools, and a database of the desired locations at the worksite for operating the tool. A moveable video imaging device may also be used.
A method of assisting an operator in the use of the operating element of a tool at a worksite comprises the steps of providing a tool at the worksite, providing a plurality of video imaging devices at known positions with known fields of view in the work space, at least two of the video imaging devices providing an image of the tool at the worksite, and providing a database specifying the image and dimensions of the tool, including the operating element. The location and orientation of the tool are then determined based on the images of the tool from the at least two video imaging devices.
The method of assisting an operator in the use of the operating element of a tool at desired locations at a worksite may further include the steps of identifying a desired location at the worksite at which the tool is to be used, and displaying to the operator of the tool the position and orientation of the tool and the desired location at the worksite.
Reference is made to
The system for assisting an operator in positioning an operating element of a tool includes a plurality of fixed position video imaging devices, shown as digital video imaging devices 18, 20, 22, and 24. Each of the imaging devices has been leveled, is located at a known position at the worksite 11, and faces in a known direction. As a consequence, each of the imaging devices has a known field of view. As shown in
As shown in
(X_2 − X_1)/L_2 = (X_1 − X_P)/L_1 and

X_P = X_1 + (L_1/L_2)(X_1 − X_2).

Similarly,

Y_P = Y_1 + (L_1/L_2)(Y_1 − Y_2), and

Z_P = Z_1 + (L_1/L_2)(Z_1 − Z_2).

If L_1 = L_2, then these relationships simplify even further to

X_P = 2X_1 − X_2,

Y_P = 2Y_1 − Y_2, and

Z_P = 2Z_1 − Z_2.
Thus, if the three-dimensional coordinates of the points 26 and 28 are determined, the three-dimensional coordinates of the point 23 are also known.
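This extrapolation can be sketched in code (a minimal illustration for clarity only; the point coordinates and distances below are hypothetical, and the function name is not from the description):

```python
# Extrapolate the working point P (e.g., point 23, at the tool's
# operating element) from two visible reference points on the tool:
# point 1 (e.g., point 26) and point 2 (e.g., point 28).
# l1 is the distance from P to point 1; l2 is the distance from
# point 1 to point 2.

def extrapolate_tip(p1, p2, l1, l2):
    """Return P = p1 + (l1/l2) * (p1 - p2), applied per coordinate."""
    return tuple(a + (l1 / l2) * (a - b) for a, b in zip(p1, p2))

# Hypothetical coordinates with L1 == L2, so P = 2*p1 - p2:
p1 = (1.0, 2.0, 3.0)   # point 26
p2 = (2.0, 4.0, 6.0)   # point 28
print(extrapolate_tip(p1, p2, 1.0, 1.0))  # -> (0.0, 0.0, 0.0)
```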
The coordinates of points 23, 26 and 28 are determined by use of the vector information provided by the video devices 24. The system further includes a processor 40 (
The processor 40 may receive the coordinates of each of the video devices 24 through a manual input at 42, or by any other appropriate means. Alternatively, the system may initially determine these coordinates based on the vectors from the devices 24 to targets 50. Targets 50 are positioned at known locations at the worksite and permit each of the video devices 24 to be deployed with a minimum of effort.
In any event, the position of point 23 is determined either by direct observation by devices 24 or in relation to points 26 and 28, and compared with data stored in memory 52. Memory 52 has data stored specifying a building information model (BIM) for the building 11. The memory 52 further has data specifying one or more desired locations for operation of the tool at the worksite. The memory 52 also has data stored specifying the appearance and dimensions of the tool 10, as well as other tools that may be used at the worksite. The processor 40 determines the location and orientation of the tool 10 in response to the images of the tool 10 provided by the plurality of video imaging devices 24 and compares this with a desired point of operation and tool orientation. Additionally, the processor may automatically determine which of the various tools that may be used at the worksite is then in the field of view of the video devices 24. This information is displayed on a display 46, as well as supplied to a radio transmitter 60 for transmission to the tool operator. The information is received by receiver 19 and then displayed on display 21 so that the operator may move the tool to the desired location at the worksite for operation.
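The comparison of the measured tool position with the stored desired operating locations can be sketched as follows (a hedged illustration only; the function, the tolerance value, and the coordinates are hypothetical, not part of the description):

```python
import math

def nearest_desired_point(tool_xyz, desired_points, tolerance=0.01):
    """Find the stored operating location closest to the tool's
    operating element and report whether the element is within
    `tolerance` (in metres, here) of it."""
    best = min(desired_points, key=lambda p: math.dist(tool_xyz, p))
    return best, math.dist(tool_xyz, best) <= tolerance

# Hypothetical desired operating locations from the stored database:
desired = [(5.0, 2.0, 0.0), (7.5, 2.0, 0.0)]
point, in_position = nearest_desired_point((5.002, 1.999, 0.0), desired)
# point is (5.0, 2.0, 0.0); in_position is True, so the display
# could indicate to the operator that the tool may be operated.
```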
The video devices 24 are shown in
In use, the video devices 24 are positioned at the worksite, leveled manually or automatically, and their three-dimensional coordinates noted either manually or automatically by reference to precisely located targets. The tool 10 is then moved by the operator so that the operating element of the tool is at a desired location, as indicated on display 21. The tool is then operated, and the tool moved to the next point of operation. When the tool is properly positioned and operated, a switch associated with trigger 15 may be actuated, permitting the system to keep track of the desired locations where the tool has been operated.
Reference is now made to
where f is the camera focal length; x_o, y_o are the image-space coordinates of the principal point; X_L, Y_L, Z_L are the object-space coordinates of the exposure station L; X_A, Y_A, Z_A are the object-space coordinates of the arbitrary point A; and x_a, y_a are the image-space coordinates of the point A. The m's are the elements of the rotation matrix, which can be calculated from the three rotation angles (ω, φ, κ).
The nonlinear collinearity equations are linearized by using Taylor's theorem. In linearizing them, the collinearity equations are rewritten as follows:
where
q = m_31(X_A − X_L) + m_32(Y_A − Y_L) + m_33(Z_A − Z_L)

r = m_11(X_A − X_L) + m_12(Y_A − Y_L) + m_13(Z_A − Z_L)

s = m_21(X_A − X_L) + m_22(Y_A − Y_L) + m_23(Z_A − Z_L)
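As a hedged illustration of these quantities (sign and rotation conventions vary between photogrammetry texts, and the focal length and coordinates below are made up for the example), the rotation matrix and the collinearity projection can be sketched as:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix elements m_ij from the three rotation angles
    (omega, phi, kappa), in one conventional photogrammetric form."""
    so, co = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,  so * sp * ck + co * sk, -co * sp * ck + so * sk],
        [-cp * sk, -so * sp * sk + co * ck, co * sp * sk + so * ck],
        [sp,       -so * cp,                co * cp],
    ]

def project(f, x0, y0, cam_xyz, point_xyz, m):
    """Collinearity equations: image-space coordinates (x_a, y_a)
    of object point A, using r, s, q as defined above."""
    d = [point_xyz[i] - cam_xyz[i] for i in range(3)]
    r = m[0][0] * d[0] + m[0][1] * d[1] + m[0][2] * d[2]
    s = m[1][0] * d[0] + m[1][1] * d[1] + m[1][2] * d[2]
    q = m[2][0] * d[0] + m[2][1] * d[1] + m[2][2] * d[2]
    return x0 - f * r / q, y0 - f * s / q

# Zero rotation (m is the identity), camera 10 units above the origin:
m = rotation_matrix(0.0, 0.0, 0.0)
xa, ya = project(0.05, 0.0, 0.0, (0.0, 0.0, 10.0), (2.0, 1.0, 0.0), m)
```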
According to Taylor's theorem, the collinearity equation may be expressed in linearized form by taking partial derivatives with respect to the unknowns. In this case, the unknowns are the object coordinates of the arbitrary point A. The following equations are simplified forms of the linearized collinearity equation used to estimate the object coordinates of the arbitrary point A.
b_14 dX_A + b_15 dY_A + b_16 dZ_A = J + v_x

b_24 dX_A + b_25 dY_A + b_26 dZ_A = K + v_y
where,
Assume that there are three cameras, C1, C2, and C3, with known interior and exterior orientation parameters. The unknowns are the object coordinates of two arbitrary points A and B. Then the matrix form of the above linearized collinearity equation is written as follows.

The least-squares solution of the above equation can be obtained from the following equation:
X = (A^T A)^{-1} A^T L
The above equation provides the three-dimensional object-space coordinates of the arbitrary points A and B. Therefore, the three-dimensional orientation of a vector between the two points can be determined directly.
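The least-squares step can be sketched numerically (the matrix entries below are arbitrary stand-ins for the b-coefficients and the J/K terms of the linearized collinearity equations, chosen only so the example is easy to check):

```python
import numpy as np

# Overdetermined linear system A x = L assembled from several camera
# observations; the values are illustrative placeholders.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
L = np.array([1.0, 2.0, 3.0, 3.0])

# Normal-equations form X = (A^T A)^-1 A^T L:
X = np.linalg.inv(A.T @ A) @ A.T @ L

# In practice np.linalg.lstsq is better conditioned than forming
# the normal equations explicitly; both give the same solution here:
X_lstsq, *_ = np.linalg.lstsq(A, L, rcond=None)
```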
Reference is made to
As discussed above, the system memory 52 may include data defining the digital image and dimensions of each of a plurality of tools, so that the system can distinguish among the various tools that may be used by a worker. The system may then determine location based on a recognition of the overall three-dimensional image of the tool, or based on certain features of the tool, whether those features were added to the tool for this specific purpose or built into the tool for this or another purpose.
The system is capable of finding and tracking various construction tools as they move about the building construction site. It will be appreciated that each video imaging device viewing the construction site will provide a huge amount of video data for analysis. This analysis task may be simplified by only looking at those portions of each digital image where movement is sensed. While tools will be stationary from time to time, when they are in use by a workman, much of the time they will be moving from point to point. Once a tool and its features are located, the system can then continue to monitor the position of the tool, even when the tool is stationary.
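One simple way to restrict the analysis to regions where movement is sensed is frame differencing, sketched below (an illustrative sketch only; the threshold value and array sizes are made up, and a real system would work on full camera frames):

```python
import numpy as np

def moving_regions(prev_frame, frame, threshold=25):
    """Return a boolean mask of pixels whose grayscale intensity
    changed by more than `threshold` between consecutive frames;
    only these regions need be searched for the tool."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy 4x4 frames: a small bright patch "moves" into view.
prev_frame = np.zeros((4, 4), dtype=np.uint8)
frame = prev_frame.copy()
frame[1:3, 1:3] = 200
mask = moving_regions(prev_frame, frame)  # True only in the 2x2 patch
```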
If desired, a system may be configured to determine the coordinates of a point of interest in two dimensional space. Such a two dimensional system may be used, for example, to lay out positions on the floor of a building for operation of tools. Only one video device need be used for two dimensional operation, although using additional video devices increases accuracy and reduces the risk that the tool will be moved to a location at the worksite where there is no coverage by a video device.
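For the single-camera two-dimensional case, the camera's viewing ray to the tool can be intersected with the floor plane. A minimal sketch, assuming a level floor at z = 0 and a known ray direction (the function and coordinates are hypothetical):

```python
def floor_intersection(cam, direction):
    """Intersect a camera ray with the floor plane z = 0.
    `cam` is the camera position, `direction` a vector toward the
    observed point; returns the (x, y) floor coordinates."""
    cx, cy, cz = cam
    dx, dy, dz = direction
    if dz == 0:
        raise ValueError("ray parallel to the floor")
    t = -cz / dz
    return cx + t * dx, cy + t * dy

# Camera 3 m above the floor, looking down at 45 degrees along x:
print(floor_intersection((0.0, 0.0, 3.0), (1.0, 0.0, -1.0)))  # -> (3.0, 0.0)
```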
Other variations in the system depicted in
Other arrangements can be used to determine the position and orientation of the tool.
Number | Name | Date | Kind |
---|---|---|---|
4942539 | McGee et al. | Jul 1990 | A |
6536536 | Gass | Mar 2003 | B1 |
6671058 | Braunecker et al. | Dec 2003 | B1 |
6782644 | Fujishima et al. | Aug 2004 | B2 |
6959868 | Tsikos et al. | Nov 2005 | B2 |
7540334 | Gass et al. | Jun 2009 | B2 |
8229595 | Seelinger et al. | Jul 2012 | B2 |
20030038179 | Tsikos | Feb 2003 | A1 |
20030147727 | Fujishima | Aug 2003 | A1 |
20080047170 | Nichols | Feb 2008 | A1 |
20080196912 | Gass | Aug 2008 | A1 |
20100046791 | Glickman | Feb 2010 | A1 |
20100066676 | Kramer | Mar 2010 | A1 |
20100234993 | Seelinger | Sep 2010 | A1 |
20120136475 | Kahle | May 2012 | A1 |
20130137079 | Kahle et al. | May 2013 | A1 |
20130250117 | Pixley | Sep 2013 | A1 |
Entry |
---|
“PCT/US2014/025073 PCT Search Report and Written Opinion”, Nov. 6, 2014, 18 Pages. |
Gong, et al., “An Object Recognition, Tracking and Contextual Reasoning-based Video Interpretation Method for Rapid Productivity Analysis of Construction Operations”, Automation in Construction, Elsevier Science Publishers, Amsterdam, NL., May 9, 2011, 1121-1226. |
3D reconstruction from multiple images, http://en.wikipedia.org/wiki/3D_reconstruction_from_multiple_images, printed Oct. 8, 2012, pp. 1-6. |
Photogrammetry, http://en.wikipedia.org/wiki/Photogrammetry, printed Oct. 9, 2012, pp. 1-3. |
Number | Date | Country | |
---|---|---|---|
20140267685 A1 | Sep 2014 | US |