The invention relates to mobile instruments for gathering data.
Various instruments are available for gathering image and/or spatial data. Two such instruments are described in the Applicant's U.S. Pat. No. 7,647,197 and PCT application PCT/NZ2011/000257.
Reference to any prior art in this specification does not constitute an admission that such prior art forms part of the common general knowledge.
It is an object of the invention to provide an improved mobile instrument and/or associated method or at least to provide the public with a useful choice.
Each object is to be read disjunctively with the object of at least providing the public with a useful choice.
In a first aspect the invention provides a mobile handheld instrument including:
a camera configured to capture an image;
a display configured to display the image;
a processor configured to determine an orientation of a surface within the image;
a user interface configured to receive a user selection of a region on the surface;
wherein the user selection of the region is forced into alignment with the determined orientation of the surface.
Preferably the display is configured to display the selected region overlaid on the image.
The region may be a one dimensional region. Preferably the user selection of the one dimensional region is forced into alignment with a true space horizontal or true space vertical based on the determined orientation of the surface.
Preferably the region is a two dimensional region. Preferably the user selection of the region is forced into alignment with a true space horizontal and a true space vertical based on the determined orientation of the surface.
Preferably the region is a true space rectangle. Preferably the user selection of the region consists of the user selecting a first corner of the rectangle and a second diagonally opposite corner of the rectangle. Preferably selecting the first and second corners consists of the user dragging a pointer from the first corner to the second corner.
Alternatively the region may be a true space circle. Preferably the user selection of the region consists of the user selecting first and second points defining the circle.
Preferably determining the orientation of the surface includes identifying one or more sets of parallel lines in the image and analyzing the vanishing point of each set of parallel lines.
Alternatively determining the orientation of the surface includes identifying the positions of three or more points on the surface and fitting a surface to those points.
Alternatively determining the orientation of the surface includes identifying one or more shapes on the surface and determining an orientation of the surface based on knowledge or assumptions relating to the true space properties of those shapes.
The mobile handheld instrument may be configured to receive a user copy instruction and to create a copy of the user selection in response to the user copy instruction and to display the copy of the user selection on the display.
The mobile handheld instrument may be configured to receive a user instruction to move the copy of the user selection, to move the displayed copy of the user selection, wherein the true space dimensions of the copy of the user selection are retained during movement of the copy of the user selection, with the displayed dimensions of the copy of the user selection being adjusted accordingly during movement of the copy of the user selection.
The mobile handheld instrument may be configured to detect like image regions based on comparison of image data within the user selection with image data elsewhere on the surface and to replicate the user selection at each like image region.
Preferably each replica user selection has the same true space dimensions and orientation as the user selection.
Preferably the surface is a plane.
The mobile handheld instrument may be configured to receive a user instruction to adjust the determined orientation of the surface or the forced alignment of the user selection and to adjust the determined orientation or forced alignment accordingly.
The mobile handheld instrument may be configured to determine one or more true space measurements and to display those measurements.
Preferably the display and user interface are both provided by a touch screen.
Preferably the mobile handheld instrument includes a rangefinder.
Preferably the mobile handheld instrument includes a positioning sensor.
Preferably the mobile handheld instrument includes one or more orientation sensors.
In a further aspect the invention provides a method of data collection in a mobile handheld instrument including:
a camera;
a display;
a processor; and
a user interface;
the method including the steps of:
receiving a capture instruction from a user;
in response to the capture instruction, capturing an image using the camera;
displaying the captured image on the display;
the processor determining an orientation of a surface within the image;
the user interface receiving a user selection of a region on the surface;
forcing the user selection of the region into alignment with the determined orientation of the surface; and
displaying the user selection on the display.
The region may be a one dimensional region. Preferably the step of forcing the user selection of the region into alignment with the determined orientation of the surface comprises forcing the user selection into alignment with a true space horizontal or true space vertical based on the determined orientation of the surface.
Preferably the region is a two dimensional region.
Preferably the step of forcing the user selection of the region into alignment with the determined orientation of the surface comprises forcing the user selection into alignment with a true space horizontal and a true space vertical based on the determined orientation of the surface.
Preferably the region is a true space rectangle. Preferably the step of receiving a user selection of a region on the surface consists of receiving a user identification of a first corner of the rectangle and a second diagonally opposite corner of the rectangle.
Preferably the step of receiving a user selection of a region on the surface consists of receiving a user identification of a first corner of the rectangle and a second diagonally opposite corner of the rectangle by dragging a pointer from the first corner to the second corner.
Alternatively the region is a true space circle. Preferably the step of receiving a user selection of a region on the surface consists of receiving a user identification of first and second points defining the circle.
Preferably determining the orientation of the surface includes identifying one or more sets of parallel lines in the image and analyzing the vanishing point of each set of parallel lines.
Alternatively determining the orientation of the surface includes identifying the positions of three or more points on the surface and fitting a surface to those points.
Alternatively determining the orientation of the surface includes identifying one or more shapes on the surface and determining an orientation of the surface based on knowledge or assumptions relating to the true space properties of those shapes.
Preferably the method includes: receiving a user copy instruction; creating a copy of the user selection in response to the user copy instruction; and displaying the copy of the user selection on the display.
Preferably the method includes: receiving a user instruction to move the copy of the user selection; and moving the displayed copy of the user selection, wherein the true space dimensions of the copy of the user selection are retained during movement, with the displayed dimensions of the copy of the user selection being adjusted accordingly.
Preferably the method includes: detecting like image regions based on comparison of image data within the user selection with image data elsewhere on the surface; and replicating the user selection at each like image region.
Preferably each replica user selection has the same true space dimensions and orientation as the user selection.
Preferably the surface is a plane.
Preferably the method includes: receiving a user instruction to adjust the determined orientation of the surface or the forced alignment of the user selection; and adjusting the determined orientation or forced alignment accordingly.
Preferably the method includes: determining one or more true space measurements; and displaying those measurements.
Preferably the display and user interface are both provided by a touch screen.
Preferably the instrument further includes a rangefinder, the method further including: capturing range data using the rangefinder.
Preferably the instrument further includes a rangefinder, a positioning sensor and one or more orientation sensors, the method further including: capturing range, position and orientation data using the rangefinder, the positioning sensor and the one or more orientation sensors.
In a further aspect the invention provides a mobile handheld instrument including:
a camera having a camera field of view and configured to provide a camera feed;
one or more spatial sensors configured to capture substantially continuously a plurality of data sets, each data set being related to a target position within the camera field of view;
a display configured to display the camera feed in real time;
a processor configured to overlay a plurality of markers on the displayed camera feed, each marker being overlaid at a target position for which the one or more spatial sensors have already captured data.
Preferably the plurality of markers are displayed as a plurality of distinct marker symbols.
Preferably the plurality of markers are displayed as a continuous line or path.
Preferably the processor is configured to overlay the plurality of markers on the displayed camera feed in real time.
In another aspect the invention provides a method of data collection in a mobile handheld instrument including:
a camera;
a display;
one or more spatial sensors; and
a processor;
the method including the steps of:
capturing image data using the camera and providing the image data as a real time camera feed to the display;
displaying the camera feed in real time;
capturing substantially continuously a plurality of spatial data sets from the one or more spatial sensors, each data set being related to a target position within the camera field of view;
overlaying a plurality of markers on the displayed camera feed, each marker being overlaid at a target position for which the one or more spatial sensors have already captured data.
Preferably the plurality of markers are displayed as a plurality of distinct marker symbols.
Preferably the plurality of markers are displayed as a continuous line or path.
Preferably the plurality of markers are overlaid on the displayed camera feed in real time.
In a further aspect the invention provides a mobile handheld instrument including:
a camera having a camera field of view and configured to provide a camera feed;
a user interface configured for user selection of one of a plurality of target categories;
one or more spatial sensors configured to capture a plurality of data sets, each data set being related to a target position within the camera field of view;
a display configured to display the camera feed in real time; and
a processor configured to associate the captured data sets with the selected target categories.
Preferably the target categories include a ground category.
Preferably the target categories include one or more of: a skyline category, an edge category, a surface category, and an object category.
The mobile handheld instrument may be configured to allow user definition of one or more target categories.
Preferably the one or more spatial sensors are configured to capture substantially continuously the plurality of data sets.
Preferably the processor is configured to use the captured data sets together with the target categories associated with the captured data sets to form a three dimensional model.
Preferably the processor is configured to overlay a plurality of markers on the displayed camera feed, each marker being overlaid at a target position for which the one or more spatial sensors have already captured data, wherein each displayed marker has one or more display properties that associate that marker with one of the target categories.
Preferably the display properties include one or more of: marker symbol, colour, size, pattern and style.
In a further aspect the invention provides a method of data collection in a mobile handheld instrument including:
a camera;
a display;
one or more spatial sensors; and
a user interface;
the method including the steps of:
capturing image data using the camera and providing the image data as a real time camera feed to the display;
displaying the camera feed in real time;
capturing substantially continuously a plurality of spatial data sets from the one or more spatial sensors, each data set being related to a target position within the camera field of view;
a user selecting one of a plurality of target categories; and
associating the captured data sets with the selected target categories.
Preferably the target categories include a ground category.
Preferably the target categories include one or more of: a skyline category, an edge category, a surface category, and an object category.
The mobile handheld instrument may be configured to allow user definition of one or more target categories.
Preferably the one or more spatial sensors are configured to capture substantially continuously the plurality of data sets.
Preferably the processor is configured to use the captured data sets together with the target categories associated with the captured data sets to form a three dimensional model.
Preferably the processor is configured to overlay a plurality of markers on the displayed camera feed, each marker being overlaid at a target position for which the one or more spatial sensors have already captured data, wherein each displayed marker has one or more display properties that associate that marker with one of the target categories.
Preferably the display properties include one or more of: marker symbol, colour, size, pattern and style.
In another aspect the invention provides a mobile handheld instrument including:
a camera having a camera field of view and configured to provide a camera feed;
one or more spatial sensors configured to capture data related to a target point within the camera field of view;
a display configured to display the camera feed in real time;
a processor configured to overlay one or more measurements on the displayed camera feed, each measurement being calculated from the captured data for two or more target points and being overlaid in a position associated with at least one of those two or more target points.
Preferably each measurement is overlaid in a position associated with a line or area defined by the two or more target points.
In a further aspect the invention provides a mobile handheld instrument including:
a camera having a camera field of view;
one or more spatial sensors configured to capture data related to a target point within the camera field of view;
an inertial measurement unit; and
a processor;
wherein:
the instrument is configured to capture an image from the camera and a spatial data set from the one or more spatial sensors in response to each of a plurality of user capture instructions;
the inertial measurement unit is configured to detect movement of the instrument between the plurality of user capture instructions; and
the processor is configured to process the spatial data sets to correct for the detected movement of the instrument and to stitch the captured images to form an image file having a larger coverage than the camera field of view.
Preferably the instrument is further configured to automatically collect image data independent of the user capture instructions.
Preferably, if the images captured in response to user capture instructions provide an incomplete coverage of a region extending between the target points, the processor is configured to stitch the captured images and the automatically collected image data to form the image file.
Preferably the automatically collected image data includes a plurality of periodically collected image frames.
Preferably the processor is configured to determine when the detected movement of the instrument away from a position at which image data was last collected or captured exceeds a threshold, and to automatically collect image data when that detected movement exceeds the threshold.
Preferably the processor is configured to determine when the detected movement of the instrument away from a position at which image data was last automatically collected exceeds a threshold, and to automatically collect further image data when that detected movement exceeds the threshold.
Preferably the processor is configured to stitch the image data based at least partly on analysis of the image data.
Preferably the processor is configured to stitch the image data based at least partly on the detected movement of the instrument.
In another aspect the invention provides a mobile handheld instrument including:
a camera having a camera field of view;
one or more spatial sensors configured to capture data related to a target point within the camera field of view;
an inertial measurement unit; and
a processor;
wherein:
the instrument is configured to capture an image from the camera and a spatial data set from the one or more spatial sensors in response to each of a plurality of user capture instructions;
the inertial measurement unit is configured to detect movement of the instrument between the plurality of user capture instructions; and
the processor is configured to process the spatial data sets to correct for the detected movement of the instrument and to determine one or more of a distance between two target points or relative positions of two target points,
the two target points subtending an angle at the instrument greater than the camera field of view.
In a further aspect the invention provides a mobile handheld instrument including:
a camera having a camera field of view;
one or more spatial sensors;
an inertial measurement unit;
a display;
a processor; and
a user interface;
wherein:
the instrument is configured to capture a plurality of spatial data sets from the one or more spatial sensors, the plurality of spatial data sets corresponding to a set of target points, and each spatial data set being captured in response to a user capture instruction;
the inertial measurement unit is configured to detect movement of the instrument between the plurality of user capture instructions; and
the processor is configured to: process the captured spatial data sets to correct for the detected movement of the instrument; and receive, via the user interface, a user instruction to alter the set of target points and alter the set of target points accordingly.
The mobile handheld instrument may be configured to display a marker overlaid on the displayed image data at each target point.
Preferably the user instruction to alter the set of target points is an instruction to do one or more of the following: delete a target point, add a target point, move a target point, change the order of the target points, define a subset of the target points.
The mobile handheld instrument may be configured to capture one or more further spatial data sets in response to one or more further user capture instructions, after alteration of the set of target points.
The mobile handheld instrument may include a rangefinder module for physical attachment to a handheld user device.
Preferably the handheld user device is a smartphone, tablet or similar device.
In another aspect the invention provides a mobile handheld instrument including:
a first camera;
an inertial measurement unit including at least a second camera; and
a processor;
wherein:
the instrument is configured to capture an image from the first camera in response to each of a plurality of user capture instructions; and
the inertial measurement unit is configured to detect movement of the instrument between the plurality of user capture instructions.
Preferably the mobile handheld instrument includes a display, wherein the second camera is a back-facing camera with its optical axis substantially perpendicular to the display such that the second camera, in use, is directed towards the user's face.
Preferably the inertial measurement unit is configured to detect changes in dimensions or scale factor in image data obtained from the second camera, and to detect movement of the instrument at least in part through these detected changes.
Preferably the mobile handheld instrument includes one or more spatial sensors, the instrument being configured to capture a spatial data set from the one or more spatial sensors in response to each of the user capture instructions.
In a further aspect the invention provides a mobile handheld instrument including:
one or more sensors configured to capture spatial and/or image data in response to each of a plurality of user capture instructions;
an inertial measurement unit including at least a back-facing camera which, in use, is directed towards the user, the inertial measurement unit being configured to detect movement of the instrument between the plurality of user capture instructions.
Preferably the mobile handheld instrument includes a display, wherein the back-facing camera has its optical axis substantially perpendicular to the display such that the back-facing camera, in use, is directed towards the user's face.
Preferably the inertial measurement unit is configured to detect changes in dimensions or scale factor in image data obtained from the back-facing camera, and to detect movement of the instrument at least in part through these detected changes.
In another aspect the invention provides a mobile handheld instrument including:
one or more sensors configured to capture spatial and/or image data in response to each of a plurality of user capture instructions;
an inertial measurement unit configured to detect movement of the instrument between the plurality of user capture instructions, the detection of movement being based at least in part on expected movements of the user's body.
Preferably the detection of movement is based at least in part on a restriction of possible movements to a surface model that is based on expected movements of the user's body.
In a further aspect the invention provides a method of data collection in a mobile handheld instrument including:
a camera;
a display; and
a processor;
the method including the steps of:
displaying an image captured by the camera on the display;
the processor determining an orientation of a surface within the image;
overlaying a graphic of a virtual or real object on the displayed image; and
forcing the displayed graphic into alignment with the determined orientation of the surface.
Preferably the method includes storing an image including the captured image and the overlaid, aligned graphic in response to a user capture instruction.
Preferably the method includes the processor determining a scale associated with a region within the image, the graphic being overlaid on that region, the object having associated dimensions, wherein the dimensions of the overlaid graphic correspond to the dimensions associated with the object and the scale associated with the region.
In another aspect the invention provides a method of data collection in a mobile handheld instrument including:
a camera;
a display; and
a processor;
the method including the steps of:
displaying an image captured by the camera on the display;
the processor determining a scale associated with a region within the image;
overlaying a graphic representing a virtual or real object on the region of the displayed image, the object having associated dimensions, wherein the dimensions of the overlaid graphic correspond to the dimensions associated with the object and the scale associated with the region.
Preferably the graphic represents a virtual object and the method further includes a user adjusting the dimensions of the overlaid graphic, and the processor determining adjusted dimensions associated with the object based on the dimensions of the overlaid graphic and the scale associated with the region.
This aspect extends to a method of manufacturing an object, including: determining dimensions associated with the object by the method set out above and manufacturing the object according to those dimensions.
The invention will now be described by way of example only, with reference to the accompanying drawings.
The instrument 1 includes a portable device 2, which may be a smartphone, tablet or similar device.
The portable device 2 may also be a portable GPS device. Such devices are available from suppliers such as Trimble, and may include a camera, display and GPS receiver.
The portable device is preferably a readily available item. The portable device 2 may include a camera 3 and a display 4 mounted in a housing 5. The portable device may also include a processor 7 and memory 8, and preferably includes one or more local communications modules 9, such as Bluetooth or USB communications modules. The portable device 2 may include other sensors, such as a positioning (e.g. GPS) module 10 and one or more orientation sensors 11. The orientation sensors 11 may include any suitable combination of direction-finding devices (e.g. magnetic or GPS compasses), tilt sensors and gyroscopes. The portable device preferably also includes a suitable user input arrangement, which may be a button, keypad, touchscreen, voice recognition, mouse or any other suitable input arrangement. The display and user input arrangement may both be provided by a suitable touchscreen.
The instrument 1 may also include a rangefinder module 15. The rangefinder module 15 includes a laser rangefinder 16 mounted in a housing 17. In order to achieve a compact form, the rangefinder is oriented along the housing with one or more mirrors or similar reflectors 18 redirecting the rangefinder beam, such that laser light is emitted and received through window 19. In general the rangefinder will be aligned along a rangefinder axis that extends from the rangefinder to a target. The reflectors 18 substantially align the rangefinder axis with the camera optical axis, with further alignment possible as discussed in PCT/NZ2011/000257.
This arrangement provides a thin or low profile rangefinder module that substantially retains the form factor of the portable device, such that the instrument 1 can be held in the same way.
The rangefinder module 15 may include other sensors 20, which may include positioning and orientation sensors. The rangefinder module preferably has a battery 22 to reduce the load on the portable device battery 23, as the rangefinder and other sensors in the rangefinder module will consume substantial energy. The rangefinder module may have a suitable port for connection of a battery charger, or the rangefinder module may draw power from a connection to the portable device.
The rangefinder module also includes a communications module 25 (such as a
Bluetooth or USB module) for communicating over a communications link with the communications module 9 of the portable device 2.
In general the rangefinder module 15 may provide any desired set of sensors to augment the sensors provided by the portable device 2. Even if the portable device includes a particular sensor, a further or more accurate sensor of the same kind may be provided in the rangefinder module.
The rangefinder module 15 may be mounted to the portable device 2 using any suitable mechanism, as discussed in PCT/NZ2011/000257.
The rangefinder module 15 has two windows 26, 27. The rangefinder beam is emitted through the first window 26 and the laser signal reflected or scattered from the target is received through the second window 27.
The rangefinder module 15 includes batteries 22, which may be standard AAA or AA batteries. The rangefinder module includes a reflector arrangement 18, which is formed by two reflectors, one for the emitted laser beam and one for the received laser beam. The rangefinder module includes a laser emitter which projects a laser beam towards the first reflector where the beam is redirected to exit the rangefinder module via the first window 26. The rangefinder module also includes a laser receiver, which measures laser light that is reflected or scattered from a target, received through the second window 27 and redirected by the second reflector towards the laser receiver.
The Applicant's rangefinder module is readily mounted to a standard consumer electronics device, such as a Smartphone (e.g. iPhone, Blackberry etc) or any suitable device having a camera, including portable GPS units or the like. This results in reduced cost over a dedicated instrument because many users will already have such devices, or many users will be able to justify the cost of such a device for the other functions it provides.
Connections between the components are omitted in the drawings.
The instrument 1 has a housing 30 which contains a personal digital assistant (“PDA”), handheld computer device or similar device 31, which may have a touch-sensitive display screen 32, keypad 33, antenna 34 and USB port 35. The PDA 31 includes a central processing platform 37.
The instrument 1 may include a laser distance meter 47, compass 48, positioning (e.g. GPS) antenna 49, camera 50, microphone 51 and speaker 52 (not shown).
When the user provides a capture instruction, the instrument will capture an image using the camera 3, 50 and a spatial data set. The spatial data set may include data obtained from the laser rangefinder 16, 47, the positioning device 10, 49 and/or orientation sensors 11, 20, 48. Substantially simultaneous data capture from all sensors can still be achieved by a suitable switching arrangement, such as described in the Applicant's U.S. Pat. No. 7,647,197.
Thus, the Applicant's invention allows intuitive and accurate aiming of the instrument 1.
The data capture process is shown in more detail in the accompanying drawings.
The Applicant's instrument allows a user to select regions on a surface within the image and to have that user selection automatically correctly aligned for the perspective of the image.
As shown in the drawings, the user first selects a rectangle tool 85.
Having selected the rectangle tool 85, the user now selects a true space rectangle in the image. As a preliminary step, the user may select a surface 90 on which the region of interest lies. Alternatively, the instrument may automatically identify the surface 90 based on user selection of the region of interest. The instrument determines the orientation of the surface 90. This may be achieved in any suitable manner. For example, the orientation may be determined using the “vanishing point” method described below.
In order to select a rectangular region of interest 91 (for example a door), the user selects diagonally opposite corners of the displayed, skewed rectangle. As indicated in the drawings, this may be done by dragging a pointer from the first corner to the second corner.
As the selection is made, the instrument automatically aligns the selection based on the determined orientation of the surface 90. The selection may be indicated by a selection outline 95. Note that the skewed shape of the selection outline 95 matches the perspective of the image. In true space the selection outline defines a region of the surface 90 that is a true space rectangle.
Thus, the user selection of the region is forced into alignment with the determined orientation of the surface.
The user selection may be of any desired two dimensional region. The user selection of a two dimensional region may be forced into alignment with a true space horizontal and a true space vertical based on the determined orientation of the surface.
Where the region is a true space circle, the user may select the region by selecting first and second points defining the circle, for example by identifying each point separately, or by dragging from one point to the other. The two points may be the centre and a point on the circumference, or two diametrically opposite points on the circumference.
In some embodiments the region may be one dimensional (i.e. a line on the surface). A one dimensional region may be selected by identifying each end of the region, for example by identifying each end separately, or by dragging from one end to the other. The user selection of a one dimensional region may be forced into alignment with a true space horizontal or true space vertical based on the determined orientation of the surface.
As shown in the drawings, a user may initially select a first region 97 on a surface 98. In the example shown, the user has selected a window 99. By comparison of the image data within the first region 97 with image data elsewhere on the surface 98, the instrument detects further regions having similar properties to the first region. A second window 100 is detected and found to have similar image properties and true space properties to the first window 99. A replica of the user selection is created and aligned with the second window 100. Note that the true space size of the replica selection is the same as the true space size of the original user selection. The displayed size of the original and replica user selections will however be different due to the perspective of the image.
As indicated by the dashed lines 101 and arrow 102 in the drawings, a copy of the user selection may be moved across the surface, with its true space dimensions retained and its displayed dimensions adjusted for the perspective of the image.
Multiple selections 107, 108 may be made by the user, and measurements or dimensions 109 associated with each may be displayed. Further, selections may be positioned partially behind, or even completely obscured by, an object in the image such as the vehicle 106.
The instrument may also allow the user to adjust the determined orientation of the surface or the forced alignment of the user selection. This may be useful where the surface is irregular, or for some other reason it is difficult to determine its orientation accurately by any of the methods described in this specification.
In true space three dimensional coordinates, the location of the mobile handheld instrument may be taken to define the origin (0,0,0) of the local 3D coordinate system. When a user captures a target point, the target point is thus defined as (0,0,d), where d represents the distance between the mobile handheld instrument and the target point as measured by the laser rangefinder.
At step 110, a user drags a diagonal in image space on the user interface, defining the bottom left and the top right corners of the irregular quadrilateral (representing the real world rectangle) that the user wishes to measure. Preferably, the two corners i1, i2 represented by the diagonal are aligned with two corners (e.g. the bottom left and top right corners) of a real-world object (for example a window) as closely as possible.
At step 111, rays L1 and L2 are cast from the optical centre of the pinhole camera with 3D position (0,0,0) through i1 and i2 on the image plane. The rays L1 and L2 can be defined using linear equations.
A target plane 105 can be mathematically defined by the normal of the plane and a point lying on the plane. Methods for determining the orientation of a target plane are discussed in more detail elsewhere in this specification. Referring to step 112, real world 3D coordinate P1 corresponding to 2D image coordinate i1 can thus be found by calculating the projection of L1 into the real-world plane 121. Similarly the real-world coordinate P2 of image coordinate i2 is calculated by the projection of L2 into real-world plane 121.
At step 113, the remaining corners of the rectangle are extrapolated from P1, P2 and the calculated orientation of the real world plane. For example if P1 has real world 3D coordinates (x1,y1,z), and P2 has real world coordinates (x2,y2,z), P3 and P4 will have real world coordinates (x1,y2,z) and (x2,y1,z) respectively.
At step 114 P3 and P4 are then projected back through the optical centre to the image plane to find 2D points i3 and i4.
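The projection steps 111 to 114 can be illustrated with a short sketch. The following is a minimal numpy example, assuming a simple pinhole camera with focal length f in pixels, the principal point at the image centre, and an in-plane basis aligned with an assumed true-space vertical; all values and names are illustrative rather than taken from the specification.

```python
import numpy as np

def pixel_to_ray(i, f, c):
    """Direction of a ray cast from the optical centre (0,0,0) through pixel i."""
    d = np.array([i[0] - c[0], i[1] - c[1], f], dtype=float)
    return d / np.linalg.norm(d)

def ray_plane_intersection(d, n, s):
    """Intersect the ray p(t) = t*d with the plane through point s with normal n."""
    t = np.dot(n, s) / np.dot(n, d)
    return t * d

def project_to_image(P, f, c):
    """Pinhole projection of a 3D point P back onto the image plane (step 114)."""
    return np.array([f * P[0] / P[2] + c[0], f * P[1] / P[2] + c[1]])

# Illustrative camera and plane: the rangefinder target point (0, 0, d) lies
# on the target plane, here with d = 5 and a slightly tilted normal.
f, c = 1000.0, (320.0, 240.0)
n = np.array([0.1, 0.0, -1.0]); n /= np.linalg.norm(n)
s = np.array([0.0, 0.0, 5.0])

# Step 110: the user-dragged diagonal gives image corners i1 and i2.
i1, i2 = (200.0, 150.0), (450.0, 330.0)

# Steps 111-112: cast rays L1, L2 and project them onto the plane.
P1 = ray_plane_intersection(pixel_to_ray(i1, f, c), n, s)
P2 = ray_plane_intersection(pixel_to_ray(i2, f, c), n, s)

# Step 113: extrapolate P3, P4 in an in-plane basis aligned with an
# assumed true-space vertical (the camera is assumed level here).
up = np.array([0.0, -1.0, 0.0])
u = np.cross(up, n); u /= np.linalg.norm(u)   # in-plane horizontal
v = np.cross(n, u)                            # in-plane vertical

def to_plane(P): return np.array([np.dot(P - s, u), np.dot(P - s, v)])
def to_world(q): return s + q[0] * u + q[1] * v

q1, q2 = to_plane(P1), to_plane(P2)
P3, P4 = to_world([q1[0], q2[1]]), to_world([q2[0], q1[1]])

# Step 114: project P3 and P4 back to image points i3 and i4.
i3, i4 = project_to_image(P3, f, c), project_to_image(P4, f, c)
```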
At step 115, a quadrilateral connecting i1-i4 is displayed on the user interface. A quadrilateral representing a real-world rectangle is shown in its correct perspective, as defined by the perspective of the image plane. Thus, a user can easily define a rectangle on the target plane which is forced into the proper alignment.
A homography matrix can be defined which represents the projective transformation matrix between the image plane and the real-world target plane. This simplifies the projection of further pixel coordinates onto their corresponding plane coordinates.
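As one possible implementation of this step, the homography can be estimated from the four corner correspondences computed above. The sketch below uses OpenCV's getPerspectiveTransform; the library choice and all names are assumptions, as the specification does not prescribe a particular implementation.

```python
import numpy as np
import cv2

# Four image corners and their in-plane 2D coordinates (continuing the
# previous sketch; correspondences are i1<->q1, i3<->(x1,y2), i2<->q2, i4<->(x2,y1)).
img_pts = np.array([i1, i3, i2, i4], dtype=np.float32)
plane_pts = np.array([q1, [q1[0], q2[1]], q2, [q2[0], q1[1]]], dtype=np.float32)

H = cv2.getPerspectiveTransform(img_pts, plane_pts)   # 3x3 homography matrix

def image_to_plane(i):
    """Map any further pixel coordinate onto the target plane via H."""
    p = H @ np.array([i[0], i[1], 1.0])
    return p[:2] / p[2]
```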
In any method described in this document, the orientation of a plane or surface may be determined by one of several suitable methods. One such method, using vanishing points, is described below.
Many surfaces will include numbers of edges or lines, not just at their intersections with other surfaces, but also at joins between panels or other building structures, windows, doors etc. In many cases there will be numerous edges or lines, and many of these will be either horizontal or vertical in true space.
The edges or lines present in the image may be detected by any suitable known edge extraction techniques. Such techniques are well known and need not be discussed further in this specification.
The system may make assumptions about the orientations of the surfaces. For many purposes, we can assume that the two surfaces 145 and 146 are vertical surfaces. In some instances of captured image data there may be surfaces (e.g. floors, ceilings etc) that can be assumed to be horizontal surfaces. These assumptions may be overridden by a user where necessary.
In other embodiments, the user may identify properties of surfaces, for example by identifying a surface as a vertical, horizontal, planar or cylindrical surface.
These assumptions and/or user-input information may be used to aid the determination of surface orientation.
The extracted lines 147, 148, 149, 150 may be used to determine orientation using a “vanishing point” method, which will now be described.
A plane in a three dimensional coordinate space can be defined by a point S lying on the plane and a normal vector n orthogonal to the plane. The normal vector n orthogonal to the plane defines the orientation of the plane in 3D space.
At step 170, a target point 174 lying on the target plane 179 is captured using the mobile handheld instrument.
At step 171, the vertices of a quadrilateral defining the target plane are defined.
The normal of the target plane can be calculated using the vanishing points of the two sets of orthogonal parallel lines. A vanishing point is a point where two or more lines that are parallel in true space appear to meet in image space. At step 172, vanishing points v1 and v2 are calculated by finding the intersection of the sets of parallel lines from the quadrilateral defined by vertices 174-178.
At step 173, the normal 180 of the plane is calculated by the cross product of a first vector from an origin point to the first vanishing point v1 and a second vector from the origin to the second vanishing point v2. The origin point may be taken as the target point where the laser rangefinder strikes the target plane. Alternatively the origin point may be taken as the instrument centre, since the vanishing points will generally lie at infinity so the distance between the device and the target is not significant for the definition of the normal direction to the plane.
The target point 174 captured by the laser rangefinder lies on the target plane, with three dimensional coordinates (0,0,d), with d representing the distance to the target point from the location of the mobile handheld device. Thus, from the equation of the normal and a point lying on the plane, the target plane can be mathematically defined.
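Steps 172 and 173 reduce to a few cross products in homogeneous coordinates. The sketch below is a minimal numpy illustration with invented pixel values; the intrinsics (focal length and principal point) are assumptions.

```python
import numpy as np

def line(p, q):
    """Homogeneous line through two image points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(l1, l2):
    """Intersection of two image lines (v[2] ~ 0 means the point is at infinity)."""
    v = np.cross(l1, l2)
    return v / v[2]

# Step 171: quadrilateral vertices on the target plane (illustrative pixels).
a, b, c, d = (100.0, 120.0), (520.0, 90.0), (540.0, 400.0), (80.0, 380.0)

# Step 172: each vanishing point is the intersection of a set of edges
# that are parallel in true space.
v1 = vanishing_point(line(a, b), line(d, c))
v2 = vanishing_point(line(a, d), line(b, c))

def direction(v, f=1000.0, cx=320.0, cy=240.0):
    """Vector from the origin (optical centre) towards a vanishing point."""
    r = np.array([v[0] - cx, v[1] - cy, f])
    return r / np.linalg.norm(r)

# Step 173: the plane normal is the cross product of the two vectors.
n = np.cross(direction(v1), direction(v2))
n /= np.linalg.norm(n)
```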
Desirably, the final data set will include an image file encompassing the various points measured. This can be achieved by capturing a sufficient number of images as the instrument is moved. This may be done by capturing an image at each data capture point P1, P2. However, a greater number of images may be needed, or a smaller number of images may be sufficient, depending on the positions of the data capture points. It may not be necessary to capture an image for every point in the point to point mode. Alternatively, it may be necessary to capture further image data outside of the user-instigated capture process. This can be achieved by suitable methods described below.
At step 232, the user aims the instrument at a further point P2, and issues a further capture instruction. In response to the further capture instruction, the instrument captures a spatial data set associated with the further point P2.
Optionally, at this point the user may have the option of editing the data set at step 233. For example, the user may be permitted to delete one or more data capture points from the data set at step 234. Other editing steps include reordering the data points, and/or moving one or more data points.
When the user has finished editing, or if the user does not wish to edit the data points, the user may return to step 232 and capture further data points until it is determined at step 235 that the data set is complete 236. At this point, the user may be given an opportunity at step 237 to edit the data set, by deleting, reordering, moving, or adding a target point, or defining a subset of the target points. The user may also be given the option to return to the data capture process, to add further points to the data set after this editing step.
By deleting and/or reordering the data points, the user changes the connections between points. The displayed data preferably automatically updates to reflect these changes.
The measurement data 240, 241, 242, 243, similar to that described above, may be overlaid on the displayed image.
In general, the measurements calculated from the captured spatial data may be overlaid in any suitable position. Preferably the overlaid data is displayed in a position associated with at least one of the relevant target points. Where the calculated distance is a distance between two points, it may be overlaid near a line connecting those points.
In the point to point method described above, the instrument may be moved between the capture of successive data points, and this movement must be accounted for.
The Applicant's instrument therefore preferably includes an arrangement for sensing the local movement of the instrument. In one embodiment this may be an inertial measurement unit (“IMU”) 210.
The local movement sensing arrangement, or IMU, may include any arrangement of devices suitable to provide an accurate assessment of the instrument's movement between data captures. These devices may include accelerometers, gyroscopes and/or magnetometers. The IMU may have its own processor, or may rely on the processing capability of the processor already present in the instrument 1. Further, although shown in the drawings as a separate device, the IMU may draw on other devices in the instrument 1, including the orientation sensors 11, 48 for example. Further, the IMU may draw on the output of other sensors as inputs to its movement determination, or as a cross-check of its movement determination. For example, image data from one or more cameras, GPS data, further accelerometer data, compass data and barometric data may be used as inputs to the IMU, or as cross-checks against the IMU's movement determinations.
IMUs are commercially available and the workings of these devices need not be further discussed in this document.
The IMU capability may also be used in determination of a surface orientation, rather than relying on the vanishing point method described above. Where a user knows at the time of capturing data that a particular surface is of interest, the user may capture spatial data for three or more points on that surface. So long as the three or more points do not lie along a line, this will be sufficient data to define a plane.
For non-planar surfaces, a greater number of data points may be captured in this way and a surface fitted through those points.
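A standard way to implement this fitting step is a least-squares plane through the captured points; the sketch below (illustrative data, numpy assumed) takes the normal as the singular vector belonging to the smallest singular value.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through three or more captured 3D points.
    Returns the centroid (a point on the plane) and the unit normal."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]   # last right singular vector = plane normal

# Three (or more) target points captured on a wall, in instrument coordinates.
captured = [(0.0, 0.0, 5.0), (1.2, 0.1, 5.1), (0.3, 1.5, 5.05)]
point_on_plane, normal = fit_plane(captured)
```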
In either case, the data points may be captured in a point by point mode where the user instructs each data capture. Alternatively, in a multipoint mode two or more data points may be captured automatically in response to a single user instruction (e.g. using the device to “paint” a surface with data points, capturing spatial data for each). The IMU may be used to monitor the device movement between data captures for any desired capture mode.
In these methods, the instrument may again use the IMU to track the instrument's movement between data captures.
In this multipoint method, data is preferably gathered substantially continuously by the instrument without the user having to instruct each data capture. In practice, this means that data will be periodically captured. Data may be gathered at any desired rate to give a suitable density of data points in a reasonable capture period. For example, around 1 to 20 data points may be captured per second. In some embodiments the capture rate may be adjustable by the user.
The user may issue a single capture instruction using the “record” button 283 and the instrument preferably continues to gather data until the user stops the data recording, for example by pressing the “record” button a second time.
In some embodiments similar data sets may be captured, with the user issuing a capture instruction for capture of each data set.
In one embodiment the user sets a category (e.g. “ground”) and then uses the instrument to capture data points corresponding to that category. The user then changes the category (e.g. to “dirt pile”) and captures data points corresponding to that category. This continues until data has been captured for each desired category.
In another embodiment, the user may capture data before manually defining regions of an image file and selecting different categories for those regions.
In the example shown, the displayed image has three data categories—skyline, ground and dirt pile. This categorisation may help the instrument to display the data in a more helpful way (for example by colour coding the different categories).
Further, categorisation before data capture may aid the instrument in determining the boundaries between features captured in the image. The captured data sets together with the target categories associated with the captured data sets may be used to form a three dimensional model.
Further, in any of the methods disclosed herein, the displayed marker may have one or more display properties that associate that marker with one of the target categories. For example, the marker may be colour coded for a particular category, or each category may be associated with a different marker symbol, size, pattern or style.
The automatically collected image data is preferably collected independent of the user capture instructions. It may include image frames collected periodically (e.g. continuously collected video data). Alternatively, to reduce the amount of data required, image data may be automatically collected when the movement of the instrument away from a position at which image data was last collected or captured exceeds a threshold. For example, the instrument may capture a first image at a first position, either automatically or in response to a user capture instruction. A further image should be captured before the instrument is moved such that the camera field of view does not overlap with that first image. A suitable threshold may be set at 20% overlap, or some other suitable level. When that threshold is exceeded a further image may be captured.
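As a rough sketch of this trigger, under a pure horizontal rotation the remaining overlap falls linearly with the pan angle; the function names, the 60 degree field of view and the 20% threshold below are illustrative assumptions.

```python
def horizontal_overlap(yaw_change_deg, fov_deg=60.0):
    """Approximate fraction of the field of view still overlapping the
    frame captured at the last position, for a pure horizontal rotation."""
    return max(0.0, 1.0 - abs(yaw_change_deg) / fov_deg)

def should_capture(yaw_change_deg, threshold=0.20):
    """Automatically collect a further image once overlap reaches the threshold."""
    return horizontal_overlap(yaw_change_deg) <= threshold

# A 50 degree pan with a 60 degree field of view leaves ~17% overlap -> capture.
assert should_capture(50.0)
```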
In any of the above methods requiring the instrument's position and/or orientation to be tracked between measurements, further inputs and/or assumptions may be used to enhance the performance of the IMU.
Performance of the IMU may be further improved by instructing users not to move their feet or bodies between measurements and to move the instrument by moving their arm about the shoulder, with no or minimal changes in extension at the elbow. This will result in user movements closer to the assumed movement.
In use, the IMU will provide data such as orientation and acceleration. The accuracy of the position data can be augmented by restricting allowable instrument positions to the surface 312, or allowable instrument movements to movements on the surface 312.
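One way to apply this restriction is to project each raw position estimate onto the assumed arm-motion surface, here modelled as a sphere about the user's shoulder; the radius, offsets and names below are illustrative assumptions.

```python
import numpy as np

def constrain_to_arm_sphere(estimate, shoulder, arm_length=0.6):
    """Project an IMU position estimate onto the sphere of radius arm_length
    centred on the user's shoulder (the assumed surface 312)."""
    r = np.asarray(estimate, dtype=float) - shoulder
    return shoulder + arm_length * r / np.linalg.norm(r)

shoulder = np.array([0.0, -0.25, -0.55])   # assumed offset from the start pose
raw = np.array([0.12, 0.05, 0.10])         # drifting IMU position estimate
constrained = constrain_to_arm_sphere(raw, shoulder)
```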
The performance of the IMU may be improved using data captured by a second camera facing back towards the user. Some Smartphones are now sold with a second, back-facing, camera (such as the back-facing camera 319 shown in the drawings).
These relative changes in dimension or scale may be used as inputs to augment performance of the IMU. Movements of the device towards or away from the user's face may indicate that the user is moving the device in a non-ideal manner that departs from the assumed surface 312.
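The underlying relationship is simply that apparent size in the back-facing camera scales inversely with distance to the user's face, so a change in measured size implies a change in distance. A minimal sketch, with illustrative numbers:

```python
def distance_from_scale(z_ref_m, size_ref_px, size_now_px):
    """Apparent size scales as w ~ f*W/Z, so Z_now = Z_ref * w_ref / w_now."""
    return z_ref_m * size_ref_px / size_now_px

# The face measured 180 px wide at an initial 0.45 m; it now measures 150 px,
# so the device has moved roughly 0.09 m away from the user's face.
delta = distance_from_scale(0.45, 180.0, 150.0) - 0.45
```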
In this embodiment the user has access to a model 353 of a desired billboard. The model 353 may be any suitable model, including a graphics file saved on the instrument. The graphics file may be generated by a designer or may be captured by photographing a physical image or object, or may be obtained or generated in any other suitable manner.
The real world dimensions of model 353 may be determined based on the position of the model within the displayed image, and using one or more of the instrument's spatial sensors to determine an appropriate scale to be associated with the model 353.
The real world dimensions of the model 353 may be adjusted manually by a user. For example, a user may drag the corners of the model 353 to resize the model, as shown in the drawings.
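For a roughly fronto-parallel surface, the scale associated with the region follows directly from the rangefinder distance and the focal length in pixels; the values below are illustrative assumptions, not figures from the specification.

```python
def metres_per_pixel(distance_m, focal_px):
    """Scale of a fronto-parallel plane at the measured rangefinder distance."""
    return distance_m / focal_px

# Billboard site measured at 12 m with an assumed 1000 px focal length:
scale = metres_per_pixel(12.0, 1000.0)   # 0.012 m per pixel
width_m = 250 * scale                    # a model 250 px wide spans ~3.0 m
```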
In variations of this embodiment the instrument may make an initial fit of the model to the available space, for example by automatically detecting edges defined by rooflines, wall edges and windows, and resizing the model to allow a predefined standard spacing between the model and the detected edges. This initial fit may then be adjusted by the user.
Further, the position of the site in which the billboard or other object will be installed may be simultaneously captured, together with the image of the site and the required dimensions of the billboard or other object. The position may be determined by any of the methods described above, or disclosed in the Applicant's U.S. Pat. No. 7,647,197 or PCT/NZ2011/000257.
The determined dimensions 356 of the model may then be used in fabrication of the required billboard or other object. The dimensions may be taken manually from the instrument or may be sent automatically from the instrument to a fabrication system.
The model 360 may be fixed to the centre of the frame, with the image moving as the user moves the instrument. Alternatively, the user may be permitted to move the model within the frame, for example by dragging the model.
In preferred embodiments the rotational alignment of the model 360 may be adjusted, as indicated in the drawings.
The instrument 1 is handheld and portable. It can therefore be conveniently carried and used.
Computer instructions for instructing the above methods may be stored on any suitable computer-readable medium, including hard-drives, flash memory, optical memory devices, compact discs or any other suitable medium.
While the invention has been described with reference to GPS technology, the term GPS should be interpreted to encompass any similar satellite positioning system.
The skilled reader will understand that the above embodiments may be combined where compatible.
While the present invention has been illustrated by the description of the embodiments thereof, and while the embodiments have been described in detail, it is not the intention of the Applicant to restrict or in any way limit the scope of the appended claims to such detail. Further, the above embodiments may be implemented individually, or may be combined where compatible. Additional advantages and modifications, including combinations of the above embodiments, will readily appear to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details, representative apparatus and methods, and illustrative examples shown and described. Accordingly, departures may be made from such details without departure from the spirit or scope of the Applicant's general inventive concept.
This application claims the benefit of U.S. Provisional Ser. No. 61/978,350, filed 11 Apr. 2014, and U.S. Provisional Ser. No. 62/095,245, filed 22 Dec. 2014, which applications are incorporated herein by reference. To the extent appropriate, a claim of priority is made to each of the above disclosed applications.