The present invention generally relates to optical measuring systems, and more specifically to optical measuring systems for gathering geometric data relative to two dimensions of objects from photographic data captured over a range of possible camera viewing angles and distances of the two-dimensional surface of the object from the camera.
The present invention relates generally to measurement systems, and more specifically to an image data capture and processing system, consisting of a digital imaging device, computer, and software, that generates correction data sets from which real-world coordinate information, with planarity, scale, aspect, and innate dimensional qualities, can be extracted from the captured image in order to obtain real dimensional data for the imaged objects.
In the following specification, we use the name Non-Orthographic Measurement System or Quantified Image Measurement System to refer to a system that extracts real-world-coordinate-accurate 2-dimensional data from non-orthographically imaged objects. This includes not just extracting accurate real-world measurements but also data that can be used to determine any measurement of a 2-dimensional surface of an object within the orthographically or non-orthographically imaged portion of the object's two-dimensional surface.
This invention eliminates a key problem of electronic distance measurement tools currently in the market: the need for the measurement taker to transcribe measurements and create manual associations with photos, drawings, blueprints, or sketches. Additionally, these same devices typically only capture measurements one at a time and do not have the ability to share the information easily or seamlessly with other systems that can use the measurement data for additional processing. With the advent of mobile devices equipped with megapixel digital cameras, this invention provides a means to automatically calculate accurate physical measurements between any of the pixels or sets of pixels within the photo. The system preferably can use nearly any image format, including but not limited to JPEG, TIFF, BMP, PDF, GIF, PNG, and EXIF, and enhances the image file with measurement data and data transformation information that enables the creation of any type of geometrical or dimensional measurement from the stored photograph. This file containing the original digital image along with the supplemental data is referred to as a Quantified Image File (“QIF”).
The QIF can be shared with other systems via email, cloud syncing, or other types of sharing technology. Once shared, existing systems such as CAD applications or web/cloud servers can use the QIF and the associated QIF processing software routines to extract physical measurement data and use the data for subsequent processing or for building geometrically accurate models of the objects or scene in the image. Additionally, smart phones and other portable devices can use the QIF to make measurements on the spot or share them between portable devices. While some similar systems may purport to extract measurements from image files, they differ from the present invention by requiring the user to capture the picture from a particular viewpoint, most commonly from the (orthographic) viewpoint that is perpendicular to the scene or objects to be measured. The Quantified Image Measurement System of this invention eliminates the need for capturing the image from any particular viewpoint by using multiple reference points and software algorithms to correct for any off-angle distortions.
There is a need for an improved optical system for measuring tools that extracts dimensional information of objects imaged from non-orthographic viewing angle(s) and allows for later extraction of additional measurements without reimaging the object.
The invention generally relates to 2-dimensional textures with applied transforms, and includes a digital imaging sensor, a reference object or reference template, a calibration system, a computing device, and software to process the digital imaging data.
There has thus been outlined, rather broadly, some of the features of the invention in order that the detailed description thereof may be better understood, and in order that the present contribution to the art may be better appreciated. There are additional features of the invention that will be described hereinafter.
In this respect, before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not limited in its application to the details of construction or to the arrangements of the components set forth in the following description or illustrated in the drawings. The invention is capable of other embodiments and of being practiced and carried out in various ways. Also, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting.
An object is to provide a Quantified Image Measurement System for an image data capture and processing system, consisting of a digital imaging device, reference object or reference template, computer and software that generates data sets for correction of non-orthographic distortions, with planarity, scale, aspect, and innate dimensional qualities.
Another object is to provide a Quantified Image Measurement System that allows digital camera or imager data to be corrected for a variety of lens distortions by using a software system.
Another object is to provide a Quantified Image Measurement System that has a computer and software system that integrates digital image data with a known reference object or reference template to create a set of correction data for the information in a 2-dimensional non-orthographic image, with corrected planarity and distortion-rectified information.
Another object is to provide a Quantified Image Measurement System that has a computer and software system that integrates digital image data with reference object or reference template data, to mathematically determine the properties of the scene from an orthographic point of view.
Another object is to provide a Quantified Image Measurement System that has a software system that integrates the planarity, scalar, and aspect information, to create a set of mathematical data that can be used to extract accurate real world measurements and that can be exported in a variety of common file formats.
Another object is to provide a Quantified Image Measurement System that has a software system that creates additional descriptive notation in or with the common file format, to describe the image pixel scalar, dimension and aspect values, at a point of planarity.
Another object is to provide a Quantified Image Measurement System that has a software system that displays correct geometrical measurements superimposed or adjacent to the original image.
Another object is to provide a Quantified Image Measurement System that has a software system that can export the set of mathematical data that can be used to extract accurate real-world measurements and additional descriptive notation.
Other objects and advantages of the present invention will become obvious to the reader and it is intended that these objects and advantages are within the scope of the present invention. To the accomplishment of the above and related objects, this invention may be embodied in the form illustrated in the accompanying drawings, attention being called to the fact, however, that the drawings are illustrative only, and that changes may be made in the specific construction illustrated and described within the scope of this application.
Another object is to provide a system for determining QIF dimensional characterization data to be stored with (or embedded in) the image data for later use in extracting actual dimensional data of objects imaged in the image.
Another object is to provide an active image projection method as an alternative to the passive method of placing a reference pattern, from which QIF characterization data can be determined and then used to extract dimensional data of objects in the image.
For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description taken in conjunction with the accompanying drawings in which like reference numerals indicate like features and wherein:
Preferred embodiments of the present invention are illustrated in the FIGUREs, like numerals being used to refer to like and corresponding parts of the various drawings.
The present invention generally relates to an improved optical system for extracting measurement information from an image of the 2-dimensional surface of an object taken from a non-orthographic viewing angle and, in the process, extracting correction data sets for image distortions caused by a non-orthographic viewing angle. The system creates geometrically correct real-world measurement coordinates from an image taken from an arbitrary viewing angle.
A. Overview
The components of the non-orthographic image measurement system 110 illustrated in
B. Camera
The camera 116 is an optical data capture device whose output preferably has multiple color fields in a pattern or array; it is commonly known as a digital camera. The camera's function is to capture the color image data within a scene, including the reference template data. In other embodiments a black-and-white camera would work almost as well, equally well, or in some cases better than a color camera. In some embodiments of the orthographic image capture system, it may be desirable to employ a filter on the camera that enhances the image of the reference template for the optical data capture device.
The camera 116 is preferably a digital device that directly records and stores photographic images in digital form. Capture is usually accomplished by use of camera optics (not shown), which capture incoming light, and a photosensor (not shown), which transforms the light intensity and frequency into colors. The photosensors are typically constructed in an array that allows for multiple individual pixels to be generated, with each pixel having a unique area of light capture. The data from the array of photosensors is then stored as an image. These stored images can be uploaded to a computer immediately, stored in the camera, or stored in a memory module.
The camera may be a digital camera that stores images to memory, that transmits images, or that otherwise makes image data available to a computing device. In some embodiments, the camera shares a housing with the computing device. In some embodiments, the camera includes a computer that performs preprocessing of data to generate and embed information about the image that can later be used by the onboard computer and/or an external computer to which the image data is transmitted or otherwise made available.
The system may comprise a standard digital camera and an associated data processor platform, where the digital camera and data processor may be integrated into a single device such as a smart phone, tablet, or other portable or stationary device with an integrated or accessory camera, or the data processor may be separate from the camera device, such as when processing the digital photo data on a standalone computer or using a cloud-based remote data processor.
C. Reference Template or Reference Object
There is great flexibility in the design of the reference template. It should be largely a 2D pattern, although 3D reference objects/templates are also acceptable (for example, an electrical power outlet). Generally, a minimum of four coplanar reference points or fiducials are required in the reference template in order to generate the correction data set that can correct for the non-orthographic camera angle and produce accurate measurements of the objects in the image. In one embodiment of this invention, a pattern of five bulls-eyes is used, arranged as one bulls-eye at each corner of a square and the fifth bulls-eye at the center of the square. The essential requirement is that the Quantified Image Measurement System has knowledge of the exact geometry of the reference template. In processing, the system will recognize and identify key features (fiducials) of the reference template in each image. Therefore, it is advantageous that the reference template pattern be chosen to provide speed and ease of recognition.
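By way of a non-limiting sketch of how four coplanar fiducials yield a correction data set, the Python fragment below solves the standard eight-unknown planar homography from four pixel/template correspondences. The function names and coordinates are illustrative assumptions, not the claimed implementation; a production system would typically use a library routine for the same computation.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]                      # pivot the largest entry
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]           # eliminate column c
    return [M[i][n] / M[i][i] for i in range(n)]

def correction_homography(pixel_pts, template_pts):
    """Direct Linear Transform: solve the 3x3 homography mapping image
    pixels to template coordinates from four coplanar fiducials."""
    A, b = [], []
    for (x, y), (X, Y) in zip(pixel_pts, template_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    h = solve(A, b) + [1.0]                          # h22 normalized to 1
    return [h[0:3], h[3:6], h[6:9]]

def to_template(H, x, y):
    """Apply the homography to one pixel location."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Once `H` is known, any pixel in the plane of the template can be mapped to real-world template coordinates, which is the basis of the correction data set.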
The design of the reference template requires a set of fiducial markers that comport with the detection algorithm to enable range and accuracy. One embodiment makes use of circular fiducial markers to enable localization with a Hough circle detection algorithm. Another embodiment uses a graded bow-tie corner which allows robust sub-pixel corner detection while minimizing false corner detection. These components, and others, can be combined to facilitate a multi-tier detection strategy for optimal robustness.
As previously mentioned
In the embodiment shown, the UID may provide the user and camera with other information about the image or related images. For example, the UID may provide the user or camera with information about the product, such as Pantone colors, weight, manufacturer, model number, variations available, etc. It may also provide the camera with information as to the type and size of pattern used, which will expedite automated discovery of the registration points 621, 622, 623, 624 and 625.
In alternative embodiments, the fiducial points in the pattern may be printed or may be electromagnetic emitting devices such as light-emitting diodes (LEDs). Similarly, the reference template may be a printed pattern, a reference pattern presented on a dynamic display such as a flat-panel display, or a reference pattern printed on glass, plastic, or a similar partially transparent medium with a backlight illuminating it from behind. In other words, the reference template may be purely passive or it may be light emitting. A light-emitting reference template may aid in automatic detection of the reference template in certain ambient light conditions such as low light. In either case, the wavelength of the fiducials in the pattern may be selected so that, via digital filters, the fiducial locations are easier and quicker to identify in a digital image. For example, the wavelength may be of a green or a red color. A band-pass digital filter can then be used so that the fiducials stand out in the filtered image.
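As a hypothetical sketch of such wavelength-based filtering (the scoring rule, threshold, and function names are illustrative assumptions, not the claimed filter design), a digital image can be reduced to a binary mask in which pixels dominated by the fiducial color, here assumed red, stand out:

```python
def fiducial_score(pixel):
    """Score how strongly an (r, g, b) pixel matches a red-emitting
    fiducial: red channel minus the mean of the other channels."""
    r, g, b = pixel
    return max(0.0, r - (g + b) / 2.0)

def fiducial_mask(image, cut=80.0):
    """Binary mask of pixels where the fiducial wavelength dominates.
    The cut threshold is an illustrative value."""
    return [[1 if fiducial_score(p) > cut else 0 for p in row]
            for row in image]
```

A mask like this narrows the search region before the circle and corner detection stages run, which is the practical benefit the specification attributes to wavelength selection.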
Active Illumination Reference Pattern: In another embodiment of this invention, the physical reference template that is placed into the scene to be measured is replaced by an active illumination reference pattern projected onto the scene. The light pattern projector is attached to the camera in a fixed and known manner such that the reference pattern is projected onto the scene at a particular position and angle within the camera's field of view. As with the passive reference template, the reference pattern projected by the active illumination projector contains a set of at least four fiducials, and the processing system has complete knowledge of the reference pattern and the details of the fiducials. In a preferred embodiment, the light pattern projector consists of a laser beam and a diffractive optical element (DOE). Generally, the light pattern projector can be made with any light source technology, including LED, incandescent lamp, arc lamp, fluorescent lamp, or laser, coupled with an optical imaging system usually comprised of lenses, and a pattern-generating element which may be a DOE, a slide or transparency, a pattern of light emitters, or any other refractive, reflective, or diffractive component that has been configured to generate the desired reference pattern.
The Camera(s), Active Illumination device(s), and Software may be integrated with the computer, software, and software controllers within a single electromechanical device such as a laptop, tablet, phone, or PDA.
The triggering of the Active illumination may be synchronized with the panoramic view image capturing to capture multiple planar surfaces in a panoramic scene such as all of the walls of a room.
D. Computer
Other major components of the Quantified Image Measurement System 110 are a computer and computer instruction sets (software) which perform processing of the image data collected by the camera 116. In the embodiment illustrated in
In the embodiment shown, all of the processing is handled by the CPU (not shown) in the on-board computer 200. However, in other embodiments the processing tasks may be partially or totally performed by firmware or software programmed processors. In other embodiments, the onboard processors may perform some tasks and outside processors may perform other tasks. For example, the onboard processors may identify the locations of the reference template pattern in the picture, calculate corrections due to the non-orthographic image, save the information, and send it to another computer or data processor to complete other data processing tasks.
The Quantified Image Measurement System 110 requires that data processing tasks be performed; regardless of the location of the data processing components or how the tasks are divided, these tasks must be accomplished. In the embodiment shown, with an onboard computer 200, no external processing is required. However, the data can be exported to another digital device 224 which can perform the same or additional data processing tasks. For these purposes, a computer is a programmable machine designed to automatically carry out a sequence of arithmetic or logical operations. The particular sequence of operations can be changed readily, allowing the computer to solve more than one kind of problem.
E. Data Processing
This is a processing system that allows information or data to be manipulated in a desired fashion via a programmable interface, with inputs and results. The software system controls calibration, operation, timing, camera and active illumination control, data capture, data processing, data display, and export.
Computer software, or just software, is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it.
An example is illustrated in
1. Passive Reference Template Processing
The process for locating the centers of the fiducials is as follows:
a. Stage 1: Recognize and locate the reference template as a whole within the camera image. This is done using pattern recognition derived from a Haar training procedure, as is available through open source software libraries such as OpenCV. The Haar training procedure is presented with a series of positive samples that are to be recognized as the correct template pattern and is also presented with a series of negative samples that are to be rejected as not the template pattern. From this, pattern classifiers are generated that can be applied to the camera images to recognize the reference templates.
b. Stage 2: Hough Circle detection is used on the templates recognized in Stage 1 to find the circles 710 that contain the reference fiducials.
c. Stage 3: Corner detection is used within the circles detected in Stage 2 to locate the corner at the center 702 of the “bow-tie” that is contained within the circle 710. The reference fiducial position is defined as these center/corner locations. One embodiment of the corner detection used in this stage consists of a small number of local operators (for example, a 3×3 pixel mask or a 5×5 pixel mask) which detect the presence of two edges with different directions. These masks are convolved with the image in the region of the circles found in Stage 2. The corner detection result has a sharp peak in its intensity at each location where a corner is detected. The gradient roll-off of the bow-tie density as it nears the circular boundary avoids creating false corners where the bow-tie and circular boundary meet. In this way, only the corner at the center of the bow-tie shape is detected by the corner detector. Corner detectors based on approaches other than the convolution mask described can also be used in this Stage.
Alternatively, the gradient roll-off of the bow tie density can be implemented in software during this corner detection operation. In this case, the physical reference template contains circular bow-tie patterns with no gradient roll-off. Once the circular fiducial region has been detected, it is multiplied by a mathematical circular mask of the same dimensions and position as the detected circular fiducial region. This mathematical mask has a value of 1.0 in the center area of the circle and rolls off gradually to a value of 0.0 at the outer edge of the circular region. The corner detector is then applied to the product of the original circular fiducial region and the circular mathematical mask. As with the physical graded bow-tie described in the previous paragraph, the product of the non-graded bow-tie with the mathematical mask results in an effective gradient bow-tie pattern input to the corner detection subsystem.
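The software-side roll-off can be sketched as follows. This is an illustrative example only; the mask size, the fraction of the radius that stays flat, and the linear roll-off profile are all assumptions rather than specified parameters of the invention.

```python
import math

def rolloff_mask(size, flat_frac=0.5):
    """size x size circular mask: 1.0 inside flat_frac of the radius,
    linear roll-off to 0.0 at the edge of the circular region."""
    c = (size - 1) / 2.0
    mask = []
    for i in range(size):
        row = []
        for j in range(size):
            r = math.hypot(i - c, j - c) / c        # normalized radius
            if r <= flat_frac:
                row.append(1.0)
            elif r >= 1.0:
                row.append(0.0)
            else:
                row.append((1.0 - r) / (1.0 - flat_frac))
        mask.append(row)
    return mask

def apply_mask(patch, mask):
    """Multiply the detected circular fiducial region by the mask,
    yielding an effective graded bow-tie for the corner detector."""
    return [[p * m for p, m in zip(pr, mr)]
            for pr, mr in zip(patch, mask)]
```

The product suppresses intensity near the circular boundary, so the only sharp corner response left for the detector is the one at the bow-tie center.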
d. Stage 4: As mentioned above, the corner detector output increases as a corner is approached and reaches a peak at the corner location. Since the digital image is pixelated and since there may also be noise in the image, the intensity profile rising to a peak corresponding to a corner location may itself be noisy. By analyzing the neighborhood around the peak and calculating the centroid location (or alternatively the median location, or another calculated representation of the center of a distribution) of the neighborhood of pixels containing the peak, the true location of the corner can be estimated with subpixel accuracy.
Alternatively, the subpixel location of a corner can be found by calculating the intersection point of lines tangent to the edges that make up the detected corner. These and other corner detectors are described in the literature and this invention does not rely on the use of any specific corner detection method. The open source computer vision library, OpenCV, offers multiple corner detectors that could perform this operation.
The template calibration can be further improved by increasing the number of fiducial targets beyond the minimum requirement to facilitate error detection and re-calibration strategies. In theory, four co-planar markers are sufficient to solve the homography mapping and enable measurement. One method for improving accuracy is to include additional fiducial markers in the template (more than four) to test the alignment accuracy and trigger a second-tier re-calibration if necessary. Another method uses the additional markers to drive an over-fit homography based on least squares, least median, random sample consensus, or similar algorithms. Such an approach minimizes error due to one or several poorly detected fiducial markers while broadening the usable range and detection angle, thus improving the robustness of the system.
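The first strategy, using the extra markers as a consistency check, might look like the sketch below. The homography layout, the reprojection-error metric, and the tolerance value are all hypothetical choices for illustration, not parameters fixed by the specification.

```python
import math

def apply_h(H, x, y):
    """Map a pixel through the 3x3 correction homography H."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def alignment_error(H, extra_pix, extra_world):
    """Worst reprojection error of the extra (>4) fiducials under H."""
    return max(math.hypot(u - X, v - Y)
               for (x, y), (X, Y) in zip(extra_pix, extra_world)
               for (u, v) in [apply_h(H, x, y)])

def needs_recalibration(H, extra_pix, extra_world, tol=0.5):
    """Trigger the second-tier re-calibration when the extra markers
    disagree with the solved homography by more than tol units."""
    return alignment_error(H, extra_pix, extra_world) > tol
```

If the check fails, the system can either re-solve the homography from a different subset of markers or fall back to an over-fit least-squares or RANSAC solution over all detected fiducials.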
2. Active Illumination Reference Pattern Processing:
The distortion(s) illustrated in
The distortion(s) illustrated in
In a further embodiment of the embodiment illustrated in
In the embodiment shown in
F. Operation of Preferred Embodiment
The user has an assembled or integrated Quantified Image Measurement System, consisting of all Camera, Computer, and Software elements and sub-elements. The template pattern is non-dynamic and fixed in geometry, and matches the pattern and geometry configuration used by the software to find reference points and to calculate non-orthographic distortion correction data and real-world measurements.
The user aims the Quantified Image Measurement System in a pose that allows the Camera view and Reference Template to occupy the same physical space upon a selected, predominantly planar surface that is to be imaged. The Computer and Software are then triggered by a software or hardware trigger that sends instructions to Timing To Camera via Electrical And Command To Camera. The Camera may have a Filter System, added or integral, which enables a more effective capture of the reference template by reducing the background radiation or by limiting the radiation wavelengths that are captured by the Camera, allowing Software processing with an improved signal-to-noise ratio. The data capture procedure delivers information for processing into Raw Data. The Raw Data is processed to generate Export Data and Display Data. The Export Data (QIF) and Display Data comprise a common-format image file displaying the image and any geometric measurements that have been generated, with an embedded or attached correction data set whereby the distortion caused by a non-orthographic camera angle or other distortion sources can be corrected to give accurate dimensions and 2D geometrical measurements. The Display Data also provides a user interface where the user can indicate key points or features in the image for which measurement data is to be generated by the QIF software.
The embodiments of a Quantified Image Measurement System described herein are structured to be used as either a passive or an active measurement tool by combining known algorithms, reference points with scale, and computer vision techniques. To create a new QIF, the user simply takes a photo within the quantified image measurement application, as they normally would with their portable device. The user then selects points or regions in an image from the QIF photo library on which to perform measurements by marking points within the photo using finger or stylus on a touch screen, mouse on a computer platform, or other methods, manual or automatic, to identify key locations within the photo. Software routines within the Quantified Image Measurement System calculate and display physical measurements for the points or regions so selected.
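A measurement between two user-selected pixels can be sketched as follows, assuming a correction homography H of the kind the calibration stages produce. The homography used in the example (a pure scale from pixels to real-world units) and the function names are hypothetical, chosen only to keep the illustration self-contained.

```python
import math

def measure_between(H, p1, p2):
    """Map two user-selected pixel locations through the QIF correction
    homography H and return their real-world separation."""
    def to_world(p):
        x, y = p
        w = H[2][0] * x + H[2][1] * y + H[2][2]
        return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
                (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
    (x1, y1), (x2, y2) = to_world(p1), to_world(p2)
    return math.hypot(x2 - x1, y2 - y1)
```

Because the correction data set travels with the image, this computation can be repeated later for any new pair of points without re-imaging the scene.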
This invention is an improvement on what currently exists in the market, as it has the ability to capture millions of measurement data points in a single digital picture. The accuracy associated with the measurement data is dependent on a) the reference points used, b) the pixel density (pixels per angular field of view) of the digital camera in the host device, and c) how the data are processed with the algorithms within the QIF framework.
In addition, the quantified image measurement system automatically corrects or compensates for any off-angle distortions introduced by the camera position and orientation relative to the scene.
The output QIF (with or without marked-up measurements) is saved in an industry standard image file format such as JPEG, TIFF, BMP, PDF, GIF, PNG, EXIF, or another standard format that can be easily shared. The QIF extended data and dimensional characteristics are appended to the image using existing fields within these image formats, such as the metadata field, the extended data field, or the comments field. Applications and/or services that are “aware” of dimensional information, such as CAD applications, can use the QIF extended data and dimensional characteristics and the associated QIF processing routines for additional processing. Even other mobile devices equipped with the quantified image measurement application can read and utilize this QIF extended data and dimensional characteristics for additional processing.
There is a wide array of applications that can take advantage of the quantified image measurement system technology, including but not limited to: a medical wound management measurement system, automatic item or box size recognition system, cable measurement system, design-to-fit system, object recognition system, object size and gender recognition and search system, biometric system, distance measurement system, industrial measurement system, virtual reality enhancement system, game system, and an automatic quality control system used in home building and industrial buildings that uses multiple, time-sequenced pictures of the progress; in this way, a digital home manual can be created that has all information in one database, and more.
The Quantified Image Measurement System combines a number of known theories and techniques and integrates them into an all-in-one application (passive) and/or integrated app-enabled accessory (active) in an action that most everyone knows how to do: push a button to take a picture.
The passive Quantified Image Measurement System is based on a passive reference template introduced into the scene or a known reference object in the scene, a camera, and a data processor running software algorithms that learn scene parameters from the reference template/object and apply the scene parameters so learned to calculate physical measurements from the image data. The QIF extended data and dimensional characteristics can be enhanced with specific applications and integrated services that can provide customer-specific information within the same QIF extended data.
The active system is based on an active reference pattern projected onto the scene to be captured, with subsequent analysis based on optical triangulation and image analysis operations. The data processing includes learning scene parameters from the image of the projected light pattern and applying the scene parameters so learned to calculate physical measurements from the image data. The QIF extended data and dimensional characteristics generated by the system can be enhanced with specific applications and integrated services that can provide customer-specific information within the same QIF extended data.
A typical use case on the consumer side for this invention would be that a person is at home and wants to paint a wall but doesn't know how much paint is needed to complete the job. Using the quantified image measurement system, a person can simply take any standard digital camera or device with a camera, place a reference template or object into the scene, and photograph the scene. When the picture is opened in the quantified image measurement application, the surface to be painted is measured to calculate how much paint is needed. Based on the underlying image analysis combined with integrated paint usage models, the application can give exact information on how much paint is needed.
In the previous example, the basic measurement capture application can be enhanced with value-add applets that can solve specific customer problems. Examples: How much paint? Does it fit? What is the volume? What is the distance? What is the height? What is the width? Can I get a quote to paint the area? Where can I find a replacement cabinet?
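A paint-estimate applet of the kind described could be as simple as the sketch below. The coverage figure and number of coats are illustrative assumptions standing in for the integrated paint usage models, not values taken from the specification.

```python
def paint_needed(wall_area_m2, coats=2, coverage_m2_per_litre=10.0):
    """Litres of paint for a measured wall area, under an assumed
    usage model of coverage_m2_per_litre per coat (illustrative only)."""
    return wall_area_m2 * coats / coverage_m2_per_litre
```

The wall area itself would come from the QIF measurements, so the applet reduces to a lookup of the usage model plus this arithmetic.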
Additionally, a typical use case on the industrial side for this invention would be a contractor visiting a job site who wants to design new kitchen cabinets but does not know how many pre-designed or custom cabinets could fit along the wall. Using this quantified image measurement system, the contractor can acquire multiple photographs of the scene at the job site and later open the pictures in the quantified image measurement application to create QIFs with the specific measurements of interest. Subsequently, he can share that information with the home office CAD system and come up with a solution for how many pre-designed or custom cabinets are needed and what would be the best way to install the cabinets, ultimately providing the information needed to produce an accurate quote for the job.
Similarly, the quantified image measurement system can be applied to: a medical wound management measurement system, automatic item or box size recognition system, cable measurement system, design-to-fit system, object recognition system, object size and gender recognition and search system, biometric system, distance measurement system, industrial measurement system, virtual reality enhancement system, game system, and an automatic quality control system used in home building and industrial buildings that uses multiple, time-sequenced pictures of the progress; in this way, a digital home manual can be created that has all information in one database.
While the disclosure has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of this disclosure, will appreciate that other embodiments may be devised which do not depart from the scope of the disclosure as disclosed herein. Although the disclosure has been described in detail, it should be understood that various changes, substitutions, and alterations can be made hereto without departing from the spirit and scope of the disclosure.
This application is a utility application claiming priority of U.S. provisional application Ser. No. 61/623,178 filed on 12 Apr. 2012 and Ser. No. 61/732,636 filed on 3 Dec. 2012 and U.S. Utility application Ser. No. 13/861,534 filed on 12 Apr. 2013; Ser. No. 13/861,685 filed on 12 Apr. 2013; Ser. No. 14/308,874 filed on 19 Jun. 2013; Ser. No. 14/452,937 filed on 6 Aug. 2013; and Ser. No. 14/539,924 filed 12 Nov. 2014.