The present invention relates to a mobile and portable networked system to automate and optimize the capture of images in three dimensions for three-dimensional (3D) printing of articles on site or on a distributed printer network.
Inkjet printers have revolutionized the printing industry with the introduction of new substrates, colors, finishes and special textures. The introduction of 3D inkjet printers has added to this transformation the ability to print solid parts based on input received from computer-aided design (CAD) and other forms of computer graphics engineered for creating digital files with information suitable for 3D printing.
3D inkjet printers work by directing jets of discrete ink droplets to a substrate which maintains an image-wise relationship with the printing head. In the particular case of 3D printing, the image-wise relationship is maintained in the X-Y planes of the substrate material while the Z-direction is built up in a sort of topographic map manner. This permits the creation of three-dimensional objects containing shape, form, texture and color information transmitted from an electronic file. These files must either be created digitally or somehow recorded from real 3D objects.
The present invention relates to a method and apparatus for capturing image data from real objects, processing the data and rendering 3D replicas via 3D printing in particular. It will be understood to anyone skilled in the art that there are numerous forms of creating 3D or solid objects including, but not limited to the process of creating a mold based on the digital data followed by numerous forms of casting.
Although 3D printing has been taken to unprecedented levels of sophistication, and the ability to print technically complex objects continues to improve, several problems remain. One particular problem is the capture of images of 3D objects or forms; that is, images of real objects that are not computer generated. Various methods are available comprising the use of a multiplicity of cameras and specialized image stitching software. These methods are expensive, due in part to the large number of cameras (henceforth referred to as image capture devices) required and the difficulty of accurate image stitching as the perspectives change. They are impractical for mobile or portable applications due to the mechanical and physical complexity of the systems. They are also extremely delicate in terms of the calibration of the multiple cameras and the large amounts of data that must be handled. Another complication of these methods is that data reduction, including compression, stitching, smoothing and other operations, must be done at the capture site prior to any attempt to transmit the data wirelessly via Wi-Fi or via the cellular telephone network to a central 3D printing and finishing location.
One particular method, fractal interpolation of images and volumes, takes a reduced number of images of known subjects and interpolates the image data to create a synthetic volumetric data file for printing. In one specific case this can be done with human faces to create computer-generated files of the corresponding head. This method takes advantage of known symmetries and expected constituents of the human head to make the interpolation and the corresponding data manipulation somewhat more abbreviated. The results suffer from inaccuracies and visible artifacts, a consequence of the shortcuts taken, and may not be appropriate for certain applications. In addition, complex algorithms and sophisticated data manipulating software and apparatus are required.
One caveat to consider is that although 3D image files may be computer generated, or partially computer generated combining image capture and interpolation, their intended use may be more appropriate and practical for 2D display from one side of the object rather than for 3D printing from all sides of the object. One major issue is then how to optimize a combination of optical or real image capture in conjunction with synthetic or computed images to create electronic files acceptable for 3D printing within constraints of cost, portability, image quality and overall convenience.
One particular problem is the need for portable and mobile systems to serve the image capture needs of places, objects and events that by their very nature cannot be moved to a central image capture location. In one particular example, museum pieces may not be transportable on account of security and insurance issues, fragility, or size and mobility limitations. It is desired to capture three-dimensional images on site so that they may be archived for study, online viewing, 3D replicas, or 3D molds to be cast for sale or distribution. In another example, images at events like weddings, sports events, concerts, amusement parks and movie theaters (including 3D showings) cannot be transported to central capture locations, creating an unavoidable need for a portable and mobile capture system.
Let it be understood that the term mobile used in the context of 3D image capture is a subset of mobile computing as defined according to accepted terminology in the art as: “taking a computer and all necessary files and software out into the field”. In this particular case it comprises all capture equipment, its corresponding optics, image data managing computing devices, memory, electronic business engines, and multi-channel communication capabilities.
Various embodiments of the present invention provide for a mobile and portable networked system to automate and optimize the capture of images in three dimensions for three-dimensional printing of articles on site or on a distributed printer network said system comprising an image capture platform, at least one mirror, at least one image capture device, a processing apparatus, and at least one communications module. The system creates a three dimensional (3D) model to allow for the 3D printing of articles and also two dimensional and three dimensional avatar generation.
In one embodiment of the present invention a set of image capture devices, each arranged to capture a perspective, is aligned in an image-wise relationship with a subject, wherein each image capture device sensor views and captures two perspectives of the subject from different angles simultaneously by means of an optical arrangement of mirrors aligned to reflect at least two views of the subject. As those knowledgeable in the field are aware, only features visible in each perspective can discretely become rectified points in the captured image. Therefore, the accuracy of the 3D model improves with the number of perspectives captured.
Another embodiment of the present invention accomplishes the task of capturing the image by at least one linear image sensor, alternatively two parallel linear sensors, or a combination of such, selected to have a frame capture rate optimized to achieve the correct density of pixel information in accordance with the requirements of the 3D printer. Said linear image sensor or sensors is/are coupled in an image-wise relationship with at least one mirror in an optical relationship such that when the camera is rotated about an axis an area scan of the subject is produced from which the image data may be manipulated.
Another embodiment of the present invention accomplishes the capture of simultaneous perspectives using a variety of applications available on smartphones, tablet computers, and the like, wherein the processing is done in part or in totality within the processing capabilities of the smartphone or tablet computer. Alternatively, the processing may be done in the cloud or via a web-based application and then submitted for 3D printing. Typically a smartphone equipped with 3D technology will have a parallax barrier display or a variation thereof. One particular example comprises four front-facing cameras to create a 3D perspective. In one particular use a person may capture a so-called "selfie" to be used as a source for 3D printing.
The system of the present invention may use any readily available image capture arrangement for capturing image data from real objects to create the 3D image data. Some specific optical arrangements for capturing image data from real objects are abstracted below:
The invention further relates to a method of reducing electronic file sizes by reducing the number of image capture devices involved in the process. This is accomplished in part by the optimal positioning of, and the minimization of, the number of image capture devices used to capture perspective information. One particular preferred embodiment makes use of a telecentric lens in optical relationship with an image capture device.
The invention further provides for retouching or otherwise enhancing images via software such as Photoshop™ (manufactured by Adobe Inc.) or any of a multiplicity of image enhancing applications prior to reviewing and/or prior to 3D printing.
In yet another embodiment of the present invention, a specialized type of 3D printer would ideally be utilized using a (r, theta, h) cylindrical coordinate system, adding material in a manner similar to cutting a part on a lathe, but instead building material up around a core, with variations of radius occurring at different angles around the part and different locations along the lathe bed. In the context of this invention, a cylindrical coordinate system is utilized in which the vertical axis passes through the center of the object space: r is the radius to object features from the vertical axis; theta is the angle from a starting zero angle to 360 degrees, a full revolution about the vertical axis; and h is the height above a platform upon which the object may be placed. When producing replicas of the object, a 3D printer capable of rendering features in the same type of cylindrical coordinate system is therefore optimal, because while conversion from the cylindrical coordinate system (r, theta, h) to a three-axis coordinate system (x, y, z) is easy from a mathematical point of view, it adds complication to the programming of the 3D printer.
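By way of a non-limiting illustration, the mathematically simple conversion between the two coordinate systems mentioned above may be sketched as follows (the function name is illustrative only and not part of the invention):

```python
import math

def cylindrical_to_cartesian(r, theta_deg, h):
    # theta is measured in degrees about the vertical reference axis;
    # h maps directly onto the z axis of a three-axis printer
    theta = math.radians(theta_deg)
    return (r * math.cos(theta), r * math.sin(theta), h)
```

A point at radius 1 and 90 degrees maps to (0, 1) in the X-Y plane while its height is unchanged, confirming that only the in-plane coordinates require computation.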
In yet another aspect of the present invention a cylinder marked with registration marks serves as the standard for calibrating the capture system against the various image perspectives taken.
Another embodiment of the present invention relates to the optimization of image capture for the purpose of 3D printing comprising the steps of maximizing the capture of areas that are more relevant to the resulting end product. In particular when capturing the image of a human head, more emphasis would be put on capturing data, even if redundant, of the face. This may be achieved by any of several methods including:
Another embodiment of the present invention comprises the use of smartphone cameras selected to capture a multiplicity of images of the subject's periphery. This is accomplished by one of various means.
In one particular embodiment the subject is stationary while the smartphone camera is rotated about the subject by a motorized system moving the camera about an axis where the subject is at the center.
In another embodiment the smartphone camera may be rotated about the subject by an operator while maintaining the height of the camera at a relative constant level. In another embodiment of the present invention the orbit of the image capture device may be other than circular to obtain the optimal set of capture points for 3D printing. In one particular embodiment the image capture device moves in a hyperbolic orbit or trajectory about the subject(s). In another particular embodiment the image capture device moves in an elliptical orbit or trajectory about the subject(s).
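As an illustrative sketch of the non-circular orbits described above, the capture positions along an elliptical trajectory at constant height may be generated as follows (function and parameter names are hypothetical):

```python
import math

def elliptical_waypoints(a, b, n, height):
    # n capture positions on an ellipse with semi-axes a and b,
    # all at the same height above the platform holding the subject
    pts = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        pts.append((a * math.cos(t), b * math.sin(t), height))
    return pts
```

A hyperbolic trajectory would be parameterized analogously, trading the closed orbit for an open pass across the optimal capture points.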
In one particular example a Raspberry Pi™ single board computer drives a digital camera facilitating portability and mobility and on-site data manipulation.
In yet another embodiment of the present invention the subject is placed on a platform that rotates at a certain number of revolutions per minute (RPM) while the camera remains stationary.
In yet another embodiment the rotating platform contains concentric markers so that more than one subject may be placed on the markers for optimal image capture.
Another embodiment of the present invention comprises batch processing the images so that histograms are adjusted so as to eliminate over- and underexposure in at least some frames. This step helps create "structure" in the image in parts that may be over- or underexposed, thus helping ensure that the 3D printer interprets the image data correctly and does not print swatches where there is over- or underexposure.
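One simple histogram adjustment of the kind contemplated above is percentile-based contrast stretching, sketched here as a minimal, non-limiting example operating on a flat list of 8-bit pixel values (the function name and percentile defaults are assumptions):

```python
def stretch_histogram(pixels, low_pct=2, high_pct=98):
    # clip the darkest and brightest percentiles, then rescale to 0..255,
    # recovering "structure" in regions that were over- or underexposed
    vals = sorted(pixels)
    lo = vals[int(len(vals) * low_pct / 100)]
    hi = vals[min(int(len(vals) * high_pct / 100), len(vals) - 1)]
    if hi <= lo:
        return [128] * len(pixels)  # flat frame: no structure to recover
    return [max(0, min(255, round(255 * (p - lo) / (hi - lo))))
            for p in pixels]
```

Applied per frame in a batch, this keeps the tonal range consistent across the set of captured perspectives.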
Another embodiment of the present invention comprises using structured lighting to help ensure that the 3D printer properly interprets the image data. As those skilled in the art are aware, structured lighting is a process of projecting a known pattern (often grids or horizontal bars) onto an object or objects located in a scene. The way that the grid lines deform when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene, as is commonly used in structured lighting 3D scanners. The grid lines of the structured lighting may be visible or invisible. Invisible grid lines may either use lines which are outside the normal range of visible light (such as infrared or ultraviolet light) or project patterns of light at extremely high frame rates. In one particular example “structured lighting” may be applied as a means to reduce the number of capture devices and reduce cost and improve 3D capture information.
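The depth recovery underlying structured lighting can be illustrated by a deliberately simplified disparity-style model: the observed displacement of a projected stripe, together with the projector-to-camera baseline and the focal length in pixels, yields the surface depth. This sketch assumes a pinhole geometry and is not the invention's specific algorithm:

```python
def depth_from_shift(pixel_shift, baseline, focal_px):
    # simplified pinhole model: depth = baseline * focal / observed shift;
    # larger stripe displacement means the surface is closer to the camera
    if pixel_shift <= 0:
        raise ValueError("no stripe displacement observed")
    return baseline * focal_px / pixel_shift
```

For example, a 10-pixel stripe shift with a 0.2 m baseline and an 800-pixel focal length corresponds to a depth of 16 m in this idealized model.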
Another embodiment of the present invention covers the setting of the mechanisms of image capture to produce what is called in the photographic art “bracketing”. Bracketing may be conducted by an individual capture device or camera or by a multiplicity or array of cameras selected to accomplish the best set of bracketing settings to produce a set of images of superior quality for 3D printing. As those skilled in the art are aware, examples of how this bracketing may be achieved include:
Embodiments of the present invention will be described by reference to the following drawings, in which like numerals refer to like elements, and in which:
The following is a listing of parts presented in the drawings:
The present invention relates to a mobile and portable networked system to automate and optimize the capture of images in three dimensions for three-dimensional printing of articles on site or on a distributed or cloud printer network.
In
After the image is captured, an image processing step 100 consists of determining, from zones of overlapping features, the variations in a radial direction about a central reference axis (of the object) of the height of features on the object. Points that are identified from different perspectives are called rectified points, and their locations in space are established because the geometry of the capture devices relative to the object space is known and static. Only points visible in two different views can become rectified (so at least two perspectives of the object will be captured), but prior knowledge of the object features will ease the process. To establish this geometry, the physical location of the capture devices relative to the central reference axis can either be measured or fixed, or alternately determined by a calibration process 112 of capturing images from a known object or objects that have overlaid grid patterns for easy rectification of grid points. An image file is created which contains all the appropriate features of the captured 3D image. This image file includes a 3-dimensional printing model generated by using the depth information obtained from the rectified points.
After image processing 100, the image file ID 110 comprises a database of identified rectified points from at least two different perspectives and their locations relative to the reference grid images taken during the calibration process 112. The data would preferably be in the form of (r, theta, h) where r is the radius from the reference axis to the rectified point, theta is the rotational position of the rectified point, and h is the height of the rectified point to a base plane perpendicular to the object space reference axis as those skilled in the art are aware. Referring again to
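A rectified point located in Cartesian coordinates relative to the reference axis can be expressed in the preferred (r, theta, h) form described above; the following minimal sketch (function name illustrative) shows the conversion:

```python
import math

def to_cylindrical(x, y, z):
    # r: distance from the vertical reference axis;
    # theta: rotational position in degrees, normalized to 0..360;
    # h: height above the base plane (identical to z)
    r = math.hypot(x, y)
    theta = math.degrees(math.atan2(y, x)) % 360.0
    return (r, theta, z)
```

Each rectified point stored in the image file ID 110 database would be one such (r, theta, h) triple.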
The database created in the image file ID 110 is stored in a storage 120 which comprises saving the database in a computer file by any number of ways to correlate the database with object identification so that a replica can eventually be made from the data and can be correctly identified with the original object and circumstances of the image capture, such as illumination, time of exposure, retakes and the like. Data stored in storage 120 may have additional data added to the image file data and this data may be processed further in the Image File ID step 110 as and if additional data is collected or created in the image processing step 100.
After the image file data has been stored, a print job file is created. Print job file creation 130 consists of the process of setting up a replica making machine with access to the database and recording the object name and other capture variables along with at least one serial number or icon that may be printed directly on a replica for later correlation to the order that promulgated the 3D image capture process 300 to begin with.
After the 3D replica print job file has been created, a 3D replica or avatar is printed via a 3D printing device. 3D Print Operations 140 consist of laying down materials onto a suitable substrate to recreate the (radius, theta, height) variations recorded in the database computer file in a proportional manner on the substrate. The proportional manner refers to the volumetric scale of the replica relative to the original object. The suitable substrate may be a form roughly equivalent to the form of the object, so that the amount of materials needed for 3D printing and the time to complete the 3D printing are reduced. It is anticipated that under certain circumstances a complete 360-degree replica is not intended or made, but that a rotational angle of less than 360 degrees is 3D printed even though full 360-degree data is potentially available in the database file. This provides the advantage of allowing the image capture device to be hardwired to the processing unit.
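The proportional manner referred to above amounts to a uniform scale applied to the stored (r, theta, h) samples; a non-limiting sketch (function name hypothetical) follows:

```python
def scale_model(points, scale):
    # r and h scale linearly with the replica size;
    # theta is an angle and is therefore unchanged by scaling
    return [(r * scale, theta, h * scale) for (r, theta, h) in points]
```

For instance, a half-scale replica halves every radius and height while leaving each rotational position intact.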
The 3D printing may produce either a replica of the object or a mold of the object from the rectified points.
Upon printing a replica, the replica is finalized. Finishing steps 150 consist of polishing the raw printed object prior to printing a final layer on the 3D replica to impart color and density information and a potential protective layer, for example to fix the color and density layer or encapsulate it with waterproofing or ultraviolet protection to prevent the replica from becoming brittle. This final layer may be determined from a fewer number of perspectives than used to determine the rectified points, or from different image capture devices. The polishing process may consist of readily known processes such as ablation via a compressed fluid, be it air, water, sand or any other suitable medium. In addition, the polishing may be done by a laser finishing process such as laser ablation to smooth the printed object; in one particular example the laser ablation apparatus is computer driven using data received from the image capture step as input. As those skilled in the art are aware, laser ablation is the process of removing material from a solid (or occasionally liquid) surface by irradiating it with a laser beam. At low laser flux, the material is heated by the absorbed laser energy and evaporates or sublimates. The laser ablation may be performed on either an avatar to be created or a mold to be used to create the avatar.
After the replica or avatar is finished it is delivered to the end user. Remote delivery 160 refers to the potential that the image capture device and the 3D printing device may be in entirely separate physical locations. For example, the image capture device may be designed to be portable and taken to a remote location, while the 3D printing device is centrally located serving more than one capture device. Therefore, the delivery of the replica from the 3D printing device may be achieved with services of physical mail or package delivery, while conversely the image capture information from Step 300 is digitally transmitted to a central computer database processor by any number of means including telephone, the Internet, or satellite transmission, or combinations thereof, as examples.
Referring to
After the image is captured via a device such as a 3D capable iPhone 6 (manufactured by Apple Computer, Inc.) in Step 351, an image processing step 100 consists of determining, from zones of overlapping features, the variations in a radial direction about a central reference axis (of the object) of the height of features on the object. Points that are identified from different perspectives are called rectified points, and their locations in space are established because the geometry of the capture devices relative to the object space is known and static. Only points visible in two different views can become rectified, but prior knowledge of the object features will ease the process. To establish this geometry, the physical location of the capture devices relative to the central reference axis can either be measured or fixed, or alternately determined by a calibration process 112 of capturing images from a known object or objects that have overlaid grid patterns for easy rectification of grid points. An image file is created which contains all the appropriate features of the captured 3D image.
After image processing 100, an image file ID 110 comprises a database of image file data comprising identified rectified points from at least two different perspectives and their locations relative to the reference grid images taken during the calibration process 112. The data would preferably be in the form of (r, theta, h) where r is the radius from the reference axis to the rectified point, theta is the rotational position of the rectified point, and h is the height of the rectified point to a base plane perpendicular to the object space reference axis. Referring again to
The database created in the image file ID 110 is stored in a storage step 121 which comprises saving the database in a storage unit such as a computer file by any number of ways to correlate the database with object identification so that a replica can eventually be made from the data and can be correctly identified with the original object and circumstances of the image capture, such as illumination, time of exposure, retakes and the like. The storage unit comprises any of various means, be they a hard drive, magnetic tape, recordable compact disc (CDR), or cloud storage. Data stored in storage step 121 may have additional data added to the image file data and this data may be processed further in the Image File ID step 110 as and if additional data is collected or created in the image processing step 100.
After the image file data has been stored, a print job file is created. Cloud or printer network print job file creation 131 consists of the process of setting up the replica making machine with access to the database and recording the object name and other capture variables along with at least one serial number or icon that may be printed directly on the replica for later correlation to the order that promulgated the 3D image capture process 351 to begin with. The printer is given the instructions to print on a networked system and may be present in any of many potential physical locations.
After the 3D replica print job file has been created, a 3D replica or avatar is printed. 3D print operations 141 consist of laying down materials onto a suitable substrate to recreate the (radius, theta, height) variations recorded in the database computer file in a proportional manner on the substrate. The proportional manner refers to the volumetric scale of the replica relative to the original object. The suitable substrate may be a form roughly equivalent to the form of the object, so that the amount of materials needed for 3D printing and the time to complete the 3D printing are reduced. It is anticipated that under certain circumstances a complete 360-degree replica is not intended or made, but that a rotational angle of less than 360 degrees is 3D printed even though full 360-degree data is potentially available in the database file. This provides the advantage of allowing the image capture device to be hardwired to the processing unit.
It will be understood by anyone skilled in the art that a multiplicity of materials such as resin, plastic polymers, rubber, and the like may be used for the 3D printing task. In another aspect of the present invention 3D capture of a body part is followed by 3D printing of a replica of said body part with bio-ink or a combination of biocompatible materials leading to a 3D printed prosthesis. In one particular example of 3D prosthesis printing, a 3D capture of a nipple is followed by color manipulation and retouching and 3D printing with a bio-ink or biocompatible combination of inks and polymers for subsequent placement onto a breast that has undergone mastectomy and where the nipple has been surgically removed.
It will be understood by anyone skilled in the art that the invention is not limited to nipples but extends to other body parts including but not limited to ears, noses, fingers, toes and such. It will also be understood that certain parts like ears possess mirror image symmetry, thereby requiring mirror image treatment of the image data prior to 3D printing in order to duplicate the image pair.
Upon printing the replica, the replica is finalized. Finishing steps 150 consist of polishing the raw printed object prior to printing a final layer on the 3D replica to impart color and density information and a potential protective layer, for example to fix the color, density layer or encapsulate it with waterproofing or ultraviolet protection to prevent the replica from becoming brittle. This may be determined from a fewer number of perspectives than used to determine the rectified points or from different image capture devices. The polishing process may consist of readily known processes such as ablation via compressed fluid be it air, water, sand or any other suitable fluid. In addition the polishing may be done by laser ablation. The laser ablation may be performed on either an avatar to be created or a mold to be used to create the avatar.
After the replica or avatar is finished it gets delivered to the end user. Remote delivery 160 refers to the potential that the image capture device and the 3D printing device may be in entirely separate physical locations. For example, the image capture device may be designed to be portable and taken to a remote location, while the 3D printing device is centrally located serving more than one capture device. Maintaining the centrally located 3D printing devices allows for economies of scale to be used. Remote printing devices may be used to allow for economies of delivery to be realized. So, if a picture is taken in one country but the avatar to be created is to be delivered in a second country then remote printers would be appropriate. Therefore, the delivery of the replica from the 3D printing device may be achieved with services of physical mail or package delivery, while conversely the image capture information from step 351 is digitally transmitted to a central computer database processor by a number of means including telephone, the internet web, or satellite transmission, or combinations thereof, as examples.
Referring now to
Upon capture of a 3D image in step 300, an image data demographic address 102, which refers to the correlation data linking database identification with the circumstances of the original image capture, is added to the data collected in the 3D image capture 300. In a commercial scenario, this consists of a customer's order for the replica together with other information pertinent to the initiation of the image capture. This demographic file would eventually be needed to properly deliver the finished replica to the person who ordered it. In addition, names and addresses may be obtained and linked to the order number.
Upon capturing the customer's demographic data, the customer is provided an opportunity to review the image and product to be created from the image. Image and product image review 104 consists of a proofing method to obtain a customer's approval of the expected replica's appearance before the further processing of data from the image capture is begun. For example, a series of photographic views from various perspectives surrounding the object may be sufficient, or stereographic pairs could be generated and displayed with 3D glasses to impart depth at one or more perspectives. In another setting images may be available on a smart phone display comprising a simple single view or a stereoscopic pair, which may be viewed by appropriate lenses. The purpose of this review is to verify that what was captured is a satisfactory replica suitable for printing.
The replica that is generated in step 104 may be either a two dimensional or a three dimensional representation of the original object. The image may be sent to an appropriate universal resource locator (URL) so that the end user can view the images prior to printing the replica.
Upon reviewing the images and products, the customer is provided an option to order the product. Product order 106 is the decision process to proceed with the replication process. If the answer is yes 108 as shown, then the steps described above for further image processing, beginning with print job file creation 130, including database creation, and 3D print operations 140 to begin replica manufacture, are enacted, leading to finishing steps 150 and ultimately remote delivery 160 of the replica to the customer. Alternatively, if the customer does not wish to place a product order 109, the process may be terminated or a new image capture scheduled.
Θ=360/n (1)
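Equation (1) gives the angular spacing Θ, in degrees, between n image capture devices (or capture positions) evenly distributed about the subject. A minimal sketch of the resulting capture positions (function name illustrative only):

```python
def capture_angles(n):
    # equation (1): theta = 360 / n is the spacing between n evenly
    # distributed capture devices; positions start at the zero angle
    spacing = 360.0 / n
    return [k * spacing for k in range(n)]
```

For n = 4 devices the capture positions fall at 0, 90, 180 and 270 degrees about the vertical reference axis.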
The app also determines e-commerce parameters to allow the subject or a representative of the subject to select and order an avatar of their choosing. After subject or his representative chooses to place an order, the app creates the order and arranges for payment and settlement of payment. The app may make use of existing e-commerce platforms such as PayPal to complete the transaction. The optimized image and order information are combined into a data file and the data file is uploaded to the cloud. From the cloud, a remote 3D printing station is accessed and the order may be printed and the 3D avatar delivered to the subject's preferred destination.
Although several embodiments of the present invention, methods of using the same, and their advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. The various embodiments used to describe the principles of the present invention are by way of illustration only and should not be construed in any way to limit the scope of the invention. Those skilled in the art will understand that the principles of the present invention may be implemented in any suitably arranged device.
Priority for this patent application is based upon provisional patent application 62/257,112 (filed on Nov. 18, 2015). The disclosure of this United States patent application is hereby incorporated by reference into this specification.