Volume dimensioning systems and methods

Information

  • Patent Grant
  • Patent Number
    10,467,806
  • Date Filed
    Thursday, September 28, 2017
  • Date Issued
    Tuesday, November 5, 2019
Abstract
Systems and methods for volume dimensioning packages are provided. A method of operating a volume dimensioning system may include the receipt of image data of an area including at least a first three-dimensional object to be dimensioned from a first point of view as captured using at least one image sensor. The system can determine from the received image data a number of features in three dimensions of the first three-dimensional object. Based at least in part on the determined features of the first three-dimensional object, the system can fit a first three-dimensional packaging wireframe model about the first three-dimensional object. The system can display an image of the first three-dimensional packaging wireframe model fitted about an image of the first three-dimensional object on a display device.
Description
CROSS-REFERENCE TO RELATED APPLICATION

The present application claims the benefit of U.S. patent application Ser. No. 13/464,799 for Volume Dimensioning Systems and Methods filed May 4, 2012 (and published Nov. 7, 2013 as U.S. Patent Publication No. 2013/0293539), now U.S. Pat. No. 9,779,546. Each of the foregoing patent application, patent publication, and patent is hereby incorporated by reference in its entirety.


International Application No. PCT/US13/39438 for Volume Dimensioning Systems and Methods filed May 3, 2013 (and published Nov. 7, 2013 as WIPO Publication No. WO 2013/166368) also claims the benefit of U.S. patent application Ser. No. 13/464,799. Each of the foregoing patent application and patent publication is hereby incorporated by reference in its entirety.


FIELD

This disclosure generally relates to non-contact systems and methods for determining dimensions and volume of one or more objects.


BACKGROUND

Volume dimensioning systems are useful for providing dimensional and volumetric data related to three-dimensional objects disposed within the point of view of the volume dimensioning system. Such dimensional and volumetric information is useful, for example, in providing users with accurate shipping rates based on the actual size and volume of the object being shipped. Additionally, the volume dimensioning system's ability to transmit parcel data immediately to a carrier can assist the carrier in selecting and scheduling appropriately sized vehicles based on measured cargo volume and dimensions. Finally, the ready availability of dimensional and volumetric information for all the objects within a carrier's network assists the carrier in ensuring optimal use of available space in the many different vehicles and containers used in local, interstate, and international commerce.


Automating the volume dimensioning process can speed parcel intake, improve the overall level of billing accuracy, and increase the efficiency of cargo handling. Unfortunately, parcels are not confined to a standard size or shape, and may, in fact, have virtually any size or shape. Additionally, parcels may also have specialized handling instructions such as a fragile side that must be protected during shipping or a side that must remain up throughout shipping. Automated systems may struggle with assigning accurate dimensions and volumes to irregularly shaped objects, with a single object that may be represented as a combination of two objects (e.g., a guitar) or with multiple objects that may be better represented as a single object (e.g., a pallet holding multiple boxes that will be shrink-wrapped for transit). Automated systems may also struggle with identifying a particular portion of an object as being “fragile” or a particular portion of an object that should remain “up” while in transit.


Providing users with the ability to identify and/or confirm the shape and/or numbers of either single objects or individual objects within a group or stack of objects and to identify the boundaries of irregularly shaped objects benefits the user in providing cartage rates that are proportionate to the actual size and/or volume of the parcel being shipped. Involving the user in providing accurate shape and/or volume data for a parcel or in providing an accurate outline of an irregularly shaped parcel also benefits the carrier by providing data that can be used in optimizing transport coordination and planning. Providing the user with the ability to designate one or more special handling instructions provides the user with a sense of security that the parcel will be handled in accordance with their wishes, that fragile objects will be protected, and that “up” sides will be maintained on the “top” of the parcel during transport. The special handling instructions also benefit the transporter by providing information that can be useful in load planning (ensuring, for example, “fragile” sides remain protected and “up” sides remain “up” in load planning) and in reducing liability for mishandled parcels that are damaged in transit.


SUMMARY

A method of operation of a volume dimensioning system may be summarized as including receiving image data of an area from a first point of view by at least one nontransitory processor-readable medium from at least one image sensor, the area including at least a first three-dimensional object to be dimensioned; determining from the received image data a number of features in three dimensions of the first three-dimensional object by at least one processor communicatively coupled to the at least one nontransitory processor-readable medium; based at least in part on the determined features of the first three-dimensional object, fitting a first three-dimensional packaging wireframe model about the first three-dimensional object by the at least one processor; and causing a displaying of an image of the first three-dimensional packaging wireframe model fitted about an image of the first three-dimensional object on a display on which the image of the first three-dimensional object is displayed.
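Purely as an illustration of the summarized flow, and not as the claimed method itself, a minimal Python sketch follows. It assumes the sensor's image data has already been reduced to a point cloud (an N x 3 array of three-dimensional feature points on the object) and represents the fitted packaging wireframe as the eight corners of an axis-aligned box that completely encompasses those points; all names are hypothetical.

```python
# Minimal sketch, assuming the image data has been reduced to an N x 3 array
# of 3D feature points. The fitted "packaging wireframe" is modeled as the
# eight corners of an axis-aligned box enclosing every feature point.
from itertools import product

import numpy as np

def fit_box_wireframe(points: np.ndarray) -> np.ndarray:
    """Return the 8 corners of an axis-aligned box encompassing the points."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Each corner picks the min or max along each of the x, y, z axes.
    return np.array(list(product(*zip(lo, hi))))

# Hypothetical feature points determined from the received image data.
rng = np.random.default_rng(0)
object_points = rng.uniform(low=[0, 0, 0], high=[30, 20, 10], size=(500, 3))

wireframe_corners = fit_box_wireframe(object_points)
print(wireframe_corners)  # corners to be drawn over the displayed object image
```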


The method may further include receiving at least one user input via a user interface, the user input indicative of a change in a position of at least a portion of the displayed image of the first three-dimensional packaging wireframe model relative to the displayed image of the first three-dimensional object; and causing a displaying of an updated image of the first three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display. The method may further include receiving at least one user input via a user interface, the user input indicative of a change in a position of at least a portion of the displayed image of the three-dimensional packaging wireframe model relative to the displayed image of the first three-dimensional object; based at least in part on the received user input, fitting a second three-dimensional packaging wireframe model about the first three-dimensional object by the at least one processor, the second three-dimensional packaging wireframe model having a different geometrical shape than the first three-dimensional wireframe model; and causing a displaying of an image of the second three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display. The method may further include receiving at least one user input via a user interface, the user input indicative of an identification of a second three-dimensional object, the second three-dimensional object different from the first three-dimensional object; based at least in part on the received user input, fitting a second three-dimensional packaging wireframe model about the second three-dimensional object by the at least one processor; and causing a displaying of an image of the second three-dimensional packaging wireframe model fitted about the image of the second three-dimensional object on the display. The at least one processor may cause the concurrent displaying of the image of the first three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display and the image of the second three-dimensional packaging wireframe model fitted about the image of the second three-dimensional object on the display. The method may further include receiving at least one user input via a user interface, the user input indicative of an identification of at least one portion of the first three-dimensional object; based at least in part on the received user input, fitting one three-dimensional packaging wireframe model about a first portion of the first three-dimensional object by the at least one processor; based at least in part on the received user input, fitting one three-dimensional packaging wireframe model about a second portion of the first three-dimensional object by the at least one processor; and causing a concurrent displaying of an image of the three-dimensional wireframe models respectively fitted about the image of the first and the second portions of the first three-dimensional object on the display. The at least one processor may cause the displaying of the image of the first three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display to rotate about an axis.
The method may further include receiving image data of the area from a second point of view by at least one nontransitory processor-readable medium from at least one image sensor, the second point of view different from the first point of view; determining from the received image data at least one additional feature in three dimensions of the first three-dimensional object by at least one processor; based on the determined features of the first three-dimensional object, at least one of adjusting the first three-dimensional packaging wireframe model or fitting a second three-dimensional packaging wireframe model about the first three-dimensional object by the at least one processor; and causing a displaying of an image of at least one of the adjusted first three-dimensional packaging wireframe model or the second three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display. Fitting a first three-dimensional packaging wireframe model about the first three-dimensional object by the at least one processor may include selecting from a number of defined geometric primitives that define respective volumes and sizing at least one dimension of the selected geometric primitive based on a corresponding dimension of the first three-dimensional object such that the first three-dimensional object is completely encompassed by the selected and sized geometric primitive. The method may further include producing a wireframe model of the first three-dimensional object; and causing a concurrent displaying of the wireframe model of the first three-dimensional object along with the three-dimensional packaging wireframe model. The method may further include receiving at least one user input via a user interface, the user input indicative of a geometric primitive of the first three-dimensional object; and selecting the first three-dimensional object from a plurality of three-dimensional objects represented in the image data by at least one processor, based at least in part on the user input indicative of the geometric primitive of the first three-dimensional object. Selecting the first three-dimensional object from a plurality of three-dimensional objects represented in the image data based at least in part on the user input indicative of the geometric primitive of the first three-dimensional object may include determining which of the three-dimensional objects has a geometric primitive that most closely matches the geometric primitive indicated by the received user input. The method may further include receiving at least one user input via a user interface, the user input indicative of an acceptance of the first three-dimensional packaging wireframe model; and performing at least a volumetric calculation using a number of dimensions of the selected three-dimensional packaging wireframe model.
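One plausible reading of the primitive-selection step above is sketched below: each candidate primitive is sized to completely encompass the object's feature points, and the tightest (smallest enclosing volume) candidate is selected. The two-primitive library and the scoring rule are illustrative assumptions, not requirements of the disclosure.

```python
# Sketch of selecting from defined geometric primitives: size each candidate
# so it completely encompasses the feature points, then keep the candidate
# that encloses the smallest volume.
import numpy as np

def box_volume(points: np.ndarray) -> float:
    length, width, height = points.max(axis=0) - points.min(axis=0)
    return float(length * width * height)

def cylinder_volume(points: np.ndarray) -> float:
    # A vertical cylinder: radius from the horizontal spread, height from z.
    center = points[:, :2].mean(axis=0)
    radius = float(np.linalg.norm(points[:, :2] - center, axis=1).max())
    height = float(points[:, 2].max() - points[:, 2].min())
    return float(np.pi * radius**2 * height)

PRIMITIVES = {"box": box_volume, "cylinder": cylinder_volume}  # assumed library

def select_primitive(points: np.ndarray) -> str:
    # Every candidate already encompasses the object, so the smallest
    # enclosing volume identifies the tightest valid packaging model.
    return min(PRIMITIVES, key=lambda name: PRIMITIVES[name](points))
```

On acceptance of the fitted model, the same sized primitive would supply the dimensions for the volumetric calculation described above.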
The method may further include receiving at least one user input via a user interface, the user input indicative of a rejection of the first three-dimensional packaging wireframe model; and in response to the received user input, fitting a second three-dimensional packaging wireframe model about the first three-dimensional object by the at least one processor, the second three-dimensional packaging wireframe model having a different geometric primitive than the first three-dimensional wireframe model; and causing a displaying of an image of the second three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display. The method may further include receiving at least one user input via a user interface, the user input indicative of a second three-dimensional packaging wireframe model, the second three-dimensional packaging wireframe model having a different geometric primitive than the first three-dimensional wireframe model; in response to the received user input, fitting the second three-dimensional packaging wireframe model about the first three-dimensional object by the at least one processor; and causing a displaying of an image of the second three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display by the at least one processor. The method may further include causing by the at least one processor a displaying of a plurality of user selectable icons, each corresponding to a respective one of a plurality of three-dimensional packaging wireframe models and selectable by a user to be fitted to the first three-dimensional object. The method may further include receiving at least one user input via a user interface, the user input indicative of a region of interest of the displayed image of the first three-dimensional object; and in response to the received user input, causing by the at least one processor a displaying of an enlarged image of a portion of the first three-dimensional object corresponding to the region of interest by the display. The volume dimensioning system may include a computer having a first processor, a camera, and the display, and may further include a dimensioning subsystem having a second processor, the dimensioning subsystem selectively detachably coupleable to the computer; in such a system, causing a displaying of an image of the first three-dimensional packaging wireframe model fitted about an image of the first three-dimensional object on a display on which the image of the first three-dimensional object is displayed may include the second processor causing the first processor to display the image of the first three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display of the computer.


A volume dimensioning system may be summarized as including at least one image sensor communicably coupled to at least one nontransitory processor-readable medium; at least one processor communicably coupled to the at least one nontransitory processor-readable medium; a machine executable instruction set stored within at least one nontransitory processor-readable medium, that when executed by the at least one processor causes the at least one processor to: read image data from the at least one nontransitory processor-readable medium, the image data associated with a first point of view of an area sensed by the at least one image sensor, the area including at least a first three-dimensional object to be dimensioned; determine from the received image data a number of features in three dimensions of the first three-dimensional object; based at least in part on the determined features of the first three-dimensional object, fit a first three-dimensional packaging wireframe model about the first three-dimensional object; and cause a display of an image of the first three-dimensional packaging wireframe model fitted about an image of the first three-dimensional object on a display device.


The machine executable instruction set may further include instructions, that when executed by the at least one processor cause the at least one processor to: select from a number of defined geometric primitives that define respective volumes and size at least one dimension of the selected geometric primitive based on a corresponding dimension of the first three-dimensional object such that the first three-dimensional object is completely encompassed by the selected and sized geometric primitive; produce a wireframe model of the first three-dimensional object; and cause a concurrent display of the wireframe model of the first three-dimensional object along with the three-dimensional packaging wireframe model. The machine executable instruction set stored within at least one nontransitory processor-readable medium may further include instructions, that when executed by the at least one processor cause the at least one processor to: responsive to a user input received by the at least one processor, change a position of at least a portion of the displayed image of the first three-dimensional packaging wireframe model relative to the displayed image of the first three-dimensional object; and cause a display of an updated image of the first three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display device. The machine executable instruction set stored within at least one nontransitory processor-readable medium may further include instructions, that when executed by the at least one processor cause the at least one processor to: responsive to a user input received by the at least one processor, change a position of at least a portion of the displayed image of the three-dimensional packaging wireframe model relative to the displayed image of the first three-dimensional object; responsive to a user input received by the at least one processor, fit a second three-dimensional packaging wireframe model about the first three-dimensional object, the second three-dimensional packaging wireframe model having a different geometrical shape than the first three-dimensional wireframe model; and cause a display of an image of the second three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display device. The machine executable instruction set stored within at least one nontransitory processor-readable medium may further include instructions, that when executed by the at least one processor cause the at least one processor to: responsive to a user input received by the at least one processor, the user input indicative of an identification of a second three-dimensional object different from the first three-dimensional object, fit a second three-dimensional packaging wireframe model about the second three-dimensional object; and cause a display of an image of the second three-dimensional packaging wireframe model fitted about the image of the second three-dimensional object on the display.
The machine executable instruction set stored within at least one nontransitory processor-readable medium may further include instructions, that when executed by the at least one processor cause the at least one processor to: responsive to a user input received by the at least one processor, the user input indicative of an identification of at least one portion of the first three-dimensional object, fit a three-dimensional packaging wireframe model about a first portion of the first three-dimensional object; responsive to a user input received by the at least one processor, the user input indicative of an identification of at least one portion of the first three-dimensional object, fit a three-dimensional packaging wireframe model about a second portion of the first three-dimensional object; and cause a display of an image of the three-dimensional wireframe models fitted about the image of the first and the second portions of the first three-dimensional object on the display device. The machine executable instruction set stored within at least one nontransitory processor-readable medium may further include instructions, that when executed by the at least one processor cause the at least one processor to: responsive to a user input received by the at least one processor, the user input indicative of a second three-dimensional packaging wireframe model having a different geometric primitive than the first three-dimensional wireframe model, fit the second three-dimensional packaging wireframe model about the first three-dimensional object by the at least one processor; and cause a display of an image of the second three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display. The machine executable instruction set stored within at least one nontransitory processor-readable medium may further include instructions, that when executed by the at least one processor cause the at least one processor to: cause a display of a plurality of user selectable icons on the display device, each user selectable icon corresponding to a respective one of a plurality of three-dimensional packaging wireframe models and selectable by a user to be fitted to the first three-dimensional object.


A method of operation of a volume dimensioning system may be summarized as including receiving image data of an area from a first point of view by at least one nontransitory processor-readable medium from at least one image sensor, the area including at least a first three-dimensional object to be dimensioned; determining from the received image data a number of features in three dimensions of the first three-dimensional object by at least one processor communicatively coupled to the at least one nontransitory processor-readable medium; based at least in part on the determined features of the first three-dimensional object, identifying a first portion and at least a second portion of the first three-dimensional object by the at least one processor; based on the determined features of the first three-dimensional object, fitting a first three-dimensional packaging wireframe model about the first portion of the first three-dimensional object by the at least one processor; based on the determined features of the first three-dimensional object, fitting a second three-dimensional packaging wireframe model about the second portion of the first three-dimensional object by the at least one processor; and causing a concurrent displaying of an image of the first and the second three-dimensional wireframe models respectively fitted about the image of the first and the second portions of the first three-dimensional object on the display.


The method may further include receiving at least one user input via a user interface, the user input indicative of a change in a position of at least a portion of the displayed image of at least one of the first three-dimensional packaging wireframe model or the second three-dimensional packaging wireframe model relative to the displayed image of the first and second portions of the first three-dimensional object, respectively; and causing a displaying of an updated image of the first and second three-dimensional packaging wireframe models fitted about the image of the first and second portions of the first three-dimensional object on the display. The method may further include receiving at least one user input via a user interface, the user input indicative of a change in a position of at least a portion of the displayed image of at least one of the first three-dimensional packaging wireframe model or the second three-dimensional packaging wireframe model relative to the displayed image of the first three-dimensional object; based at least in part on the received user input, fitting a replacement three-dimensional packaging wireframe model about at least one of the first or second portions of the first three-dimensional object by the at least one processor, the replacement three-dimensional packaging wireframe model having a different geometric primitive than the first or second three-dimensional wireframe model that it replaces; and causing a displaying of an image of at least the replacement three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display. The at least one processor may cause the displaying of the image of the first and the second three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display to rotate about an axis. The method may further include receiving image data of the area from a second point of view by at least one nontransitory processor-readable medium from at least one image sensor, the second point of view different from the first point of view; determining from the received image data at least one additional feature in three dimensions of the first three-dimensional object by at least one processor; based on the determined features of the first three-dimensional object, performing at least one of adjusting the first or second three-dimensional packaging wireframe model or fitting a third three-dimensional packaging wireframe model about at least a portion of the first three-dimensional object not discernible from the first point of view by the at least one processor; and causing a displaying of an image of at least one of the adjusted first or second three-dimensional packaging wireframe model or the first, second, and third three-dimensional packaging wireframe models fitted about the image of the first three-dimensional object on the display. 
Fitting a first three-dimensional packaging wireframe model about the first three-dimensional object by the at least one processor may include selecting the first three-dimensional packaging wireframe model from a number of defined geometric primitives that define respective volumes and sizing at least one dimension of the selected geometric primitive based on a corresponding dimension of the first portion of the first three-dimensional object such that the first portion of the first three-dimensional object is completely encompassed by the selected and sized geometric primitive; and wherein fitting a second three-dimensional packaging wireframe model about the second portion of the first three-dimensional object by the at least one processor may include selecting the second three-dimensional packaging wireframe model from the number of defined geometric primitives that define respective volumes and sizing at least one dimension of the selected geometric primitive based on a corresponding dimension of the second portion of the first three-dimensional object such that the second portion of the first three-dimensional object is completely encompassed by the selected and sized geometric primitive. The method may further include producing a wireframe model of the first three-dimensional object; and causing a concurrent displaying of the wireframe model of the first three-dimensional object along with the first and second three-dimensional packaging wireframe models by the display. The method may further include receiving at least one user input via a user interface, the user input indicative of a geometric primitive of at least the first portion or the second portion of the first three-dimensional object; and selecting the first three-dimensional object from a plurality of three-dimensional objects represented in the image data by at least one processor, based at least in part on the user input indicative of the geometric primitive of at least a portion of the first three-dimensional object. Selecting the first three-dimensional object from a plurality of three-dimensional objects represented in the image data by at least one processor, based at least in part on the user input indicative of the geometric primitive of at least a portion of the first three-dimensional object may include determining which of the three-dimensional objects contains a portion having a geometric primitive that most closely matches the geometric primitive indicated by the received user input. The method may further include receiving at least one user input via a user interface, the user input indicative of an acceptance of the first three-dimensional packaging wireframe model and the second three-dimensional packaging wireframe model; and performing at least a volumetric calculation using a number of dimensions of the selected first and second three-dimensional packaging wireframe models.
The method may further include receiving at least one user input via a user interface, the user input indicative of a rejection of at least one of the first three-dimensional packaging wireframe model or the second three-dimensional packaging wireframe model; and in response to the received user input, fitting a replacement three-dimensional packaging wireframe model about the first or second portion of the first three-dimensional object by the at least one processor, the replacement three-dimensional packaging wireframe model having a different geometric primitive than the first or second three-dimensional wireframe model that it replaces; and causing a displaying of an image of the replacement three-dimensional packaging wireframe model fitted about at least a portion of the image of the first three-dimensional object on the display. The method may further include receiving at least one user input via a user interface, the user input indicative of a replacement three-dimensional packaging wireframe model, the replacement three-dimensional packaging wireframe model having a different geometric primitive than at least one of the first three-dimensional wireframe model and the second three-dimensional wireframe model; in response to the received user input, fitting the replacement three-dimensional packaging wireframe model about either the first or second portion of the first three-dimensional object by the at least one processor; and causing a displaying of an image of the replacement three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display by the at least one processor. The method may further include causing by the at least one processor a displaying of a plurality of user selectable options, each user selectable option corresponding to a respective one of a plurality of three-dimensional packaging wireframe models and selectable by a user to be fitted to either the first or second portion of the first three-dimensional object.


A volume dimensioning system may be summarized as including at least one image sensor communicably coupled to at least one nontransitory processor-readable medium; at least one processor communicably coupled to the at least one nontransitory processor-readable medium; and a machine executable instruction set stored within at least one nontransitory processor-readable medium, that when executed by the at least one processor causes the at least one processor to: read image data from the at least one nontransitory processor-readable medium, the image data associated with a first point of view of an area sensed by the at least one image sensor, the area including at least a first three-dimensional object to be dimensioned; determine from the received image data a number of features in three dimensions of the first three-dimensional object; based at least in part on the determined features of the first three-dimensional object, identify a first portion and at least a second portion of the first three-dimensional object; based on the determined features of the first three-dimensional object, fit a first three-dimensional packaging wireframe model about the first portion of the first three-dimensional object; based on the determined features of the first three-dimensional object, fit a second three-dimensional packaging wireframe model about the second portion of the first three-dimensional object; and cause a concurrent display of an image of the first and the second three-dimensional wireframe models fitted about the image of the first and the second portions of the first three-dimensional object.


The first three-dimensional wireframe model may be a first geometric primitive; and wherein the second three-dimensional wireframe model may be a second geometric primitive.


A method of operation of a volume dimensioning system may be summarized as including receiving image data of an area from a first point of view by at least one nontransitory processor-readable medium from at least one image sensor, the area including at least a first three-dimensional object to be dimensioned; determining that there are insufficient features in the image data to determine a three-dimensional volume occupied by the first three-dimensional object; in response to the determination, generating an output to change at least one of a relative position or orientation of at least one image sensor with respect to at least the first three-dimensional object to obtain image data from a second point of view, the second point of view different from the first point of view.


Generating an output to change at least one of a relative position or orientation of at least one image sensor with respect to at least the first three-dimensional object to obtain image data from a second point of view may include generating at least one output, including at least one of an audio output or a visual output that is perceivable by a user. The at least one output may indicate to the user a direction of movement to change at least one of a relative position or orientation of the at least one sensor with respect to the first three-dimensional object. The method may further include causing a displaying of an image of a two-dimensional packaging wireframe model fitted about a portion of an image of the first three-dimensional object on a display on which the image of the first three-dimensional object is displayed. The causing of the displaying of the image of the two-dimensional packaging wireframe model fitted about the portion of the image of the first three-dimensional object may occur before generating the output.
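As a sketch of this insufficient-features branch, the fragment below checks the count of visible three-dimensional feature points and, when too few are found, produces a user-perceivable message suggesting a direction of movement toward a second point of view. The threshold and the direction heuristic are assumptions for illustration, not values from the disclosure.

```python
# Sketch: when too few 3D feature points are visible from the first point of
# view, generate a user-perceivable output suggesting a second point of view.
import numpy as np

MIN_FEATURES = 50  # assumed minimum number of feature points for a volume fit

def view_guidance(points: np.ndarray):
    if len(points) >= MIN_FEATURES:
        return None  # enough features; proceed to fit the wireframe
    # Crude heuristic: suggest moving toward the sparser side of the view.
    side = "left" if points[:, 0].mean() > 0 else "right"
    return f"Too few features visible; move the sensor to the {side}."

message = view_guidance(np.ones((10, 3)))
if message:
    print(message)  # could equally drive an audio prompt or an on-screen arrow
```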


A volume dimensioning system may be summarized as including at least one image sensor communicably coupled to at least one nontransitory processor-readable medium; at least one processor communicably coupled to the at least one nontransitory processor-readable medium; and a machine executable instruction set stored within at least one nontransitory processor-readable medium, that when executed by the at least one processor causes the at least one processor to: read image data from the at least one nontransitory processor-readable medium, the image data associated with a first point of view of an area sensed by the at least one image sensor, the area including at least a first three-dimensional object to be dimensioned; determine from the received image data that there are an insufficient number of features in the image data to determine a three-dimensional volume occupied by the first three-dimensional object; responsive to the determination of an insufficient number of features in the image data, generate an output to change at least one of a relative position or orientation of at least one image sensor with respect to at least the first three-dimensional object to obtain image data from a second point of view, the second point of view different from the first point of view.


The machine executable instruction set may further include instructions that when executed by the at least one processor further cause the at least one processor to: generate at least one output, including at least one of an audio output or a visual output that is perceivable by a user. The at least one output may indicate to the user a direction of movement to change at least one of a relative position or orientation of the at least one sensor with respect to the first three-dimensional object.


A method of operation of a volume dimensioning system may be summarized as including receiving image data of an area from a first point of view by at least one nontransitory processor-readable medium from at least one image sensor, the area including at least a first three-dimensional object to be dimensioned; receiving at least one user input via a user interface communicably coupled to at least one processor, the user input indicative of at least a portion of the three-dimensional packaging wireframe model of the first three-dimensional object; in response to the received user input, fitting the user inputted three-dimensional packaging wireframe model to at least a portion of one or more edges of the first three-dimensional object by the at least one processor; and causing a displaying of an image of the user inputted three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display by the at least one processor.


The at least one processor may cause the displaying of the image of the first three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display to rotate about an axis. The method may further include receiving image data of the area from a second point of view by at least one nontransitory processor-readable medium from at least one image sensor, the second point of view different from the first point of view; determining from the received image data at least one additional feature in three dimensions of the first three-dimensional object by at least one processor; based on the determined features of the first three-dimensional object, adjusting the three-dimensional packaging wireframe model by accepting additional user input via the user interface communicably coupled to at least one processor, the additional user input indicative of the first three-dimensional packaging wireframe model; and causing a displaying of an image of the adjusted first three-dimensional packaging wireframe model fitted about the image of the first three-dimensional object on the display. The method may further include receiving at least one user input via a user interface, the user input indicative of an acceptance of the first three-dimensional packaging wireframe model; and performing at least a volumetric calculation using a number of dimensions of the selected three-dimensional packaging wireframe model.


A method of operation of a volume dimensioning system may be summarized as including receiving image data of an area from a first point of view by at least one nontransitory processor-readable medium from at least one image sensor, the area including at least a first three-dimensional void to be dimensioned; determining from the received image data a number of features in three dimensions of the first three-dimensional void by at least one processor communicatively coupled to the at least one nontransitory processor-readable medium; based at least in part on the determined features of the first three-dimensional void, fitting a first three-dimensional receiving wireframe model within the first three-dimensional void by the at least one processor; and causing a displaying of an image of the first three-dimensional receiving wireframe model fitted within an image of the first three-dimensional void on a display on which the image of the first three-dimensional void is displayed.


The method may further include calculating by the at least one processor, at least one of an available receiving dimension and an available receiving volume encompassed by the first three-dimensional receiving wireframe model. The method may further include receiving by the at least one nontransitory processor-readable medium at least one of dimensional data and volume data for each of a plurality of three-dimensional objects, the dimensional data and volume data determined based upon a respective three-dimensional packaging wireframe model fitted to each of the plurality of three-dimensional objects and corresponding to at least one of the respective dimensions and volume of each of the plurality of three-dimensional objects; and determining by the at least one processor communicably coupled to the at least one nontransitory processor-readable medium based at least in part on at least one of the available receiving dimension and an available receiving volume encompassed by the first three-dimensional receiving wireframe model at least one of a position and an orientation of at least a portion of the plurality of three-dimensional objects within the first three-dimensional void; wherein at least one of the position and the orientation of at least a portion of the plurality of three-dimensional objects within the first three-dimensional void minimizes at least one of: at least one dimension occupied by at least a portion of the plurality of three-dimensional objects within the first three-dimensional void, or a volume occupied by at least a portion of the plurality of three-dimensional objects within the first three-dimensional void. The method may further include indicating at least one of the position and the orientation of each of the three-dimensional packaging wireframes associated with each of the plurality of three-dimensional objects within the first three-dimensional void on the display.
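A toy sketch of the placement determination follows, assuming box-shaped packaging wireframes and a greedy rule that stacks parcels end to end along the void's length so the occupied dimension stays small; real load planning would also weigh orientation and special-handling constraints, and every name and figure here is illustrative.

```python
# Greedy sketch: place the largest boxes first along the length of the void,
# so the length occupied by the set of parcels stays small.
def plan_positions(void_dims, parcel_dims):
    """Dims are (length, width, height). Returns [(position, dims), ...]."""
    placements, cursor = [], 0.0
    for dims in sorted(parcel_dims, key=lambda d: d[0] * d[1] * d[2], reverse=True):
        fits = (cursor + dims[0] <= void_dims[0]
                and dims[1] <= void_dims[1]
                and dims[2] <= void_dims[2])
        if not fits:
            continue  # parcel cannot be placed in the remaining length
        placements.append(((cursor, 0.0, 0.0), dims))
        cursor += dims[0]  # advance along the void's length
    return placements

# Hypothetical 120 x 80 x 80 void and three parcels' wireframe dimensions.
print(plan_positions((120.0, 80.0, 80.0), [(40, 30, 20), (60, 50, 40), (30, 30, 30)]))
```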


A volume dimensioning system may be summarized as including at least one image sensor communicably coupled to at least one nontransitory processor-readable medium; at least one processor communicably coupled to the at least one nontransitory processor-readable medium; and a machine executable instruction set stored within at least one nontransitory processor-readable medium, that when executed by the at least one processor causes the at least one processor to: read image data from the at least one nontransitory processor-readable medium, the image data associated with a first point of view of an area sensed by the at least one image sensor, the area including at least a first three-dimensional void to be dimensioned; determine from the received image data a number of features in three dimensions of the first three-dimensional void; based at least in part on the determined features of the first three-dimensional void, fit a first three-dimensional receiving wireframe model within the first three-dimensional void; and cause a display of an image of the first three-dimensional receiving wireframe model fitted within an image of the first three-dimensional void on the display device.


The machine executable instruction set may further include instructions, that when executed by the at least one processor further cause the at least one processor to: determine at least one of an available receiving dimension and an available receiving volume encompassed by the first three-dimensional receiving wireframe model; receive from the at least one nontransitory processor-readable medium at least one of dimensional data and volume data for each of a plurality of three-dimensional objects, the dimensional data and volume data determined based upon a respective three-dimensional packaging wireframe model fitted to each of the plurality of three-dimensional objects and corresponding to at least one of the respective dimensions and volume of each of the plurality of three-dimensional objects; and determine based at least in part on at least one of the available receiving dimension and the available receiving volume encompassed by the first three-dimensional receiving wireframe model at least one of a position and an orientation of at least a portion of the plurality of three-dimensional objects within the first three-dimensional void; wherein at least one of the position and the orientation of at least a portion of the plurality of three-dimensional objects within the first three-dimensional void minimizes at least one of: at least one dimension occupied by at least a portion of the plurality of three-dimensional objects within the first three-dimensional void, or a volume occupied by at least a portion of the plurality of three-dimensional objects within the first three-dimensional void.


A method of operation of a volume dimensioning system may be summarized as including receiving image data of an area from a first point of view by at least one nontransitory processor-readable medium from at least one image sensor, the area including at least a first three-dimensional object to be dimensioned; determining from the received image data a number of features in three dimensions of the first three-dimensional object by at least one processor communicatively coupled to the at least one nontransitory processor-readable medium; based at least in part on the determined features of the first three-dimensional object, fitting a first three-dimensional packaging wireframe model selected from a wireframe library stored within the at least one nontransitory processor-readable medium about the first three-dimensional object by the at least one processor; receiving at least one user input via a user interface, the user input indicative of a change in a position of at least a portion of the displayed image of the first three-dimensional packaging wireframe model relative to the displayed image of the first three-dimensional object; associating, via the processor, a plurality of points differentiating the changed first three-dimensional packaging wireframe model from all existing wireframe models within the wireframe library, and storing the changed first three-dimensional packaging wireframe model in the wireframe library; and reviewing, via the processor, the wireframe model stored within the wireframe library and associated with the changed first three-dimensional packaging wireframe model for subsequent fitting about a new three-dimensional object based at least in part on the plurality of points differentiating the changed first three-dimensional packaging wireframe model from all existing wireframe models within the wireframe library.
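One way such a learned wireframe library might behave is sketched below: the user-modified model is stored under a signature computed from its differentiating feature points, and later scans reuse the stored model whose signature is nearest. The particular signature (sorted singular values of the centered point cloud) and the match tolerance are illustrative assumptions.

```python
# Sketch of a learned wireframe library keyed by a feature-point signature.
import numpy as np

class WireframeLibrary:
    def __init__(self):
        self._entries = []  # list of (signature, wireframe_model) pairs

    @staticmethod
    def signature(points: np.ndarray) -> np.ndarray:
        # Reduce the differentiating points to a crude pose-invariant
        # signature: the sorted principal extents of the point cloud.
        centered = points - points.mean(axis=0)
        return np.sort(np.linalg.svd(centered, compute_uv=False))

    def store(self, points, wireframe_model):
        self._entries.append((self.signature(points), wireframe_model))

    def lookup(self, points, tolerance=1.0):
        if not self._entries:
            return None
        sig = self.signature(points)
        best_sig, best_model = min(
            self._entries, key=lambda entry: np.linalg.norm(entry[0] - sig))
        if np.linalg.norm(best_sig - sig) <= tolerance:
            return best_model  # reuse the learned model for the new object
        return None  # no close match; fall back to the primitive library
```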





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

In the drawings, identical reference numbers identify similar elements or acts. The sizes and relative positions of elements in the drawings are not necessarily drawn to scale. For example, the shapes of various elements and angles are not drawn to scale, and some of these elements are arbitrarily enlarged and positioned to improve drawing legibility. Further, the particular shapes of the elements as drawn are not intended to convey any information regarding the actual shape of the particular elements and have been selected solely for ease of recognition in the drawings.



FIG. 1A is a schematic diagram of an example volume dimensioning system coupled to a host computer, with two three-dimensional objects disposed within the field-of-view of the host system camera and the field-of-view of the volume dimensioning system image sensor.



FIG. 1B is a block diagram of the example volume dimensioning system and host computer depicted in FIG. 1A.



FIG. 2 is an example volume dimensioning method using a volume dimensioning system including an image sensor, a non-transitory, machine-readable storage, a processor, a camera, and a display device.



FIG. 3 is an example volume dimensioning method based on the method depicted in FIG. 2 and including receipt of a corrected first three-dimensional packaging wireframe model.



FIG. 4 is an example volume dimensioning method based on the method depicted in FIG. 2 and including selection of a second three-dimensional packaging wireframe model to replace the first three-dimensional packaging wireframe model.



FIG. 5 is an example volume dimensioning method based on the method depicted in FIG. 2 and including fitting a first three-dimensional packaging wireframe model about a first three-dimensional object and fitting a second three-dimensional packaging wireframe model about a second three-dimensional object.



FIG. 6 is an example volume dimensioning method based on the method depicted in FIG. 2 and including fitting a three-dimensional packaging wireframe model about a first portion of a first three-dimensional object and fitting a three-dimensional packaging wireframe model about a second portion of the first three-dimensional object.



FIG. 7 is an example volume dimensioning method based on the method depicted in FIG. 2 and including rotation of the first three-dimensional packaging wireframe model to detect the existence of additional three-dimensional features of the three-dimensional object and adjustment of the first three-dimensional packaging wireframe model or addition of a second three-dimensional packaging wireframe model to encompass the additional three-dimensional features.



FIG. 8 is an example volume dimensioning method based on the method depicted in FIG. 2 and including receipt of an input including a geometric primitive and selection of three-dimensional objects within the point of view of the image sensor that are substantially similar to or match the received geometric primitive input.



FIG. 9 is an example volume dimensioning method based on the method depicted in FIG. 2 and including acceptance of the fitted first three-dimensional packaging wireframe model and calculation of the dimensions and the volume of the first three-dimensional packaging wireframe model.



FIG. 10 is an example volume dimensioning method based on the method depicted in FIG. 2 and including receipt of an input rejecting the first three-dimensional packaging wireframe model fitted to the three-dimensional object and selection and fitting of a second three-dimensional packaging wireframe model to the first three-dimensional object.



FIG. 11 is an example volume dimensioning method based on the method depicted in FIG. 2 and including receipt of an input selecting a second three-dimensional packaging wireframe model and fitting of the second three-dimensional packaging wireframe model to the first three-dimensional object.



FIG. 12 is an example volume dimensioning method based on the method depicted in FIG. 2 and including receipt of an input indicating a region of interest within the first point of view and the display of an enlarged view of the region of interest.



FIG. 13 is an example volume dimensioning method including autonomous identification of first and second portions of a first three-dimensional object and fitting three-dimensional packaging wireframe models about each of the respective first and second portions of the three-dimensional object.



FIG. 14 is an example volume dimensioning method including the determination that an insufficient number of three-dimensional features are visible within the first point of view to permit the fitting of a first three-dimensional packaging wireframe model about the three-dimensional object.



FIG. 15 is a schematic diagram of an example volume dimensioning system coupled to a host computer, with a three-dimensional void disposed within the field-of-view of the host system camera and the field-of-view of the volume dimensioning system image sensor.



FIG. 16 is an example volume dimensioning method including the fitting of a first three-dimensional receiving wireframe model within a first three-dimensional void, for example an empty container to receive one or more three-dimensional objects.



FIG. 17 is an example volume dimensioning method based on the method depicted in FIG. 16 and including the receipt of dimensional or volumetric data associated with one or more three-dimensional packaging wireframe models and the determining of positions or orientations of the one or more three-dimensional packaging wireframe models within the three-dimensional void.



FIG. 18 is an example volume dimensioning method including the selection of a first geometric primitive based on a pattern of feature points, the rejection of the first three-dimensional packaging wireframe model, the selection of a second geometric primitive based on the pattern of feature points, and the future selection of the second geometric primitive for a similar pattern of feature points.





DETAILED DESCRIPTION

In the following description, certain specific details are set forth in order to provide a thorough understanding of various disclosed embodiments. However, one skilled in the relevant art will recognize that embodiments may be practiced without one or more of these specific details, or with other methods, components, materials, etc. In other instances, well-known structures associated with sources of electromagnetic energy, operative details concerning image sensors and cameras and detailed architecture and operation of the host computer system have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the embodiments.


Unless the context requires otherwise, throughout the specification and claims which follow, the word “comprise” and variations thereof, such as “comprises” and “comprising,” are to be construed in an open, inclusive sense, that is, as “including, but not limited to.”


Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.


As used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the content clearly dictates otherwise. It should also be noted that the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise.


The headings and Abstract of the Disclosure provided herein are for convenience only and do not interpret the scope or meaning of the embodiments.


Volume dimensioning systems provide dimensional and volumetric data for one or more three-dimensional objects located within a given point of view without requiring the laborious and time-consuming task of hand measuring and calculating the volume of each individual object. Volume dimensioning systems typically employ one or more image sensors to obtain or otherwise capture an image containing the one or more three-dimensional objects located within the field-of-view of the image sensor. Based on the shape, overall complexity, or surface contours of each of the three-dimensional objects, the volume dimensioning system can select one or more geometric primitives from a library to serve as a model of the three-dimensional object. A wireframe packaging model based, at least in part, on the selected one or more geometric primitives can then be scaled or fitted to encompass the image of each respective three-dimensional object. The scaled and fitted wireframe provides a packaging wireframe that includes sufficient space about the three-dimensional object to include an estimate of the packaging, blocking, padding, and wrapping used to ship the three-dimensional object. Thus, the three-dimensional packaging wireframe model generated by the system can be used to provide shipping data such as the dimensions and volume of not just the three-dimensional object itself, but also any additional packaging or boxing necessary to ship the three-dimensional object.
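For instance, the packaging allowance might be modeled as a margin added to each fitted dimension; the 5% and 2 cm figures in this minimal sketch are illustrative assumptions, not values from the disclosure.

```python
# Sketch: inflate the fitted object dimensions by a margin approximating the
# packing, padding, blocking, and wrapping around the object.
def packaging_dimensions(object_dims, relative_margin=0.05, min_margin=2.0):
    """object_dims in cm; returns the packaging wireframe's dimensions."""
    return tuple(d + 2 * max(d * relative_margin, min_margin) for d in object_dims)

print(packaging_dimensions((30.0, 20.0, 10.0)))  # -> (34.0, 24.0, 14.0)
```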


For example, a box-shaped three-dimensional object may result in the selection of a single cubic geometric primitive by the volume dimensioning system as approximating the packaging of the actual three-dimensional object. The three-dimensional packaging wireframe model associated with a cubic geometric primitive can then be scaled and fitted to the image of the actual three-dimensional object within the volume dimensioning system to provide a model approximating the size and shape of the packaging of the actual three-dimensional object. From the virtual representation of the three-dimensional object provided by the three-dimensional packaging wireframe model, the length, width, height, and volume of the packaging can be determined by the volume dimensioning system.
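Given the eight corners of such a fitted cubic (box) wireframe, the shipping data follow directly, as in this minimal sketch (the corner layout is the hypothetical one used in the earlier fragments):

```python
# Sketch: derive length, width, height, and volume from the corners of a
# fitted box-shaped packaging wireframe model.
from itertools import product

import numpy as np

def shipping_data(corners: np.ndarray) -> dict:
    length, width, height = corners.max(axis=0) - corners.min(axis=0)
    return {"length": length, "width": width, "height": height,
            "volume": length * width * height}

corners = np.array(list(product(*zip((0.0, 0.0, 0.0), (34.0, 24.0, 14.0)))))
print(shipping_data(corners))  # volume 11424.0
```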


In a more complex example, an obelisk-shaped three-dimensional object may result in the selection of two geometric primitives by the volume dimensioning system: a rectangular prism representing the body of the obelisk and a four-sided pyramid representing the top of the obelisk. The three-dimensional packaging wireframe models associated with each of these geometric primitives can then be scaled and fitted to the image of the actual three-dimensional object within the volume dimensioning system to provide a model approximating the size, shape, and proportions of the actual, packaged, three-dimensional object. From the virtual representation of the three-dimensional object provided by the three-dimensional packaging wireframe model, the length, width, height, and volume of the packaged obelisk can be determined by the volume dimensioning system. By fitting one or more geometric primitives about a three-dimensional object, even objects having highly complex surface features can be encompassed by one or more relatively simple geometric primitives, providing a three-dimensional packaging wireframe model of the packaged three-dimensional object that includes allowances for packing, padding, bracing, and boxing of the three-dimensional object.
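
By way of concrete illustration only, the simplest (rectangular prism) case reduces to fitting an axis-aligned bounding box about the object's three-dimensional feature points and enlarging it by a packing allowance; the dimensions and volume then follow directly. The following Python sketch shows the idea; the function name, the 5 cm allowance, and the use of metric coordinates are illustrative assumptions, not part of the disclosed system:

    import numpy as np

    def fit_packaging_bbox(points, padding=0.05):
        """Fit an axis-aligned rectangular packaging wireframe about a set
        of 3-D feature points, enlarged by a uniform allowance (meters)."""
        points = np.asarray(points, dtype=float)   # shape (N, 3)
        lo = points.min(axis=0) - padding          # push every face outward
        hi = points.max(axis=0) + padding
        dims = hi - lo                             # length, width, height
        return lo, hi, dims, float(np.prod(dims))  # dims plus enclosed volume

    # Example: the eight corners of a 0.3 m cube with a 5 cm allowance per side.
    cube = np.array([[x, y, z] for x in (0, .3) for y in (0, .3) for z in (0, .3)])
    lo, hi, dims, volume = fit_packaging_bbox(cube)
    print(dims, volume)                            # [0.4 0.4 0.4] 0.064

An obelisk would combine two such fits, one per selected primitive, with the composite model reported as a single packaged unit.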


Advantageously, the volume dimensioning system can permit a user to identify special handling instructions, fragile surfaces, shipping orientation, and the like on the three-dimensional packaging wireframe model. Such handling instructions can then be associated with a given object, and where the volume dimensioning system is used to perform load planning, objects can be positioned and oriented within the load plan in accordance with the handling instructions.


Additionally, the interactive nature of the volume dimensioning system can advantageously permit a user to enter, select, or modify the three-dimensional packaging wireframe model fitted to a particular three-dimensional object to more closely follow the actual outline, shape, contours, or surfaces of the object. In some instances, the system can “learn” new geometric primitives or wireframe models based on received user input, for example user input altering or modifying the three-dimensional packaging wireframe model fitted by the volume dimensioning system about three-dimensional objects having a characteristic size or shape.



FIG. 1A depicts an illustrative volume dimensioning system 110 physically and communicably coupled to a host computer 150 using one or more data busses 112. The volume dimensioning system 110 is equipped with an image sensor 114 having a field-of-view 116. The host computer 150 is equipped with a camera 152 having a field-of-view 154 and a display device 156.


Two three-dimensional objects, a pyramidal three-dimensional object 102a and a cubic three-dimensional object 102b (collectively 102), appear within the field-of-view 116 of the image sensor 114 and the field-of-view 154 of the camera 152. The three-dimensional objects 102 are depicted as surrounded by a scaled and fitted pyramidal geometric primitive 104a and a scaled and fitted cubic geometric primitive 104b (collectively 104) as displayed on the one or more display devices 156. Scaled, fitted, three-dimensional packaging wireframe models 106a, 106b (collectively 106) are depicted as encompassing the scaled and fitted geometric primitives 104a, 104b, respectively.


The scaled, fitted three-dimensional packaging wireframe models 106 may be generated by the host computer 150 or, more preferably, by the volume dimensioning system 110. The image on the display device 156 is provided in part using the image data acquired by the camera 152 coupled to the host computer system 150, which provides the visible image of the three-dimensional objects 102, and in part using the scaled and fitted three-dimensional packaging wireframe models 106 provided by the volume dimensioning system 110. Data, including visible image data provided by the camera 152 and depth map data and intensity image data provided by the image sensor 114, is exchanged between the host computer 150 and the volume dimensioning system 110 via the one or more data busses 112. In some instances, the volume dimensioning system 110 and the host computer system 150 may be partially or completely incorporated within the same housing, for example a self-service kiosk or a handheld computing device.



FIG. 1B depicts an operational level block diagram of the volume dimensioning system 110 and the host computer 150. The volume dimensioning system 110 can include the image sensor 114 communicably coupled to one or more non-transitory, machine-readable storage media 118 and one or more processors 120 that are also communicably coupled to the one or more non-transitory, machine-readable storage media 118. The one or more processors 120 include an interface 122 used to exchange data between the volume dimensioning system 110 and the host computer system 150 via the one or more data busses 112. The interface 122 can include an I/O controller, a serial port, a parallel port, or a network interface suitable for receipt of the one or more data busses 112. In one preferred embodiment, the interface 122 can be an I/O controller having at least one universal serial bus (“USB”) connector, and the one or more data busses 112 can be a USB cable. The volume dimensioning system 110 can be at least partially enclosed within a housing 124. In a preferred embodiment, the housing 124 can be detachably attached to the host computer system 150 using one or more attachment features on the exterior surface of the housing 124, the exterior surface of the host computer 150, or exterior surfaces of both the housing 124 and the host computer 150.


The host computer system 150 can include the camera 152 which is communicably coupled to a first bridge processor (e.g., a southbridge processor) 162 via one or more serial or parallel data buses, for example a universal serial bus (“USB”), a small computer serial interface (“SCSI”) bus, a peripheral component interconnect (“PCI”) bus, an integrated drive electronics (“IDE”) bus or similar. One or more local busses 164 communicably couple the first bridge processor 162 to a second bridge processor (e.g., a northbridge processor) 176. The one or more non-transitory, machine-readable storage media 158 and one or more central processing units (“CPUs”) 160 are communicably coupled to the second bridge processor 176 via one or more high-speed or high bandwidth busses 168. The one or more display devices 156 are coupled to the second bridge processor 176 via an interface 170 such as a Digital Visual Interface (“DVI”) or a High Definition Multimedia Interface (“HDMI”). In some instances, for example where the one or more display devices 156 include at least one touch-screen display device capable of receiving user input to the host computer 150, some or all of the one or more display devices 156 may also be communicably coupled to the first bridge processor 162 via one or more USB interfaces 172.


The volume dimensioning system 110 is communicably coupled to the host computer 150 via one or more communication or data interfaces, for example one or more USB interfaces coupled to a USB bus 174 within the host computer. The USB bus 174 may also be shared with other peripheral devices, such as one or more I/O devices 166, for example one or more keyboards, pointers, touchpads, trackballs, or the like. The host computer 150 can be of any size, structure, or form factor, including, but not limited to a rack mounted kiosk system, a desktop computer, a laptop computer, a netbook computer, a handheld computer, or a tablet computer. Although for clarity and brevity one specific host computer architecture was presented in detail, those of ordinary skill in the art will appreciate that any host computer architecture may be used or substituted with equal effectiveness.


Referring now in detail to the volume dimensioning system 110, the image sensor 114 includes any number of devices, systems, or apparatuses suitable for obtaining three-dimensional image data from the scene within the field-of-view 116 of the image sensor 114. Although referred to herein as “three-dimensional image data,” it should be understood by one of ordinary skill in the art that the term may apply to more than one three-dimensional image and therefore would equally apply to “three-dimensional video images,” which may be considered to comprise a series or time-lapse sequence including a plurality of “three-dimensional images.” The three-dimensional image data acquired or captured by the image sensor 114 can include data collected using electromagnetic radiation either falling within the visible spectrum (e.g., wavelengths in the range of about 360 nm to about 750 nm) or falling outside of the visible spectrum (e.g., wavelengths below about 360 nm or above about 750 nm). For example, three-dimensional image data may be collected using infrared, near-infrared, ultraviolet, or near-ultraviolet light. The three-dimensional image data acquired or captured by the image sensor 114 can include data collected using laser or ultrasonic based imaging technology. In some embodiments, a visible, ultraviolet, or infrared supplemental lighting system (not shown) may be synchronized to and used in conjunction with the volume dimensioning system 100. For example, a supplemental lighting system providing one or more structured light patterns or a supplemental lighting system providing one or more gradient light patterns may be used to assist in acquiring, capturing, or deriving three-dimensional image data from the scene within the field-of-view 116 of the image sensor 114.


In a preferred embodiment, the image sensor 114 includes a single sensor capable of acquiring both depth data providing a three-dimensional depth map and intensity data providing an intensity image for objects within the field-of-view 116 of the image sensor 114. The acquisition of depth and intensity data using a single image sensor 114 advantageously eliminates parallax and provides a direct mapping between the depth map and the intensity image. The depth map and intensity image may be collected in an alternating sequence by the image sensor 114 and the resultant depth data and intensity data stored within the one or more non-transitory, machine-readable storage media 118.
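
A minimal sketch of such an alternating acquisition loop appears below. The sensor object and its read_depth()/read_intensity() methods are hypothetical stand-ins for whatever driver interface the image sensor 114 actually exposes; the sketch only illustrates the alternating, pixel-registered capture sequence described above:

    def acquire_depth_intensity_pairs(sensor, n_pairs):
        """Collect depth-map and intensity-image frames in an alternating
        sequence from a single image sensor so the two stay pixel-registered
        (no parallax). `sensor` and its methods are hypothetical stand-ins."""
        frames = []
        for _ in range(n_pairs):
            depth = sensor.read_depth()          # (H, W) array of range values
            intensity = sensor.read_intensity()  # (H, W) array of reflectance
            frames.append((depth, intensity))    # 1:1 pixel mapping by design
        return frames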


The three-dimensional image data captured or acquired by the image sensor 114 may be in the form of an analog signal that is converted to digital data using one or more analog-to-digital (“A/D”) converters (not shown) within the image sensor 114 or within the volume dimensioning system 110 prior to storage within the one or more non-transitory, machine-readable, storage media 118. Alternatively, the three-dimensional image data captured or acquired by the image sensor 114 may be in the form of one or more digital data groups, structures, or files comprising digital data supplied directly by the image sensor 114.


The image sensor 114 can be formed from or contain any number of image capture elements, for example picture elements or “pixels.” For example, the image sensor 114 can have between 1,000,000 pixels (1 MP) and 100,000,000 pixels (100 MP). The image sensor 114 can include any number of current or future developed image sensing devices or systems, including, but not limited to, one or more complementary metal-oxide semiconductor (“CMOS”) sensors or one or more charge-coupled device (“CCD”) sensors.


In some embodiments, the three-dimensional image data captured by the image sensor 114 can include more than one type of data associated with or collected by each image capture element. For example, in some embodiments, the image sensor 114 may capture depth data related to a depth map of the three-dimensional objects within the point of view of the image sensor 114 and may also capture intensity data related to an intensity image of the three-dimensional objects in the field-of-view of the image sensor 114. Where the image sensor 114 captures or otherwise acquires more than one type of data, the data in the form of data groups, structures, files or the like may be captured either simultaneously or in an alternating sequence by the image sensor 114.


In some embodiments, the image sensor 114 may also provide visible image data capable of providing a visible black and white, grayscale, or color image of the three-dimensional objects 102 within the field-of-view 116 of the image sensor 114. Where the image sensor 114 is able to provide visible image data, the visible image data may be communicated to the host computer 150 for display on the one or more display devices 156. In some instances, where the image sensor 114 is able to provide visible image data, the host computer system camera 152 may be considered optional and may be eliminated.


Data is communicated from the image sensor 114 to the one or more non-transitory, machine-readable storage media 118 via one or more serial or parallel data busses 126. The one or more non-transitory, machine-readable storage media 118 can be any form of data storage device including, but not limited to, optical data storage, electrostatic data storage, electroresistive data storage, magnetic data storage, and molecular data storage. In some embodiments, all or a portion of the one or more non-transitory, machine-readable storage media 118 may be disposed within the one or more processors 120, for example in the form of a cache or similar non-transitory memory structure capable of storing data or machine-readable instructions executable by the one or more processors 120.


In at least some embodiments, the volume dimensioning system 110 including the image sensor 114, the communicably coupled one or more non-transitory, machine-readable storage media 118, and the communicably coupled one or more processors 120 are functionally combined to provide a system capable of selecting one or more geometric primitives 104 to virtually represent each of the one or more three-dimensional objects 102 appearing in the field-of-view 116 of the image sensor 114. Using the selected one or more geometric primitives 104, the system can then fit a three-dimensional packaging wireframe model 106 about each of the respective three-dimensional objects 102.


The one or more non-transitory, machine-readable storage media 118 can have any data storage capacity from about 1 megabyte (1 MB) to about 3 terabytes (3 TB). In some embodiments two or more devices or data structures may form all or a portion of the one or more non-transitory, machine-readable storage media 118. For example, in some embodiments, the one or more non-transitory, machine-readable storage media 118 can include a non-removable portion including a non-transitory, electrostatic, storage medium and a removable portion such as a Secure Digital (SD) card, a compact flash (CF) card, a Memory Stick, or a universal serial bus (“USB”) storage device.


The one or more processors 120 can execute one or more instruction sets that are stored in whole or in part in the one or more non-transitory, machine-readable storage media 118. The machine executable instruction set can include instructions related to basic functional aspects of the one or more processors 120, for example data transmission and storage protocols, communication protocols, input/output (“I/O”) protocols, USB protocols, and the like. Machine executable instruction sets related to all or a portion of the volume dimensioning functionality of the volume dimensioning system 110 and intended for execution by the one or more processors 120 may also be stored within the one or more non-transitory, machine-readable storage media 118, within the one or more processors 120, or within both the one or more non-transitory, machine-readable storage media 118 and the one or more processors 120. Additional volume dimensioning system 110 functionality may also be stored in the form of one or more machine executable instruction sets within the one or more non-transitory, machine-readable storage media 118. Such functionality may include system security settings, system configuration settings, language preferences, dimension and volume preferences, and the like.


The one or more non-transitory, machine-readable storage media 118 may also store a library containing a number of geometric primitives useful in the construction of three-dimensional packaging wireframe models by the one or more processors 120. As used herein, the term “geometric primitive” refers to a simple three-dimensional geometric shape such as a cube, cylinder, sphere, cone, pyramid, torus, prism, and the like that may be used individually or combined to provide a virtual representation of more complex three-dimensional geometric shapes or structures. The geometric primitives stored within the one or more non-transitory, machine-readable storage media 118 are selected by the one or more processors 120 as basic elements in the construction of a virtual representation 104 of each of the three-dimensional objects 102 appearing within the field-of-view 116 of the image sensor 114. The construction of the virtual representation 104 by the one or more processors 120 is useful in fitting properly scaled three-dimensional packaging wireframe models 106 to each of the three-dimensional objects 102 appearing in the field-of-view 116 of the image sensor 114. A properly scaled three-dimensional packaging wireframe model 106 permits the accurate determination of dimensional and volumetric data for each of the three-dimensional objects 102 appearing in the field-of-view 116 of the image sensor 114. A properly scaled and fitted three-dimensional packaging wireframe model 106 will fall on the boundaries of the geometric primitive 104 fitted to the three-dimensional object 102 by the one or more processors 120 as viewed on the one or more display devices 156 as depicted in FIG. 1A.
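
For illustration, such a primitive library and a simple feature-based selection routine might be sketched as follows in Python. The two-entry library, the unit-cube normalization, and the mean nearest-node cost are illustrative assumptions rather than the method the disclosure mandates:

    import numpy as np

    # A toy library of geometric primitives keyed by name; each entry stores a
    # unit-scale node layout matched against the features detected on an object.
    PRIMITIVE_LIBRARY = {
        "box": np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                        dtype=float),
        "pyramid": np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                             [0.5, 0.5, 1]], dtype=float),
    }

    def select_primitive(feature_points):
        """Return the library primitive whose node layout is closest to the
        normalized feature points (mean nearest-node distance as the cost)."""
        pts = np.asarray(feature_points, dtype=float)
        span = np.ptp(pts, axis=0)
        span[span == 0] = 1.0                      # guard degenerate axes
        pts = (pts - pts.min(axis=0)) / span       # normalize to the unit cube
        best_name, best_cost = None, np.inf
        for name, nodes in PRIMITIVE_LIBRARY.items():
            # Distance from each feature point to its nearest primitive node.
            d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=2).min(axis=1)
            if d.mean() < best_cost:
                best_name, best_cost = name, d.mean()
        return best_name

    print(select_primitive(PRIMITIVE_LIBRARY["pyramid"] * 2.0))   # -> "pyramid"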


Data is transferred between the one or more non-transitory, machine-readable storage media 118 and the one or more processors 120 via one or more serial or parallel bi-directional data busses 128. The one or more processors 120 can include any device comprising one or more cores or independent central processing units that are capable of executing one or more machine executable instruction sets. The one or more processors 120 can, in some embodiments, include a general purpose processor such as a central processing unit (“CPU”) including, but not limited to, an Intel® Atom® processor, an Intel® Pentium®, Celeron®, or Core 2® processor, and the like. In other embodiments the one or more processors 120 can include a system-on-chip (“SoC”) architecture, including, but not limited to, the Intel® Atom® System on Chip (“Atom SoC”) and the like. In other embodiments, the one or more processors 120 can include a dedicated processor such as an application specific integrated circuit (“ASIC”), a programmable gate array (“PGA” or “FPGA”), a digital signal processor (“DSP”), or a reduced instruction set computer (“RISC”) based processor. Where the volume dimensioning system 110 is a battery-powered portable system, the one or more processors 120 can include one or more low power consumption processors, for example Intel® Pentium M®, or Celeron M® mobile system processors or the like, to extend the system battery life.


Data in the form of three-dimensional image data, three-dimensional packaging wireframe model data, instructions, input/output requests, and the like may be bi-directionally transferred between the volume dimensioning system 110 and the host computer 150 via the one or more data busses 112. Within the host computer 150, the three-dimensional packaging wireframe model 106 data can, for example, be combined with visual image data captured or acquired by the camera 152 to provide a display output that includes a visual image of one or more three-dimensional objects 102 appearing in both the field-of-view 154 of the camera 152 and the field-of-view 116 of the image sensor 114, encompassed by the geometric primitive 104 and the fitted three-dimensional packaging wireframe model 106 provided by the volume dimensioning system 110.


Referring now in detail to the host computer system 150, the camera 152 can acquire or capture visual image data of the scene within the field-of-view 154 of the camera 152. As a separate device that is discrete from the image sensor 114, the camera 152 will have a field-of-view 154 that differs from the field-of-view 116 of the image sensor 114. In at least some embodiments, the one or more CPUs 160, the one or more processors 120, or a combination of the one or more CPUs 160 and the one or more processors 120 will calibrate, align, map, or otherwise relate the field-of-view 154 of the camera 152 to the field-of-view 116 of the image sensor 114, thereby linking or spatially mapping in two-dimensional space or three-dimensional space the visual image data captured or acquired by the camera 152 to the three-dimensional image data captured or acquired by the image sensor 114. In a preferred embodiment, when the volume dimensioning system 110 is initially communicably coupled to the host computer 150, the one or more processors 120 in the volume dimensioning system 110 are used to calibrate, align, or spatially map in three-dimensions the field-of-view 116 of the image sensor 114 to the field-of-view 154 of the camera 152 such that three-dimensional objects 102 appearing in the field-of-view 116 of the image sensor 114 are spatially mapped or correlated in three-dimensions to the same three-dimensional objects 102 appearing in the field-of-view 154 of the camera 152.
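
One workable way to compute such a spatial mapping is the Kabsch/Procrustes rigid-alignment method, sketched below under the assumption that matched three-dimensional point pairs are already available in both the image-sensor frame and the camera frame. The disclosure does not specify a particular calibration algorithm, so this is illustrative only:

    import numpy as np

    def rigid_align(sensor_pts, camera_pts):
        """Estimate the rotation R and translation t mapping image-sensor
        coordinates onto camera coordinates from matched 3-D point pairs
        (Kabsch/Procrustes). Returns R, t with camera ~= R @ sensor + t."""
        A = np.asarray(sensor_pts, dtype=float)
        B = np.asarray(camera_pts, dtype=float)
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        H = (A - ca).T @ (B - cb)                   # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                          # proper rotation, no mirror
        t = cb - R @ ca
        return R, t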


The camera 152 can be formed from or contain any number of image capture elements, for example picture elements or “pixels.” For example, the camera 152 may have between 1,000,000 pixels (1 MP) and 100,000,000 pixels (100 MP). In some embodiments, the camera 152 may capture or acquire more than one type of data, for example the camera 152 may acquire visual image data related to the visual image of the scene within the field-of-view 154 of the camera 152 as well as infrared image data related to an infrared image of the scene within the field-of-view 154 of the camera 152. Where the camera 152 captures or otherwise acquires more than one type of image data, the data may be collected into one or more data groups, structures, files, or the like.


In some embodiments, the visual image data captured or acquired by the camera 152 may originate as an analog signal that is converted to digital visual image data using one or more internal or external analog-to-digital (“A/D”) converters (not shown). In other embodiments, the visual image data acquired by the camera 152 is acquired in the form of digital image data provided directly from one or more complementary metal-oxide semiconductor (“CMOS”) sensors or one or more charge-coupled device (“CCD”) sensors disposed at least partially within the camera 152. At least a portion of the visual image data from the camera 152 is stored in the one or more non-transitory, machine-readable storage media 158 in the form of one or more data groups, structures, or files.


Image data is transferred between the camera 152 and the one or more non-transitory, machine-readable storage media 158 via the first bridge processor 162, the second bridge processor 176, and one or more serial or parallel data buses 164, 168. The image data provided by the camera 152 can be stored within the one or more non-transitory, machine-readable storage media 158 in one or more data groups, structures, or files. The one or more non-transitory, machine-readable storage media 158 can have any data storage capacity from about 1 megabyte (1 MB) to about 3 terabytes (3 TB). In some embodiments two or more devices or data structures may form all or a portion of the one or more non-transitory, machine-readable storage media 158. For example, in some embodiments, the one or more non-transitory, machine-readable storage media 158 can include a non-removable portion including a non-transitory, electrostatic, storage medium and a removable portion such as a Secure Digital (SD) card, a compact flash (CF) card, a Memory Stick, or a universal serial bus (“USB”) storage device.


Data is transferred between the one or more non-transitory, machine-readable storage media 158 and the one or more CPUs 160 via the second bridge processor 176 and one or more serial or parallel bi-directional data busses 168. The one or more CPUs 160 can include any device comprising one or more cores or independent central processing units that are capable of executing one or more machine executable instruction sets. The one or more CPUs 160 can, in some embodiments, include a general purpose processor including, but not limited to, an Intel® Atom® processor, an Intel® Pentium®, Celeron®, or Core 2® processor, and the like. In other embodiments the one or more CPUs 160 can include a system-on-chip (“SoC”) architecture, including, but not limited to, the Intel® Atom® System on Chip (“Atom SoC”) and the like. In other embodiments, the one or more CPUs 160 can include a dedicated processor such as an application specific integrated circuit (“ASIC”), a programmable gate array (“PGA” or “FPGA”), a digital signal processor (“DSP”), or a reduced instruction set computer (“RISC”) based processor. Where the host computer 150 is a battery-powered portable system, the one or more CPUs 160 can include one or more low power consumption processors, for example Intel® Pentium M®, or Celeron M® mobile system processors or the like, to extend the system battery life.


Recall that the calibration or alignment process between the camera 152 and the image sensor 114 correlated, aligned, or spatially mapped the field-of-view 154 of the camera 152 with the field-of-view 116 of the image sensor 114 upon initial coupling of the volume dimensioning system 110 to the host computer 150. The image data captured or acquired by the camera 152 will therefore be spatially mapped, aligned, or correlated with the three-dimensional image data captured or acquired by the image sensor 114. Advantageously, the three-dimensional packaging wireframe models 106 fitted by the one or more processors 120 to the three-dimensional objects 102 in the field-of-view 116 of the image sensor 114 will align with the image of the three-dimensional objects 102 when viewed on the one or more display devices 156. Merging, overlaying, or otherwise combining the three-dimensional packaging wireframe models 106 provided by the one or more processors 120 with the image data captured or acquired by the camera 152 creates a display image on the one or more display devices 156 that contains both an image of the three-dimensional object 102 and the corresponding three-dimensional packaging wireframe model 106.
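
A hedged sketch of the overlay step: once a rigid mapping (R, t) between frames is known, wireframe vertices can be projected into camera pixel coordinates with a pinhole model and drawn over the camera image. The intrinsics fx, fy, cx, cy are assumed to be known from a separate camera calibration; nothing here is mandated by the disclosure:

    import numpy as np

    def project_wireframe(vertices, R, t, fx, fy, cx, cy):
        """Project 3-D wireframe vertices (image-sensor frame) into camera
        pixel coordinates: rigid transform into the camera frame, then a
        pinhole perspective projection with intrinsics fx, fy, cx, cy."""
        V = np.asarray(vertices, dtype=float) @ R.T + t   # into camera frame
        u = fx * V[:, 0] / V[:, 2] + cx                   # perspective division
        v = fy * V[:, 1] / V[:, 2] + cy
        return np.stack([u, v], axis=1)                   # (N, 2) pixel positions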


The host computer 150 may have one or more discrete graphical processing units (GPUs, not shown) or one or more GPUs integrated with the one or more CPUs 160. The one or more CPUs 160 or one or more GPUs can generate a display image output to provide a visible image on the one or more display devices 156. The display image output can be routed through the second bridge processor 176 to the one or more display devices 156 in the host computer system 150. The one or more display devices 156 include at least an output device capable of providing a visible image perceptible to the unaided human eye. In at least some embodiments, the one or more display devices 156 can include one or more input devices, for example a resistive or capacitive touch-screen. The one or more display devices 156 can include any current or future, analog or digital, two-dimensional or three-dimensional display technology, for example cathode ray tube (“CRT”), light emitting diode (“LED”), liquid crystal display (“LCD”), organic LED (“OLED”), digital light processing (“DLP”), eInk, and the like. In at least some embodiments, the one or more display devices 156 may be self-illuminating or provided with a backlight such as a white LED to facilitate use of the system 100 in low ambient light environments.


One or more peripheral I/O devices 166 may be communicably coupled to the host computer system 150 to facilitate the receipt of user input by the host computer 150 via a pointer, a keyboard, a touchpad, or the like. In at least some embodiments, the one or more peripheral I/O devices 166 may be USB devices that are communicably coupled to the USB bus 174. In at least some embodiments, the one or more peripheral I/O devices 166 or the one or more display devices 156 may be used by the one or more processors 120 or one or more CPUs 160 to receive specialized shipping instructions associated with one or more three-dimensional objects 102 from a user. Such specialized instructions can include any data provided by the user that is relevant to how a particular three-dimensional object 102 should be handled, including, but not limited to, designation of fragile areas, designation of proper shipping orientation, designation of top-load only or crushable contents, and the like. Upon receipt of the specialized shipping instructions, the one or more processors 120 or the one or more CPUs 160 can associate the instructions with a particular three-dimensional packaging wireframe model 106, which thereby links the instructions with a particular three-dimensional object 102.
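
The association of specialized shipping instructions with a fitted wireframe model might be represented with a simple record structure such as the following; the class name, field names, and face designations are illustrative assumptions only:

    from dataclasses import dataclass, field

    @dataclass
    class PackagingRecord:
        """Links a fitted packaging wireframe to the user's specialized
        shipping instructions so later load planning can honor them."""
        object_id: str
        dims_m: tuple                       # (length, width, height) of model
        instructions: dict = field(default_factory=dict)

    record = PackagingRecord("parcel-001", (0.4, 0.4, 0.5))
    record.instructions["fragile_faces"] = ["top"]   # user-designated fragile area
    record.instructions["this_side_up"] = True       # required shipping orientation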



FIG. 2 shows a method 200 of operation of an illustrative volume dimensioning system such as the system depicted in FIGS. 1A and 1B. Data captured or acquired by the image sensor 114 is used by the one or more processors 120 to select one or more geometric primitives 104, for example from a library in the one or more non-transitory, machine-readable storage media 118. The one or more geometric primitives 104 selected by the one or more processors 120 are used to construct a virtual representation of the packaging about the one or more three-dimensional objects 102 appearing in the field-of-view 116 of the image sensor 114. The one or more processors 120 can therefore use the one or more selected geometric primitives 104 to construct a three-dimensional packaging wireframe model 106 that, when fitted to the three-dimensional object 102, provides a three-dimensional packaging wireframe model 106 that is scaled and fitted to encompass or otherwise contain the three-dimensional object 102.


The one or more processors 120 can use the plurality of features identified on the three-dimensional object 102 in selecting the one or more geometric primitives 104 from the library. The three-dimensional packaging wireframes 106 encompassing each three-dimensional object 102 within the volume dimensioning system 110 permit a reasonably accurate determination of the dimensions and volume of each three-dimensional object 102. The user benefits from accurate dimensional and volumetric information by receiving accurate shipping rates based on the object's true size and volume. Carriers benefit from accurate dimensional and volumetric information by having access to data needed to optimize the packing of the objects for transport and the subsequent routing of transportation assets based upon reasonably accurate load data.


At 202, the image sensor 114 captures or acquires three-dimensional image data which is communicated to the one or more non-transitory, machine-readable storage media 118 via one or more data busses 126. The three-dimensional image data captured by the image sensor 114 includes a first three-dimensional object 102 disposed within the field-of-view 116 of the image sensor 114. The three-dimensional image data captured by the image sensor 114 may include depth data providing a depth map and intensity data providing an intensity image of the field-of-view 116 of the image sensor 114. At least a portion of the three-dimensional image data received by the one or more non-transitory, machine-readable storage media 118 is communicated to or otherwise accessed by the one or more processors 120 in order to select one or more geometric primitives 104 preparatory to fitting a first three-dimensional packaging wireframe model 106 about all or a portion of the first three-dimensional object 102.


At 204, based in whole or in part on the three-dimensional image data received from the image sensor 114, the one or more processors 120 determine a number of features on the first three-dimensional object 102 contained in the three-dimensional image data. The features may include any point, edge, or other discernible structure on the first three-dimensional object 102 that is detectable in the image represented by the three-dimensional image data. For example, one or more features may correspond to a three-dimensional point on the three-dimensional object 102 that is detectable in a depth map containing the first three-dimensional object, in an intensity image in which the three-dimensional object appears, or in both a depth map and an intensity image in which the first three-dimensional object 102 is represented. The identified features may include boundaries or defining edges of the first three-dimensional object, for example corners, arcs, lines, edges, angles, radii, and similar characteristics that define all or a portion of the external boundary of the first three-dimensional object 102.
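
As one illustrative possibility (the disclosure does not fix a particular feature detector), edge-like features can be flagged wherever the depth map jumps sharply between neighboring pixels; such discontinuities tend to trace the object's defining edges and corners. The 5 cm threshold below is an assumed value:

    import numpy as np

    def depth_edge_features(depth, jump=0.05):
        """Flag pixels where the depth map changes by more than `jump` meters
        between neighbors and return their (row, col) coordinates."""
        dz_y = np.abs(np.diff(depth, axis=0))        # vertical neighbor jumps
        dz_x = np.abs(np.diff(depth, axis=1))        # horizontal neighbor jumps
        edges = np.zeros(depth.shape, dtype=bool)
        edges[:-1, :] |= dz_y > jump
        edges[:, :-1] |= dz_x > jump
        return np.argwhere(edges)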


At 206, based at least in part on the features identified at 204, the one or more processors 120 select one or more geometric primitives 104 from the library. The one or more processors 120 use the selected one or more geometric primitives 104 to roughly represent the packaging encompassing the first three-dimensional object 102, making allowances for any specialized packing instructions (e.g., fragile surfaces, extra packing, unusual packing shapes, etc.) that may have been provided by the user. The one or more processors 120 fit a three-dimensional packaging wireframe model 106 about all or a portion of the first three-dimensional object 102 that encompasses substantially all of the processor-identified features defining an external boundary of the first three-dimensional object 102 and reflects any specialized packing instructions provided by the user.


For example, where the first three-dimensional object 102 is a cube, the one or more processors 120 may identify seven or more features (e.g., four defining the corners of one face of the cube, two additional defining the corners of a second face of the cube, and one defining the fourth corner of the top of the cube). The user may have identified one surface of the cube as requiring “extra packaging.” Based on these identified features and the user's specialized packing instructions, the one or more processors 120 may select a rectangular prismatic geometric primitive 104 accommodating the cubic three-dimensional object 102 and the extra packaging requirements identified by the user and use the selected rectangular prismatic geometric primitive 104 to fit a first three-dimensional packaging wireframe model 106 that substantially encompasses the cubic first three-dimensional object 102 and the associated packaging surrounding the object.
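
A minimal sketch of how such an “extra packaging” instruction might enlarge the fitted box asymmetrically appears below. The uniform base allowance, the extra-face allowance, and the "+z"-style face codes are illustrative conventions, not part of the disclosure:

    import numpy as np

    def pad_bbox(lo, hi, base=0.05, extra_face=None, extra=0.10):
        """Expand a fitted box by a base allowance on every face plus an
        additional allowance on one user-flagged face needing extra packaging."""
        lo = np.asarray(lo, dtype=float) - base
        hi = np.asarray(hi, dtype=float) + base
        if extra_face:
            axis = "xyz".index(extra_face[1])    # which coordinate axis
            if extra_face[0] == "+":
                hi[axis] += extra                # grow the positive face
            else:
                lo[axis] -= extra                # grow the negative face
        return lo, hi

    lo, hi = pad_bbox([0, 0, 0], [0.3, 0.3, 0.3], extra_face="+z")
    print(hi - lo)    # [0.4 0.4 0.5]: extra headroom on the flagged top face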


In another example, the first three-dimensional object 102 may be a cylinder and the one or more processors 120 may identify a number of features about the face and defining the height of the cylinder. Based on these identified features, the one or more processors 120 may select a cylindrical geometric primitive 104 and use the selected geometric primitive to fit a first three-dimensional packaging wireframe model 106 to the cylindrical first three-dimensional object 102 that substantially encompasses the object and includes an allowance for packaging materials about the cylindrical three-dimensional object 102.


Based at least in part on the identified features, the one or more processors 120 may search the library for one or more geometric primitives 104 having features, points, or nodes substantially similar to the spatial arrangement of the identified features, points, or nodes associated with the first three-dimensional object 102. In searching the library, the one or more processors may use one or more appearance-based or feature-based shape recognition or shape selection methods. For example, a “large modelbases” appearance-based method using eigenfaces may be used to select geometric primitives 104 appropriate for fitting to the first three-dimensional object 102.
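
A rough sketch of such an appearance-based lookup, assuming each library model and the candidate object have already been reduced to fixed-length shape descriptors (the descriptor layout and the component count are assumptions for illustration):

    import numpy as np

    def eigen_match(candidate, model_base, n_components=4):
        """Appearance-based selection in the spirit of eigenfaces: project the
        candidate's descriptor onto the principal components of the model base
        and return the index of the nearest stored model. `model_base` is an
        (M, D) array of flattened shape descriptors with M > n_components."""
        mean = model_base.mean(axis=0)
        _, _, Vt = np.linalg.svd(model_base - mean, full_matrices=False)
        basis = Vt[:n_components]                 # top "eigen-shape" directions
        coords = (model_base - mean) @ basis.T    # library coordinates
        query = (candidate - mean) @ basis.T      # candidate coordinates
        return int(np.argmin(np.linalg.norm(coords - query, axis=1)))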


At 208, the one or more processors 120 can generate a video, image, or display output that includes data providing an image of the first three-dimensional packaging wireframe model 106 as fitted to the first three-dimensional object 102. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 that includes one or more images providing the concurrent or simultaneous depiction of the first three-dimensional object 102 using image data from the camera 152 and the fitted first three-dimensional packaging wireframe model 106. In some instances, an image concurrently or simultaneously depicting the geometric primitive 104 fitted to the first three-dimensional object 102 along with the first three-dimensional packaging wireframe model 106 fitted thereto may also be provided on the one or more display devices 156.



FIG. 3 shows a method 300 extending from the method 200 and describing one or more additional features of an example volume dimensioning system 100, such as the system depicted in FIGS. 1A and 1B. For various reasons, the first three-dimensional packaging wireframe model 106 fitted by the one or more processors 120 may not properly encompass the first three-dimensional object 102. For example, where the first three-dimensional object is a box, one face of the first three-dimensional packaging wireframe model 106 generated by the one or more processors 120 may be situated too close to the three-dimensional object 102 to permit the insertion of adequate padding between the three-dimensional object 102 and the shipping box. Such incorrectly positioned or sized three-dimensional packaging wireframe models 106 may result in erroneous shipping rate information or erroneous packing information. Therefore, providing a process to correct the shape, size, or position of all or a portion of the three-dimensional packaging wireframe model 106 is advantageous to both the user and the carrier.


At 302, the one or more processors 120 receive an input indicative of a desired change in at least a portion of the first three-dimensional packaging wireframe model 106. The change in position of the first three-dimensional packaging wireframe model 106 may include a change to a single point, multiple points, or even a scalar, arc, curve, face, or line linking two or more points used by the one or more processors 120 to fit the three-dimensional packaging wireframe model 106. The one or more processors 120 may receive the input via an I/O device 166 such as a mouse or keyboard, or in a preferred embodiment via a resistive or capacitive touch-based input device which is part of a touch-screen display device 156 communicably connected to the host computer system 150. The use of a touch-screen display device 156 advantageously enables a user to visually align all or a portion of the first three-dimensional packaging wireframe model 106 with all or a corresponding portion of the image of the first three-dimensional object 102 in an intuitive and easy to visualize manner. In some embodiments, a prior input received by the one or more processors 120 may be used to place the system 100 in a mode where a subsequent input indicating the desired change to the three-dimensional packaging wireframe model 106 will be provided to the one or more processors 120.
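
The effect of such an input might be sketched as a single vertex replacement followed by a refresh of the model's bounding dimensions. The function below is illustrative only and assumes the wireframe is stored as an (N, 3) vertex array:

    import numpy as np

    def move_vertex(wireframe, vertex_index, new_xyz):
        """Apply a touch-screen drag: replace one wireframe vertex with the
        user-supplied position and refresh the model's bounding dimensions."""
        wf = np.array(wireframe, dtype=float)        # copy; leave input intact
        wf[vertex_index] = new_xyz                   # the user-directed change
        dims = wf.max(axis=0) - wf.min(axis=0)       # recomputed L x W x H
        return wf, dims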


At 304, the one or more processors 120 can generate a video, image, or display data output that includes image data of the modified or updated first three-dimensional packaging wireframe model 106 as fitted to the first three-dimensional object 102. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 that includes an image contemporaneously or simultaneously depicting the first three-dimensional object 102 using image data from the camera 152 and the first three-dimensional packaging wireframe model 106 data as fitted by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting the first three-dimensional object 102 along with the one or more scaled and fitted geometric primitives 104 and the first three-dimensional packaging wireframe model 106 may also be provided on the one or more display devices 156.



FIG. 4 shows a method 400 extending from the method 200 and describing one or more additional features of an example volume dimensioning system 100, such as the system depicted in FIGS. 1A and 1B. For various reasons, the first three-dimensional packaging wireframe model 106 fitted by the one or more processors 120 may not properly encompass the first three-dimensional object 102; in fact, the first three-dimensional packaging wireframe model 106 as fitted by the one or more processors 120 may require significant modification or replacement to substantially conform to both the first three-dimensional object 102 and any associated specialized shipping requirements provided by the user.


For example, where the first three-dimensional object 102 is a cylindrical object, a cylindrical geometric primitive 104 may be selected by the one or more processors 120, resulting in a cylindrical first three-dimensional packaging wireframe model 106. Perhaps the user has triangular padding that will be used to pad the cylindrical object and center it within a rectangular shipping container. In response to an input indicative of a desired change in a position of at least a portion of the first three-dimensional packaging wireframe model 106, the one or more processors 120 may select a second geometric primitive 104, for example a rectangular prismatic geometric primitive, and fit a more appropriate second three-dimensional packaging wireframe model 106 to replace the previously fitted first three-dimensional packaging wireframe model 106.


At 402, the one or more processors 120 receive an input indicative of a desired change to at least a portion of the first three-dimensional packaging wireframe model 106 fitted to the three-dimensional object 102. The input may specify one or more of a single point, multiple points, or even a scalar, arc, curve, face, or line linking two or more points used by the one or more processors 120 to fit the three-dimensional packaging wireframe model 106. The one or more processors 120 may receive the input via an I/O device 166 such as a mouse or keyboard, or in a preferred embodiment via a resistive or capacitive touch-based input device which is part of a touch-screen display device 156 communicably connected to the host computer system 150. The use of a touch-screen display device 156 advantageously enables a user to visually align all or a portion of the first three-dimensional packaging wireframe model 106 with all or a corresponding portion of the image of the first three-dimensional object 102 in an intuitive and easy to visualize manner. In some embodiments, a prior input received by the one or more processors 120 may be used to place the system 100 in a mode where a subsequent input indicating the desired change to the three-dimensional packaging wireframe model 106 will be provided to the one or more processors 120.


At 404, responsive to the input, the one or more processors 120 may select from the library one or more second geometric primitives 104 that are different from the first geometric primitive 104 and fit the second three-dimensional packaging wireframe model 106 using the second geometric primitive 104 to substantially encompass the first three-dimensional object 102. For example, where the one or more processors 120 detect from the input that a cylindrical three-dimensional packaging wireframe model 106 is being changed to a rectangular prismatic three-dimensional packaging wireframe model 106, the one or more processors 120 may select from the library a second geometric primitive 104 corresponding to a rectangular prism and use it to fit the second three-dimensional packaging wireframe model 106 to the first three-dimensional object 102.
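
A sketch of the refit step, assuming the object's three-dimensional points are available as an (N, 3) array; the vertical-axis assumption in the cylinder fit and the dictionary return format are illustrative simplifications rather than the disclosed method:

    import numpy as np

    def fit_box(points):
        lo, hi = points.min(axis=0), points.max(axis=0)
        return {"type": "box", "dims": hi - lo}

    def fit_cylinder(points):
        # Assumes a vertical (z) cylinder axis: radius from the widest
        # horizontal offset, height from the z extent.
        center = points[:, :2].mean(axis=0)
        radius = np.linalg.norm(points[:, :2] - center, axis=1).max()
        return {"type": "cylinder", "radius": radius, "height": np.ptp(points[:, 2])}

    def refit(points, primitive):
        """Swap the packaging wireframe primitive in response to user input."""
        points = np.asarray(points, dtype=float)
        return fit_box(points) if primitive == "box" else fit_cylinder(points)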


At 406, the one or more processors 120 can generate a video, image, or display data output that includes image data of the second three-dimensional packaging wireframe model 106 as fitted to the first three-dimensional object 102. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 that includes an image contemporaneously or simultaneously depicting the first three-dimensional object 102 using image data from the camera 152 and the second three-dimensional packaging wireframe model 106 data as fitted by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting the first three-dimensional object 102 along with the one or more scaled and fitted second geometric primitives 104 and the second three-dimensional packaging wireframe model 106 may also be provided on the one or more display devices 156.



FIG. 5 shows a method 500 extending from the method 200 and describing one or more additional features of an example volume dimensioning system 100, such as the system depicted in FIGS. 1A and 1B. As depicted in FIG. 1A, at times a second three-dimensional object 102 may be present in the field-of-view 116 of the image sensor 114. For various reasons, the second three-dimensional object 102 may not be detected by the one or more processors 120 and consequently a second three-dimensional packaging wireframe model 106 may not be fitted about the second three-dimensional object 102 by the one or more processors 120. In such an instance, one or more second geometric primitives 104 can be selected by the one or more processors 120 and used to fit a second three-dimensional packaging wireframe model 106 to the second three-dimensional object 102 based at least in part upon the receipt of an input by the one or more processors 120 indicating the existence of the second three-dimensional object 102.


At 502, the one or more processors 120 receive an input that indicates a second three-dimensional object 102 exists within the field-of-view 116 of the image sensor 114. The one or more processors 120 may receive the input via an I/O device 166 such as a mouse or keyboard, or in a preferred embodiment via a resistive or capacitive touch-based input device which is part of a touch-screen display device 156 communicably connected to the host computer system 150. The use of a touch-screen display device 156 advantageously enables a user to draw a perimeter around or otherwise clearly delineate the second three-dimensional object 102. In some embodiments, a prior input received by the one or more processors 120 may be used to place the system 100 in a mode where a subsequent input indicating the second three-dimensional object 102 will be provided to the one or more processors 120. Responsive to the input indicating the existence of a second three-dimensional object 102 within the field-of-view 116 of the image sensor 114, the one or more processors 120 may detect additional three-dimensional features associated with the second three-dimensional object 102.


At 504, based at least in part on the three-dimensional features identified at 502, the one or more processors 120 may select from the library one or more second geometric primitives 104 to provide a virtual representation of the packaging encompassing the second three-dimensional object 102. The one or more processors 120 fit a second three-dimensional packaging wireframe model 106 about the second three-dimensional object 102 that is responsive to any specialized instructions received from the user and encompasses substantially all the three-dimensional features of the second three-dimensional object 102 identified by the one or more processors 120 at 502.


At 506, the one or more processors 120 can generate a video, image, or display data output that includes image data of the second three-dimensional packaging wireframe model 106 as fitted to the virtual representation 104 of the second three-dimensional object 102. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs to the one or more display devices 156 that includes image data depicting an image of the second three-dimensional object 102 using image data from the camera 152 along with the fitted second three-dimensional packaging wireframe model 106 provided by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting the second three-dimensional object 102 along with the one or more scaled and fitted second geometric primitives 104 and the second three-dimensional packaging wireframe model 106 may also be provided on the one or more display devices 156.


At 508, the one or more processors 120 can generate a video, image, or display data output that includes image data of the first three-dimensional packaging wireframe model 106 as fitted to the first three-dimensional object 102. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 that includes an image contemporaneously or simultaneously depicting images of the first and second three-dimensional objects 102 using image data from the camera 152 along with the respective first and second three-dimensional packaging wireframe models 106 fitted by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting the first and second three-dimensional objects 102 along with the one or more scaled and fitted first and second geometric primitives 104 and the first and second three-dimensional packaging wireframe models 106 may also be provided on the one or more display devices 156.



FIG. 6 shows a method 600 extending from method 200 and describing one or more additional features of an example volume dimensioning system 100, such as the system depicted in FIGS. 1A and 1B. In some situations, the first three-dimensional object 102 may have a complex or non-uniform shape that, when virtually represented as a plurality of geometric primitives 104, is best fitted using a corresponding plurality of three-dimensional packaging wireframe models 106. For instance, one three-dimensional packaging wireframe model 106 may be fitted to a first portion of a three-dimensional object 102 and another three-dimensional packaging wireframe model 106 may be fitted to a second portion of the three-dimensional object 102.


For example, rather than fitting a single three-dimensional packaging wireframe model 106 about a guitar-shaped three-dimensional object 102, a plurality of wireframe models 106, such as a first three-dimensional packaging wireframe model 106 fitted to the body portion of the guitar-shaped object and a second three-dimensional packaging wireframe model 106 fitted to the neck portion of the guitar-shaped object, may provide a more accurate three-dimensional packaging wireframe model 106 for the entire guitar-shaped object. Fitting of multiple three-dimensional packaging wireframe models 106 may be performed automatically by the one or more processors 120, or performed responsive to the receipt of a user input indicating that a plurality of three-dimensional packaging wireframe models should be used. Providing a user with the ability to designate the use of three-dimensional packaging wireframe models 106 about different portions of a single three-dimensional object 102 may provide the user with a more accurate freight rate estimate based upon the actual configuration of the object and may provide the carrier with a more accurate shipping volume.


At 602, the one or more processors 120 receive an input that identifies a portion of the first three-dimensional object 102 that may be represented using a separate three-dimensional packaging wireframe model 106. Using the example of a guitar, the user may provide an input that, when received by the one or more processors 120, indicates the neck of the guitar is best fitted using a separate three-dimensional packaging wireframe model 106. The one or more processors 120 may receive the input via an I/O device 166 such as a mouse or keyboard, or in a preferred embodiment via a resistive or capacitive touch-based input device which is part of a touch-screen display device 156 communicably connected to the host computer system 150. The use of a touch-screen display device 156 advantageously enables a user to draw a perimeter or otherwise clearly delineate the portion of the first three-dimensional object 102 for which one or more separate geometric primitives 104 may be selected and about which a three-dimensional packaging wireframe model 106 may be fitted by the one or more processors 120. In some embodiments, a prior input received by the one or more processors 120 may be used to place the system 100 in a mode where a subsequent input indicating the portion of the first three-dimensional object 102 suitable for representation by a separate three-dimensional packaging wireframe model 106 will be provided as an input to the one or more processors 120.


At 604, responsive at least in part to the input indicating the portion of the first three-dimensional object 102 suitable for representation as a separate three-dimensional packaging wireframe model 106, the one or more processors 120 can select one or more geometric primitives 104 encompassing the first portion of the first three-dimensional object 102. Based on the one or more selected geometric primitives 104, the one or more processors 120 fit a three-dimensional packaging wireframe model 106 about the first portion of the three-dimensional object 102. Continuing with the illustrative example of a guitar, the one or more processors 120 may receive an input indicating the user's desire to represent the neck portion of the guitar as a first three-dimensional packaging wireframe model 106. Responsive to the receipt of the input selecting the neck portion of the guitar, the one or more processors 120 select one or more appropriate geometric primitives 104, for example a cylindrical geometric primitive, and fit a cylindrical three-dimensional packaging wireframe model 106 that encompasses the first portion of the first three-dimensional object 102 (i.e., the neck portion of the guitar).


At 606, the one or more processors 120 select one or more geometric primitives 104 encompassing the second portion of the first three-dimensional object 102. Based on the one or more selected geometric primitives 104, the one or more processors 120 fit a three-dimensional packaging wireframe model 106 about the second portion of the first three-dimensional object 102. The separate three-dimensional packaging wireframe model 106 fitted to the second portion may be the same as, different from, or a modified version of the three-dimensional packaging wireframe model 106 fitted to the first portion of the three-dimensional object 102.


Continuing with the illustrative example of a guitar, the single three-dimensional packaging wireframe model 106 originally fitted by the one or more processors 120 to the entire guitar may have been in the form of a rectangular three-dimensional packaging wireframe model 106 encompassing both the body portion and the neck portion of the guitar. After fitting the cylindrical three-dimensional packaging wireframe model 106 about the first portion of the first three-dimensional object 102 (i.e., the neck of the guitar), the one or more processors 120 may reduce the size of the originally fitted, rectangular, three-dimensional packaging wireframe model 106 to a rectangular three-dimensional packaging wireframe model 106 fitted about the second portion of the first three-dimensional object 102 (i.e., the body of the guitar).
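
Pulling the guitar example together, a two-part fit might be sketched as below, assuming the user's delineation arrives as a boolean mask over the object's three-dimensional points; the mask, the vertical neck axis, and the return format are illustrative assumptions:

    import numpy as np

    def fit_two_part_model(points, neck_mask):
        """Fit separate packaging wireframes to the two user-delineated
        portions of one object: a vertical cylinder about the masked "neck"
        points and a rectangular box about the remaining "body" points."""
        points = np.asarray(points, dtype=float)
        neck, body = points[neck_mask], points[~neck_mask]
        axis_xy = neck[:, :2].mean(axis=0)           # assumed vertical axis
        neck_model = {
            "type": "cylinder",
            "radius": np.linalg.norm(neck[:, :2] - axis_xy, axis=1).max(),
            "height": np.ptp(neck[:, 2]),
        }
        body_model = {"type": "box", "dims": body.max(axis=0) - body.min(axis=0)}
        return neck_model, body_model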


At 608, the one or more processors 120 can generate a video, image, or display data output that includes image data of the three-dimensional packaging wireframe models 106 fitted to the first and second portions of the first three-dimensional object 102. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 including an image concurrently or simultaneously depicting the first and second portions of the first three-dimensional object 102 using image data from the camera 152 and the respective three-dimensional packaging wireframe models 106 fitted to each of the first and second portions by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting the first and second portions of the first three-dimensional object 102 along with their respective one or more scaled and fitted geometric primitives 104 and their respective three-dimensional packaging wireframe models 106 may also be provided on the one or more display devices 156.



FIG. 7 shows a method 700 extending from method 200 and describing one or more additional features of an example volume dimensioning system 100, such as the system depicted in FIGS. 1A and 1B. In some situations, one or more features present on the first three-dimensional object 102 may not be visible from the point of view of the image sensor 114. For example, a protruding feature may lie on a portion of the three-dimensional object 102 facing away from the image sensor 114 such that substantially all of the feature is hidden from the image sensor 114. In such instances, a failure to incorporate the hidden feature may result in erroneous or inaccurate rate information being provided to a user or erroneous or inaccurate packing dimensions or volumes being provided to the carrier.


In such instances, obtaining image data from a second point of view that includes the previously hidden or obscured feature will permit the one or more processors 120 to select one or more geometric primitives 104 fitting the entire three-dimensional object 102 including the features hidden in the first point of view. By encompassing all of the features within the one or more geometric primitives 104, the one or more processors 120 are able to fit the first three-dimensional packaging wireframe model 106 about the entire first three-dimensional object 102 or alternatively, to add a second three-dimensional packaging wireframe model 106 incorporating the portion of the first three-dimensional object 102 that was hidden in the first point of view of the image sensor 114.


At 702, after fitting the first three-dimensional packaging wireframe model 106 to the first three-dimensional object 102, the one or more processors 120 rotate the fitted three-dimensional packaging wireframe model 106 about an axis to expose gaps in the model or to make apparent any features absent from the model but present on the first three-dimensional object 102. In some situations, the volume dimensioning system 110 may provide a video, image, or display data output to the host computer 150 providing a sequence of views of the fitted first three-dimensional packaging wireframe model 106 such that the first three-dimensional packaging wireframe model 106 appears to rotate about one or more axes when viewed on the one or more display devices 156.


Responsive to the rotation of the first three-dimensional packaging wireframe model 106 on the one or more display devices 156, the system 100 can generate an output, for example a prompt displayed on the one or more display devices 156, requesting a user to provide an input confirming the accuracy of or noting any deficiencies present in the first three-dimensional packaging wireframe model 106.


At 704, additional image data in the form of a second point of view of the first three-dimensional object 102 that exposes the previously hidden or obscured feature on the first three-dimensional object 102 may be provided to the one or more processors 120. Image data may be acquired or captured from a second point of view in a variety of ways. For example, in some instances, the image sensor 114 may be automatically or manually displaced about the first three-dimensional object 102 to provide a second point of view that includes the previously hidden feature. Alternatively or additionally, a second image sensor (not shown in FIGS. 1A, 1B) disposed remote from the system 100 may provide a second point of view of the first three-dimensional object 102. Alternatively or additionally, the system 100 may generate an output, for example an output visible on the one or more display devices 156 providing guidance or directions to the user to physically rotate the first three-dimensional object 102 to provide a second point of view to the image sensor 114. Alternatively or additionally, the system 100 may generate a signal output, for example a signal output from the host computer 150 that contains instructions to automatically rotate a turntable upon which the first three-dimensional object 102 has been placed to provide a second point of view of the first three-dimensional object 102 to the image sensor 114.


At 706, responsive to the receipt of image data from the image sensor as viewed from the second point of view of the first three-dimensional object 102, the one or more processors 120 can detect a portion of the first three-dimensional object 102 that was hidden in the first point of view. Such detection can be accomplished, for example, by tracking the feature points on the first three-dimensional object 102 visible in the first point of view as the first point of view is transitioned to the second point of view. Identifying new feature points appearing in the second point of view that were absent from the first point of view provides an indication to the one or more processors 120 of the existence of a previously hidden or obscured portion or feature of the first three-dimensional object 102.
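
One plausible realization of this detection, sketched below, compares the matchable feature descriptors observed from the two points of view; descriptors present only in the second view flag a previously hidden portion. All names here are hypothetical and not drawn from the patent:

```python
def detect_hidden_portion(features_view1, features_view2, match):
    """Return feature points visible only in the second point of view.

    features_view1 / features_view2: lists of (descriptor, xyz) tuples
    tracked across the transition between viewpoints. `match` is a
    predicate deciding whether two descriptors describe the same
    physical point on the object.
    """
    new_points = []
    for desc2, xyz2 in features_view2:
        if not any(match(desc1, desc2) for desc1, _ in features_view1):
            new_points.append(xyz2)
    # A non-empty result indicates a portion of the object that was
    # hidden or obscured in the first point of view.
    return new_points
```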


At 708, responsive to the detection of the previously hidden or obscured portion or feature of the first three-dimensional object 102, the one or more processors 120 can modify one or more originally selected geometric primitives 104 (e.g., by stretching the geometric primitive 104) to incorporate the previously hidden or obscured feature, or alternatively can select one or more second geometric primitives 104 that when combined with the one or more previously selected geometric primitives 104 encompasses the previously hidden or obscured feature on the first three-dimensional object 102.


In some instances, the one or more processors 120 may modify the one or more originally selected geometric primitives 104 to encompass the feature hidden or obscured in the first point of view, but visible in the second point of view. The three-dimensional packaging wireframe model 106 can then be scaled and fitted to the modified originally selected geometric primitive 104 to encompass the feature present on the first three-dimensional object 102. For example, a first three-dimensional packaging wireframe model 106 may be fitted to a rectangular prismatic three-dimensional object 102, and a hidden feature in the form of a smaller rectangular prismatic solid may be located on the rear face of the rectangular prismatic three-dimensional object 102. The one or more processors 120 may, in such a situation, modify the originally selected geometric primitive 104 to encompass the smaller rectangular prismatic solid. The one or more processors 120 can then scale and fit the first three-dimensional packaging wireframe model 106 to encompass the entire first three-dimensional object 102 by simply stretching the originally fitted rectangular three-dimensional packaging wireframe model 106.
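
For a rectangular primitive, the "stretching" modification reduces to extending an axis-aligned bounding box until it also encloses the newly visible points. A minimal sketch, assuming boxes represented by their minimum and maximum corners (an assumption for illustration only):

```python
import numpy as np

def stretch_box_primitive(box_min, box_max, new_points):
    """Extend an axis-aligned box so it encompasses newly visible points.

    box_min, box_max: (3,) arrays giving the original box corners.
    new_points: (N, 3) array of points revealed from the second view.
    Returns the corners of the stretched box.
    """
    pts = np.asarray(new_points, dtype=float)
    stretched_min = np.minimum(box_min, pts.min(axis=0))
    stretched_max = np.maximum(box_max, pts.max(axis=0))
    return stretched_min, stretched_max
```

The stretched corners would then define the enlarged rectangular three-dimensional packaging wireframe model 106.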


In other instances, the one or more processors 120 may alternatively select one or more second geometric primitives 104 to encompass the smaller rectangular solid feature and fit a second three-dimensional packaging wireframe model 106 to the second geometric primitive 104. For example, when the three-dimensional object 102 is a guitar-shaped object, the first point of view, may expose only the body portion of the guitar-shaped object to the image sensor 114 while the neck portion remains substantially hidden from the first point of view of the image sensor 114. Upon receiving image data from the second point of view, the one or more processors 120 can detect an additional feature that includes the neck portion of the guitar-shaped object. In response, the one or more processors 120 may select a second geometric primitive 104 and use the selected second geometric primitive 104 to fit a second three-dimensional packaging wireframe model 106 about the neck portion of the guitar-shaped object.


At 710, the one or more processors 120 can generate a video, image, or display data output that includes image data of the one or more three-dimensional packaging wireframe models 106 fitted to the first three-dimensional object 102, including features visible from the first and second points of view of the image sensor 114. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 that includes an image concurrently or simultaneously displaying the first three-dimensional object 102 using image data from the camera 152 and the one or more three-dimensional packaging wireframe models 106 fitted to respective portions of the first three-dimensional object 102 by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting the first and second portions of the first three-dimensional object 102 along with one or more geometric primitives 104 and the scaled and fitted three-dimensional packaging wireframe model 106 may also be provided on the one or more display devices 156.



FIG. 8 shows a method 800 extending from method 200 and describing one or more additional features of an example volume dimensioning system 100, such as the system depicted in FIGS. 1A and 1B. The field-of-view 116 of the image sensor 114 may contain a multitude of potential first three-dimensional objects 102, yet the only three-dimensional objects of interest to a user may have a particular size or shape. For example, the field-of-view 116 of the image sensor 114 may be filled with three bowling balls and a single box which represents the desired first three-dimensional object 102. In such an instance, the one or more processors 120 may select four geometric primitives 104 (three associated with the bowling balls and one associated with the box) and fit three-dimensional packaging wireframe models 106 to each of the three bowling balls and the single box. Rather than laboriously deleting the three spherical wireframes fitted to the bowling balls, in some embodiments, the one or more processors 120 may receive an input designating a particular geometric primitive shape as indicating the desired first three-dimensional object 102 within the field-of-view 116 of the image sensor 114.


In the previous example, the one or more processors 120 may receive an input indicating a rectangular prismatic geometric primitive as designating the particular shape of the desired first three-dimensional object. This allows the one or more processors 120 to automatically eliminate the three bowling balls within the field-of-view of the image sensor 114 as potential first three-dimensional objects 102. Such an input, when received by the one or more processors 120, effectively provides a screen or filter for the one or more processors 120, eliminating those three-dimensional objects 102 having geometric primitives not matching the indicated desired geometric primitive received by the one or more processors 120.


At 802, the one or more processors 120 receive an input indicative of a desired geometric primitive 104 useful in selecting, screening, determining or otherwise distinguishing the first three-dimensional object 102 from other objects that are present in the field-of-view 116 of the image sensor 114. The one or more processors 120 may receive the input via an I/O device 166 such as a mouse or keyboard, or in a preferred embodiment via a resistive or capacitive touch-based input device which is part of a touch-screen display device 156 communicably connected to the host computer system 150. In some instances, text or graphical icons indicating various geometric primitive shapes may be provided in the form of a list, menu, or selection window to the user.


At 804, responsive to the receipt of the selected geometric primitive 104, the one or more processors 120 search through the three-dimensional objects 102 appearing in the field-of-view 116 of the image sensor 114 to locate only those first three-dimensional objects 102 having a shape that is substantially similar to or matches the user selected geometric primitive 104.
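
Functionally, this screening step is a filter over the candidate objects, keeping only those whose best-fitting primitive matches the user's selection. A minimal sketch, assuming a shape classifier is available (the classifier itself, and all names here, are hypothetical):

```python
def screen_objects(detected_objects, desired_primitive, classify):
    """Keep only objects whose fitted primitive matches the selection.

    detected_objects: iterable of point clouds, one per candidate object
    in the field-of-view.
    desired_primitive: a label such as "box", "sphere", or "cylinder".
    classify: function mapping a point cloud to a primitive label.
    """
    return [obj for obj in detected_objects
            if classify(obj) == desired_primitive]
```

In the bowling-ball example, a classifier returning "sphere" for the three balls and "box" for the carton would leave only the carton when the user selects a rectangular prismatic primitive.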



FIG. 9 shows a method 900 extending from method 200 and describing one or more additional features of an example volume dimensioning system 100, such as the system depicted in FIGS. 1A and 1B. After fitting the first three-dimensional packaging wireframe model 106 to the first three-dimensional object 102, the one or more processors 120 can determine the packaging dimensions and the volume of the first three-dimensional object 102 responsive to receipt of an input indicative of user acceptance of the fitted first three-dimensional packaging wireframe model 106. The calculated packing dimensions are based on dimensional and volumetric information acquired from the fitted first three-dimensional packaging wireframe model 106 and reflect not only the dimensions of the three-dimensional object 102 itself but also any additional packaging, boxing, crating, etc., necessary to safely and securely ship the first three-dimensional object 102.


At 902, the one or more processors 120 receive an input indicative of user acceptance of the first three-dimensional packaging wireframe model 106 fitted to the first three-dimensional object 102 by the one or more processors 120. The one or more processors 120 can generate a video, image, or display data output that includes image data of the three-dimensional packaging wireframe model 106 after scaling and fitting to the first three-dimensional object 102, and after any modifications necessary to accommodate any specialized shipping instructions provided by the user.


The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 including image data depicting a simultaneous or concurrent image of the first three-dimensional object 102 using image data from the camera 152 and the three-dimensional packaging wireframe model 106 fitted to the first three-dimensional object 102 by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting the first three-dimensional object 102 along with one or more scaled and fitted geometric primitives 104 and the scaled and fitted three-dimensional packaging wireframe model 106 may also be provided on the one or more display devices 156.


Responsive to the display of at least the first three-dimensional object 102 and the first three-dimensional packaging wireframe model 106, the system 100 may generate a signal output, for example a signal output from the host computer 150 containing a query requesting that the user provide an input indicative of an acceptance of the fitting of the first three-dimensional packaging wireframe model 106 to the first three-dimensional object 102.


At 904, responsive to user acceptance of the fitting of the first three-dimensional packaging wireframe model 106 to the first three-dimensional object 102, the one or more processors 120 determine the dimensions and calculate the volume of the first three-dimensional object 102 based at least in part on the three-dimensional packaging wireframe model 106. Any of a large variety of techniques or algorithms for determining a volume of a bounded three-dimensional surface may be employed by the system 100 to determine the dimensions or volume of the first three-dimensional object 102.
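
One standard technique for the volume of a closed, bounded surface such as the fitted wireframe model is the signed-tetrahedron sum derived from the divergence theorem. A minimal sketch, under the assumption (ours, not the patent's) that the wireframe is available as a consistently oriented triangle mesh:

```python
import numpy as np

def wireframe_volume(vertices, triangles):
    """Volume enclosed by a closed triangulated wireframe surface.

    vertices: (V, 3) array of vertex coordinates.
    triangles: (T, 3) integer array of vertex indices with consistent
    outward orientation. Sums the signed volumes of the tetrahedra
    formed by each face and the origin.
    """
    v = np.asarray(vertices, dtype=float)
    total = 0.0
    for i, j, k in triangles:
        total += np.dot(v[i], np.cross(v[j], v[k])) / 6.0
    return abs(total)
```

For simple primitives such as boxes or cylinders, closed-form formulas (length x width x height, pi x r^2 x h) would of course serve equally well.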



FIG. 10 shows a method 1000 extending from method 200 and describing one or more additional features of an example volume dimensioning system 100, such as the system depicted in FIGS. 1A and 1B. In some instances, the one or more processors 120 may select one or more inapplicable geometric primitives 104 or improperly fit a first three-dimensional packaging wireframe model 106 about the first three-dimensional object 102. In such an instance, rather than modify the first three-dimensional packaging wireframe model 106, a more expeditious solution may be to delete the first three-dimensional packaging wireframe model 106 fitted by the one or more processors 120 in its entirety and request the one or more processors 120 to select one or more different geometric primitives 104 and fit a second three-dimensional packaging wireframe model 106 about the first three-dimensional object 102.


At 1002, the one or more processors 120 receive an input indicative of a rejection of the first three-dimensional packaging wireframe model 106 fitted by the one or more processors 120 about the first three-dimensional object 102. The one or more processors 120 may receive the input via an I/O device 166 such as a mouse or keyboard, or in a preferred embodiment via a resistive or capacitive touch-based input device which is part of a touch-screen display device 156 communicably connected to the host computer system 150.


At 1004, responsive to the receipt of the rejection of the first three-dimensional packaging wireframe model 106 fitted about the first three-dimensional object 102, the one or more processors 120 select one or more second geometric primitives 104 and, based on the one or more second selected geometric primitives 104, fit a second three-dimensional packaging wireframe model 106 about the first three-dimensional object 102.


At 1006, the one or more processors 120 can generate a video, image, or display data output that includes image data of the second three-dimensional packaging wireframe model 106 fitted to the first three-dimensional object 102. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 that includes an image contemporaneously or simultaneously depicting the first three-dimensional object 102 using image data from the camera 152 and the second three-dimensional packaging wireframe model 106 fitted by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting an image of the first three-dimensional object 102 along with the one or more second geometric primitives 104 and the scaled and fitted three-dimensional packaging wireframe model 106 may also be provided on the one or more display devices 156.



FIG. 11 shows a method 1100 extending from method 200 and describing one or more additional features of an example volume dimensioning system 100, such as the system depicted in FIGS. 1A and 1B. In some instances, the one or more processors 120 may receive as an input a value indicating a selection of a second three-dimensional packaging wireframe model 106 for fitting about the virtual representation of the first three-dimensional object 102. The one or more processors 120 can fit the second three-dimensional packaging wireframe model about the first three-dimensional object 102. Such an input can be useful in expediting the fitting process when the appropriate geometric primitive or second three-dimensional packaging wireframe model is known in advance.


At 1102, the one or more processors 120 receive an input indicative of a selection of a second geometric primitive 104 as representative of the first three-dimensional object 102 or a second three-dimensional packaging wireframe model 106 for fitting about the first three-dimensional object 102. In some instances, the one or more processors 120 receive an input indicative of one or more second geometric primitives 104 that are different from the one or more first geometric primitives 104 used by the one or more processors 120 to fit the first three-dimensional packaging wireframe model 106. The one or more processors 120 may receive the input via an I/O device 166 such as a mouse or keyboard, or in a preferred embodiment via a resistive or capacitive touch-based input device which is part of a touch-screen display device 156 communicably connected to the host computer system 150. In at least some instances, the input is provided by selecting a text or graphic icon corresponding to the second geometric primitive 104 or an icon corresponding to the second three-dimensional packaging wireframe model 106 from a list, menu or selection window containing a plurality of such icons.


At 1104, responsive to the selection of the second geometric primitive 104 or the second three-dimensional packaging wireframe model, the one or more processors 120 can fit the second three-dimensional packaging wireframe model 106 to the first three-dimensional object 102.


At 1106, the one or more processors 120 can generate a video, image, or display data output that includes image data of the second three-dimensional packaging wireframe model 106 fitted to the first three-dimensional object 102. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 that includes an image concurrently or simultaneously depicting the first three-dimensional object 102 using image data from the camera 152 and the second three-dimensional packaging wireframe model 106 fitted by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting an image of the first three-dimensional object 102 along with the one or more geometric primitives 104 and the scaled and fitted three-dimensional packaging wireframe model 106 may also be provided on the one or more display devices 156.



FIG. 12 shows a method 1200 extending from method 200 and describing one or more additional features of an example volume dimensioning system 100, such as the system depicted in FIGS. 1A and 1B. In some instances, all or a portion of the first three-dimensional object 102 may be too small to easily view within the confines of the one or more display devices 156. The one or more processors 120 may receive an input indicative of a region of interest containing all or a portion of the first three-dimensional object 102. In response to the input, the one or more processors 120 may ascertain whether the first three-dimensional packaging wireframe model 106 included within the indicated region of interest has been properly fitted about the first three-dimensional object 102. Such an input can be useful in increasing the accuracy of the three-dimensional packaging wireframe model 106 fitting process, particularly when all or a portion of the first three-dimensional object 102 is small in size and all or a portion of the fitted first three-dimensional packaging wireframe model 106 is difficult to discern.


At 1202, the one or more processors 120 receive an input indicative of a region of interest lying in the field-of-view 116 of the image sensor 114. The one or more processors 120 may receive the input via an I/O device 166 such as a mouse or keyboard, or in a preferred embodiment via a resistive or capacitive touch-based input device which is part of a touch-screen display device 156 communicably connected to the host computer system 150.


At 1204, responsive to the receipt of the input indicative of a region of interest in the field-of-view 116 of the image sensor 114, the one or more CPUs 160 enlarge the indicated region of interest and provide a video, image, or display data output including the enlarged region of interest to the one or more display devices 156 on the host computer system 150. In some situations, the one or more processors 120 may provide the video, image, or display data output including the enlarged region of interest to the one or more display devices 156 on the host computer system 150.
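
In essence, this step crops the captured data to the indicated region before enlargement and redisplay. A minimal, hypothetical sketch for a rectangular region of interest over a depth map:

```python
import numpy as np

def crop_to_roi(depth_map, roi):
    """Crop a depth map to a user-indicated rectangular region of interest.

    depth_map: 2-D array of per-pixel depth values.
    roi: (row0, row1, col0, col1) in pixel coordinates.
    The cropped view can be enlarged for display and passed to the
    primitive-fitting routine in place of the full field-of-view.
    """
    row0, row1, col0, col1 = roi
    return np.asarray(depth_map)[row0:row1, col0:col1]
```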


At 1206, the one or more processors 120 automatically select a geometric primitive 104 based upon the features of the first three-dimensional object 102 included in the enlarged region of interest for use in fitting the first three-dimensional packaging wireframe model 106 about all or a portion of the first three-dimensional object 102. Alternatively, the one or more processors 120 may receive an input indicative of a geometric primitive 104 to fit the first three-dimensional packaging wireframe model 106 about all or a portion of the first three-dimensional object 102 depicted in the enlarged region of interest. The one or more processors 120 may receive the input via an I/O device 166 such as a mouse or keyboard, or in a preferred embodiment via a resistive or capacitive touch-based input device which is part of a touch-screen display device 156 communicably connected to the host computer system 150. In at least some instances, the input is provided to the one or more processors 120 by selecting a text or graphic icon corresponding to the geometric primitive from a menu, list or selection window containing a plurality of such icons.



FIG. 13 shows a method 1300 depicting the operation of an example volume dimensioning system 100, such as the system depicted in FIG. 1. In some embodiments, the first three-dimensional object 102 may have a complex or non-uniform shape that is best represented using two or more geometric primitives 104. In such instances, a first geometric primitive 104 may be used by the one or more processors 120 to fit a first three-dimensional packaging wireframe model 106 about a first portion of the first three-dimensional object 102. Similarly, a second geometric primitive 104 may be used by the one or more processors 120 to fit a second three-dimensional packaging wireframe model 106 about a second portion of the first three-dimensional object 102. In at least some embodiments, the first and second geometric primitives 104 may be autonomously selected by the one or more processors 120. Permitting the one or more processors 120 to select two or more geometric primitives 104 and fit a corresponding number of three-dimensional packaging wireframe models 106 about a corresponding number of portions of the three-dimensional object 102 may provide the user with a more accurate estimate of the dimensions or volume of the packaging encompassing the first three-dimensional object 102.


At 1302, the image sensor 114 captures or acquires three-dimensional image data which is communicated to the one or more non-transitory, machine-readable storage media 118 via one or more data busses 126. The three-dimensional image data captured by the image sensor 114 includes a first three-dimensional object 102 disposed within the field-of-view 116 of the image sensor 114. The three-dimensional image data captured by the image sensor 114 may include depth data providing a depth map and intensity data providing an intensity image of the field-of-view 116 of the image sensor 114. At least a portion of the three-dimensional image data received by the one or more non-transitory, machine-readable storage media 118 is communicated to or otherwise accessed by the one or more processors 120 in order to select one or more geometric primitives 104 preparatory to fitting a three-dimensional packaging wireframe model 106 about all or a portion of the first three-dimensional object 102.


At 1304, based in whole or in part on the three-dimensional image data received from the image sensor 114, the one or more processors 120 determine a number of features on the first three-dimensional object 102 that appear in the three-dimensional image data. The features may include any point, edge, or other discernible structure on the first three-dimensional object 102 and detectable in the image represented by the three-dimensional image data. For example, one or more features may correspond to a three-dimensional point on the three-dimensional object 102 that is detectable in a depth map containing the first three-dimensional object 102, in an intensity image in which the first three-dimensional object 102 appears, or in both a depth map and an intensity image in which the first three-dimensional object 102 is represented. The identified features may include boundaries or defining edges of the first three-dimensional object, for example corners, arcs, lines, edges, angles, radii, and similar characteristics that define all or a portion of the external boundary of the first three-dimensional object 102.
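
Occluding edges and corners of this kind can be recovered directly from the depth map, since they coincide with strong depth discontinuities. A minimal sketch, with the gradient threshold chosen arbitrarily for illustration:

```python
import numpy as np

def depth_edge_features(depth_map, threshold=0.05):
    """Return (row, col) coordinates of strong depth discontinuities.

    depth_map: 2-D array of depth values (e.g., in meters). Pixels whose
    local depth gradient magnitude exceeds `threshold` are treated as
    edge features, such as the silhouette of a box against its
    background.
    """
    gy, gx = np.gradient(np.asarray(depth_map, dtype=float))
    magnitude = np.hypot(gx, gy)
    rows, cols = np.nonzero(magnitude > threshold)
    return list(zip(rows.tolist(), cols.tolist()))
```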


At 1306, based at least in part on the features identified in 1304, the one or more processors 120 select one or more geometric primitives 104 having the same or differing shapes to encompass substantially all of the identified features of the first three-dimensional object 102. Dependent upon the overall number, arrangement, and complexity of the one or more selected geometric primitives 104, the one or more processors 120 may autonomously determine that a plurality of three-dimensional packaging wireframe models 106 are useful in fitting an overall three-dimensional packaging wireframe model 106 to the relatively complex three-dimensional object 102. The one or more processors 120 may determine that a first three-dimensional packaging wireframe model 106 can be fitted to a first portion of the first three-dimensional object 102 and a second three-dimensional packaging wireframe model 106 can be fitted to a second portion of the first three-dimensional object 102.


At 1308, the one or more processors 120 scale and fit the first three-dimensional packaging wireframe model 106 to the one or more geometric primitives 104 encompassing the first portion of the first three-dimensional object 102. The scaled and fitted first three-dimensional packaging wireframe model 106 encompasses substantially all of the first portion of the first three-dimensional object 102.


At 1310, the one or more processors 120 fit the second three-dimensional packaging wireframe model 106 to the one or more geometric primitives 104 encompassing the second portion of the first three-dimensional object 102. The scaled and fitted second three-dimensional packaging wireframe model 106 encompasses substantially all of the second portion of the first three-dimensional object 102.


At 1312, the one or more processors 120 can generate a video, image, or display data output that includes image data of the first and second three-dimensional packaging wireframe models 106 as fitted to the first and second portions of the first three-dimensional object 102, respectively. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs viewable on the one or more display devices 156 that includes an image simultaneously or contemporaneously depicting the first and second portions of the first three-dimensional object 102 using image data from the camera 152 and the respective first and second three-dimensional packaging wireframe models 106 fitted to each of the first and second portions by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting an image of the first and second portions of the first three-dimensional object 102 along with the one or more respective first and second geometric primitives 104 and the respective scaled and fitted first and second three-dimensional packaging wireframe models 106 may also be provided on the one or more display devices 156.



FIG. 14 shows a method 1400 depicting the operation of an example volume dimensioning system 100, such as the system depicted in FIG. 1. In some embodiments, the initial or first point of view of the image sensor 114 may not provide sufficient feature data to the one or more processors 120 to determine the extent, scope, or boundary of the first three-dimensional object 102. For example, if the first three-dimensional object 102 is a cubic box and only the two-dimensional front surface of the cubic box is visible to the image sensor 114, the image data provided by the image sensor 114 to the one or more processors 120 is insufficient to determine the depth (i.e., the extent) of the cubic box, and therefore the one or more processors 120 do not have sufficient data regarding the features of the three-dimensional object 102 to select one or more geometric primitives 104 as representative of the first three-dimensional object 102. In such instances, it is necessary to provide the one or more processors 120 with additional data gathered from at least a second point of view to enable selection of one or more appropriate geometric primitives 104 for fitting a first three-dimensional packaging wireframe model 106 that encompasses the first three-dimensional object 102.


At 1402, the image sensor 114 captures or acquires three-dimensional image data which is communicated to the one or more non-transitory, machine-readable storage media 118 via one or more data busses 126. The three-dimensional image data captured by the image sensor 114 includes a first three-dimensional object 102 disposed within the field-of-view 116 of the image sensor 114. The three-dimensional image data captured by the image sensor 114 may include depth data providing a depth map and intensity data providing an intensity image of the field-of-view of the image sensor 114. At least a portion of the three-dimensional image data received by the one or more non-transitory, machine-readable storage media 118 is communicated to or otherwise accessed by the one or more processors 120 in order to select one or more geometric primitives 104 to fit a three-dimensional packaging wireframe model 106 that encompasses the first three-dimensional object 102.


At 1404, based on the image data received from the image sensor 114, the one or more processors 120 determine that an insufficient number of features on the first three-dimensional object 102 are present within the first point of view of the image sensor 114 to permit the selection of one or more geometric primitives 104 to fit the first three-dimensional packaging wireframe model 106.


At 1406, responsive to the determination that an insufficient number of features are present within the first point of view of the image sensor 114, the one or more processors 120 generate an output indicative of the lack of an adequate number of features within the first point of view of the image sensor 114. In some instances, the output provided by the one or more processors 120 can indicate a possible second point of view able to provide a view of a sufficient number of additional features on the first three-dimensional object 102 to permit the selection of one or more appropriate geometric primitives representative of the first three-dimensional object 102.
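
The sufficiency determination can be as simple as testing whether the visible feature points span three dimensions: the corners of a single visible box face are coplanar and therefore cannot constrain the box's depth. A sketch under that assumption, with an arbitrary tolerance:

```python
import numpy as np

def has_sufficient_features(feature_points, tol=1e-3):
    """True if the 3-D feature points can constrain a box-like primitive.

    feature_points: (N, 3) array. Points that are (nearly) coplanar, as
    when only the front face of a cubic box is visible, are judged
    insufficient; the system should then prompt for a second point of
    view.
    """
    pts = np.asarray(feature_points, dtype=float)
    if len(pts) < 4:
        return False
    centered = pts - pts.mean(axis=0)
    # Smallest singular value near zero means the points lie in a plane.
    singular_values = np.linalg.svd(centered, compute_uv=False)
    return singular_values[-1] > tol
```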


In some situations, the output generated by the one or more processors 120 may cause a second image sensor positioned remote from the image sensor 114 to transmit image data from a second point of view to the one or more non-transitory, machine-readable storage media 118. In some instances, the second image sensor can transmit depth data related to a depth map of the first three-dimensional object 102 from the second point of view or intensity data related to an intensity image of the first three-dimensional object 102 from the second point of view. The image data provided by the second image sensor is used by the one or more processors 120 in identifying additional features on the first three-dimensional object 102 that are helpful in selecting one or more appropriate geometric primitives representative of the first three-dimensional object 102.


In some situations, the output generated by the one or more processors 120 may include audio, visual, or audio/visual indicator data used by the host computer 150 to generate an audio output via one or more I/O devices 166 or to generate a visual output on the one or more display devices 156 that designate a direction of movement of the image sensor 114 or a direction of movement of the first three-dimensional object 102 that will permit the image sensor 114 to obtain a second point of view of the first three-dimensional object 102. The image data provided by the image sensor 114 from the second point of view is used by the one or more processors 120 in identifying additional features on the first three-dimensional object 102 that are helpful in selecting one or more appropriate geometric primitives representative of the first three-dimensional object 102.



FIG. 15 depicts an illustrative volume dimensioning system 110 communicably coupled to a host computer 150 via one or more busses 112. The volume dimensioning system 110 is equipped with an image sensor 114 having a field-of-view 116. The host computer 150 is equipped with a camera 152 having a field-of-view 154 and a display device 156.


An interior space of a partially or completely empty container or trailer 1503 is depicted as forming a three-dimensional void 1502 falling within the field-of-view 116 of the image sensor 114 and the field-of-view 154 of the camera 152. An image of the three-dimensional void 1502 is depicted on the one or more display devices 156. The one or more processors 120 can select one or more geometric primitives 1504 corresponding to the first three-dimensional void 1502 preparatory to scaling and fitting a three-dimensional receiving wireframe model 1506 within the first three-dimensional void 1502. The scaled and fitted three-dimensional receiving wireframe model 1506 is depicted within the three-dimensional void 1502. In some embodiments, the scaled and fitted three-dimensional receiving wireframe model 1506 may be shown in a contrasting or bright color on the one or more display devices 156.


The scaled, fitted three-dimensional receiving wireframe model 1506 may be generated by the host computer 150 or, more preferably, may be generated by the volume dimensioning system 110. The image on the display device 156 is provided in part using the image data acquired by the camera 152 coupled to the host computer system 150, which provides an image of the three-dimensional void 1502, and in part using the scaled and fitted three-dimensional receiving wireframe model 1506 provided by the volume dimensioning system 110. Data, including visible image data provided by the camera 152 and depth map data and intensity image data provided by the image sensor 114, is exchanged between the host computer 150 and the volume dimensioning system 110 via the one or more busses 112. In some instances, the volume dimensioning system 110 and the host computer system 150 may be partially or completely incorporated within the same housing, for example a handheld computing device or a self-service kiosk.



FIG. 16 shows a method 1600 depicting the operation of an example volume dimensioning system 1500, such as the system depicted in FIG. 15. In some instances, the subject to be dimensioned is not a physical three-dimensional object 102 at all, but is instead defined by the absence of one or more physical objects, that is, by a three-dimensional void 1502. Such an instance can occur, for example, when the system 100 is used to determine the available dimensions or volume remaining within an empty or partially empty shipping container, trailer, box, receptacle, or the like. For a carrier, the ability to determine with a reasonable degree of accuracy the available dimensions or volume within a particular three-dimensional void 1502 provides the ability to optimize the placement of packaged physical three-dimensional objects 102 within the three-dimensional void 1502. Advantageously, when the dimensions or volumes of the packaged three-dimensional objects 102 intended for placement within the three-dimensional void 1502 are known, for example when a volume dimensioning system 100 as depicted in FIG. 1 has been used to determine the dimensions or volume of the three-dimensional packaging wireframe models 106 corresponding to packaged three-dimensional objects 102, the ability to determine the dimensions or volume available within a three-dimensional void 1502 can assist in optimizing the load pattern of the three-dimensional objects 102 within the three-dimensional void 1502.


At 1602, the image sensor 114 captures or acquires three-dimensional image data of a first three-dimensional void 1502 within the field-of-view 116 of the image sensor 114. Image data captured or acquired by the image sensor 114 is communicated to the one or more non-transitory, machine-readable storage media 118 via one or more data busses 126. The three-dimensional image data captured by the image sensor 114 includes a first three-dimensional void 1502 disposed within the field-of-view 116 of the image sensor 114. The three-dimensional image data captured by the image sensor 114 may include depth data providing a depth map and intensity data providing an intensity image of the field-of-view of the image sensor 114. At least a portion of the three-dimensional image data received by the one or more non-transitory, machine-readable storage media 118 is communicated to or otherwise accessed by the one or more processors 120 in order to select one or more geometric primitives 1504 preparatory to fitting a first three-dimensional receiving wireframe model 1506 within all or a portion of the first three-dimensional void 1502.


At 1604, based in whole or in part on the image data captured by the image sensor 114, stored in the one or more non-transitory, machine-readable storage media 118, and communicated to the one or more processors 120, the one or more processors 120 determine a number of features related to or associated with the first three-dimensional void 1502 present in the image data received by the one or more processors 120. The features may include any point on the first three-dimensional void 1502 detectable in the image data provided by the image sensor 114. For example, one or more features may correspond to a point on the first three-dimensional void 1502 that is detectable in a depth map containing the first three-dimensional void 1502, an intensity image containing the three-dimensional void 1502, or both a depth map and an intensity image containing the first three-dimensional void 1502. The identified features include boundaries or defining edges of the first three-dimensional void 1502, for example corners, arcs, lines, edges, angles, radii, and similar characteristics that define all or a portion of one or more boundaries defining the first three-dimensional void 1502.


At 1606, based at least in part on the features identified in 1604, the one or more processors 120 select one or more geometric primitives 1504 and fit the selected geometric primitives 1504 within substantially all of the features identified by the one or more processors 120 as defining all or a portion of one or more boundaries of the first three-dimensional void 1502. The one or more selected geometric primitives 1504 are used by the one or more processors 120 to fit a three-dimensional receiving wireframe model 1506 within all or a portion of the first three-dimensional void 1502.


After fitting the first three-dimensional receiving wireframe model 1506 within the three-dimensional void 1502, the one or more processors 120 determine, based on the first three-dimensional receiving wireframe model 1506, the available dimensions or volume within the first three-dimensional void 1502.
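
For a rectangular container or trailer, the receiving wireframe model and the available volume follow directly from the interior boundary features of the void. A minimal sketch, assuming (for illustration) that the interior corner points have already been identified:

```python
import numpy as np

def fit_receiving_wireframe(corner_points):
    """Fit a box-shaped receiving wireframe within a rectangular void.

    corner_points: (N, 3) array of interior boundary features, e.g. the
    corners of an empty trailer. Returns (extents, volume) where the
    extents are the interior length, width, and height available for
    loading.
    """
    pts = np.asarray(corner_points, dtype=float)
    extents = pts.max(axis=0) - pts.min(axis=0)
    volume = float(np.prod(extents))
    return extents, volume
```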


At 1608, the one or more processors 120 can generate a video, image, or display data output that includes image data of the first three-dimensional receiving wireframe model 1506 as fitted to the first three-dimensional void 1502. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 including an image concurrently or simultaneously depicting the first three-dimensional void 1502 using image data from the camera 152 and the first three-dimensional receiving wireframe model 1506 fitted therein by the one or more processors 120. In some instances, an image concurrently or simultaneously depicting an image of the first three-dimensional void 1502 along with the one or more geometric primitives 1504 and the scaled and fitted three-dimensional receiving wireframe model 1506 may also be provided on the one or more display devices 156.



FIG. 17 shows a method 1700 extending from method 1600 and describing one or more additional features of an example volume dimensioning system 1500, such as the system depicted in FIG. 15. The one or more processors 120 fit the first three-dimensional receiving wireframe model 1506 within the first three-dimensional void 1502 and determine the dimensions or volume available within the first three-dimensional void 1502. In some instances, the one or more processors 120 can receive data, for example via the host computer 150, that includes volumetric or dimensional data associated with one or more three-dimensional objects 102.


For example, where the first three-dimensional void 1502 corresponds to the available volume in a shipping container 1503 destined for Seattle, the one or more processors 120 may receive volumetric or dimensional data associated with a number of three-dimensional objects 102 for shipment to Seattle using the shipping container 1503. Using the dimensions or volume of the first three-dimensional void 1502, the dimensions of each of the number of three-dimensional objects 102, and any specialized handling instructions (e.g., fragile objects, fragile surfaces, top-load only, etc.), the one or more processors 120 can calculate a load pattern including each of the number of three-dimensional objects 102 that accommodates any user-specified specialized shipping requirements and also specifies the placement or orientation of each of the number of three-dimensional objects 102 within the three-dimensional void 1502 such that the use of the available volume within the container 1503 is optimized.


At 1702, the one or more processors 120 can receive an input, for example via the host computer system 150, that contains dimensional or volumetric data associated with each of a number of three-dimensional objects 102 that are intended for placement within the first three-dimensional void 1502. In some instances, at least a portion of the dimensional or volumetric data associated with each of a number of three-dimensional objects 102 can be provided by the volume dimensioning system 100. In other instances, at least a portion of the dimensional or volumetric data provided to the one or more processors 120 can be based on three-dimensional packaging wireframe models 106 fitted to each of the three-dimensional objects 102. In some instances, the dimensional or volumetric data associated with a particular three-dimensional object 102 can include one or more user-supplied specialized shipping requirements (e.g., fragile surfaces, top-load items, “this side up” designation, etc.).


At 1704, based in whole or in part upon the received dimensional or volumetric data, the one or more processors 120 can determine the position or orientation for each of the number of three-dimensional objects 102 within the first three-dimensional void 1502. The position or location of each of the number of three-dimensional objects 102 can take into account the dimensions of the object, the volume of the object, any specialized shipping requirements associated with the object, and the available dimensions or volume within the first three-dimensional void 1502. In some instances, the volume dimensioning system 1500 can position or orient the number of three-dimensional objects 102 within the first three-dimensional void 1502 to minimize empty space within the three-dimensional void 1502.
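
Load-pattern calculation of this kind is a three-dimensional packing problem. The toy heuristic below illustrates the idea with a single-layer, first-fit-decreasing placement of boxes on the container floor; a real planner would also try rotations, stack layers, and enforce constraints such as top-load-only. Everything here is an illustrative assumption rather than the patented method:

```python
def plan_load_pattern(container_dims, box_dims_list):
    """Greedy, single-layer shelf placement of boxes within a void.

    container_dims: (L, W, H) interior dimensions of the void.
    box_dims_list: list of (l, w, h) box dimensions taken from fitted
    packaging wireframe models; boxes are not rotated, so any
    "this side up" orientation is preserved.
    Returns one (x, y, 0.0) position per box, or None if it was not
    placed.
    """
    L, W, H = container_dims
    positions = [None] * len(box_dims_list)
    # First-fit decreasing: try the largest footprints first.
    order = sorted(range(len(box_dims_list)),
                   key=lambda i: -(box_dims_list[i][0] * box_dims_list[i][1]))
    x = y = row_depth = 0.0
    for i in order:
        bl, bw, bh = box_dims_list[i]
        if bh > H:
            continue  # too tall for the container
        if x + bl > L:  # current row is full; start a new row
            x, y, row_depth = 0.0, y + row_depth, 0.0
        if x + bl > L or y + bw > W:
            continue  # no floor space left for this box
        positions[i] = (x, y, 0.0)
        x += bl
        row_depth = max(row_depth, bw)
    return positions
```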


The one or more processors 120 can generate a video, image, or display data output that includes the three-dimensional packaging wireframes 106 fitted to each of the three-dimensional objects 102 intended for placement within the three-dimensional void 1502. The three-dimensional packaging wireframes 106 associated with some or all of the number of three-dimensional objects 102 may be depicted on the one or more display devices 156 in their final positions and orientations within the three-dimensional receiving wireframe 1506. The video, image, or display output data provided by the one or more processors 120 may be used by the one or more CPUs 160 to generate one or more video, image, or display outputs on the one or more display devices 156 that includes an image concurrently or simultaneously depicting the first three-dimensional void 1502 and all or a portion of the three-dimensional packaging wireframe models 106 fitted within the three-dimensional void 1502 by the one or more processors 120.



FIG. 18 shows a method 1800 depicting the operation of an example volume dimensioning system 100, such as the system depicted in FIG. 1. Recall that in certain instances, a user may provide an input to the volume dimensioning system resulting in the changing of one or more three-dimensional packaging wireframe models 106 fitted to the three-dimensional object 102. In other instances, a user can provide a recommended geometric primitive 104 for use by the one or more processors 120 in fitting a three-dimensional packaging wireframe model 106 about the three-dimensional object 102. In other instances, a user may provide an input to the volume dimensioning system 100 indicating a single three-dimensional object 102 can be broken into a plurality of portions, each of the portions represented by a different geometric primitive 104 and fitted by the one or more processors 120 with a different three-dimensional packaging wireframe model 106.


Over time, the volume dimensioning system 110 may “learn” to automatically perform one or more functions that previously required initiation based on a user input. In one instance, a first three-dimensional object 102 provides a particular pattern of feature points to the one or more processors 120 and a user provides an input selecting a particular geometric primitive 104 for use by the one or more processors 120 in fitting a three-dimensional packaging wireframe model 106 to the three-dimensional object 102. If, in the future, a three-dimensional object 102 provides a similar pattern of feature points, the one or more processors 120 may autonomously select the geometric primitive 104 previously selected by the user for fitting a three-dimensional packaging wireframe model 106 about the three-dimensional object 102.


In another instance, a first three-dimensional object 102 provides a particular pattern of feature points to the one or more processors 120 and a user indicates to the one or more processors 120 that the first three-dimensional object 102 should be apportioned into first and second portions about which respective first and second three-dimensional packaging wireframe models 106 can be fitted. If, in the future, a three-dimensional object 102 provides a similar pattern of feature points, the one or more processors 120 may autonomously apportion the three-dimensional object 102 into multiple portions based on the apportioning provided by the former user.


At 1802 the image sensor 114 captures or acquires three-dimensional image data which is communicated to the one or more non-transitory, machine-readable storage media 118 via one or more data busses 126. The three-dimensional image data captured by the image sensor 114 includes a first three-dimensional object 102 disposed within the field-of-view of the image sensor 114. The three-dimensional image data captured by the image sensor 114 may include depth data providing a depth map and intensity data providing an intensity image of the field-of-view of the image sensor 114. At least a portion of the three-dimensional image data received by the one or more non-transitory, machine-readable storage media 118 is communicated to or otherwise accessed by the one or more processors 120 in order to select one or more geometric primitives 104 for use in fitting a three-dimensional packaging wireframe model 106 encompassing all or a portion of the three-dimensional object 102.


At 1804, based in whole or in part on the three-dimensional image data received from the image sensor 114, the one or more processors 120 determine a number of features on the first three-dimensional object 102 appearing in the three-dimensional image data. The features may include any point, edge, face, surface, or other discernible structure on the first three-dimensional object 102 and detectable in the image represented by the three-dimensional image data. For example, one or more features may correspond to a three-dimensional point on the three-dimensional object 102 that is detectable in a depth map containing the first three-dimensional object 102, in an intensity image in which the first three-dimensional object 102 appears, or in both a depth map and an intensity image in which the first three-dimensional object 102 is represented. The identified features may include boundaries or defining edges of the first three-dimensional object, for example corners, arcs, lines, edges, angles, radii, and similar characteristics that define all or a portion of the external boundary of the first three-dimensional object 102.


At 1806, based at least in part on the features identified in 1804, the one or more processors 120 select one or more geometric primitives 104 from the library. The one or more processors 120 use the selected one or more geometric primitives 104 in constructing a three-dimensional packaging wireframe model 106 that encompasses all or a portion of the first three-dimensional object 102. The three-dimensional packaging wireframe model 106 encompasses substantially all of the features identified in 1804 as defining all or a portion of the first three-dimensional object 102.


Based at least in part on the identified features, the one or more processors 120 may search the library for one or more geometric primitives 104 having features, points, or nodes substantially similar to the spatial arrangement of the identified features, points, or nodes associated with the first three-dimensional object 102. In searching the library, the one or more processors may use one or more appearance-based or feature-based shape recognition or shape selection methods. For example, a large-modelbase, appearance-based method using eigenfaces may be used to select geometric primitives 104 appropriate for fitting to the first three-dimensional object 102.


At 1808, the one or more processors 120 receive an input indicative of a rejection of the first three-dimensional packaging wireframe model 106 fitted by the one or more processors 120 about the first three-dimensional object 102. The one or more processors 120 may receive the input via an I/O device 166 such as a mouse or keyboard, or in a preferred embodiment via a resistive or capacitive touch-based input device which is part of a touch-screen display device 156 communicably connected to the host computer system 150. Responsive to the receipt of the rejection of the first three-dimensional packaging wireframe model 106 fitted about the first three-dimensional object 102, the one or more processors 120 select a second geometric primitive 104 and, based on the second selected geometric primitive 104, fit a second three-dimensional packaging wireframe model 106 about the first three-dimensional object 102.


At 1810, the one or more processors 120 can associate the number, pattern, or spatial relationship of the features identified in 1804 with the second geometric primitive 104 selected by the one or more processors 120. If, in the future, the one or more processors 120 identify a similar number, pattern, or spatial relationship of the features, the one or more processors 120 can autonomously select the second geometric primitive 104 for use in constructing the first three-dimensional packaging wireframe model 106 about the first three-dimensional object 102.
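
This association could be realized by storing a pose-invariant descriptor of the feature pattern alongside the accepted primitive, and consulting that store on future captures before falling back to autonomous selection. A hypothetical sketch, with the descriptor chosen only for illustration:

```python
import numpy as np

class PrimitiveMemory:
    """Remembers which primitive was accepted for similar feature patterns."""

    def __init__(self, tolerance=0.1):
        self.tolerance = tolerance
        self.entries = []  # list of (descriptor, primitive_label)

    @staticmethod
    def describe(feature_points):
        # Hypothetical descriptor: pairwise-distance statistics, which
        # are invariant to rotation and translation of the object.
        pts = np.asarray(feature_points, dtype=float)
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        upper = d[np.triu_indices(len(pts), k=1)]
        return np.array([upper.mean(), upper.std(), upper.max()])

    def record(self, feature_points, primitive_label):
        """Store the pattern of an accepted (e.g., user-corrected) fit."""
        self.entries.append((self.describe(feature_points), primitive_label))

    def suggest(self, feature_points):
        """Return a remembered primitive for a similar pattern, or None."""
        query = self.describe(feature_points)
        for descriptor, label in self.entries:
            if np.linalg.norm(descriptor - query) < self.tolerance:
                return label
        return None
```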


The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, schematics, and examples. Insofar as such block diagrams, schematics, and examples contain one or more functions and/or operations, it will be understood by those skilled in the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, the present subject matter may be implemented via Application Specific Integrated Circuits (ASICs) or programmable gate arrays. However, those skilled in the art will recognize that the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more controllers (e.g., microcontrollers), as one or more programs running on one or more processors (e.g., microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of ordinary skill in the art in light of this disclosure.


Various methods and/or algorithms have been described. Some or all of those methods and/or algorithms may omit some of the described acts or steps, include additional acts or steps, combine acts or steps, and/or perform some acts or steps in a different order than described. Some of the methods or algorithms may be implemented as software routines. Some of the software routines may be called from other software routines. Software routines may execute sequentially or concurrently, and may employ a multi-threaded approach.


In addition, those skilled in the art will appreciate that the mechanisms taught herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment applies equally regardless of the particular type of signal bearing media used to actually carry out the distribution. Examples of nontransitory signal bearing media include, but are not limited to, the following: recordable-type media such as portable disks and memory, hard disk drives, CD/DVD ROMs, digital tape, computer memory, and other nontransitory computer-readable storage media.


These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.

Claims
  • 1. A method for fitting a packaging wireframe model to one or more three-dimensional objects, the method comprising: receiving image data of an area from a first point of view by at least one nontransitory processor-readable medium from at least one image sensor, the area including at least a first three-dimensional object to be dimensioned; determining that there are insufficient features on the first three-dimensional object to select one or more geometric primitives that are representative of the first three-dimensional object to fit to the packaging wireframe model; and in response to the determination, generating an output to obtain additional features on the first three-dimensional object from a second point of view in order to select one or more geometric primitives that are representative of the first three-dimensional object to fit to the packaging wireframe model, the second point of view different from the first point of view.
  • 2. The method of claim 1, wherein generating the output to obtain the additional features on the first three-dimensional object from the second point of view comprises generating at least one output comprising an audio output that is perceivable by a user.
  • 3. The method of claim 2, wherein the at least one output indicates to the user a direction of movement of the at least one image sensor with respect to the first three-dimensional object.
  • 4. The method of claim 1, wherein generating the output to obtain the additional features on the first three-dimensional object from the second point of view comprises generating at least one output comprising a visual output that is perceivable by a user.
  • 5. The method of claim 4, wherein the at least one output indicates to the user a direction of movement of the at least one image sensor with respect to the first three-dimensional object.
  • 6. The method of claim 1, wherein the packaging wireframe model is two-dimensional and is fitted about a portion of an image of the first three-dimensional object, the method further comprising displaying an image of the packaging wireframe model on a display on which the image of the first three-dimensional object is displayed.
  • 7. The method of claim 6, wherein displaying of the image of the packaging wireframe model occurs before generating the output.
  • 8. The method of claim 1, comprising determining, from the received image data, a number of features in three dimensions of the first three-dimensional object by at least one processor communicatively coupled to the at least one nontransitory processor-readable medium, wherein the features comprise one of corners, arcs, lines, edges, angles, and radii associated with an external boundary of the first three-dimensional object.
  • 9. The method of claim 8, further comprising, based at least in part on the determined features of the first three-dimensional object, displaying an image of the packaging wireframe model, wherein the packaging wireframe model is two-dimensional and is fitted about a portion of an image of the first three-dimensional object, on a display on which the image of the first three-dimensional object is displayed.
  • 10. The method of claim 9, wherein displaying the image of the packaging wireframe model occurs before generating the output.
  • 11. A volume dimensioning system, comprising: at least one image sensor communicably coupled to at least one nontransitory processor-readable medium; at least one processor communicably coupled to the at least one nontransitory processor-readable medium; and a machine executable instruction set stored within at least one nontransitory processor-readable medium, that when executed by the at least one processor causes the at least one processor to: read image data from the at least one nontransitory processor-readable medium, the image data associated with a first point of view of an area sensed by the at least one image sensor, the area including at least a first three-dimensional object to be dimensioned; determine from the received image data that there are an insufficient number of features on the first three-dimensional object to select one or more geometric primitives that are representative of the first three-dimensional object to fit to the packaging wireframe model; and responsive to the determination of an insufficient number of features in the image data, generate an output to obtain additional features on the first three-dimensional object from a second point of view in order to select one or more geometric primitives that are representative of the first three-dimensional object to fit to the packaging wireframe model, the second point of view different from the first point of view.
  • 12. The volume dimensioning system of claim 11, wherein the machine executable instruction set comprises instructions that when executed by the at least one processor cause the at least one processor to: generate an audio output that is perceivable by a user to obtain the additional features on the first three-dimensional object from the second point of view.
  • 13. The volume dimensioning system of claim 12, wherein the audio output indicates to the user a direction of movement of the at least one image sensor with respect to the first three-dimensional object.
  • 14. The volume dimensioning system of claim 11, wherein the machine executable instruction set comprises instructions that when executed by the at least one processor cause the at least one processor to: generate a visual output that is perceivable by a user to obtain the additional features on the first three-dimensional object from the second point of view.
  • 15. The volume dimensioning system of claim 14, wherein the visual output indicates to the user a direction of movement of the at least one image sensor with respect to the first three-dimensional object.
  • 16. The volume dimensioning system of claim 11, wherein the packaging wireframe model is two-dimensional and is fitted about a portion of an image of the first three-dimensional object, the volume dimensioning system comprising a display, wherein the machine executable instruction set comprises instructions that when executed by the at least one processor cause the at least one processor to display an image of the packaging wireframe model on the display, and wherein the image of the first three-dimensional object is displayed on the display.
  • 17. The volume dimensioning system of claim 11, wherein the packaging wireframe model is two-dimensional and is fitted about a portion of an image of the first three-dimensional object, the volume dimensioning system comprising a display, wherein the machine executable instruction set comprises instructions that when executed by the at least one processor cause the at least one processor to, before generating the output, display an image of the packaging wireframe model on the display, and wherein the image of the first three-dimensional object is displayed on the display.
  • 18. The volume dimensioning system of claim 11, wherein the machine executable instruction set comprises instructions that when executed by the at least one processor cause the at least one processor to determine, from the received image data, a number of features in three dimensions of the first three-dimensional object.
  • 19. The volume dimensioning system of claim 11, wherein the packaging wireframe model is two-dimensional and is fitted about a portion of an image of the first three-dimensional object, the volume dimensioning system comprising a display, wherein the machine executable instruction set comprises instructions that when executed by the at least one processor cause the at least one processor to: determine from the received image data a number of features in three dimensions of the first three-dimensional object, wherein the features comprise one of corners, arcs, lines, edges, angles, and radii associated with an external boundary of the first three-dimensional object; and based at least in part on the determined features of the first three-dimensional object, display an image of the packaging wireframe model on the display, wherein the image of the first three-dimensional object is displayed on the display.
  • 20. The volume dimensioning system of claim 11, wherein the packaging wireframe model is two-dimensional and is fitted about a portion of an image of the first three-dimensional object, the volume dimensioning system comprising a display, wherein the machine executable instruction set comprises instructions that when executed by the at least one processor cause the at least one processor to: determine from the received image data a number of features in three dimensions of the first three-dimensional object; and based at least in part on the determined features of the first three-dimensional object and before generating the output, display an image of the packaging wireframe model on the display, wherein the image of the first three-dimensional object is displayed on the display.
US Referenced Citations (1149)
Number Name Date Kind
3971065 Bayer Jul 1976 A
4026031 Siddall et al. May 1977 A
4279328 Ahlbom Jul 1981 A
4398811 Nishioka et al. Aug 1983 A
4495559 Gelatt, Jr. Jan 1985 A
4634278 Ross et al. Jan 1987 A
4730190 Win et al. Mar 1988 A
4803639 Steele et al. Feb 1989 A
4914460 Caimi et al. Apr 1990 A
4974919 Muraki et al. Dec 1990 A
5111325 DeJager May 1992 A
5175601 Fitts Dec 1992 A
5184733 Amarson et al. Feb 1993 A
5198648 Hibbard Mar 1993 A
5220536 Stringer et al. Jun 1993 A
5243619 Albers et al. Sep 1993 A
5331118 Jensen Jul 1994 A
5359185 Hanson Oct 1994 A
5384901 Glassner et al. Jan 1995 A
5477622 Skalnik Dec 1995 A
5548707 LoNegro et al. Aug 1996 A
5555090 Schmutz Sep 1996 A
5561526 Huber et al. Oct 1996 A
5590060 Granville et al. Dec 1996 A
5592333 Lewis Jan 1997 A
5606534 Stringer et al. Feb 1997 A
5619245 Kessler et al. Apr 1997 A
5655095 LoNegro et al. Aug 1997 A
5661561 Wurz et al. Aug 1997 A
5699161 Woodworth Dec 1997 A
5729750 Ishida Mar 1998 A
5730252 Herbinet Mar 1998 A
5732147 Tao Mar 1998 A
5734476 Dlugos Mar 1998 A
5737074 Haga et al. Apr 1998 A
5748199 Palm May 1998 A
5767962 Suzuki et al. Jun 1998 A
5802092 Endriz Sep 1998 A
5808657 Kurtz et al. Sep 1998 A
5831737 Stringer et al. Nov 1998 A
5850370 Stringer et al. Dec 1998 A
5850490 Johnson Dec 1998 A
5869827 Rando Feb 1999 A
5870220 Migdal et al. Feb 1999 A
5900611 Hecht May 1999 A
5923428 Woodworth Jul 1999 A
5929856 LoNegro et al. Jul 1999 A
5938710 Lanza et al. Aug 1999 A
5959568 Woolley Sep 1999 A
5960098 Tao Sep 1999 A
5969823 Wurz et al. Oct 1999 A
5978512 Kim et al. Nov 1999 A
5979760 Freyman et al. Nov 1999 A
5988862 Kacyra et al. Nov 1999 A
5991041 Woodworth Nov 1999 A
6009189 Schaack Dec 1999 A
6025847 Marks Feb 2000 A
6035067 Ponticos Mar 2000 A
6049386 Stringer et al. Apr 2000 A
6053409 Brobst et al. Apr 2000 A
6064759 Buckley et al. May 2000 A
6067110 Nonaka et al. May 2000 A
6069696 McQueen et al. May 2000 A
6115114 Berg et al. Sep 2000 A
6137577 Woodworth Oct 2000 A
6177999 Wurz et al. Jan 2001 B1
6189223 Haug Feb 2001 B1
6232597 Kley May 2001 B1
6236403 Chaki May 2001 B1
6246468 Dimsdale Jun 2001 B1
6333749 Reinhardt Dec 2001 B1
6336587 He et al. Jan 2002 B1
6369401 Lee Apr 2002 B1
6373579 Ober et al. Apr 2002 B1
6429803 Kumar Aug 2002 B1
6457642 Good et al. Oct 2002 B1
6507406 Yagi et al. Jan 2003 B1
6517004 Good et al. Feb 2003 B2
6519550 D'Hooge et al. Feb 2003 B1
6535776 Tobin et al. Mar 2003 B1
6661521 Stern Dec 2003 B1
6674904 McQueen Jan 2004 B1
6705526 Zhu et al. Mar 2004 B1
6773142 Rekow Aug 2004 B2
6781621 Gobush et al. Aug 2004 B1
6804269 Lizotte et al. Oct 2004 B2
6824058 Patel et al. Nov 2004 B2
6832725 Gardiner et al. Dec 2004 B2
6858857 Pease et al. Feb 2005 B2
6912293 Korobkin Jun 2005 B1
6922632 Foxlin Jul 2005 B2
6971580 Zhu et al. Dec 2005 B2
6995762 Pavlidis et al. Feb 2006 B1
7057632 Yamawaki et al. Jun 2006 B2
7085409 Sawhney et al. Aug 2006 B2
7086162 Tyroler Aug 2006 B2
7104453 Zhu et al. Sep 2006 B1
7128266 Zhu et al. Oct 2006 B2
7137556 Bonner et al. Nov 2006 B1
7159783 Walczyk et al. Jan 2007 B2
7161688 Bonner Jan 2007 B1
7205529 Andersen et al. Apr 2007 B2
7214954 Schopp May 2007 B2
7233682 Levine Jun 2007 B2
7277187 Smith et al. Oct 2007 B2
7307653 Dutta Dec 2007 B2
7310431 Gokturk et al. Dec 2007 B2
7313264 Crampton Dec 2007 B2
7353137 Vock et al. Apr 2008 B2
7413127 Ehrhart et al. Aug 2008 B2
7509529 Colucci et al. Mar 2009 B2
7527205 Zhu May 2009 B2
7586049 Wurz Sep 2009 B2
7602404 Reinhardt Oct 2009 B1
7614563 Nunnink et al. Nov 2009 B1
7639722 Paxton et al. Dec 2009 B1
7726206 Terrafranca, Jr. et al. Jun 2010 B2
7726575 Wang et al. Jun 2010 B2
7780084 Zhang et al. Aug 2010 B2
7788883 Buckley et al. Sep 2010 B2
7912320 Minor Mar 2011 B1
7974025 Topliss Jul 2011 B2
8009358 Zalevsky et al. Aug 2011 B2
8027096 Feng et al. Sep 2011 B2
8028501 Buckley et al. Oct 2011 B2
8050461 Shpunt et al. Nov 2011 B2
8055061 Katano Nov 2011 B2
8061610 Nunnink Nov 2011 B2
8072581 Breiholz Dec 2011 B1
8102395 Kondo et al. Jan 2012 B2
8132728 Dwinell et al. Mar 2012 B2
8134717 Pangrazio et al. Mar 2012 B2
8149224 Kuo et al. Apr 2012 B1
8194097 Xiao et al. Jun 2012 B2
8201737 Palacios Durazo et al. Jun 2012 B1
8212158 Wiest Jul 2012 B2
8212889 Chanas et al. Jul 2012 B2
8224133 Popovich et al. Jul 2012 B2
8228510 Pangrazio et al. Jul 2012 B2
8230367 Bell et al. Jul 2012 B2
8294969 Plesko Oct 2012 B2
8301027 Shaw et al. Oct 2012 B2
8305458 Hara Nov 2012 B2
8310656 Zalewski Nov 2012 B2
8313380 Zalewski et al. Nov 2012 B2
8317105 Kotlarsky et al. Nov 2012 B2
8320621 McEldowney Nov 2012 B2
8322622 Liu Dec 2012 B2
8339462 Stec et al. Dec 2012 B2
8350959 Topliss et al. Jan 2013 B2
8351670 Ijiri et al. Jan 2013 B2
8366005 Kotlarsky et al. Feb 2013 B2
8368762 Chen et al. Feb 2013 B1
8371507 Haggerty et al. Feb 2013 B2
8374498 Pastore Feb 2013 B2
8376233 Van Horn et al. Feb 2013 B2
8381976 Mohideen et al. Feb 2013 B2
8381979 Franz Feb 2013 B2
8390909 Plesko Mar 2013 B2
8408464 Zhu et al. Apr 2013 B2
8408468 Horn et al. Apr 2013 B2
8408469 Good Apr 2013 B2
8424768 Rueblinger et al. Apr 2013 B2
8437539 Komatsu et al. May 2013 B2
8441749 Brown et al. May 2013 B2
8448863 Xian et al. May 2013 B2
8457013 Essinger et al. Jun 2013 B2
8459557 Havens et al. Jun 2013 B2
8463079 Ackley et al. Jun 2013 B2
8469272 Kearney Jun 2013 B2
8474712 Kearney et al. Jul 2013 B2
8479992 Kotlarsky et al. Jul 2013 B2
8490877 Kearney Jul 2013 B2
8517271 Kotlarsky et al. Aug 2013 B2
8523076 Good Sep 2013 B2
8528818 Ehrhart et al. Sep 2013 B2
8544737 Gomez et al. Oct 2013 B2
8548420 Grunow et al. Oct 2013 B2
8550335 Samek et al. Oct 2013 B2
8550354 Gannon et al. Oct 2013 B2
8550357 Kearney Oct 2013 B2
8556174 Kosecki et al. Oct 2013 B2
8556176 Van Horn et al. Oct 2013 B2
8556177 Hussey et al. Oct 2013 B2
8559767 Barber et al. Oct 2013 B2
8561895 Gomez et al. Oct 2013 B2
8561903 Sauerwein Oct 2013 B2
8561905 Edmonds et al. Oct 2013 B2
8565107 Pease et al. Oct 2013 B2
8570343 Halstead Oct 2013 B2
8571307 Li et al. Oct 2013 B2
8576390 Nunnink Nov 2013 B1
8579200 Samek et al. Nov 2013 B2
8583924 Caballero et al. Nov 2013 B2
8584945 Wang et al. Nov 2013 B2
8587595 Wang Nov 2013 B2
8587697 Hussey et al. Nov 2013 B2
8588869 Sauerwein et al. Nov 2013 B2
8590789 Nahill et al. Nov 2013 B2
8594425 Gurman et al. Nov 2013 B2
8596539 Havens et al. Dec 2013 B2
8596542 Havens et al. Dec 2013 B2
8596543 Havens et al. Dec 2013 B2
8599271 Havens et al. Dec 2013 B2
8599957 Peake et al. Dec 2013 B2
8600158 Li et al. Dec 2013 B2
8600167 Showering Dec 2013 B2
8602309 Longacre et al. Dec 2013 B2
8608053 Meier et al. Dec 2013 B2
8608071 Liu et al. Dec 2013 B2
8611309 Wang et al. Dec 2013 B2
8615487 Gomez et al. Dec 2013 B2
8621123 Caballero Dec 2013 B2
8622303 Meier et al. Jan 2014 B2
8628013 Ding Jan 2014 B2
8628015 Wang et al. Jan 2014 B2
8628016 Winegar Jan 2014 B2
8629926 Wang Jan 2014 B2
8630491 Longacre et al. Jan 2014 B2
8635309 Berthiaume et al. Jan 2014 B2
8636200 Kearney Jan 2014 B2
8636212 Nahill et al. Jan 2014 B2
8636215 Ding et al. Jan 2014 B2
8636224 Wang Jan 2014 B2
8638806 Wang et al. Jan 2014 B2
8640958 Lu et al. Feb 2014 B2
8640960 Wang et al. Feb 2014 B2
8643717 Li et al. Feb 2014 B2
8646692 Meier et al. Feb 2014 B2
8646694 Wang et al. Feb 2014 B2
8657200 Ren et al. Feb 2014 B2
8659397 Vargo et al. Feb 2014 B2
8668149 Good Mar 2014 B2
8678285 Kearney Mar 2014 B2
8678286 Smith et al. Mar 2014 B2
8682077 Longacre Mar 2014 B1
D702237 Oberpriller et al. Apr 2014 S
8687000 Panahpour Tehrani Apr 2014 B2
8687282 Feng et al. Apr 2014 B2
8692927 Pease et al. Apr 2014 B2
8695880 Bremer et al. Apr 2014 B2
8698949 Grunow et al. Apr 2014 B2
8702000 Barber et al. Apr 2014 B2
8717494 Gannon May 2014 B2
8720783 Biss et al. May 2014 B2
8723804 Fletcher et al. May 2014 B2
8723904 Marty et al. May 2014 B2
8727223 Wang May 2014 B2
8740082 Wilz Jun 2014 B2
8740085 Furlong et al. Jun 2014 B2
8746563 Hennick et al. Jun 2014 B2
8750445 Peake et al. Jun 2014 B2
8752766 Xian et al. Jun 2014 B2
8756059 Braho et al. Jun 2014 B2
8757495 Qu et al. Jun 2014 B2
8760563 Koziol et al. Jun 2014 B2
8763909 Reed et al. Jul 2014 B2
8777108 Coyle Jul 2014 B2
8777109 Oberpriller et al. Jul 2014 B2
8779898 Havens et al. Jul 2014 B2
8781520 Payne et al. Jul 2014 B2
8783573 Havens et al. Jul 2014 B2
8789757 Barten Jul 2014 B2
8789758 Hawley et al. Jul 2014 B2
8789759 Xian et al. Jul 2014 B2
8792688 Unsworth Jul 2014 B2
8794520 Wang et al. Aug 2014 B2
8794522 Ehrhart Aug 2014 B2
8794525 Amundsen et al. Aug 2014 B2
8794526 Wang et al. Aug 2014 B2
8798367 Ellis Aug 2014 B2
8807431 Wang et al. Aug 2014 B2
8807432 Van Horn et al. Aug 2014 B2
8810779 Hilde Aug 2014 B1
8820630 Qu et al. Sep 2014 B2
8822806 Cockerell et al. Sep 2014 B2
8822848 Meagher Sep 2014 B2
8824692 Sheerin et al. Sep 2014 B2
8824696 Braho Sep 2014 B2
8842849 Wahl et al. Sep 2014 B2
8844822 Kotlarsky et al. Sep 2014 B2
8844823 Fritz et al. Sep 2014 B2
8849019 Li et al. Sep 2014 B2
D716285 Chaney et al. Oct 2014 S
8851383 Yeakley et al. Oct 2014 B2
8854633 Laffargue Oct 2014 B2
8866963 Grunow et al. Oct 2014 B2
8868421 Braho et al. Oct 2014 B2
8868519 Maloy et al. Oct 2014 B2
8868802 Barten Oct 2014 B2
8868803 Caballero Oct 2014 B2
8870074 Gannon Oct 2014 B1
8879639 Sauerwein Nov 2014 B2
8880426 Smith Nov 2014 B2
8881983 Havens et al. Nov 2014 B2
8881987 Wang Nov 2014 B2
8897596 Passmore et al. Nov 2014 B1
8903172 Smith Dec 2014 B2
8908277 Pesach et al. Dec 2014 B2
8908995 Benos et al. Dec 2014 B2
8910870 Li et al. Dec 2014 B2
8910875 Ren et al. Dec 2014 B2
8914290 Hendrickson et al. Dec 2014 B2
8914788 Pettinelli et al. Dec 2014 B2
8915439 Feng et al. Dec 2014 B2
8915444 Havens et al. Dec 2014 B2
8916789 Woodburn Dec 2014 B2
8918250 Hollifield Dec 2014 B2
8918564 Caballero Dec 2014 B2
8925818 Kosecki et al. Jan 2015 B2
8928896 Kennington et al. Jan 2015 B2
8939374 Jovanovski et al. Jan 2015 B2
8942480 Ellis Jan 2015 B2
8944313 Williams et al. Feb 2015 B2
8944327 Meier et al. Feb 2015 B2
8944332 Harding et al. Feb 2015 B2
8950678 Germaine et al. Feb 2015 B2
D723560 Zhou et al. Mar 2015 S
8967468 Gomez et al. Mar 2015 B2
8971346 Sevier Mar 2015 B2
8976030 Cunningham et al. Mar 2015 B2
8976368 Akel et al. Mar 2015 B2
8978981 Guan Mar 2015 B2
8978983 Bremer et al. Mar 2015 B2
8978984 Hennick et al. Mar 2015 B2
8985456 Zhu et al. Mar 2015 B2
8985457 Soule et al. Mar 2015 B2
8985459 Kearney et al. Mar 2015 B2
8985461 Gelay et al. Mar 2015 B2
8988578 Showering Mar 2015 B2
8988590 Gillet et al. Mar 2015 B2
8991704 Hopper et al. Mar 2015 B2
8993974 Goodwin Mar 2015 B2
8996194 Davis et al. Mar 2015 B2
8996384 Funyak et al. Mar 2015 B2
8998091 Edmonds et al. Apr 2015 B2
9002641 Showering Apr 2015 B2
9007368 Laffargue et al. Apr 2015 B2
9010641 Qu et al. Apr 2015 B2
9014441 Truyen et al. Apr 2015 B2
9015513 Murawski et al. Apr 2015 B2
9016576 Brady et al. Apr 2015 B2
D730357 Fitch et al. May 2015 S
9022288 Nahill et al. May 2015 B2
9030964 Essinger et al. May 2015 B2
9033240 Smith et al. May 2015 B2
9033242 Gillet et al. May 2015 B2
9036054 Koziol et al. May 2015 B2
9037344 Chamberlin May 2015 B2
9038911 Xian et al. May 2015 B2
9038915 Smith May 2015 B2
D730901 Oberpriller et al. Jun 2015 S
D730902 Fitch et al. Jun 2015 S
9047098 Barten Jun 2015 B2
9047359 Caballero et al. Jun 2015 B2
9047420 Caballero Jun 2015 B2
9047525 Barber Jun 2015 B2
9047531 Showering et al. Jun 2015 B2
9049640 Wang et al. Jun 2015 B2
9053055 Caballero Jun 2015 B2
9053378 Hou et al. Jun 2015 B1
9053380 Xian et al. Jun 2015 B2
9057641 Amundsen et al. Jun 2015 B2
9058526 Powilleit Jun 2015 B2
9061527 Tobin et al. Jun 2015 B2
9064165 Havens et al. Jun 2015 B2
9064167 Xian et al. Jun 2015 B2
9064168 Todeschini et al. Jun 2015 B2
9064254 Todeschini et al. Jun 2015 B2
9066032 Wang Jun 2015 B2
9066087 Shpunt Jun 2015 B2
9070032 Corcoran Jun 2015 B2
D734339 Zhou et al. Jul 2015 S
D734751 Oberpriller et al. Jul 2015 S
9076459 Braho et al. Jul 2015 B2
9079423 Bouverie et al. Jul 2015 B2
9080856 Laffargue Jul 2015 B2
9082023 Feng et al. Jul 2015 B2
9082195 Holeva et al. Jul 2015 B2
9084032 Rautiola et al. Jul 2015 B2
9087250 Coyle Jul 2015 B2
9092681 Havens et al. Jul 2015 B2
9092682 Wilz et al. Jul 2015 B2
9092683 Koziol et al. Jul 2015 B2
9093141 Liu Jul 2015 B2
9098763 Lu et al. Aug 2015 B2
9104929 Todeschini Aug 2015 B2
9104934 Li et al. Aug 2015 B2
9107484 Chaney Aug 2015 B2
9111159 Liu et al. Aug 2015 B2
9111166 Cunningham Aug 2015 B2
9135483 Liu et al. Sep 2015 B2
9137009 Gardiner Sep 2015 B1
9141839 Xian et al. Sep 2015 B2
9142035 Rotman et al. Sep 2015 B1
9147096 Wang Sep 2015 B2
9148474 Skvoretz Sep 2015 B2
9158000 Sauerwein Oct 2015 B2
9158340 Reed et al. Oct 2015 B2
9158953 Gillet et al. Oct 2015 B2
9159059 Daddabbo et al. Oct 2015 B2
9165174 Huck Oct 2015 B2
9171278 Kong et al. Oct 2015 B1
9171543 Emerick et al. Oct 2015 B2
9183425 Wang Nov 2015 B2
9189669 Zhu et al. Nov 2015 B2
9195844 Todeschini et al. Nov 2015 B2
9202458 Braho et al. Dec 2015 B2
9208366 Liu Dec 2015 B2
9208367 Wang Dec 2015 B2
9219836 Bouverie et al. Dec 2015 B2
9224022 Ackley et al. Dec 2015 B2
9224024 Bremer et al. Dec 2015 B2
9224027 Van Horn et al. Dec 2015 B2
D747321 London et al. Jan 2016 S
9230140 Ackley Jan 2016 B1
9233470 Bradski et al. Jan 2016 B1
9235553 Fitch et al. Jan 2016 B2
9235899 Kirmani et al. Jan 2016 B1
9239950 Fletcher Jan 2016 B2
9245492 Ackley et al. Jan 2016 B2
9443123 Hejl Jan 2016 B2
9248640 Heng Feb 2016 B2
9250652 London et al. Feb 2016 B2
9250712 Todeschini Feb 2016 B1
9251411 Todeschini Feb 2016 B2
9258033 Showering Feb 2016 B2
9262633 Todeschini et al. Feb 2016 B1
9262660 Lu et al. Feb 2016 B2
9262662 Chen et al. Feb 2016 B2
9269036 Bremer Feb 2016 B2
9270782 Hala et al. Feb 2016 B2
9273846 Rossi et al. Mar 2016 B1
9274812 Doren et al. Mar 2016 B2
9275388 Havens et al. Mar 2016 B2
9277668 Feng et al. Mar 2016 B2
9280693 Feng et al. Mar 2016 B2
9286496 Smith Mar 2016 B2
9297900 Jiang Mar 2016 B2
9298964 Li et al. Mar 2016 B2
9299013 Curlander et al. Mar 2016 B1
9301427 Feng et al. Mar 2016 B2
9304376 Anderson Apr 2016 B2
9310609 Rueblinger et al. Apr 2016 B2
9313377 Todeschini et al. Apr 2016 B2
9317037 Byford et al. Apr 2016 B2
D757009 Oberpriller et al. May 2016 S
9342723 Liu et al. May 2016 B2
9342724 McCloskey May 2016 B2
9361882 Ressler et al. Jun 2016 B2
9365381 Colonel et al. Jun 2016 B2
9366861 Johnson Jun 2016 B1
9373018 Colavito et al. Jun 2016 B2
9375945 Bowles Jun 2016 B1
9378403 Wang et al. Jun 2016 B2
D760719 Zhou et al. Jul 2016 S
9360304 Chang et al. Jul 2016 B2
9383848 Daghigh Jul 2016 B2
9384374 Bianconi Jul 2016 B2
9390596 Todeschini Jul 2016 B1
9399557 Mishra et al. Jul 2016 B1
D762604 Fitch et al. Aug 2016 S
9411386 Sauerwein Aug 2016 B2
9412242 Van Horn et al. Aug 2016 B2
9418269 Havens et al. Aug 2016 B2
9418270 Van Volkinburg et al. Aug 2016 B2
9423318 Lui et al. Aug 2016 B2
9424749 Reed et al. Aug 2016 B1
D766244 Zhou et al. Sep 2016 S
9443222 Singel et al. Sep 2016 B2
9454689 McCloskey et al. Sep 2016 B2
9464885 Lloyd et al. Oct 2016 B2
9465967 Xian et al. Oct 2016 B2
9470511 Maynard et al. Oct 2016 B2
9478113 Xie et al. Oct 2016 B2
9478983 Kather et al. Oct 2016 B2
D771631 Fitch et al. Nov 2016 S
9481186 Bouverie et al. Nov 2016 B2
9486921 Straszheim et al. Nov 2016 B1
9488986 Solanki Nov 2016 B1
9489782 Payne et al. Nov 2016 B2
9490540 Davies et al. Nov 2016 B1
9491729 Rautiola et al. Nov 2016 B2
9497092 Gomez et al. Nov 2016 B2
9507974 Todeschini Nov 2016 B1
9519814 Cudzilo Dec 2016 B2
9521331 Bessettes et al. Dec 2016 B2
9530038 Xian et al. Dec 2016 B2
D777166 Bidwell et al. Jan 2017 S
9558386 Yeakley Jan 2017 B2
9572901 Todeschini Feb 2017 B2
9595038 Cavalcanti et al. Mar 2017 B1
9606581 Howe et al. Mar 2017 B1
D783601 Schulte et al. Apr 2017 S
D785617 Bidwell et al. May 2017 S
D785636 Oberpriller et al. May 2017 S
9646189 Lu et al. May 2017 B2
9646191 Unemyr et al. May 2017 B2
9652648 Ackley et al. May 2017 B2
9652653 Todeschini et al. May 2017 B2
9656487 Ho et al. May 2017 B2
9659198 Giordano et al. May 2017 B2
D790505 Vargo et al. Jun 2017 S
D790546 Zhou et al. Jun 2017 S
9680282 Hanenburg Jun 2017 B2
9697401 Feng et al. Jul 2017 B2
9701140 Alaganchetty et al. Jul 2017 B1
9709387 Fujita et al. Jul 2017 B2
9736459 Mor et al. Aug 2017 B2
9741136 Holz Aug 2017 B2
9779546 Hunt et al. Oct 2017 B2
9828223 Svensson et al. Nov 2017 B2
20010027995 Patel et al. Oct 2001 A1
20010032879 He et al. Oct 2001 A1
20020036765 McCaffrey Mar 2002 A1
20020054289 Thibault et al. May 2002 A1
20020067855 Chiu et al. Jun 2002 A1
20020105639 Roelke Aug 2002 A1
20020109835 Goetz Aug 2002 A1
20020113946 Kitaguchi et al. Aug 2002 A1
20020118874 Chung et al. Aug 2002 A1
20020158873 Williamson Oct 2002 A1
20020167677 Okada et al. Nov 2002 A1
20020179708 Zhu et al. Dec 2002 A1
20020186897 Kim et al. Dec 2002 A1
20020196534 Lizotte et al. Dec 2002 A1
20030038179 Tsikos et al. Feb 2003 A1
20030053513 Vatan et al. Mar 2003 A1
20030063086 Baumberg Apr 2003 A1
20030078755 Leutz et al. Apr 2003 A1
20030091227 Chang et al. May 2003 A1
20030156756 Gokturk et al. Aug 2003 A1
20030163287 Vock et al. Aug 2003 A1
20030197138 Pease et al. Oct 2003 A1
20030225712 Cooper et al. Dec 2003 A1
20030235331 Kawaike et al. Dec 2003 A1
20040008259 Gokturk et al. Jan 2004 A1
20040019274 Galloway et al. Jan 2004 A1
20040024754 Mane et al. Feb 2004 A1
20040066329 Zeitfuss et al. Apr 2004 A1
20040073359 Ichijo et al. Apr 2004 A1
20040083025 Yamanouchi et al. Apr 2004 A1
20040089482 Ramsden et al. May 2004 A1
20040098146 Katae et al. May 2004 A1
20040105580 Hager et al. Jun 2004 A1
20040118928 Patel et al. Jun 2004 A1
20040122779 Stickler et al. Jun 2004 A1
20040132297 Baba et al. Jul 2004 A1
20040155975 Hart et al. Aug 2004 A1
20040165090 Ning Aug 2004 A1
20040184041 Schopp Sep 2004 A1
20040211836 Patel et al. Oct 2004 A1
20040214623 Takahashi et al. Oct 2004 A1
20040233461 Armstrong et al. Nov 2004 A1
20040258353 Gluckstad et al. Dec 2004 A1
20050006477 Patel Jan 2005 A1
20050117215 Lange Jun 2005 A1
20050128193 Popescu et al. Jun 2005 A1
20050128196 Popescu et al. Jun 2005 A1
20050168488 Montague Aug 2005 A1
20050187887 Nicolas et al. Aug 2005 A1
20050211782 Martin Sep 2005 A1
20050240317 Kienzle-Lietl Oct 2005 A1
20050257748 Kriesel et al. Nov 2005 A1
20050264867 Cho et al. Dec 2005 A1
20060036556 Knispel Feb 2006 A1
20060047704 Gopalakrishnan Mar 2006 A1
20060078226 Zhou Apr 2006 A1
20060108266 Bowers et al. May 2006 A1
20060109105 Varner et al. May 2006 A1
20060112023 Horhann May 2006 A1
20060151604 Zhu et al. Jul 2006 A1
20060159307 Anderson et al. Jul 2006 A1
20060159344 Shao et al. Jul 2006 A1
20060213999 Wang et al. Sep 2006 A1
20060230640 Chen Oct 2006 A1
20060232681 Okada Oct 2006 A1
20060255150 Longacre Nov 2006 A1
20060269165 Viswanathan Nov 2006 A1
20060276709 Khamene et al. Dec 2006 A1
20060291719 Ikeda et al. Dec 2006 A1
20070003154 Sun et al. Jan 2007 A1
20070025612 Iwasaki et al. Feb 2007 A1
20070031064 Zhao et al. Feb 2007 A1
20070063048 Havens et al. Mar 2007 A1
20070116357 Dewaele May 2007 A1
20070127022 Cohen et al. Jun 2007 A1
20070143082 Degnan Jun 2007 A1
20070153293 Gruhlke et al. Jul 2007 A1
20070165013 Goulanian et al. Jul 2007 A1
20070171220 Kriveshko Jul 2007 A1
20070177011 Lewin et al. Aug 2007 A1
20070181685 Zhu et al. Aug 2007 A1
20070237356 Dwinell et al. Oct 2007 A1
20070291031 Konev et al. Dec 2007 A1
20070299338 Stevick et al. Dec 2007 A1
20080013793 Hillis et al. Jan 2008 A1
20080035390 Wurz Feb 2008 A1
20080047760 Georgitsis Feb 2008 A1
20080050042 Zhang et al. Feb 2008 A1
20080054062 Gunning et al. Mar 2008 A1
20080056536 Hildreth et al. Mar 2008 A1
20080062164 Bassi et al. Mar 2008 A1
20080065509 Williams Mar 2008 A1
20080077265 Boyden Mar 2008 A1
20080079955 Storm Apr 2008 A1
20080156619 Patel et al. Jul 2008 A1
20080164074 Wurz Jul 2008 A1
20080204476 Montague Aug 2008 A1
20080212168 Olmstead et al. Sep 2008 A1
20080247635 Davis et al. Oct 2008 A1
20080273191 Kim et al. Nov 2008 A1
20080273210 Hilde Nov 2008 A1
20080278790 Boesser et al. Nov 2008 A1
20090038182 Lans et al. Feb 2009 A1
20090046296 Kilpartrick et al. Feb 2009 A1
20090059004 Bochicchio Mar 2009 A1
20090081008 Somin et al. Mar 2009 A1
20090095047 Patel et al. Apr 2009 A1
20090114818 Casares et al. May 2009 A1
20090134221 Zhu et al. May 2009 A1
20090161090 Campbell et al. Jun 2009 A1
20090189858 Lev et al. Jul 2009 A1
20090195790 Zhu et al. Aug 2009 A1
20090225333 Bendall et al. Sep 2009 A1
20090237411 Gossweiler et al. Sep 2009 A1
20090268023 Hsieh Oct 2009 A1
20090272724 Gubler Nov 2009 A1
20090273770 Bauhahn et al. Nov 2009 A1
20090313948 Buckley et al. Dec 2009 A1
20090318815 Barnes et al. Dec 2009 A1
20090323084 Dunn et al. Dec 2009 A1
20090323121 Valkenburg Dec 2009 A1
20100035637 Varanasi et al. Feb 2010 A1
20100060604 Zwart et al. Mar 2010 A1
20100091104 Sprigle Apr 2010 A1
20100113153 Yen et al. May 2010 A1
20100118200 Gelman et al. May 2010 A1
20100128109 Banks May 2010 A1
20100161170 Sills Jun 2010 A1
20100171740 Andersen et al. Jul 2010 A1
20100172567 Prokoski Jul 2010 A1
20100177076 Essinger et al. Jul 2010 A1
20100177080 Essinger et al. Jul 2010 A1
20100177707 Essinger et al. Jul 2010 A1
20100177749 Essinger et al. Jul 2010 A1
20100194709 Tamaki et al. Aug 2010 A1
20100202702 Benos et al. Aug 2010 A1
20100208039 Stettner Aug 2010 A1
20100211355 Horst et al. Aug 2010 A1
20100217678 Goncalves Aug 2010 A1
20100220849 Colbert et al. Sep 2010 A1
20100220894 Ackley et al. Sep 2010 A1
20100223276 Al-Shameri et al. Sep 2010 A1
20100245850 Lee et al. Sep 2010 A1
20100254611 Amz Oct 2010 A1
20100274728 Kugelman Oct 2010 A1
20100303336 Abraham Dec 2010 A1
20100315413 Izadi et al. Dec 2010 A1
20100321482 Cleveland Dec 2010 A1
20110019155 Daniel et al. Jan 2011 A1
20110040192 Brenner et al. Feb 2011 A1
20110040407 Lim Feb 2011 A1
20110043609 Choi et al. Feb 2011 A1
20110075936 Deaver Mar 2011 A1
20110081044 Peeper Apr 2011 A1
20110099474 Grossman et al. Apr 2011 A1
20110169999 Grunow et al. Jul 2011 A1
20110180695 Li et al. Jul 2011 A1
20110188054 Petronius et al. Aug 2011 A1
20110188741 Sones et al. Aug 2011 A1
20110202554 Powilleit et al. Aug 2011 A1
20110234389 Mellin Sep 2011 A1
20110235854 Berger et al. Sep 2011 A1
20110243432 Hirsch et al. Oct 2011 A1
20110249864 Venkatesan et al. Oct 2011 A1
20110254840 Halstead Oct 2011 A1
20110260965 Kim et al. Oct 2011 A1
20110279916 Brown et al. Nov 2011 A1
20110286007 Pangrazio et al. Nov 2011 A1
20110286628 Goncalves et al. Nov 2011 A1
20110288818 Thierman Nov 2011 A1
20110297590 Ackley et al. Dec 2011 A1
20110301994 Tieman Dec 2011 A1
20110303748 Lemma et al. Dec 2011 A1
20110310227 Konertz et al. Dec 2011 A1
20110310256 Shishido Dec 2011 A1
20120014572 Wong et al. Jan 2012 A1
20120024952 Chen Feb 2012 A1
20120056982 Katz et al. Mar 2012 A1
20120057345 Kuchibhotla Mar 2012 A1
20120067955 Rowe Mar 2012 A1
20120074227 Ferren et al. Mar 2012 A1
20120081714 Pangrazio et al. Apr 2012 A1
20120082383 Kruglick Apr 2012 A1
20120111946 Golant May 2012 A1
20120113223 Hilliges et al. May 2012 A1
20120126000 Kunzig et al. May 2012 A1
20120140300 Freeman Jun 2012 A1
20120168509 Nunnink et al. Jul 2012 A1
20120168512 Kotlarsky et al. Jul 2012 A1
20120179665 Baarman et al. Jul 2012 A1
20120185094 Rosenstein et al. Jul 2012 A1
20120190386 Anderson Jul 2012 A1
20120193423 Samek Aug 2012 A1
20120197464 Wang et al. Aug 2012 A1
20120203647 Smith Aug 2012 A1
20120218436 Rodriguez et al. Sep 2012 A1
20120223141 Good et al. Sep 2012 A1
20120224026 Bayer et al. Sep 2012 A1
20120224060 Gurevich et al. Sep 2012 A1
20120236212 Itoh et al. Sep 2012 A1
20120236288 Stanley Sep 2012 A1
20120242852 Hayward et al. Sep 2012 A1
20120113250 Farlotti et al. Oct 2012 A1
20120256901 Bendall Oct 2012 A1
20120261474 Kawashime et al. Oct 2012 A1
20120262558 Boger et al. Oct 2012 A1
20120280908 Rhoads et al. Nov 2012 A1
20120282905 Owen Nov 2012 A1
20120282911 Davis et al. Nov 2012 A1
20120284012 Rodriguez et al. Nov 2012 A1
20120284122 Brandis Nov 2012 A1
20120284339 Rodriguez Nov 2012 A1
20120284593 Rodriguez Nov 2012 A1
20120293610 Doepke et al. Nov 2012 A1
20120293625 Schneider et al. Nov 2012 A1
20120294478 Publicover et al. Nov 2012 A1
20120294549 Doepke Nov 2012 A1
20120299961 Ramkumar et al. Nov 2012 A1
20120300991 Mikio Nov 2012 A1
20120313848 Galor et al. Dec 2012 A1
20120314030 Datta Dec 2012 A1
20120314058 Bendall et al. Dec 2012 A1
20120314258 Moriya Dec 2012 A1
20120316820 Nakazato et al. Dec 2012 A1
20130019278 Sun et al. Jan 2013 A1
20130038881 Pesach et al. Feb 2013 A1
20130038941 Pesach et al. Feb 2013 A1
20130043312 Van Horn Feb 2013 A1
20130050426 Sarmast et al. Feb 2013 A1
20130075168 Amundsen et al. Mar 2013 A1
20130076857 Kurashige et al. Mar 2013 A1
20130093895 Palmer et al. Apr 2013 A1
20130094069 Lee et al. Apr 2013 A1
20130101158 Lloyd et al. Apr 2013 A1
20130156267 Muraoka et al. Jun 2013 A1
20130175341 Kearney et al. Jul 2013 A1
20130175343 Good Jul 2013 A1
20130200150 Reynolds et al. Aug 2013 A1
20130201288 Billerbaeck et al. Aug 2013 A1
20130208164 Cazier et al. Aug 2013 A1
20130211790 Loveland et al. Aug 2013 A1
20130222592 Gieseke Aug 2013 A1
20130223673 Davis et al. Aug 2013 A1
20130257744 Daghigh et al. Oct 2013 A1
20130257759 Daghigh Oct 2013 A1
20130270346 Xian et al. Oct 2013 A1
20130291998 Konnerth Nov 2013 A1
20130292475 Kotlarsky et al. Nov 2013 A1
20130292477 Hennick et al. Nov 2013 A1
20130293539 Hunt et al. Nov 2013 A1
20130293540 Laffargue et al. Nov 2013 A1
20130306728 Thuries et al. Nov 2013 A1
20130306731 Pedraro Nov 2013 A1
20130307964 Bremer et al. Nov 2013 A1
20130308013 Li et al. Nov 2013 A1
20130308625 Park et al. Nov 2013 A1
20130313324 Koziol et al. Nov 2013 A1
20130317642 Asaria Nov 2013 A1
20130326425 Forstall et al. Dec 2013 A1
20130329012 Bartos Dec 2013 A1
20130329013 Metois et al. Dec 2013 A1
20130332524 Fiala et al. Dec 2013 A1
20130342343 Harring et al. Dec 2013 A1
20140001258 Chan et al. Jan 2014 A1
20140001267 Giordano et al. Jan 2014 A1
20140002828 Laffargue et al. Jan 2014 A1
20140009586 McNamer et al. Jan 2014 A1
20140019005 Lee et al. Jan 2014 A1
20140021259 Moed et al. Jan 2014 A1
20140025584 Liu et al. Jan 2014 A1
20140031665 Pinto et al. Jan 2014 A1
20140034731 Gao et al. Feb 2014 A1
20140034734 Sauerwein Feb 2014 A1
20140039674 Motoyama et al. Feb 2014 A1
20140039693 Havens et al. Feb 2014 A1
20140049120 Kohtz et al. Feb 2014 A1
20140049635 Laffargue et al. Feb 2014 A1
20140058612 Wong et al. Feb 2014 A1
20140061306 Wu et al. Mar 2014 A1
20140062709 Hyer et al. Mar 2014 A1
20140063289 Hussey et al. Mar 2014 A1
20140064624 Kim et al. Mar 2014 A1
20140066136 Sauerwein et al. Mar 2014 A1
20140067104 Osterhout Mar 2014 A1
20140067692 Ye et al. Mar 2014 A1
20140070005 Nahill et al. Mar 2014 A1
20140071430 Hansen et al. Mar 2014 A1
20140071840 Venancio Mar 2014 A1
20140074746 Wang Mar 2014 A1
20140076974 Havens et al. Mar 2014 A1
20140078342 Li et al. Mar 2014 A1
20140079297 Tadayon et al. Mar 2014 A1
20140091147 Evans et al. Apr 2014 A1
20140097238 Ghazizadeh Apr 2014 A1
20140097252 He et al. Apr 2014 A1
20140098091 Hori Apr 2014 A1
20140098243 Ghazizadeh Apr 2014 A1
20140098244 Ghazizadeh Apr 2014 A1
20140098792 Wang et al. Apr 2014 A1
20140100774 Showering Apr 2014 A1
20140100813 Showering Apr 2014 A1
20140103115 Meier et al. Apr 2014 A1
20140104413 McCloskey et al. Apr 2014 A1
20140104414 McCloskey et al. Apr 2014 A1
20140104416 Giordano et al. Apr 2014 A1
20140104664 Lee Apr 2014 A1
20140106725 Sauerwein Apr 2014 A1
20140108010 Maltseff et al. Apr 2014 A1
20140108402 Gomez et al. Apr 2014 A1
20140108682 Caballero Apr 2014 A1
20140110485 Toa et al. Apr 2014 A1
20140114530 Fitch et al. Apr 2014 A1
20140125577 Hoang et al. May 2014 A1
20140125853 Wang May 2014 A1
20140125999 Longacre et al. May 2014 A1
20140129378 Richardson May 2014 A1
20140131443 Smith May 2014 A1
20140131444 Wang May 2014 A1
20140133379 Wang et al. May 2014 A1
20140135984 Hirata May 2014 A1
20140136208 Maltseff et al. May 2014 A1
20140139654 Taskahashi May 2014 A1
20140140585 Wang May 2014 A1
20140142398 Patil et al. May 2014 A1
20140152882 Samek et al. Jun 2014 A1
20140152975 Ko Jun 2014 A1
20140157861 Jonas et al. Jun 2014 A1
20140158468 Adami Jun 2014 A1
20140158770 Sevier et al. Jun 2014 A1
20140159869 Zumsteg et al. Jun 2014 A1
20140166755 Liu et al. Jun 2014 A1
20140166757 Smith Jun 2014 A1
20140168380 Heidemann et al. Jun 2014 A1
20140168787 Wang et al. Jun 2014 A1
20140175165 Havens et al. Jun 2014 A1
20140177931 Kocherscheidt et al. Jun 2014 A1
20140191913 Ge et al. Jul 2014 A1
20140192187 Atwell et al. Jul 2014 A1
20140192551 Masaki Jul 2014 A1
20140197239 Havens et al. Jul 2014 A1
20140197304 Feng et al. Jul 2014 A1
20140201126 Zadeh et al. Jul 2014 A1
20140204268 Grunow et al. Jul 2014 A1
20140205150 Ogawa Jul 2014 A1
20140214631 Hansen Jul 2014 A1
20140217166 Berthiaume et al. Aug 2014 A1
20140217180 Liu Aug 2014 A1
20140225918 Mittal et al. Aug 2014 A1
20140225985 Klusza et al. Aug 2014 A1
20140231500 Ehrhart et al. Aug 2014 A1
20140240454 Lee Aug 2014 A1
20140247279 Nicholas et al. Sep 2014 A1
20140247280 Nicholas et al. Sep 2014 A1
20140247315 Marty et al. Sep 2014 A1
20140263493 Amurgis et al. Sep 2014 A1
20140263645 Smith et al. Sep 2014 A1
20140267609 Laffargue Sep 2014 A1
20140268093 Tohme et al. Sep 2014 A1
20140270196 Braho et al. Sep 2014 A1
20140270229 Braho Sep 2014 A1
20140270361 Amma et al. Sep 2014 A1
20140278387 DiGregorio Sep 2014 A1
20140282210 Bianconi Sep 2014 A1
20140288933 Braho et al. Sep 2014 A1
20140297058 Barker et al. Oct 2014 A1
20140299665 Barber et al. Oct 2014 A1
20140306833 Ricci Oct 2014 A1
20140307855 Withagen et al. Oct 2014 A1
20140313527 Askan Oct 2014 A1
20140319219 Liu et al. Oct 2014 A1
20140320408 Zagorsek et al. Oct 2014 A1
20140320605 Johnson Oct 2014 A1
20140333775 Naikal et al. Nov 2014 A1
20140347533 Ovsiannikov et al. Nov 2014 A1
20140350710 Gopalkrishnan et al. Nov 2014 A1
20140351317 Smith et al. Nov 2014 A1
20140362184 Jovanovski et al. Dec 2014 A1
20140363015 Braho Dec 2014 A1
20140369511 Sheerin et al. Dec 2014 A1
20140374483 Lu Dec 2014 A1
20140374485 Xian et al. Dec 2014 A1
20140379613 Nishitani et al. Dec 2014 A1
20150001301 Ouyang Jan 2015 A1
20150003673 Fletcher Jan 2015 A1
20150009100 Haneda et al. Jan 2015 A1
20150009301 Ribnick et al. Jan 2015 A1
20150009338 Laffargue et al. Jan 2015 A1
20150014416 Kotlarsky et al. Jan 2015 A1
20150016712 Rhoads et al. Jan 2015 A1
20150021397 Rueblinger et al. Jan 2015 A1
20150028104 Ma et al. Jan 2015 A1
20150029002 Yeakley et al. Jan 2015 A1
20150032709 Maloy et al. Jan 2015 A1
20150036876 Marrion et al. Feb 2015 A1
20150039309 Braho et al. Feb 2015 A1
20150040378 Saber et al. Feb 2015 A1
20150042791 Metois et al. Feb 2015 A1
20150049347 Laffargue et al. Feb 2015 A1
20150051992 Smith Feb 2015 A1
20150053769 Thuries et al. Feb 2015 A1
20150062160 Sakamoto et al. Mar 2015 A1
20150062366 Liu et al. Mar 2015 A1
20150062369 Gehring et al. Mar 2015 A1
20150063215 Wang Mar 2015 A1
20150063676 Lloyd et al. Mar 2015 A1
20150070158 Hayasaka Mar 2015 A1
20150070489 Hudman et al. Mar 2015 A1
20150088522 Hendrickson et al. Mar 2015 A1
20150096872 Woodburn Apr 2015 A1
20150100196 Hollifield Apr 2015 A1
20150115035 Meier et al. Apr 2015 A1
20150116498 Vartiainen et al. Apr 2015 A1
20150117749 Chen et al. Apr 2015 A1
20150127791 Kosecki et al. May 2015 A1
20150128116 Chen et al. May 2015 A1
20150130928 Maynard et al. May 2015 A1
20150133047 Smith et al. May 2015 A1
20150134470 Hejl et al. May 2015 A1
20150136851 Harding et al. May 2015 A1
20150142492 Kumar May 2015 A1
20150144692 Hejl May 2015 A1
20150144698 Teng et al. May 2015 A1
20150149946 Benos et al. May 2015 A1
20150161429 Xian Jun 2015 A1
20150163474 You Jun 2015 A1
20150178900 Kim et al. Jun 2015 A1
20150182844 Jang Jul 2015 A1
20150186703 Chen et al. Jul 2015 A1
20150199957 Funyak et al. Jul 2015 A1
20150204662 Kobayashi et al. Jul 2015 A1
20150210199 Payne Jul 2015 A1
20150213590 Brown Jul 2015 A1
20150213647 Laffargue et al. Jul 2015 A1
20150219748 Hyatt Aug 2015 A1
20150220753 Zhu et al. Aug 2015 A1
20150229838 Hakim et al. Aug 2015 A1
20150243030 Pfeiffer Aug 2015 A1
20150248578 Utsumi Sep 2015 A1
20150253469 Le Gros et al. Sep 2015 A1
20150254485 Feng et al. Sep 2015 A1
20150260830 Ghosh et al. Sep 2015 A1
20150269403 Lei et al. Sep 2015 A1
20150201181 Herschbach Oct 2015 A1
20150276379 Ni et al. Oct 2015 A1
20150308816 Laffargue et al. Oct 2015 A1
20150310243 Ackley Oct 2015 A1
20150310389 Crimm et al. Oct 2015 A1
20150316368 Moench et al. Nov 2015 A1
20150325036 Lee Nov 2015 A1
20150327012 Bian et al. Nov 2015 A1
20150332075 Burch Nov 2015 A1
20150332463 Galera et al. Nov 2015 A1
20150355470 Herschbach Dec 2015 A1
20160014251 Hejl Jan 2016 A1
20160169665 Deschenes et al. Jan 2016 A1
20160040982 Li et al. Feb 2016 A1
20160042241 Todeschini Feb 2016 A1
20160048725 Holz et al. Feb 2016 A1
20160057230 Todeschini et al. Feb 2016 A1
20160070982 Li et al. Feb 2016 A1
20160062473 Bouchat et al. Mar 2016 A1
20160063429 Varley et al. Mar 2016 A1
20160065912 Peterson Mar 2016 A1
20160088287 Sadi et al. Mar 2016 A1
20160090283 Svensson et al. Mar 2016 A1
20160090284 Svensson et al. Mar 2016 A1
20160092805 Geisler et al. Mar 2016 A1
20160094016 Beach et al. Mar 2016 A1
20160101936 Chamberlin Apr 2016 A1
20160102975 McCloskey et al. Apr 2016 A1
20160104019 Todeschini et al. Apr 2016 A1
20160104274 Jovanovski et al. Apr 2016 A1
20160109219 Ackley et al. Apr 2016 A1
20160109220 Laffargue et al. Apr 2016 A1
20160109224 Thuries et al. Apr 2016 A1
20160112631 Ackley et al. Apr 2016 A1
20160112643 Laffargue et al. Apr 2016 A1
20160117627 Raj et al. Apr 2016 A1
20160117631 McCloskey et al. Apr 2016 A1
20160124516 Schoon et al. May 2016 A1
20160125217 Todeschini May 2016 A1
20160125342 Miller et al. May 2016 A1
20160133253 Braho et al. May 2016 A1
20160138247 Conway et al. May 2016 A1
20160138248 Conway et al. May 2016 A1
20160138249 Svensson et al. May 2016 A1
20160147408 Bevis et al. May 2016 A1
20160164261 Warren Jun 2016 A1
20160171597 Todeschini Jun 2016 A1
20160171666 McCloskey Jun 2016 A1
20160171720 Todeschini Jun 2016 A1
20160171775 Todeschini et al. Jun 2016 A1
20160171777 Todeschini et al. Jun 2016 A1
20160174674 Oberpriller et al. Jun 2016 A1
20160178479 Goldsmith Jun 2016 A1
20160178685 Young et al. Jun 2016 A1
20160178707 Young et al. Jun 2016 A1
20160178915 Mor et al. Jun 2016 A1
20160179132 Harr et al. Jun 2016 A1
20160179143 Bidwell et al. Jun 2016 A1
20160179368 Roeder Jun 2016 A1
20160179378 Kent et al. Jun 2016 A1
20160180130 Bremer Jun 2016 A1
20160180133 Oberpriller et al. Jun 2016 A1
20160180136 Meier et al. Jun 2016 A1
20160180594 Todeschini Jun 2016 A1
20160180663 McMahan et al. Jun 2016 A1
20160180678 Ackley et al. Jun 2016 A1
20160180713 Bernhardt et al. Jun 2016 A1
20160185136 Ng et al. Jun 2016 A1
20160185291 Chamberlin Jun 2016 A1
20160186926 Oberpriller et al. Jun 2016 A1
20160187186 Coleman et al. Jun 2016 A1
20160187187 Coleman et al. Jun 2016 A1
20160187210 Coleman et al. Jun 2016 A1
20160188861 Todeschini Jun 2016 A1
20160188939 Sailors et al. Jun 2016 A1
20160188940 Lu et al. Jun 2016 A1
20160188941 Todeschini et al. Jun 2016 A1
20160188942 Good et al. Jun 2016 A1
20160188943 Linwood Jun 2016 A1
20160188944 Wilz et al. Jun 2016 A1
20160189076 Mellott et al. Jun 2016 A1
20160189087 Morton et al. Jun 2016 A1
20160189088 Pecorari et al. Jun 2016 A1
20160189092 George et al. Jun 2016 A1
20160189284 Mellott et al. Jun 2016 A1
20160189288 Todeschini Jun 2016 A1
20160189366 Chamberlin et al. Jun 2016 A1
20160189443 Smith Jun 2016 A1
20160189447 Valenzuela Jun 2016 A1
20160189489 Au et al. Jun 2016 A1
20160191684 DiPiazza et al. Jun 2016 A1
20160191801 Sivan Jun 2016 A1
20160192051 DiPiazza et al. Jun 2016 A1
20160125873 Braho et al. Jul 2016 A1
20160202478 Masson et al. Jul 2016 A1
20160202951 Pike et al. Jul 2016 A1
20160202958 Zabel et al. Jul 2016 A1
20160202959 Doubleday et al. Jul 2016 A1
20160203021 Pike et al. Jul 2016 A1
20160203429 Mellott et al. Jul 2016 A1
20160203641 Bostick et al. Jul 2016 A1
20160203797 Pike et al. Jul 2016 A1
20160203820 Zabel et al. Jul 2016 A1
20160204623 Haggert et al. Jul 2016 A1
20160204636 Allen et al. Jul 2016 A1
20160204638 Miraglia et al. Jul 2016 A1
20160210780 Paulovich et al. Jul 2016 A1
20160316190 McCloskey et al. Jul 2016 A1
20160223474 Tang et al. Aug 2016 A1
20160227912 Oberpriller et al. Aug 2016 A1
20160232891 Pecorari Aug 2016 A1
20160292477 Bidwell Oct 2016 A1
20160294779 Yeakley et al. Oct 2016 A1
20160306769 Kohtz et al. Oct 2016 A1
20160314276 Sewell et al. Oct 2016 A1
20160314294 Kubler et al. Oct 2016 A1
20160323310 Todeschini et al. Nov 2016 A1
20160325677 Fitch et al. Nov 2016 A1
20160327614 Young et al. Nov 2016 A1
20160327930 Charpentier et al. Nov 2016 A1
20160328762 Pape Nov 2016 A1
20160328854 Kimura Nov 2016 A1
20160330218 Hussey et al. Nov 2016 A1
20160343163 Venkatesha et al. Nov 2016 A1
20160343176 Ackley Nov 2016 A1
20160364914 Todeschini Dec 2016 A1
20160370220 Ackley et al. Dec 2016 A1
20160372282 Bandringa Dec 2016 A1
20160373847 Vargo et al. Dec 2016 A1
20160377414 Thuries et al. Dec 2016 A1
20160377417 Jovanovski et al. Dec 2016 A1
20170010141 Ackley Jan 2017 A1
20170010328 Mullen et al. Jan 2017 A1
20170010780 Waldron et al. Jan 2017 A1
20170016714 Laffargue et al. Jan 2017 A1
20170018094 Todeschini Jan 2017 A1
20170046603 Lee et al. Feb 2017 A1
20170047864 Stang et al. Feb 2017 A1
20170053146 Liu et al. Feb 2017 A1
20170053147 Geramine et al. Feb 2017 A1
20170053647 Nichols et al. Feb 2017 A1
20170055606 Xu et al. Mar 2017 A1
20170060316 Larson Mar 2017 A1
20170061961 Nichols et al. Mar 2017 A1
20170064634 Van Horn et al. Mar 2017 A1
20170083730 Feng et al. Mar 2017 A1
20170091502 Furlong et al. Mar 2017 A1
20170091706 Lloyd et al. Mar 2017 A1
20170091741 Todeschini Mar 2017 A1
20170091904 Ventress Mar 2017 A1
20170092908 Chaney Mar 2017 A1
20170094238 Germaine et al. Mar 2017 A1
20170098947 Wolski Apr 2017 A1
20170100949 Celinder et al. Apr 2017 A1
20170103545 Holz Apr 2017 A1
20170108838 Todeschini et al. Apr 2017 A1
20170108895 Chamberlin et al. Apr 2017 A1
20170115490 Hsieh et al. Apr 2017 A1
20170115497 Chen et al. Apr 2017 A1
20170116462 Ogasawara Apr 2017 A1
20170118355 Wong et al. Apr 2017 A1
20170121158 Wong May 2017 A1
20170123598 Phan et al. May 2017 A1
20170124369 Rueblinger et al. May 2017 A1
20170124396 Todeschini et al. May 2017 A1
20170124687 McCloskey et al. May 2017 A1
20170126873 McGary et al. May 2017 A1
20170126904 d'Armancourt et al. May 2017 A1
20170132806 Balachandreswaran May 2017 A1
20170139012 Smith May 2017 A1
20170139213 Schmidtlin May 2017 A1
20170140329 Bernhardt et al. May 2017 A1
20170140731 Smith May 2017 A1
20170147847 Berggren et al. May 2017 A1
20170148250 Angermayer May 2017 A1
20170150124 Thuries May 2017 A1
20170018294 Hardy et al. Jun 2017 A1
20170169198 Nichols Jun 2017 A1
20170171035 Lu et al. Jun 2017 A1
20170171703 Maheswaranathan Jun 2017 A1
20170171803 Maheswaranathan Jun 2017 A1
20170180359 Wolski et al. Jun 2017 A1
20170180577 Nguon et al. Jun 2017 A1
20170181299 Shi et al. Jun 2017 A1
20170190192 Delario et al. Jul 2017 A1
20170193432 Bernhardt Jul 2017 A1
20170193461 Jonas et al. Jul 2017 A1
20170193727 Van Horn et al. Jul 2017 A1
20170200108 Au et al. Jul 2017 A1
20170200275 McCloskey et al. Jul 2017 A1
20170200296 Jones et al. Jul 2017 A1
20170309108 Sadovsky et al. Oct 2017 A1
20170336870 Everett et al. Nov 2017 A1
20180018627 Ross Jan 2018 A1
Foreign Referenced Citations (61)
Number Date Country
2004212587 Apr 2005 AU
201139117 Oct 2008 CN
3335760 Apr 1985 DE
10210813 Oct 2003 DE
102007037282 Mar 2008 DE
1111435 Jun 2001 EP
1443312 Aug 2004 EP
1112483 May 2006 EP
1232480 May 2006 EP
2013117 Jan 2009 EP
2216634 Aug 2010 EP
2286932 Feb 2011 EP
2372648 Oct 2011 EP
2381421 Oct 2011 EP
2533009 Dec 2012 EP
2562715 Feb 2013 EP
2722656 Apr 2014 EP
2779027 Sep 2014 EP
2833323 Feb 2015 EP
2843590 Mar 2015 EP
2845170 Mar 2015 EP
2966595 Jan 2016 EP
3006893 Mar 2016 EP
3012601 Mar 2016 EP
3007096 Apr 2016 EP
3270342 Jan 2018 EP
2503978 Jan 2014 GB
2525053 Oct 2015 GB
2531928 May 2016 GB
H04129902 Apr 1992 JP
200696457 Apr 2006 JP
2007084162 Apr 2007 JP
2008210276 Sep 2008 JP
2014210646 Nov 2014 JP
2015174705 Oct 2015 JP
20100020115 Feb 2010 KR
20110013200 Feb 2011 KR
20110117020 Oct 2011 KR
20120028109 Mar 2012 KR
9640452 Dec 1996 WO
0077726 Dec 2000 WO
0114836 Mar 2001 WO
2006095110 Sep 2006 WO
2007015059 Feb 2007 WO
200712554 Nov 2007 WO
2011017241 Feb 2011 WO
2012175731 Dec 2012 WO
2013021157 Feb 2013 WO
2013033442 Mar 2013 WO
2013163789 Nov 2013 WO
2013166368 Nov 2013 WO
2013184340 Dec 2013 WO
2014023697 Feb 2014 WO
2014102341 Jul 2014 WO
2014149702 Sep 2014 WO
2014151746 Sep 2014 WO
2015006865 Jan 2015 WO
2016020038 Feb 2016 WO
2016061699 Apr 2016 WO
2016085682 Jun 2016 WO
Non-Patent Literature Citations (128)
Entry
European Extended Search Report in related EP Application No. 17201794.9, dated Mar. 16, 2018, 10 pages [Only new art cited herein].
European Extended Search Report in related EP Application 17205030.4, dated Mar. 22, 2018, 8 pages.
European Exam Report in related EP Application 16172995.9, dated Mar. 15, 2018, 7 pages (Only new art cited herein).
United Kingdom Combined Search and Examination Report dated Mar. 21, 2018, 5 pages (Art has been previously cited).
European extended Search Report in related Application No. 17207882.6 dated Apr. 26, 2018, 10 pages.
Ulusoy, Ali Osman et al.; “One-Shot Scanning using De Bruijn Spaced Grids”, Brown University; 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, pp. 1786-1792 [Cited in EPO Search Report dated Dec. 5, 2017].
Extended European Search report in related EP Application No. 17189496.7 dated Dec. 5, 2017; 9 pages.
Extended European Search report in related EP Application No. 17190323.0 dated Jan. 19, 2018; 6 pages [Only new art cited herein].
Examination Report in related GB Application No. GB1517843.7, dated Jan. 19, 2018, 4 pages [Only new art cited herein].
Examination Report in related EP Application No. 15190315, dated Jan. 26, 2018, 6 pages [Only new art cited herein].
United Kingdom Further Exam Report in related application GB1607394.2 dated Oct. 5, 2018; 5 pages [Only new art cited herein].
European Extended Search Report in related EP application 18184864.9, dated Oct. 30, 2018, 7 pages.
United Kingdom Further Examination Report in related GB Patent Application No. 1517842.9 dated Jul. 26, 2018; 5 pages [Cited art has been previously cited in this matter].
United Kingdom Further Examination Report in related GB Patent Application No. 1517112.7 dated Jul. 17, 2018; 4 pages [No art cited].
United Kingdom Further Examination Report in related GB Patent Application No. 1620676.5 dated Jul. 17, 2018; 4 pages [No art cited].
Padzensky, Ron; “Augmera; Gesture Control”, Dated Apr. 18, 2015, 15 pages [Examiner Cited Art in Office Action dated Jan. 20, 2017 in related Application.].
Grabowski, Ralph; “New Commands in AutoCAD 2010: Part 11 Smoothing 3D Mesh Objects”, dated 2011 (per examiner who cited reference), 6 pages [Examiner Cited Art in Office Action dated Jan. 20, 2017 in related Application.].
Theodoropoulos, Gabriel; “Using Gesture Recognizers to Handle Pinch, Rotate, Pan, Swipe, and Tap Gestures” dated Aug. 25, 2014, 34 pages, [Examiner Cited Art in Office Action dated Jan. 20, 2017 in related Application.].
Boavida et al., “Dam monitoring using combined terrestrial imaging systems”, 2009 Civil Engineering Survey Dec./Jan. 2009, pp. 33-38 {Cited in Notice of Allowance dated Sep. 15, 2017 in related matter}.
Ralph Grabowski, “Smoothing 3D Mesh Objects,” New Commands in AutoCAD 2010: Part 11, Examiner Cited art in related matter Non Final Office Action dated May 19, 2017; 6 pages.
Wikipedia, “Microlens”, Downloaded from https://en.wikipedia.org/wiki/Microlens, 3 pages. {Feb. 9, 2017 Final Office Action in related matter}.
Fukaya et al., “Characteristics of Speckle Random Pattern and Its Applications”, pp. 317-327, Nouv. Rev. Optique, t.6, n.6. (1975) {Feb. 9, 2017 Final Office Action in related matter: downloaded Mar. 2, 2017 from http://iopscience.iop.org}.
Thorlabs, Examiner Cited NPL in Advisory Action dated Apr. 12, 2017 in related commonly owned application, downloaded from https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=6430, 4 pages.
Eksma Optics, Examiner Cited NPL in Advisory Action dated Apr. 12, 2017 in related commonly owned application, downloaded from http://eksmaoptics.com/optical-systems/f-theta-lenses/f-theta-lens-for-1064-nm/, 2 pages.
Sill Optics, Examiner Cited NPL in Advisory Action dated Apr. 12, 2017 in related commonly owned application, http://www.silloptics.de/1/products/sill-encyclopedia/laser-optics/f-theta-lenses/, 4 pages.
European Extended Search Report in related EP Application No. 16190017.0, dated Jan. 4, 2017, 6 pages.
European Extended Search Report in related EP Application No. 16173429.8, dated Dec. 1, 2016, 8 pages [US 2013/0038881 cited on separate IDS filed concurrently herewith].
Extended European Search Report in related EP Application No. 16175410.0, dated Dec. 13, 2016, 5 pages.
European Extended Search Report in related EP Application 16190833.0, dated Mar. 9, 2017, 8 pages [US Publication 2014/0034731 cited on separate IDS filed concurrently herewith].
United Kingdom Combined Search and Examination Report in related Application No. GB1620676.5, dated Mar. 8, 2017, 6 pages [References cited on separate IDS filed concurrently herewith; WO2014/151746, WO2012/175731, US 2014/0313527, GB2503978].
European Exam Report in related EP Application No. 16168216.6, dated Feb. 27, 2017, 5 pages [cited on separate IDS filed concurrently herewith; WO2011/017241 and US 2014/0104413].
EP Search Report in related EP Application No. 17171844 dated Sep. 18, 2017, 4 pages [Only new art cited herein; some art has been cited on separate IDS filed concurrently herewith].
EP Extended Search Report in related EP Application No. 17174843.7 dated Oct. 17, 2017, 5 pages [Only new art cited herein; some art has been cited on separate IDS filed concurrently herewith].
UK Further Exam Report in related UK Application No. GB1517842.9, dated Sep. 1, 2017, 5 pages (only new art cited herein; some art cited on separate IDS filed concurrently herewith).
European Exam Report in related EP Application No. 15176943.7, dated Apr. 12, 2017, 6 pages [Art cited on separate IDS filed concurrently herewith].
European Exam Report in related EP Application No. 15188440.0, dated Apr. 21, 2017, 4 pages [Art has been cited on separate IDS filed concurrently herewith.].
European Examination report in related EP Application No. 14181437.6, dated Feb. 8, 2017, 5 pages [References cited on separate IDS filed concurrently herewith].
Chinese Notice of Reexamination in related Chinese Application 201520810313.3, dated Mar. 14, 2017, English Computer Translation provided, 7 pages [References cited on separate IDS filed concurrently herewith].
Extended European Search Report in related EP Application 16199707.7, dated Apr. 10, 2017, 15 pages.
Ulusoy et al., One-Shot Scanning using De Bruijn Spaced Grids, 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, 7 pages [Cited in EP Extended search report dated Apr. 10, 2017; NPL 14].
European Exam Report in related EP Application No. 16152477.2, dated Jun. 20, 2017, 4 pages [References cited on separate IDS filed concurrently herewith].
European Exam Report in related EP Application 16172995.9, dated Jul. 6, 2017, 9 pages [References cited on separate IDS filed concurrently herewith].
United Kingdom Search Report in related Application No. GB1700338.5, dated Jun. 30, 2017, 5 pages.
European Search Report in related EP Application No. 17175357.7, dated Aug. 17, 2017, pp. 1-7 [References cited on separate IDS filed concurrently herewith].
EP Search Report in related EP Application No. 17171844 dated Sep. 18, 2017, 4 pages [Only new art cited herein].
EP Extended Search Report in related EP Application No. 17174843.7 dated Oct. 17, 2017, 5 pages [Only new art cited herein].
UK Further Exam Report in related UK Application No. GB1517842.9, dated Sep. 1, 2017, 5 pages (only new art cited herein).
Office Action in counterpart European Application No. 13186043.9 dated Sep. 30, 2015, pp. 1-7.
Lloyd et al., “System for Monitoring the Condition of Packages Throughout Transit”, U.S. Appl. No. 14/865,575, filed Sep. 25, 2015, 59 pages, not yet published.
McCloskey et al., “Image Transformation for Indicia Reading,” U.S. Appl. No. 14/928,032, filed Oct. 30, 2015, 48 pages, not yet published.
Great Britain Combined Search and Examination Report in related Application GB1517842.9, dated Apr. 8, 2016, 8 pages.
Search Report in counterpart European Application No. 15182675.7, dated Dec. 4, 2015, 10 pages.
Wikipedia, “3D projection” Downloaded on Nov. 25, 2015 from www.wikipedia.com, 4 pages.
M. Zahid Gurbuz, Selim Akyokus, Ibrahim Emiroglu, Aysun Guran, "An Efficient Algorithm for 3D Rectangular Box Packing", 2009, Applied Automatic Systems: Proceedings of Selected AAS 2009 Papers, pp. 131-134.
European Extended Search Report in Related EP Application No. 16172995.9, dated Aug. 22, 2016, 11 pages.
European Extended Search Report in related EP Application No. 15190306.9, dated Sep. 9, 2016, 15 pages.
Collings et al., "The Applications and Technology of Phase-Only Liquid Crystal on Silicon Devices", Journal of Display Technology, IEEE Service Center, New York, NY, US, vol. 7, No. 3, Mar. 1, 2011, pp. 112-119.
European Extended Search Report in related EP Application 13785171.3, dated Sep. 19, 2016, 8 pages.
El-Hakim et al., “Multicamera vision-based approach to flexible feature measurement for inspection and reverse engineering”, published in Optical Engineering, Society of Photo-Optical Instrumentation Engineers, vol. 32, No. 9, Sep. 1, 1993, 15 pages.
El-Hakim et al., "A Knowledge-based Edge/Object Measurement Technique", Retrieved from the Internet: URL: https://www.researchgate.net/profile/Sabry_El-Hakim/publication/44075058_A_Knowledge_Based_EdgeObject_Measurement_Technique/links/00b4953b5faa7d3304000000.pdf [retrieved on Jul. 15, 2016], dated Jan. 1, 1993, 9 pages.
H. Sprague Ackley, “Automatic Mode Switching in a Volume Dimensioner”, U.S. Appl. No. 15/182,636, filed Jun. 15, 2016, 53 pages, Not yet published.
Bosch Tool Corporation, “Operating/Safety Instruction for DLR 130”, Dated Feb. 2, 2009, 36 pages.
European Search Report for related EP Application No. 16152477.2, dated May 24, 2016, 8 pages.
Mike Stensvold, "Get the Most Out of Variable Aperture Lenses", published on www.OutdoorPhotographer.com; dated Dec. 7, 2010; 4 pages [As noted on search report retrieved from URL: http://www.outdoorphotographer.com/gear/lenses/get-the-most-out-ofvariable-aperture-lenses.html on Feb. 9, 2016].
Houle et al., "Vehicle Positioning and Object Avoidance", U.S. Appl. No. 15/007,522 [not yet published], filed Jan. 27, 2016, 59 pages.
United Kingdom combined Search and Examination Report in related GB Application No. 1607394.2, dated Oct. 19, 2016, 7 pages.
European Search Report from related EP Application No. 16168216.6, dated Oct. 20, 2016, 8 pages.
Peter Clarke, Actuator Developer Claims Anti-Shake Breakthrough for Smartphone Cams, Electronic Engineering Times, p. 24, May 16, 2011. [Previously cited and copy provided in parent application].
Spiller, Jonathan; Object Localization Using Deformable Templates, Master's Dissertation, University of the Witwatersrand, Johannesburg, South Africa, 2007; 74 pages [Previously cited and copy provided in parent application].
Leotta, Matthew J.; Joseph L. Mundy; Predicting High Resolution Image Edges with a Generic, Adaptive, 3-D Vehicle Model; IEEE Conference on Computer Vision and Pattern Recognition, 2009; 8 pages. [Previously cited and copy provided in parent application].
European Search Report for application No. EP13186043 dated Feb. 26, 2014 (now EP2722656 (Apr. 23, 2014)), 7 pages [Previously cited and copy provided in parent application].
International Search Report for PCT/US2013/039438 (WO2013166368), dated Oct. 1, 2013, 7 pages [Previously cited and copy provided in parent application].
Lloyd, Ryan and Scott McCloskey, "Recognition of 3D Package Shapes for Single Camera Metrology", IEEE Winter Conference on Applications of Computer Vision, IEEE, Mar. 24, 2014, pp. 99-106 [retrieved on Jun. 16, 2014], Authors are employees of common Applicant [Previously cited and copy provided in parent application].
European Office Action for application EP 13186043, dated Jun. 12, 2014 (now EP2722656 (Apr. 23, 2014)), 6 pages [Previously cited and copy provided in parent application].
Zhang, Zhaoxiang; Tieniu Tan, Kaiqi Huang, Yunhong Wang; Three-Dimensional Deformable-Model-based Localization and Recognition of Road Vehicles; IEEE Transactions on Image Processing, vol. 21, No. 1, Jan. 2012, 13 pages. [Previously cited and copy provided in parent application].
U.S. Appl. No. 14/801,023, Tyler Doornenbal et al., filed Jul. 16, 2015, not published yet, Adjusting Dimensioning Results Using Augmented Reality, 39 pages [Previously cited and copy provided in parent application].
Wikipedia, YUV description and definition, downloaded from http://www.wikipedia.org/wiki/YUV on Jun. 29, 2012, 10 pages [Previously cited and copy provided in parent application].
YUV Pixel Format, downloaded from http://www.fource.org/yuv.php on Jun. 29, 2012; 13 pages. [Previously cited and copy provided in parent application].
YUV to RGB Conversion, downloaded from http://www.fource.org/fccyvrgb.php on Jun. 29, 2012; 5 pages [Previously cited and copy provided in parent application].
Benos et al., “Semi-Automatic Dimensioning with Imager of a Portable Device,” U.S. Appl. No. 61/149,912, filed Feb. 4, 2009 (now expired), 56 pages. [Previously cited and copy provided in parent application].
Dimensional Weight—Wikipedia, the Free Encyclopedia, URL=http://en.wikipedia.org/wiki/Dimensional_weight, download date Aug. 1, 2008, 2 pages. [Previously cited and copy provided in parent application].
Dimensioning—Wikipedia, the Free Encyclopedia, URL=http://en.wikipedia.org/wiki/Dimensioning, download date Aug. 1, 2008, 1 page [Previously cited and copy provided in parent application].
European Patent Office Action for Application No. 14157971.4-1906, dated Jul. 16, 2014, 5 pages. [Previously cited and copy provided in parent application].
European Patent Search Report for Application No. 14157971.4-1906, dated Jun. 30, 2014, 6 pages. [Previously cited and copy provided in parent application].
Caulier, Yannick et al., “A New Type of Color-Coded Light Structures for an Adapted and Rapid Determination of Point Correspondences for 3D Reconstruction.” Proc. of SPIE, vol. 8082 808232-3; 2011; 8 pages [Previously cited and copy provided in parent application].
Kazantsev, Aleksei et al., "Robust Pseudo-Random Coded Colored Structured Light Techniques for 3D Object Model Recovery", ROSE 2008 IEEE International Workshop on Robotic and Sensors Environments (Oct. 17-18, 2008), 6 pages [Previously cited and copy provided in parent application].
Mouaddib E. et al. “Recent Progress in Structured Light in order to Solve the Correspondence Problem in Stereo Vision” Proceedings of the 1997 IEEE International Conference on Robotics and Automation, Apr. 1997; 7 pages [Previously cited and copy provided in parent application].
Proesmans, Marc et al. “Active Acquisition of 3D Shape for Moving Objects” 0-7803-3258-X/96 1996 IEEE; 4 pages [Previously cited and copy provided in parent application].
Salvi, Joaquim et al. “Pattern Codification Strategies in Structured Light Systems” published in Pattern Recognition; The Journal of the Pattern Recognition Society, Received Mar. 6, 2003; Accepted Oct. 2, 2003; 23 pages [Previously cited and copy provided in parent application].
EP Search and Written Opinion Report in related matter EP Application No. 14181437.6, dated Mar. 26, 2015, 7 pages. [Previously cited and copy provided in parent application].
Hetzel, Gunter et al.; "3D Object Recognition from Range Images using Local Feature Histograms", Proceedings 2001 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, Hawaii, Dec. 8-14, 2001; pp. 394-399, XP010584149, ISBN: 978-0-7695-1272-3 [Previously cited and copy provided in parent application].
Second Chinese Office Action in related CN Application No. 201520810685.6, dated Mar. 22, 2016, 5 pages, no references. [Previously cited and copy provided in parent application].
European Search Report in related EP Application No. 15190315.0, dated Apr. 1, 2016, 7 pages [Previously cited and copy provided in parent application].
Second Chinese Office Action in related CN Application No. 201520810562.2, dated Mar. 22, 2016, 5 pages, English Translation provided [No references] [Previously cited and copy provided in parent application].
European Search Report for related Application EP 15190249.1, dated Mar. 22, 2016, 7 pages. [Previously cited and copy provided in parent application].
Second Chinese Office Action in related CN Application No. 201520810313.3, dated Mar. 22, 2016, 5 pages. English Translation provided [No references].
U.S. Appl. No. 14/800,757, Eric Todeschini, filed Jul. 16, 2015, not published yet, Dimensioning and Imaging Items, 80 pages [Previously cited and copy provided in parent application].
U.S. Appl. No. 14/747,197, Serge Thuries et al., filed Jun. 23, 2015, not published yet, Optical Pattern Projector; 33 pages [Previously cited and copy provided in parent application].
U.S. Appl. No. 14/747,490, Brian L. Jovanovski et al., filed Jun. 23, 2015, not published yet, Dual-Projector Three-Dimensional Scanner; 40 pages [Previously cited and copy provided in parent application].
Search Report and Opinion in related GB Application No. 1517112.7, dated Feb. 19, 2016, 6 pages [Previously cited and copy provided in parent application].
U.S. Appl. No. 14/793,149, H. Sprague Ackley, filed Jul. 7, 2015, not published yet, Mobile Dimensioner Apparatus for Use in Commerce; 57 pages [Previously cited and copy provided in parent application].
U.S. Appl. No. 14/740,373, H. Sprague Ackley et al., filed Jun. 16, 2015, not published yet, Calibrating a Volume Dimensioner; 63 pages [Previously cited and copy provided in parent application].
Intention to Grant in counterpart European Application No. 14157971.4 dated Apr. 14, 2015, pp. 1-8 [Previously cited and copy provided in parent application].
Decision to Grant in counterpart European Application No. 14157971.4 dated Aug. 6, 2015, pp. 1-2 [Previously cited and copy provided in parent application].
Leotta, Matthew, Generic, Deformable Models for 3-D Vehicle Surveillance, May 2010, Doctoral Dissertation, Brown University, Providence RI, 248 pages [Previously cited and copy provided in parent application].
Ward, Benjamin, Interactive 3D Reconstruction from Video, Aug. 2012, Doctoral Thesis, University of Adelaide, Adelaide, South Australia, 157 pages [Previously cited and copy provided in parent application].
Hood, Frederick W.; William A. Hoff, Robert King, Evaluation of an Interactive Technique for Creating Site Models from Range Data, Apr. 27-May 1, 1997 Proceedings of the ANS 7th Topical Meeting on Robotics & Remote Systems, Augusta GA, 9 pages [Previously cited and copy provided in parent application].
Gupta, Alok; Range Image Segmentation for 3-D Objects Recognition, May 1988, Technical Reports (CIS), Paper 736, University of Pennsylvania Department of Computer and Information Science, retrieved from http://repository.upenn.edu/cis_reports/736, Accessed May 31, 2015, 157 pages [Previously cited and copy provided in parent application].
Reisner-Kollmann, Irene; Anton L. Fuhrmann, Werner Purgathofer, Interactive Reconstruction of Industrial Sites Using Parametric Models, May 2010, Proceedings of the 26th Spring Conference on Computer Graphics SCCG '10, 8 pages [Previously cited and copy provided in parent application].
Drummond, Tom; Roberto Cipolla, Real-Time Visual Tracking of Complex Structures, Jul. 2002, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, No. 7; 15 pages. [Previously cited and copy provided in parent application].
European Search Report for Related EP Application No. 15189214.8, dated Mar. 3, 2016, 9 pages [Previously cited and copy provided in parent application].
Santolaria et al., "A one-step intrinsic and extrinsic calibration method for laser line scanner operation in coordinate measuring machines", dated Apr. 1, 2009, Measurement Science and Technology, IOP, Bristol, GB, vol. 20, No. 4; 12 pages [Previously cited and copy provided in parent application].
Search Report and Opinion in Related EP Application 15176943.7, dated Jan. 8, 2016, 8 pages [Previously cited and copy provided in parent application].
European Search Report for related EP Application No. 15188440.0, dated Mar. 8, 2016, 8 pages. [Previously cited and copy provided in parent application].
United Kingdom Search Report in related application GB1517842.9, dated Apr. 8, 2016, 8 pages [Previously cited and copy provided in parent application].
Great Britain Search Report for related Application No. GB1517843.7, dated Feb. 23, 2016; 8 pages [Previously cited and copy provided in parent application].
Combined Search and Examination Report in related UK Application No. GB1900752.5 dated Feb. 1, 2019, pp. 1-5.
Examination Report in related UK Application No. GB1517842.9 dated Mar. 8, 2019, pp. 1-4.
Examination Report in related EP Application No. 13193181.8 dated Mar. 20, 2019, pp. 1-4.
First Office Action in related CN Application No. 201510860188.1 dated Jan. 18, 2019, pp. 1-14 [All references previously cited.].
Examination Report in related EP Application No. 13785171.3 dated Apr. 2, 2019, pp. 1-5.
Lowe, David G., "Fitting Parameterized Three-Dimensional Models to Images", IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Computer Society, USA, vol. 13, No. 5, May 1, 1991, pp. 441-450.
Combined Search and Examination Report in related UK Application No. GB1817189.2 dated Nov. 14, 2018, pp. 1-4 [Reference previously cited].
Examination Report in related UK Application No. GB1517842.9 dated Dec. 21, 2018, pp. 1-7 [All references previously cited.].
Examination Report in European Application No. 16152477.2 dated Jun. 18, 2019, pp. 1-6.
Examination Report in European Application No. 17175357.7 dated Jun. 26, 2019, pp. 1-5.
Examination Report in European Application No. 19171976.4 dated Jun. 19, 2019, pp. 1-8.
Examination Report in GB Application No. 1607394.2 dated Jul. 5, 2019, pp. 1-4.
Related Publications (1)
Publication Number: 20180018820 A1; Date: Jan 2018; Country: US
Continuations (1)
Parent: U.S. Appl. No. 13/464,799, filed May 2012, US
Child: U.S. Appl. No. 15/718,593, US