Absolute position detection

Information

  • Patent Grant
  • Patent Number
    11,816,856
  • Date Filed
    Friday, April 2, 2021
  • Date Issued
    Tuesday, November 14, 2023
Abstract
A system for determining the absolute position of a first object with respect to a second object includes a scalar element attached to the first object and a measuring device attached to the second object. The scalar element comprises a series of coded regions. Each coded region represents a number designating a position along an axis of the scalar element. The measuring device includes a two-dimensional optical sensor array configured to capture an image of a portion of the scalar element. The system also includes a processor configured to receive the image and determine an absolute position of the first object with respect to the second object based on at least one coded region of the series of coded regions.
Description
BACKGROUND
1. Field

The present disclosure relates generally to determining absolute position along an axis of motion, and more specifically to viewing an optically readable scalar element with a two-dimensional sensor array and determining an absolute position along the axis of motion.


2. Description of Related Art

Relative distance along an axis can be measured using a traditional optical scale. A traditional optical scale may include a series of regularly repeating index marks set apart by a pitch that is a known distance. A relative distance or the position of an object along the scale can be determined by counting the number of index marks and multiplying the count by the known distance of the pitch.
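
For illustration only, the relative-distance calculation described above reduces to a single multiplication. The following minimal Python sketch uses an assumed example pitch of 0.7 mm; the names and values are illustrative, not part of any particular scale.

    # Minimal sketch of a traditional incremental scale readout.
    # The 0.7 mm pitch is an assumed example value.
    PITCH_MM = 0.7  # known distance between adjacent index marks

    def relative_distance_mm(index_count: int) -> float:
        """Relative distance traveled, given the number of index marks counted."""
        return index_count * PITCH_MM

    print(relative_distance_mm(12))  # 12 marks at 0.7 mm -> 8.4 mm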


A traditional optical scale can be used to measure the relative motion between two objects. For example, a traditional optical scale may be attached to a first object in a location that can be viewed with respect to a reference point or indicator on a second object. The position of the second object can be measured with respect to the reference point by counting the number of index marks that pass the reference point. Using this technique, the position along an axis can be measured by determining an absolute home location, usually at one end of the scale, and then counting the number of index marks from the home location. The index count can be stored in computer memory and incremented or decremented depending on the movement of the second object.


One disadvantage to using a traditional scale is that the position of the second object cannot be determined if the index count is lost or accumulates errors. For example, if the index count is lost due to an electrical reset or loss of power, the position of the second object cannot be determined without returning to the home location. Similarly, the index count may be lost if the second object is removed from the first object and returned in a different position along the axis. Because the index count was not incremented or decremented while the second object was removed, the relative position of the second object stored in computer memory is no longer correct. Additionally, errors in the index count can accumulate over time and result in a reported position that is inaccurate.


What is needed is a technique for determining absolute position along an axis of motion without the disadvantages of a traditional optical scale.


BRIEF SUMMARY

The embodiments described herein include a measurement system for measuring the absolute position of a first object with respect to a second object along an axis of motion. The system comprises a scalar element attached to the first object. The scalar element comprises a series of coded regions, each coded region of the series of coded regions having information encoded along a direction perpendicular to the axis of motion. The coded region represents a number designating a position along an axis of the scalar element. A measuring device is attached to the second object and includes a two-dimensional optical sensor array configured to capture an image of a portion of the scalar element. A processor is configured to receive the image and determine an absolute position of the first object with respect to the second object based on one coded region of the series of coded regions. In some embodiments, the two-dimensional optical sensor array is a camera sensor.


In some embodiments, each coded region of the series of coded regions is a binary code that represents a number designating a position along an axis of the scalar element. In some embodiments, the system also includes a display configured to display the absolute position to a user. The system may also include a motion controller configured to receive the absolute position, and to cause a movement of the first object along the axis of motion based on the received absolute position.


In some embodiments, the measuring device is configured to capture a plurality of images as the first or second object moves along the axis of motion, and the processor is configured to determine, in real time, an absolute position for each of the plurality of images.


In some embodiments, each coded region comprises a colored region, and the processor is further configured to determine an absolute position of the first object with respect to the second object based on the colored region. In some cases, each coded region comprises more than one colored region, and the processor is further configured to determine an absolute position of the first object with respect to the second object based on the more than one colored region.


The embodiments described herein include a measurement system for measuring the absolute position of a first object with respect to a second object. The system comprises a scalar element attached to the first object and a measuring device attached to the second object. In one embodiment, the scalar element comprises a series of regularly repeating optically readable index lines, and a series of coded regions, each coded region of the series of coded regions disposed between two index lines of the series of index lines. The coded region represents a number designating a position along an axis of the scalar element. In another embodiment, only the coded regions are provided.


The measuring device includes a two-dimensional optical sensor array configured to capture an image of a portion of the scalar element. The system also includes a processor configured to receive the image and determine an absolute position of the first object with respect to the second object. In one embodiment, the position is based on at least one index line of the series of index lines and at least one coded region of the series of coded regions. Where only coded regions are provided, the position is based on an image of a coded region.





DESCRIPTION OF THE FIGURES


FIG. 1 depicts a system for determining an absolute position.



FIG. 2 depicts an exemplary embodiment of a scalar element.



FIGS. 3A-C depict portions of a scalar element.



FIGS. 4A-C depict exemplary images of portions of a scalar element.



FIG. 5 depicts another exemplary embodiment of a scalar element.



FIG. 6 depicts a method for determining absolute position using a measuring device and a scalar element.



FIG. 7 depicts exemplary dimensions of a scalar element.



FIG. 8 depicts an exemplary caliper.



FIG. 9 depicts a table showing an exemplary binary coding scheme using a graphical representation of binary numbers in a sequence.





DETAILED DESCRIPTION

The following description is presented to enable a person of ordinary skill in the art to make and use the various embodiments. Descriptions of specific devices, techniques, and applications are provided only as examples. Various modifications to the examples described herein will be readily apparent to those of ordinary skill in the art, and the general principles defined herein may be applied to other examples and applications without departing from the spirit and scope of the various embodiments. Thus, the various embodiments are not intended to be limited to the examples described herein and shown, but are to be accorded the scope consistent with the claims.


The following embodiments relate to the measurement of an absolute position between two objects that are capable of moving with respect to each other, at least with respect to one axis. One object may be fixed with respect to ground and the other object may move along an axis of motion. The objects may be moved manually or may be driven by a motor or other electromechanical device.


One exemplary embodiment is depicted in the measurement system of FIG. 1. The measurement system 100 includes a first object 102 and a second object 104. In the present embodiment, the first object 102 is fixed with respect to ground and the second object 104 is able to move along an axis (the x-axis). The second object 104 may be constrained by a rail or mechanical guide attached to a third object 106, which is fixed with respect to ground. The range of motion of the second object 104 may be limited by hard stops on either end of the third object 106.


A scalar element 110 is attached to the top face of the first object 102. An exemplary scalar element 110 is described in more detail below with respect to FIG. 2. A measuring device 120 is attached to the bottom face of the second object 104 and disposed above the scalar element 110. In the present embodiment, the measuring device 120 remains disposed over the scalar element 110 throughout the range of motion of the second object 104.


The measuring device 120 includes a two-dimensional optical sensor array 122 for capturing an image of a portion of the scalar element 110. Specifically, the optical sensor array 122 has an optical field of view that is sufficiently wide to view one or more optical features of the scalar element 110. In the present embodiment, the optical sensor array 122 is a charge-coupled device (CCD) capable of producing an electrical signal in response to light incident on the surface of the CCD. The measuring device 120 may include one or more optical elements (e.g., lenses) for focusing light onto the CCD. The measuring device 120 may also include one or more lighting elements 124 for illuminating the surface of the scalar element 110. For example, the lighting elements 124 may include one or more light emitting diodes (LEDs) configured to emit an illuminating light over the portion of the scalar element 110. The measuring device 120 is capable of producing an image of a portion of the scalar element 110 and outputting the image as an array of pixel values.


The measuring device 120 may also include or be operatively coupled to one or more processors for interpreting the array of pixel values and determining an absolute position of the second object 104 with respect to the first object 102. A more detailed discussion of the image processing technique is provided below with respect to process 1000 depicted in FIG. 6.



FIG. 2 depicts a top view of an exemplary scalar element 110. The scalar element 110 includes a series of regularly repeating optically readable index lines 112. In the present embodiment, the index lines 112 are vertical lines spaced at a 0.7 mm pitch. The scalar element 110 also includes a series of coded regions 114 disposed between each pair of index lines 112. In the present embodiment, the coded region 114 includes a binary code 116, which represents a number value designating an absolute position of the binary region with respect to the scalar element 110.


In one embodiment, each binary code 116 represents a value designating the number of units of a known distance (or number of index lines 112 having a known spacing) from a home location on the scalar element 110. In FIG. 2, the home location is represented by the index line labeled 0%. Starting from the left, the first coded region includes a binary code representing the number value 0, indicating the starting location or home position. The next coded region to the right of the first coded region includes a binary code representing the number value 1, indicating that this location is spaced from the home position by one known distance unit. Similarly, each coded region 114 of the series of coded regions includes a binary code 116 representing a value that indicates the number of units of a known distance (or the number of index lines 112 having a known spacing) between the coded region 114 and the home location.
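
As a hedged illustration of this numbering scheme (the 8-bit word width, the 0.7 mm pitch, and the function names below are assumptions for the example, not requirements of the embodiment), the pattern for each coded region can be generated directly from its count of pitch units from the home location:

    # Sketch: coded region i carries a binary code for i, the count of
    # known-distance units from the home location. Word width and pitch
    # are assumed example values.
    PITCH_MM = 0.7
    WORD_BITS = 8

    def code_for_region(region_index: int) -> str:
        """Binary word stored in the coded region located region_index pitches from home."""
        if not 0 <= region_index < 2 ** WORD_BITS:
            raise ValueError("region index exceeds the capacity of the code word")
        return format(region_index, f"0{WORD_BITS}b")

    # Region 0 is the home position; region 5 lies 5 * 0.7 mm = 3.5 mm from home.
    print(code_for_region(0))  # '00000000'
    print(code_for_region(5))  # '00000101'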


The embodiment depicted in FIG. 2 is exemplary and other techniques could be used to designate a number value within the coded regions. For example, in an alternative embodiment, the coded regions could include a shaded region having a grey scale value or color value that represents a number value indicating the number of units of distance from a particular location on the scalar element 110. In another alternative embodiment, the coded region could include both a binary code and a grey scale value or color value that represents a number value. The coded region may also include other information, such as error correction bits or optical reference marks used to register an image.


Since the binary code 116 can be used to represent a number of units of distance from the origin, the system does not require index lines 112. In practice, index lines are preferable since they provide a simple way of extending the length of the scale by a factor of two. Also, the index lines have a high contrast that can be more easily measured. It should be noted that the extended length index lines depicted in FIG. 2 (corresponding to the numbers 0, 10, 20, 30, and 40) are not required and are used to aid human observation.


Note also that it is possible that the binary code 116 can be arranged to correspond to an actual, absolute total distance from the origin rather than a unit distance multiplier.


In FIG. 2, the pattern on the scalar element is arranged along a straight line and attached to a flat surface. In alternative embodiments, the pattern on the scalar element could be arranged along a curve and the scalar element could be attached to an arced, helical, or other topographically shaped surface.


As discussed above, the measuring device 120 includes a two-dimensional optical sensor array configured to capture an image of a portion of the scalar element 110. With respect to the present embodiment, the two-dimensional optical sensor array is a black and white CCD camera sensor coupled with an optical element configured to produce an image representing an approximately 0.7 mm square portion of the scalar element 110.



FIGS. 3A-C depict exemplary portions of a scalar element as viewed by the measuring device. FIGS. 3A-C represent an exemplary optical field of view, which is typically larger than the size of the image produced by the camera sensor. FIGS. 4A-C depict images of the portion of the scalar element produced by the measuring device. The images depicted in FIGS. 4A-C correspond to the portions of the scalar element depicted in FIGS. 3A-C.


As shown in FIGS. 3A-C, the optical field of view of the measuring device 120 is sufficiently large to view at least one index line 112 and at least one coded region 114 having a binary code 116. Accordingly, as shown in FIGS. 4A-C, the field of view of the measuring device 120 is sufficiently large to produce an image having pixel data for at least one index line 112 and at least one coded region 114 (including binary code 116) regardless of the position of the measuring device 120 with respect to the scalar element 110. In some embodiments, the image is larger and includes three features of the scalar element 110: one index line 112, one coded region 114, and one more feature that is either an index line 112 or a coded region 114. In embodiments that do not include index lines, only the binary code 116 portion of the coded region 114 need be captured in the image.


As shown in FIGS. 4A-C, an image produced by the measuring device 120 is composed of a two-dimensional array of pixels. As explained in more detail below with respect to process 1000, image processing may be performed on the image to improve the quality of the pixel data. For example, a threshold filter or other image processing technique may be applied to the acquired image to convert the image into an array of black and white (on or off) pixel data. Further processing may be performed to group pixels that represent a feature on the scalar element 110 and to determine the position of the pixel groups within the image.
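
A minimal sketch of this thresholding and pixel-grouping step is shown below. The fixed threshold value and the use of scipy.ndimage for connected-component labeling are illustrative choices under stated assumptions, not the required implementation.

    # Sketch of thresholding an acquired image and grouping "on" pixels into
    # features. Threshold value and library choice are assumptions.
    import numpy as np
    from scipy import ndimage

    def find_pixel_groups(image: np.ndarray, threshold: int = 128):
        """Binarize a grayscale image and return one (row, col) centroid per dark feature."""
        binary = image < threshold             # dark marks on a light scale background
        labels, count = ndimage.label(binary)  # group connected pixels into features
        centroids = []
        for feature in range(1, count + 1):
            rows, cols = np.nonzero(labels == feature)
            centroids.append((rows.mean(), cols.mean()))
        return centroids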


Within the image, the position information of the pixel groups can be used to improve the precision of the position of the measuring device 120 with respect to the scalar element 110. For example, as shown in FIGS. 4A-C, the center of a pixel group 404 can be determined by averaging the location of pixels within the group along an axis of the image. The center of the image 406 can also be determined. Using the center of the pixel group 404 and the center of the image 406, an offset between the coded region and the optical center of the measuring device 120 can be determined. The offset can then be added to or subtracted from the absolute position indicated by the coded region to improve the precision of the measurement. The same technique can be applied to pixel groups that represent a coded region and pixel groups that represent an index line.
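
For example, a hedged sketch of the offset computation is given below; the pixel-to-millimeter calibration constant and the function names are assumptions for illustration.

    # Sketch of the offset between a pixel group's centroid and the optical
    # center of the image. MM_PER_PIXEL is an assumed calibration constant.
    import numpy as np

    MM_PER_PIXEL = 0.005

    def offset_from_center_mm(group_pixels_x: np.ndarray, image_width: int) -> float:
        """Offset along the axis of motion, in mm, of a pixel group from the image center."""
        group_center_x = group_pixels_x.mean()
        image_center_x = (image_width - 1) / 2.0
        return (group_center_x - image_center_x) * MM_PER_PIXEL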


In the present embodiment, the index lines 112 have a width of approximately 0.1 mm and are spaced approximately 0.7 mm apart. The binary code 116 includes binary elements that are approximately 0.1 mm wide and are also spaced approximately 0.7 mm apart. The binary elements of the binary code are arranged along a direction that is perpendicular to the axis of motion between the scalar element 110 and the measuring device 120. More generally, the coded region typically includes information that is encoded along a direction perpendicular to the axis of motion. In the present embodiment, the information is encoded only along a direction that is perpendicular to the axis of motion.



FIG. 7 depicts other exemplary dimensions for an exemplary scalar element 510. A full view of the scalar element 510 is depicted on the left-hand side of FIG. 7 and indicates a 70 mm working length of the scalar element 510 that includes index lines 512 and coded regions 514. The two detail views (Detail A and Detail B) depict left and right portions of the scalar element 510, respectively. As shown in FIG. 7, the index line 512 is 1 mm long and the coded region 514 is 0.8 mm long.


With respect to FIGS. 2, 3A-C, and 7, the position of the binary elements within the coded region represents the bit location within an 8-bit word. The color of the binary element represents the bit value (0 or 1) for the corresponding bit location. For example, the top or first location represents the first bit in the word. A binary element in the first location indicates a binary value of 1 for the first bit, a binary element in the second location indicates a binary value of 1 for the second bit, and so on. FIG. 9 depicts an exemplary binary coding scheme using a graphical representation of binary numbers in a sequence. The example in FIG. 9 uses a graphical binary element placed in one of 8 locations to create an 8-bit coding scheme. Additional bits could be accommodated by expanding the number of locations or by using color-coded elements, as described in more detail below with respect to FIG. 5.
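
The following sketch decodes this location-based scheme; treating the first (top) location as the least significant bit is an assumption made for the example, and the set representation of detected elements is likewise illustrative.

    # Sketch of the location-based coding of FIG. 9: a graphical element present
    # at bit location k sets bit k of the word to 1. Treating the first (top)
    # location as the least significant bit is an assumption.
    def decode_locations(occupied_locations: set, word_bits: int = 8) -> int:
        """Convert the set of occupied bit locations (0-based) into the encoded number."""
        value = 0
        for k in occupied_locations:
            if not 0 <= k < word_bits:
                raise ValueError(f"bit location {k} is outside the {word_bits}-bit word")
            value |= 1 << k
        return value

    print(decode_locations({0, 2}))  # elements in the first and third locations -> 5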


As described in more detail below with respect to process 1000, the index lines and the coded regions can both be used to determine an absolute position along the scalar element. Specifically, the coded region represents a number value indicating the number of index lines from a known location on the scalar element. As previously mentioned, the position of the index lines within the image allows the system to determine a precise position of the measuring device with respect to the scalar element by indicating the relative location of the coded region within the image. Combining the information provided by the coded region with the information provided by the location of the index lines, the system can determine the absolute position of the measuring device with respect to the scalar element. As noted above, in some embodiments, the absolute position can be determined using only the coded region without reference to index lines.


The coded region and the index lines can also be used together to produce rapid position feedback between the first and second objects. For example, the index lines can be used to count the number of steps during a rapid motion between the two objects. The count of the number of index lines can be used to determine the magnitude of the rapid motion and can be used as position feedback for a motion control system, for example. The coded region can then be used near the end or at the end of the movement to verify or correct the magnitude of the rapid motion and provide an absolute position of the second object with respect to the first object.


By having the coded regions encoded with information along a direction that is perpendicular to the axis of motion, additional advantages may be realized. Specifically, the coded regions can be used to count the number of steps as described above with respect to the index lines. Accordingly, the coded region can serve a dual role as both an indicator for counting relative motion and as a representation of the absolute location.


In an alternative embodiment, the measuring device may include a color camera sensor and the scalar element may include one or more color-coded regions that provide additional information about the location of the coded regions with respect to a known location on the scalar element. FIG. 5 depicts another exemplary scalar element 510. As shown in FIG. 5, the scalar element 510 includes a color-coded region 118 that, together with other portions of the coded regions 114, represents a number indicating the number of index lines from a known location on the scalar element. For example, the color of the color-coded region 118 may indicate an additional bit of information used to determine the number represented by the coded regions 114.


The location of the color-coded regions 118 may provide additional information used to determine the distance from the home position. For example, the position of the color-coded region 118 may designate additional bits of information that can be used to determine the number of units of distance from the home position. The color-coded region may be a portion of the coded region or the entire background color of the coded region, or both.


In the embodiment of FIG. 5, the color coded region 118 is aligned with the first binary bit. One can obtain a full extra eight bits of information by moving the region around, adding regions and/or expanding the region. In this regard, the color regions could be laid out in much the same way as the bits are shown in the table in FIG. 9. For example, decimal 5 would include color strips aligned with the location corresponding to the first and third binary bits, while decimal 7 could include a broad color strip aligned with the location corresponding to the first three binary bits.


The above concept can be expanded to include stripes having different colors, such as red, blue, and green. One advantage of using multiple color-coded regions is that the amount of information that can be encoded in the coded region can be significantly expanded. For example, if n colors are used, the color region can be used to represent base-n number sequences. In this way, the color and location of the one or more color-coded regions can be used to expand the amount of information contained on the scalar element 110 without increasing the width of the scalar element or the field of view of the measuring device 120.
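
As a hedged sketch of this multi-color extension (the digit ordering and the palette indexing are assumptions for illustration), each stripe location can be read as one digit of a base-n number:

    # Sketch: the stripe color at each location is one digit of a base-n number.
    # Treating the first location as the least significant digit is an assumption.
    def decode_color_digits(color_indices: list, n_colors: int) -> int:
        """Interpret per-location color indices (0..n_colors-1) as a base-n value."""
        value = 0
        for place, digit in enumerate(color_indices):
            if not 0 <= digit < n_colors:
                raise ValueError("color index outside the palette")
            value += digit * n_colors ** place
        return value

    # Three locations with three colors (e.g. red=0, green=1, blue=2) give 3**3 = 27 codes.
    print(decode_color_digits([2, 0, 1], n_colors=3))  # 2 + 0*3 + 1*9 = 11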


The features described with respect to the measuring device 120 and the scalar element 110 can be used in various combinations to determine an absolute position of a first object with respect to a second object. In addition, the particular configuration may vary without departing from the nature of the measurement system 100. For example, the scalar element 110 may be attached to the second object 104 and the measuring device 120 may be attached to the first object 102.


The measuring system described in the embodiments above can be implemented in multiple ways to determine the absolute position of a first object with respect to a second object. For example, the first object may include a base stage element in a gantry robot system. The second object may include a movable armature that is able to traverse with respect to the base stage element. Accordingly, a measurement system in accordance with the embodiments herein can be used to determine an absolute position of the armature with respect to the base stage element. The gantry robot system may include motion controller electronics for controlling motors for moving the armature. The motion controller electronics may use the absolute position of the armature as position feedback for controlling the motion and positioning the armature. As described below with respect to process 1000, the measurement system can be used to calculate an absolute position in real time as the measuring device (or scalar element) is moved, which is advantageous in providing rapid and accurate position feedback to motion controller electronics.
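
As a purely illustrative sketch (not the controller disclosed here), the absolute position could feed a simple proportional motion-control step; the gain value and the command units are assumptions.

    # Illustrative proportional feedback step using the measured absolute
    # position; gain and units are assumed example values.
    def proportional_velocity_command(target_mm: float, measured_absolute_mm: float,
                                      gain_per_s: float = 5.0) -> float:
        """Velocity command (mm/s) proportional to the remaining position error."""
        return gain_per_s * (target_mm - measured_absolute_mm)

    print(proportional_velocity_command(50.0, 48.5))  # 1.5 mm error -> 7.5 mm/s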


Another exemplary embodiment is described with respect to a bottle dispenser with a digital volume display. A description of a bottle dispenser embodiment is attached as Appendix A and incorporated by reference herein in its entirety. Another description of a bottle dispenser embodiment is attached as Appendix B and incorporated by reference herein in its entirety. A description of the bottle dispenser embodiment is also provided in published application WO/2012/103870 which is incorporated by reference herein in its entirety. In these embodiments, the distance measured by the system is converted into a volume of dispensed fluid.



FIG. 6 depicts a flow chart for an exemplary process 1000 for determining absolute position using a measuring device and a scalar element. The process 1000 may be implemented as computer-readable instructions executed on one or more computer processors.


In operation 1002, an image is acquired using the measuring device. As described above with respect to FIGS. 4A-C, the measuring device 120 can be used to produce an image of a portion of the scalar element 110. The image includes at least one coded region 114. In some embodiments the image includes at least one index line 112 and at least one coded region 114.


In operation 1004, image processing is performed on the acquired image. For example, a threshold filter or other image processing technique may be applied to the acquired image to convert the image data to binary values for each pixel in the image. Additional image processing may be performed to determine pixel groups and the shape and location of the pixel groups within the image. The location of the pixel groups may be representative of the location of the index lines and the coded regions of the scalar element with respect to the measuring device.


In operation 1006, the pixel groups of the image representing one or more coded regions are used to determine a number value. As discussed above with respect to FIG. 2, the value may indicate the number of units of distance (or number of index lines of known spacing) from a known or home position on the scalar element. In one example, the coded region includes a binary code having binary elements that are positioned within the coded region. The number and position of the binary elements can be used to determine the number value.


In operation 1008, an offset of the coded region is determined within the image. As shown in FIGS. 4A-C, the position of the coded region may vary within the captured image. In one example, one or more pixel groups in the image may represent one or more index lines and can be used to determine the center of the pixel group representing the coded region within the image. This position information can be used, for example, to determine an offset between the center of the coded region and the center of the image.


In operation 1010, an absolute position is determined using the number and the position of the coded region within the image. For example, the value may represent the number of units of distance (or the number of index lines of known spacing) from a known position along the scalar element. By multiplying the number of units by the known distance or spacing between coded regions, an absolute position of the measuring device can be determined. The accuracy of the absolute position can then be improved by, for example, adding (or subtracting) the offset between the center of the coded region and the center of the image determined in operation 1008.
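
Putting operations 1006 through 1010 together, a minimal sketch of the position formula is shown below; the pitch value, the function names, and the sign convention for the offset are assumptions for illustration.

    # Sketch combining operations 1006-1010: decoded number times the known
    # pitch, refined by the offset of the coded region within the image.
    # Whether the offset is added or subtracted depends on sensor orientation.
    PITCH_MM = 0.7

    def absolute_position_mm(decoded_number: int, offset_mm: float) -> float:
        return decoded_number * PITCH_MM + offset_mm

    print(absolute_position_mm(12, 0.05))  # 12 * 0.7 + 0.05 = 8.45 mm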


Process 1000 is typically repeated as the measuring device and the scalar element are moving with respect to each other. In some embodiments, a plurality of images are captured as the measuring device and the scalar element are moving with respect to each other. By processing multiple captured images, the absolute position can be calculated in real time as the measuring device and the scalar element are moving with respect to each other.


As previously mentioned, in some cases, the absolute position is provided to a motion control system as position feedback. In some cases the absolute position is displayed to a user on, for example, a digital read out display or a computer monitor display.


The measurement system 100 described with respect to FIG. 1 can be set to produce a position with respect to a home or reference position that does not coincide with the end of the scalar element. For example, the second object 104 may be moved to one end of the range of travel. A proximity switch or hard stop can be used to determine the home or reference location. The position of the second object 104 may be set to zero at this location. The absolute position may be determined using, for example, process 1000 described above. The difference between the reference position (zero) and the absolute position at that location can be stored and added to (or subtracted from) subsequent measurements of absolute position to determine a distance from the home or reference position.
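
A minimal sketch of this re-zeroing procedure is shown below; the class and method names are assumptions for illustration.

    # Sketch of re-zeroing: store the absolute reading taken at a chosen home
    # location and subtract it from later readings.
    class ZeroedScale:
        def __init__(self) -> None:
            self.reference_mm = 0.0

        def set_home(self, absolute_mm: float) -> None:
            """Store the absolute reading taken at the home or reference location."""
            self.reference_mm = absolute_mm

        def position_from_home(self, absolute_mm: float) -> float:
            return absolute_mm - self.reference_mm

    scale = ZeroedScale()
    scale.set_home(8.5)                    # absolute reading at the hard stop
    print(scale.position_from_home(15.5))  # reported as 7.0 mm from home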


The measurement system 100 described with respect to FIG. 1 can also be calibrated to provide a repeatable task or motion. For example, the second object may be attached to a piston used to deliver a quantity of liquid. To calibrate the system, the second object may be moved to a first position where the piston motion should begin. The second object is then moved to a second position where the piston motion should end. The amount of liquid displaced or dispensed by the piston can be measured, and the first and second absolute positions can be stored. A simple linear relationship can then be determined between absolute position and the amount of liquid displaced or dispensed by the piston.
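
For example, a hedged sketch of that linear relationship (the units and example numbers are assumptions) is:

    # Sketch of the linear calibration: two absolute positions and the measured
    # dispensed volume define a volume-per-millimeter slope.
    def volume_per_mm(pos_start_mm: float, pos_end_mm: float, measured_volume_ml: float) -> float:
        return measured_volume_ml / (pos_end_mm - pos_start_mm)

    def dispensed_volume_ml(pos_start_mm: float, pos_current_mm: float, slope_ml_per_mm: float) -> float:
        return (pos_current_mm - pos_start_mm) * slope_ml_per_mm

    slope = volume_per_mm(2.0, 52.0, 25.0)        # 25 mL over 50 mm -> 0.5 mL/mm
    print(dispensed_volume_ml(2.0, 22.0, slope))  # 20 mm of travel -> 10.0 mL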


A more detailed discussion of such a system is provided in a description of a bottle dispenser embodiment that is attached as Appendices A and B.



FIG. 8 is an illustration of a caliper 800 modified using the position detection system of the subject invention. The caliper includes a pair of jaws 802 and 804. When the jaws are opened to obtain a measurement, the left jaw 802 remains stationary and the right jaw 804 moves to the right. This movement causes the position detection electronics housing 806 to move to the right over stationary ruler 808. Ruler 808 includes a scalar element 810. As in the previous embodiments, the scalar element will include a series of coded regions. The scalar element can also include index marks.


Housing 806 includes a camera 812 aligned with the scalar element 810. As in the previous embodiments, in order to determine the spacing of the caliper jaws, the camera obtains an image of the coded region. The coded region provides information about the distance from the start or home position. Any of the various approaches for encoding the coded region discussed above can be used. For example, the coded regions can be in the form of actual distances from the home position or can be a number which is multiplied by a fixed distance.


In the illustrated embodiment, housing 806 includes a display 814 to show the distance from the home position (jaws closed). A single switch 816 is provided for turning on the electronics and for toggling between inches and millimeters in the display.


A ten-bit binary encoding system would provide over 1000 unique binary codes spaced apart at 0.1 mm to cover 100 mm of caliper separation. Adding index lines can double that range. Various approaches for printing these types of closely spaced codes can be used, including lithographic printing. As in the previous embodiments, the location of the coded region within the two-dimensional image generated by the camera can be used to provide position information with a higher resolution than the spacing between the coded regions.
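
The capacity arithmetic behind that statement can be checked with a short sketch; the doubling factor attributed to index lines follows the earlier discussion of FIG. 2 and is an approximation.

    # Ten-bit code: 2**10 = 1024 unique values; one code every 0.1 mm spans
    # 102.4 mm, and interleaved index lines can roughly double the usable range.
    WORD_BITS = 10
    CODE_SPACING_MM = 0.1

    unique_codes = 2 ** WORD_BITS                # 1024
    range_mm = unique_codes * CODE_SPACING_MM    # 102.4 mm
    range_with_index_lines_mm = range_mm * 2     # about 204.8 mm
    print(unique_codes, range_mm, range_with_index_lines_mm)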


It is envisioned that during the final manufacturing steps, the device will be subjected to a one-time calibration procedure. Specifically, the jaws will be placed in the closed position and the aligned coded region (which is preferably spaced from the end of the scalar element) is detected. The detected coded region will become the effective home position and will be uploaded to the software in the processor (not shown) in the housing. This value would then be subtracted from any value measured by the system when the jaws are separated.

Claims
  • 1. A measurement system for measuring the absolute position of a first object with respect to a second object along an axis of motion representing a direction of relative motion between the first object and the second object, the system comprising: a scalar element attached to the first object, the scalar element comprising a series of coded regions positioned along an axis of the scalar element parallel to the axis of motion, each coded region of the series of coded regions having a pattern with a dimension along a direction perpendicular to the axis of motion, wherein the variation of the pattern along the direction perpendicular to the axis of motion encodes information that represents a number designating an absolute position of the pattern along the axis of the scalar element; a measuring device attached to the second object, the measuring device including a two-dimensional optical sensor array configured to capture an image of a portion of the scalar element, the image including a first pattern in one coded region of the series of coded regions; and a processor configured to: receive the image; determine the absolute position of the first pattern along the axis of the scalar element based on the information encoded by the first pattern; determine an offset between a center region of the image and the one coded region; and determine an absolute position of the first object with respect to the second object based on the absolute position of the first pattern and the offset between the center region of the image and the one coded region of the series of coded regions.
  • 2. The measurement system of claim 1, wherein each coded region of the series of coded regions is a binary code that represents a number designating a position along an axis of the scalar element.
  • 3. The measurement system of claim 1, further comprising a display configured to display the absolute position to a user.
  • 4. The measurement system of claim 1, further comprising a motion controller configured to receive the absolute position, and to cause a movement of the first object along the axis of motion based on the received absolute position.
  • 5. The measurement system of claim 1, wherein the measuring device is configured to capture a plurality of images as the first or second object move along the axis of motion and the processor is configured to determine, in real time, an absolute position for each of the plurality of images.
  • 6. The measurement system of claim 1, wherein each coded region comprises a colored region, and the processor is further configured to determine an absolute position of the first object with respect to the second object based on the colored region.
  • 7. The measurement system of claim 1, wherein each coded region comprises more than one colored regions, and the processor is further configured to determine an absolute position of the first object with respect to the second object based on the more than one colored region.
  • 8. The measurement system of claim 1, wherein the two-dimensional optical sensor array is a camera sensor.
  • 9. A measurement system for measuring the absolute position of a first object with respect to a second object along an axis of motion representing a direction of relative motion between the first object and the second object, the system comprising: a scalar element attached to the first object, the scalar element comprising: a series of regularly repeating optically readable index lines positioned along an axis of the scalar element parallel to the axis of motion, and a series of coded regions positioned along the axis of the scalar element, each coded region of the series of coded regions disposed between two index lines of the series of index lines, wherein each coded region includes a pattern with a dimension along a direction perpendicular to the axis of motion, and wherein the variation of the pattern along the direction perpendicular to the axis of motion encodes information that represents a number designating an absolute position of the pattern along the axis of the scalar element; a measuring device attached to the second object, the measuring device including a two-dimensional optical sensor array configured to capture an image of a portion of the scalar element, the image including a first pattern in one coded region of the series of coded regions; and a processor configured to: receive the image; determine the absolute position of the first pattern along the axis of the scalar element based on the information encoded by the first pattern; determine an offset between a center region of the image and the one coded region; and determine an absolute position of the first object with respect to the second object based on at least one index line of the series of index lines, the absolute position of the first pattern, and the offset between the center region of the image and the one coded region of the series of coded regions.
  • 10. The measurement system of claim 9, wherein each coded region of the series of coded regions is a binary code that represents a number designating a position along an axis of the scalar element.
  • 11. The measurement system of claim 9, further comprising a display configured to display the absolute position to a user.
  • 12. The measurement system of claim 9, further comprising a motion controller configured to receive the absolute position, and to cause a movement of the first object along the axis of motion based on the received absolute position.
  • 13. The measurement system of claim 9, wherein the measuring device is configured to capture a plurality of images as the first or second object move along the axis of motion and the processor is configured to determine, in real time, an absolute position for each of the plurality of images.
  • 14. The measurement system of claim 9, wherein each coded region comprises a colored region disposed between two index lines of the series of index lines, and the processor is further configured to determine an absolute position of the first object with respect to the second object based on the colored region.
  • 15. The measurement system of claim 9, wherein each coded region comprises more than one colored regions disposed between two index lines of the series of index lines, and the processor is further configured to determine an absolute position of the first object with respect to the second object based on the more than one colored region.
  • 16. The measurement system of claim 9, wherein the image of a portion of the scalar element includes a first index line, a first coded region, and an additional feature that is either a second index line or a second coded region.
  • 17. The measurement system of claim 9, wherein the two-dimensional optical sensor array is a camera sensor.
  • 18. A method of determining absolute position of a first object with respect to a second object along an axis of motion representing a direction of relative motion between the first object and the second object, the method comprising: acquiring an image of a portion of a scalar element attached to the first object using a measuring device attached to the second object, wherein the scalar element comprises a series of coded regions positioned along an axis of the scalar element parallel to the axis of motion, each coded region of the series of coded regions having a pattern with a dimension along a direction perpendicular to the axis of motion, wherein the variation of the pattern along the direction perpendicular to the axis of motion encodes information that represents a number designating an absolute position of the pattern along an axis of the scalar element, and wherein the measuring device includes a two-dimensional optical sensor array configured to capture an image of a portion of the scalar element, the image including a first pattern in one coded region of the series of coded regions; performing image processing on the acquired image to obtain a representation of the one coded region; determining a number value represented by the representation of the one coded region, wherein the number value defines the absolute position of the first pattern along the axis of the scalar element based on the information encoded by the first pattern; determining an offset between a center region of the image and the one coded region; and determining an absolute position of the first object with respect to the second object along the scalar element based on the number value defining the absolute position of the first pattern and the offset between the center region of the image and the one coded region of the series of coded regions.
  • 19. The method of claim 18, further comprising displaying the absolute position to a user.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation of patent application Ser. No. 15/884,171, filed Jan. 30, 2018, which is a continuation of U.S. patent application Ser. No. 13/918,704, filed Jun. 14, 2013, which claims the benefit under 35 USC 119(e) of prior U.S. Provisional Patent Application No. 61/678,581, filed Aug. 1, 2012, and prior U.S. Provisional Patent Application No. 61/660,614, filed Jun. 15, 2012, each of which is hereby incorporated by reference in its entirety.

US Referenced Citations (113)
Number Name Date Kind
2300110 De Oct 1942 A
2959862 Ernst Nov 1960 A
3126638 Wickert et al. Mar 1964 A
3143252 Shapiro Aug 1964 A
3211335 Shapiro Oct 1965 A
3248950 Pursell et al. May 1966 A
3285123 Kurt et al. Nov 1966 A
3327900 Goda Jun 1967 A
3337095 Marbach et al. Aug 1967 A
3452901 Roach Jul 1969 A
3653556 Moran et al. Apr 1972 A
3766355 Kottkamp Oct 1973 A
3805998 Croslin Apr 1974 A
3807517 Freeman Apr 1974 A
3831602 Broadwin Aug 1974 A
3863807 Shapiro et al. Feb 1975 A
3940027 Marterer Feb 1976 A
3987934 Reed et al. Oct 1976 A
4003499 Shapiro et al. Jan 1977 A
4004969 Beauchemin Jan 1977 A
4047585 Dlugos Sep 1977 A
4072247 Yamazaki Feb 1978 A
4074831 Roach Feb 1978 A
4210261 Trujillo et al. Jul 1980 A
4225035 Mohney et al. Sep 1980 A
4251065 Mcdougal Feb 1981 A
4263504 Thomas Apr 1981 A
4271440 Jenkins et al. Jun 1981 A
4273257 Smith et al. Jun 1981 A
4306670 Oshikubo Dec 1981 A
4347988 Warren et al. Sep 1982 A
4399856 Anderson Aug 1983 A
4558691 Okada Dec 1985 A
4567780 Oppenlander et al. Feb 1986 A
4578052 Engel et al. Mar 1986 A
4607893 Damico Aug 1986 A
4631519 Johnston Dec 1986 A
4695989 Kimoto Sep 1987 A
4729646 Clark et al. Mar 1988 A
4741736 Brown May 1988 A
4974164 Lewis et al. Nov 1990 A
5061073 Michel Oct 1991 A
5128753 Lemelson Jul 1992 A
5220407 Yamane et al. Jun 1993 A
5235181 Durana et al. Aug 1993 A
5260769 Ieki et al. Nov 1993 A
5279044 Bremer Jan 1994 A
5437290 Bolger et al. Aug 1995 A
5466079 Quintana Nov 1995 A
5532874 Stein Jul 1996 A
5563958 Higgins et al. Oct 1996 A
5576535 Oosterwijk et al. Nov 1996 A
5624059 Lo Apr 1997 A
5862958 Edwards et al. Jan 1999 A
5894678 Masreliez et al. Apr 1999 A
5965879 Leviton Oct 1999 A
5978418 Greve et al. Nov 1999 A
6015067 Lang et al. Jan 2000 A
6135325 Fessel et al. Oct 2000 A
6161731 Sigg Dec 2000 A
6175414 Holzapfel et al. Jan 2001 B1
6552866 Lechner Apr 2003 B1
6603115 Gordon-Ingram Aug 2003 B1
6631990 Schippert et al. Oct 2003 B2
6639179 Vogler et al. Oct 2003 B2
6664535 Nahum Dec 2003 B1
6674066 Kaneda et al. Jan 2004 B1
6736009 Schwabe May 2004 B1
6765195 Leviton Jul 2004 B1
6770056 Price et al. Aug 2004 B2
6829400 Nakano et al. Dec 2004 B2
6860409 Lutz et al. Mar 2005 B2
6888635 Lacovara May 2005 B2
6923938 Cote et al. Aug 2005 B2
6977368 Drescher et al. Dec 2005 B2
7034283 Williams et al. Apr 2006 B2
7102123 Chin et al. Sep 2006 B2
7112782 Hoefer Sep 2006 B2
7120288 Fujishima Oct 2006 B2
7164120 Strasser Jan 2007 B2
7166852 Saracen et al. Jan 2007 B2
7408654 Hardin et al. Aug 2008 B1
7418160 Lynggaard Aug 2008 B2
7520749 Ohlsson Apr 2009 B2
7663093 Kusano Feb 2010 B2
7667186 Kauhanen Feb 2010 B2
7690567 Mauerberger et al. Apr 2010 B2
7976793 Solotareff et al. Jul 2011 B2
8142738 Boehm et al. Mar 2012 B2
8537370 Benner Sep 2013 B2
8597593 Boehm et al. Dec 2013 B2
8809764 Kato Aug 2014 B2
9030554 Pichon et al. May 2015 B2
20020085173 Schippert et al. Jul 2002 A1
20020121507 Vogler et al. Sep 2002 A1
20020136483 Nakano et al. Sep 2002 A1
20020159074 Kawachi et al. Oct 2002 A1
20030094354 Badarneh May 2003 A1
20030167061 Schlegel et al. Sep 2003 A1
20050218341 Saracen et al. Oct 2005 A1
20050259240 Goren Nov 2005 A1
20060007316 Yamanaka Jan 2006 A1
20060127864 Ohlsson Jun 2006 A1
20070069924 Goren Mar 2007 A1
20080040942 Eales et al. Feb 2008 A1
20080111063 Otsuka et al. May 2008 A1
20090079996 Mayer et al. Mar 2009 A1
20090161121 Oberhauser et al. Jun 2009 A1
20090190142 Taylor et al. Jul 2009 A1
20100206898 Santo Aug 2010 A1
20110127298 Shah Jun 2011 A1
20110187855 Pichon et al. Aug 2011 A1
20120026321 Hasler et al. Feb 2012 A1
Foreign Referenced Citations (30)
Number Date Country
1083209 Mar 1994 CN
101909527 Dec 2010 CN
102095737 Jun 2011 CN
0355221 Feb 1990 EP
0488633 Jun 1992 EP
0243588 Dec 1992 EP
0488633 Jun 1995 EP
0462266 Aug 1995 EP
0723873 Jun 1997 EP
1099936 May 2001 EP
0819913 Oct 2001 EP
1586866 Oct 2008 EP
9-128759 May 1997 JP
WO 1984001027 Mar 1984 WO
WO 1986003866 Jul 1986 WO
WO 1987001224 Feb 1987 WO
WO 1991011090 Jul 1991 WO
WO 1993018584 Sep 1993 WO
WO 1995009345 Apr 1995 WO
WO 1997042601 Nov 1997 WO
WO 1999004368 Jan 1999 WO
WO 1999046602 Sep 1999 WO
WO 2000031560 Jun 2000 WO
WO 2001061637 Aug 2001 WO
WO 2002053062 Jul 2002 WO
WO 2003073738 Sep 2003 WO
WO 2004051602 Jun 2004 WO
WO 2005099819 Oct 2005 WO
WO 2006080851 Aug 2006 WO
WO 2006120182 Nov 2006 WO
Non-Patent Literature Citations (3)
Entry
Final Office Action received for U.S. Appl. No. 13/982,742, dated Jan. 15, 2016, 8 pages.
Non-Final Office Action received for U.S. Appl. No. 13/982,742, dated Apr. 27, 2015, 14 pages.
Office Action received for Chinese Patent Application No. 201280006710.4, dated Jan. 7, 2015, 10 pages (2 pages of Partial Translation and 8 pages of Official Copy).
Related Publications (1)
Number Date Country
20210398307 A1 Dec 2021 US
Provisional Applications (2)
Number Date Country
61678581 Aug 2012 US
61660614 Jun 2012 US
Continuations (2)
Number Date Country
Parent 15884171 Jan 2018 US
Child 17221623 US
Parent 13918704 Jun 2013 US
Child 15884171 US