This application relates to cameras used in machine vision and more particularly to automatic focusing lens assemblies.
Vision systems that perform measurement, inspection, alignment of objects and/or decoding of symbology (e.g. bar codes, or more simply “IDs”) are used in a wide range of applications and industries. These systems are based around the use of an image sensor, which acquires images (typically grayscale or color, and in one, two or three dimensions) of the subject or object, and processes these acquired images using an on-board or interconnected vision system processor. The processor generally includes both processing hardware and non-transitory computer-readable program instructions that perform one or more vision system processes to generate a desired output based upon the image's processed information. This image information is typically provided within an array of image pixels each having various colors and/or intensities. In the example of an ID reader, the user or automated process acquires an image of an object that is believed to contain one or more IDs. The image is processed to identify ID features, which are then decoded by a decoding process and/or processor to obtain the inherent information (e.g. alphanumeric data) that is encoded in the pattern of the ID.
Often, a vision system camera includes an internal processor and other components that allow it to act as a standalone unit, providing a desired output data (e.g. decoded symbol information) to a downstream process, such as an inventory tracking computer system or logistics application. It is often desirable that the camera assembly contain a lens mount, such as the commonly used C-mount, that is capable of receiving a variety of lens configurations. In this manner, the camera assembly can be adapted to the specific vision system task. The choice of lens configuration can be driven by a variety of factors, such as lighting/illumination, field of view, focal distance, relative angle of the camera axis and imaged surface, and the fineness of details on the imaged surface. In addition, the cost of the lens and/or the available space for mounting the vision system can also drive the choice of lens.
An exemplary lens configuration that can be desirable in certain vision system applications is the automatic focusing (auto-focus) assembly. By way of example, an auto-focus lens can be facilitated by a so-called liquid lens assembly. One form of liquid lens uses two iso-density liquids—oil is an insulator while water is a conductor. The variation of voltage passed through the lens by surrounding circuitry leads to a change of curvature of the liquid-liquid interface, which in turn leads to a change of the focal length of the lens. Some significant advantages in the use of a liquid lens are the lens' ruggedness (it is free of mechanical moving parts), its fast response times, its relatively good optical quality, and its low power consumption and size. The use of a liquid lens can desirably simplify installation, setup and maintenance of the vision system by eliminating the need to manually touch the lens. Relative to other auto-focus mechanisms, the liquid lens has extremely fast response times. It is also ideal for applications with reading distances that change from object-to-object (surface-to-surface) or during the changeover from the reading of one object to another object—for example in scanning a moving conveyor containing differing sized/height objects (such as shipping boxes). In general, the ability to quickly focus “on the fly” is desirable in many vision system applications.
A recent development in liquid lens technology is available from Optotune AG of Switzerland. This lens utilizes a movable membrane covering a liquid reservoir to vary its focal distance. A bobbin exerts pressure to alter the shape of the membrane and thereby vary the lens focus. The bobbin is moved by varying the input current within a preset range. Differing current levels provide differing focal distances for the liquid lens. This lens advantageously provides a larger aperture (e.g. 6 to 10 millimeters) than competing designs (e.g. Varioptic of France) and operates faster. However, due to thermal drift and other factors, there may be variation in calibration and focus setting during runtime use, and over time in general. A variety of systems can be provided to compensate and/or correct for focus variation and other factors. However, these can require processing time (within the camera's internal processor) that slows the lens' overall response time in coming to a new focus. It is recognized generally that a control frequency of at least approximately 1000 Hz may be required to adequately control the focus of the lens and maintain it within desired ranges. This poses a burden to the vision system's processor, which can be based on a DSP or similar architecture. That is, vision system tasks would suffer if the DSP were continually preoccupied with lens-control tasks.
This invention overcomes disadvantages of the prior art by providing a removably mountable lens assembly for a vision system camera that includes an integral auto-focusing, liquid lens unit, in which the lens unit compensates for focus variations by employing a feedback control circuit that is integrated into the body of the lens assembly. The feedback control circuit receives motion information related to an actuator, such as a bobbin (which variably biases the membrane under current control), of the lens from a position sensor (e.g., a Hall sensor) and uses this information internally to correct for motion variations that deviate from the lens setting position at a target lens focal distance setting. The defined “position sensor” can be a single (e.g. Hall sensor) unit or a combination of discrete sensors located variously with respect to the actuator/bobbin to measure movement at various locations around the lens unit. Illustratively, the feedback circuit can be interconnected with one or more temperature sensors that adjust the lens setting position for a particular temperature value. In addition, the feedback circuit can communicate with an accelerometer that senses the acting direction of gravity, and thereby corrects for potential sag (or other orientation-induced deformation) in the lens membrane based upon the spatial orientation of the lens.
In an illustrative embodiment, a lens assembly for a vision system camera having variable focus provides a lens body having a variable lens assembly and a fixed optics assembly. A controller (control circuit) is located within the body. The controller is constructed and arranged to receive a target focal distance from the vision system camera. The controller generates a target position of an actuator that controls curvature of the variable lens assembly. Based upon an actual measured position of the actuator, the controller corrects the measured position of the actuator to the target continuously, in a feedback loop. Illustratively, the variable lens assembly includes a membrane-based liquid lens element in which the membrane curvature is driven by a moving actuator. The liquid lens element can include a position sensor located to measure movement of the actuator associated with movement of a membrane of the membrane-based liquid lens assembly. This position sensor can comprise one or more linear Hall sensor(s) that measure(s) the position of a magnet that moves with the actuator. The actuator can be a bobbin that is driven by current using a current controller operatively connected with the controller. The target position information illustratively defines a position that focuses an image acquired by the vision system camera. Additionally, the target position information can be further corrected by the controller for at least one of temperature of the liquid lens assembly, spatial orientation and/or other parameters (e.g. flange-to-sensor distance tolerance) of the liquid lens assembly. Thus, the controller converts this information into a corrected target position value for the Hall sensor. The corrected position information is determined by the controller based upon stored calibration parameters that reside in memory (e.g. an EEPROM) of the lens assembly.
The calibration parameters can relate to temperature of the lens, provided by a temperature sensor, spatial orientation of the lens, provided by an accelerometer, and/or other parameters, such as flange-to-sensor distance tolerance. The controller can also allow for upgrade of its process instructions (firmware) via the communication network (e.g. an I2C communication interface), typically upon startup. This firmware upgrade is received from the vision system if newer information is available from it.
Illustratively, the controller can reside on a circuit board that is positioned on a shelf surrounded by a cap assembly. The (e.g., cylindrical) cap assembly surrounds a filler having the shelf and a main barrel assembly that contains the fixed optics therein. The cap assembly is operatively connected to the filler containing the shelf. It is selectively rotatable about an optical axis with respect to the main barrel assembly. The main barrel assembly includes a mount base constructed and arranged to removably secure to a mount of the vision system camera so that the lens assembly is exchangeable. The controller illustratively indicates when the lens position has moved to a corrected position.
In an illustrative embodiment, a method for controlling focus of a membrane-based liquid lens assembly of a vision system camera in the form of a “local” feedback loop (i.e. using a lens-assembly-based controller/processor) includes measuring a present position of an actuator of the membrane-based liquid lens assembly with a position sensor. A target position of the actuator is received from an interconnected vision system processor of the vision system camera in the form of a focal distance. This distance is interpreted into the target position of the actuator by the controller. The controller (locally mounted in a body of the lens assembly) compares the measured, actual position of the actuator with the target position, and determines whether the two positions are currently substantially equal. If the values are substantially equal, then a correct position is indicated by the controller. If the values are sufficiently unequal, then the controller sends a correction to the actuator and repeats the above steps in a feedback loop that continuously maintains correct position based upon the current target.
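The steps of this local feedback loop can be sketched in simplified form as follows. This is an illustrative sketch only, not code from the embodiments: the function names, the linear-interpolation form of the focal-distance-to-position mapping, and the tolerance value are all assumptions made for clarity.

```python
POSITION_TOLERANCE = 2  # Hall-sensor counts treated as "substantially equal" (assumed value)

def focal_distance_to_target(focal_distance_mm, cal_points):
    """Interpret a requested focal distance as a target Hall-sensor value by
    linear interpolation over (distance_mm, hall_value) calibration pairs.
    The calibration-pair form is an assumption for illustration."""
    pts = sorted(cal_points)
    for (d0, h0), (d1, h1) in zip(pts, pts[1:]):
        if d0 <= focal_distance_mm <= d1:
            t = (focal_distance_mm - d0) / (d1 - d0)
            return round(h0 + t * (h1 - h0))
    raise ValueError("focal distance outside calibrated range")

def feedback_step(target, read_hall_position, apply_current_correction):
    """One loop iteration: compare the measured actuator position with the
    target; if substantially equal, indicate correct position; otherwise
    send a correction toward the target."""
    measured = read_hall_position()
    error = target - measured
    if abs(error) <= POSITION_TOLERANCE:
        return True   # correct position reached; controller indicates this state
    apply_current_correction(error)  # drive the actuator toward the target
    return False      # loop repeats with a fresh measurement
```

In use, `feedback_step` would be called repeatedly by the lens-resident controller until a new focal distance arrives from the vision system processor.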
The invention description below refers to the accompanying drawings, of which:
By way of further background, it has been observed that such liquid lenses exhibit excessive drift of their optical power over time and temperature. Although the lens can be focused relatively quickly to a new focal position (i.e. within 5 milliseconds), it tends to drift from this focus almost immediately. The initial drift (or “lag”) is caused by latency in the stretch of the membrane from one focus state to the next—i.e. the stretch takes a certain amount of time to occur. A second drift effect with a longer time constant is caused by the power dissipation of the lens' actuator bobbin heating up the lens membrane and liquid. In addition, the orientation of the lens with respect to the acting direction of gravity can cause membrane sag that has an effect on focus. The system and method of the embodiments described herein address disadvantages observed in the operation and performance of such liquid lenses.
The rear 130 of the lens assembly 100 includes a threaded base that can be adapted to seat in a standard camera mount, such as the popular cine mount (or “C-mount”). While not shown, it is expressly contemplated that the lens assembly 100 can be (removably) mounted to a variety of camera types adapted to perform vision system tasks with an associated vision system processor.
With further reference also to
As shown in
The main barrel assembly 220 includes a rear externally threaded base 260 having a diameter and thread smaller than that of a C-mount—for example a conventional M-12 mount size for interchangeability with cameras employing this standard, or another arbitrary thread size. A threaded mount ring 262 with, for example, a C-mount external thread 264 is threaded over the base thread 260. This ring 262 allows the back focus of the lens with respect to the camera sensor to be accurately set. In general, the shoulder 266 of the ring is set to abut the face of the camera mount when the lens is secured against the camera body. A pair of set screws 360 (
An O-ring 267 is provided on the front face of the liquid lens 120 to cancel out tolerances. In addition, and with reference also to
As shown in
Notably, the barrel assembly 220 is an interchangeable component so that different fixed lens arrangements can be provided in the overall lens assembly (i.e. with the same liquid lens, cap and control circuitry). Thus, this design provides substantial versatility in providing a range of possible focal distances for different vision system applications.
Also notably, the provision of a lens control circuit within the overall structure of the lens assembly allows certain control functions to be localized within the lens itself. This is described in further detail below. The circuit board 350 is connected via a connector 422 and standard ribbon cable 420 to the liquid lens 120 as shown in
The control functions of the circuit board 350 are now described in further detail with reference to
At startup, the vision system 520 communicates to the lens assembly circuit 350 the tolerance value of its flange-to-sensor distance. This value is the deviation from the ideal C-mount distance (typically 17.526 millimeters), which has been measured after assembly of the vision system and has been stored in the memory 526 (e.g. a non-volatile flash memory) of the vision system. The control circuit 510 is arranged to correct for the flange tolerance as described further below.
Upon startup, the control circuit 510 can request the vision system processor 522 of the vision system camera 520 to provide the latest firmware upgrade 528 so that the function of the lens assembly is synchronized with the software and firmware of the vision system. If the firmware is up-to-date, then the processor indicates this state to the lens control circuit and no upgrade is performed. If the firmware is out-of-date, then the new firmware is loaded in the appropriate location of the lens assembly's program memory 620 (
Note, as used herein the terms “process” and/or “processor” should be taken broadly to include a variety of electronic hardware and/or software based functions and components. Moreover, a depicted process or processor can be combined with other processes and/or processors or divided into various sub-processes or processors. Such sub-processes and/or sub-processors can be variously combined according to embodiments herein. Likewise, it is expressly contemplated that any function, process and/or processor herein can be implemented using electronic hardware, software consisting of a non-transitory computer-readable medium of program instructions, or a combination of hardware and software.
The control circuit 510 can be implemented using a variety of electronic hardware. Illustratively, a microcontroller is employed. The control circuit 510 receives focus information 530 (e.g. focal distance, which is translated by the controller into target bobbin position) from the vision system camera 520 (i.e. via cable 270 and interface link 531). This focus information can be derived from a focus process 532 that operates in the camera processor 522. The focus process can use conventional or custom auto-focus techniques to determine proper focus. These can include range-finding or stepping through a series of focus values in an effort to generate crisp edges in the image 534 of an object acquired by the sensor 536. While highly variable, a 2K×1K-pixel sensor is used in the exemplary embodiment.
The focus information 530 is used by the control circuit 510 to generate a target bobbin position and to provide a digital signal with movement information 540 to the current controller 544. The current controller applies the appropriate current to an annular bobbin assembly 550 (or “bobbin”), which thereby deforms the liquid lens membrane 552 to provide an appropriate convex shape to the bulged lensmatic region 554 within the central opening of the bobbin 550. The bobbin 550 includes a magnet 558 that passes over a conventional linear Hall sensor 560. This Hall sensor 560 generates a digital position signal 562 that is directed back to the control circuit 510, where it is analyzed for actual bobbin position (for example, calling up values in the memory 512) versus the target position represented by a corresponding Hall sensor target position. If, in a comparison of the actual Hall sensor value and target Hall sensor value, these values do not match, then the control circuit 510 applies a correction, which is delivered to the current controller 544, where it is used to move the bobbin 550 to a correct position that conforms with the target Hall sensor position. Once the bobbin 550 is at the correct position, the controller can signal that correction is complete.
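One plausible way the control circuit could translate a Hall-position error into a bobbin current command is a simple proportional correction about a holding current, clamped to the bobbin's drive range. This is a sketch under stated assumptions: the proportional-control form, the gain, and the current limits are invented for illustration and are not specified by the description above.

```python
I_MIN_MA, I_MAX_MA = 0.0, 250.0   # assumed bobbin drive-current range (mA)
KP = 0.5                          # assumed proportional gain (mA per Hall count)

def current_command(hold_current_ma, target_hall, measured_hall):
    """Return a clamped bobbin current: the current holding the present focus
    plus a proportional correction toward the target Hall position."""
    error = target_hall - measured_hall
    current = hold_current_ma + KP * error
    # Clamp to the preset current range within which the bobbin may be driven
    return max(I_MIN_MA, min(I_MAX_MA, current))
```

A proportional term alone is shown for brevity; a practical controller might add integral or derivative terms to remove steady-state error in the presence of drift.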
Note that additional Hall sensors (or other position-sensing devices) 566 (shown in phantom) can generate additional (optional) position signals 568 that are used by the control circuit to verify and/or supplement the signal of sensor 560. In an embodiment, data is transmitted between components using an I2C protocol, but other protocols are expressly contemplated. In general, the commercially available Hall sensor operates in the digital realm (i.e. using the I2C interface protocol), thereby effectively avoiding signal interference due to magnetic effects. By way of non-limiting example, a model AS5510 Hall linear sensor (or sensors) available from AustriaMicrosystems (AMS) of Austria can be used.
With reference to
Note that this local feedback loop 570 can run continuously to maintain focus at a set position once established, and until a new bobbin position/focus is directed by the camera. Thus, the feedback loop 570 ensures a steady and continuing focus throughout the image acquisition of an object, and does so in a manner that avoids increased burdens on the camera's vision system processor.
The determination of the target value for the Hall sensor(s) in step 574 can include optional temperature, spatial orientation and/or other parameter (e.g. flange distance) correction based upon parameters 612, 614, 616 (
As shown in
Likewise, correction for orientation with respect to gravity that can result in sag or other geometric deformation of the lens membrane in differing ways is compensated by an (optional) accelerometer 594 that transmits the spatial orientation 596 of the lens/camera with respect to the acting direction of gravity to the control circuit via, for example, an I2C protocol. In an embodiment, an orientation correction factor is determined (by reading the accelerometer 594), and applied to the target Hall sensor value by the control circuit in a manner similar to temperature correction (
Other parameters (616 in
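The combined effect of these corrections on the target Hall value can be sketched as below. This is illustrative only: the linear correction forms, coefficient names, and values are assumptions; the description above says only that such calibration parameters are stored in the lens assembly's memory (e.g. EEPROM) and applied to the target position.

```python
def corrected_target(base_target, temp_c, gravity_vec, flange_dev_mm, cal):
    """Apply temperature, orientation, and flange-tolerance corrections to a
    base target Hall value. All coefficients (in `cal`) are hypothetical."""
    target = float(base_target)
    # Temperature: assumed linear drift coefficient (Hall counts per deg C)
    target += cal["temp_coeff"] * (temp_c - cal["temp_ref"])
    # Orientation: membrane sag assumed to scale with the optical-axis
    # component of gravity (accelerometer z-axis taken along the optical axis)
    target += cal["sag_coeff"] * gravity_vec[2]
    # Flange-to-sensor tolerance: deviation from the ideal C-mount
    # flange distance (typically 17.526 mm), reported by the vision system
    target += cal["flange_coeff"] * flange_dev_mm
    return round(target)
```

Each term is independent, so corrections can be enabled individually (e.g. a lens without an accelerometer simply uses a zero sag coefficient).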
It should be clear that superior position correction, on the order of 1 millisecond, can be achieved using the local feedback loop instantiated in a control circuit packaged in the lens assembly. The entire lens assembly package fits within a standard C-mount lens, affording a high degree of interoperability with a wide range of vision system camera models and types. The system and method for controlling and correcting the focus of a liquid (or other similar auto-focusing) lens described herein can be employed rapidly at any time during camera runtime operation, generally free of burden to the camera's vision system processor. This system and method also desirably account for variations in focus due to thermal conditions and spatial orientation (i.e. lens sag due to gravity). This system and method more generally allow for a lens assembly that mounts in a conventional camera base.
The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above can be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments. Furthermore, while the foregoing describes a number of separate embodiments of the apparatus and method of the present invention, what has been described herein is merely illustrative of the application of the principles of the present invention. For example, while a Hall sensor is used to measure position, a variety of alternate position-sensing devices can be used in association with the feedback loop herein. For example, an optical/interference-based position sensor can be employed in alternate embodiments. Also, it is contemplated that the principles herein can be applied to a variety of lenses (liquid and otherwise), in which the curvature of the lens is varied via electronic control. Thus, the term “variable lens assembly” should be taken broadly to expressly include at least such lens types. In addition, while various bobbin position corrections are performed within the lens control circuit and feedback loop, it is contemplated that some corrections can be performed within the vision system camera processor, and the corrected focal distance is then sent to the lens assembly for use in further feedback loop operations. As used herein, various directional and orientation terms such as “vertical”, “horizontal”, “up”, “down”, “bottom”, “top”, “side”, “front”, “rear”, “left”, “right”, and the like, are used only as relative conventions and not as absolute orientations with respect to a fixed coordinate system, such as gravity.
Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention.
Related Publication: US 20140268361 A1, Sep 2014.