The present invention relates to the field of digitized imagery for monitoring the state of a machine, and more particularly, to monitoring a changing state based on imaging of analog or other physical state conditions of an apparatus as the change occurs.
With the ubiquity of global communications links, much of the world and its machines can be connected rapidly and easily to other machines and digital control systems on a worldwide basis. It is now possible to create digital control loops and employ global networks to enable machines to communicate important information directly to other machines (so-called M2M applications), enabling automated decision making without human intervention. However, unlike human-to-human communications, which may be in the form of voice or images, M2M applications generally require the transmission of digital data. One common part of many M2M applications today is simply converting analog telemetry and signals from a machine into a digital form suitable for transmission over the internet. Examples, without limitation, include converting the electrical signals that drive analog-type gauges on a mobile machine such as a vehicle, boat or airplane, or that drive the gauges on a remotely located machine such as an oil rig or switch gear in a switch yard, into digital representations of those signals, and sending the resulting digital representations to another location via wired or wireless internet for further action or monitoring by another machine or a human. However, many machines are originally designed with gauges and indicator lights intended only for human viewing, and enabling such machines to transmit a digital version of their indicator lights or gauges requires extensive modifications to the machine and interruption of its electrical signaling systems. The effort required to perform the analog-to-digital conversion, or even to copy already-digital gauge information into a form transmittable over the internet, reduces the rate of adoption of M2M communication functions in system applications that could benefit.
Furthermore, some machines present regulatory, safety or warranty difficulties in converting them to send digital information from their control systems or gauges. For example, the high voltages in switch gear or motor control gear must be fully isolated from the low-voltage circuits that normally digitize and communicate data over the internet; gauges and wiring on an aircraft cannot be tampered with or changed without affecting the aircraft's airworthiness certificate; and new circuitry to digitize analog gauges, or to tap into signals already present in digital form on a vehicle, cannot easily be added without violating the vehicle's warranty.
What is needed is a system for digitizing gauges, lights and other human-readable machine indicators, functions and status without interfering with the operation of the machine, without re-working or interfering with the existing machine wiring, signaling, electrical or mechanical elements or operating modes, and without adding new digitizing equipment to the machine.
The present invention provides a system for digitizing devices, such as gauges, lights and other human-readable machine indicators, functions and status. The system operates without the need to reconfigure operations or electronic components of the device being monitored. The potential for adversely interfering with the operation of the machine is eliminated, and there is no need to re-work any of the machine components or interfere with the existing machine wiring, signaling, electrical or mechanical elements or operating modes. In addition, the system may be implemented using equipment configured by the system, which does not require the addition of new digitizing equipment to the machine.
These and other advantages may be provided by the invention.
A preferred embodiment of the invention, called the image-M2M system, is depicted in
The images are then communicated over a network connection 107, which may be the internet or a private communications link, and of which one or more segments may be electrically wired, optically linked, or wirelessly linked via a terrestrial wireless link or a satellite wireless link. This network connection 107 may also serve to provide command and control directives to the supervisory control circuitry 106A. The images provided by the remote system 100 arrive at separate supervisory and control circuitry 108A, and subsequent decompression, formatting and decryption functions 108. The resulting images can then be stored either permanently or temporarily in a memory or other electronic storage device 109. Subsequently, extraction and digitization algorithms are employed 110, 111, which, as further described below, turn each image into a sequence of digital values associated with any or all of the indicators, meters, gauges or switches in the image of the panel. The extraction algorithm 110 and digitization algorithm 111 may be pre-programmed or may operate selectively on various parts of the panel on a request basis, and may include optical character recognition (OCR) sub-algorithms. The results may then optionally be stored in a memory device 112, which can then be accessed by either a private communications system, such as a SCADA network, or a standards-based network such as the internet. By the many means well known to those practiced in the art, the data available in memory 112 may be presented to the internet as individually addressable, or as sub-addressable, data elements. As indicated, all of the functions 108A, 108, 109, 110, 111 and 112 may be contained within a single sub-system of hardware and software, 120, or within only software operating on a larger server farm (not shown).
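As a minimal sketch of the digitization step (111) for an analog needle gauge, the following assumes the extraction step (110) has already located the needle and reduced it to an angle; the scale endpoints, angles and function name are illustrative assumptions, not details from the source.

```python
# Hypothetical sketch: convert a detected needle angle on an analog gauge
# into a digital reading by linear interpolation between the angles of the
# scale endpoints. A typical 270-degree gauge sweeps from 225 degrees (zero)
# down to -45 degrees (full scale); all numbers here are assumptions.
def needle_angle_to_value(angle_deg, angle_min=225.0, angle_max=-45.0,
                          value_min=0.0, value_max=100.0):
    """Linearly interpolate a gauge reading from the needle angle.

    angle_min/angle_max are the needle angles at the scale endpoints,
    measured counter-clockwise from the positive x-axis.
    """
    span = angle_max - angle_min
    fraction = (angle_deg - angle_min) / span
    return value_min + fraction * (value_max - value_min)

# A needle pointing straight down (135 degrees) is one third of the way
# through this gauge's 270-degree sweep.
reading = needle_angle_to_value(135.0)
```

A real implementation would derive the needle angle from the image (for example, by line fitting on the segmented needle pixels) before applying such a mapping.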
In the above manner, one or more indicators, readouts, meters and the like may become digital numbers, each addressable at an address or sub-address over the internet, without any direct connection to, or interference with, the operation of said indicators in the panel 101.
The change detection function 103 may be programmed to provide for the capture of a frame or video when any external object interrupts the view of the panel or meters, such as a human operator adjusting controls or taking other actions. In this way, the invention provides additional security and information which would not necessarily be present in a simple digital telemetry stream formed by digitization of the signals within the panel itself. Also, the remote system 100 can be programmed to recognize certain alarm conditions, such as a specific indicator light or a hot temperature, and instigate communications with the local system on its own.
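One way the change detection function (103) could be realized is simple frame differencing: flag a capture when enough pixels differ between consecutive frames. The following sketch and its thresholds are illustrative assumptions; a production system would likely use a library such as OpenCV.

```python
# Illustrative sketch of change detection: compare two grayscale frames
# (nested lists of 0-255 pixel values) and report a change when the
# fraction of pixels that moved by more than pixel_delta exceeds
# changed_fraction. Both thresholds are assumed values for illustration.
def frame_changed(prev, curr, pixel_delta=25, changed_fraction=0.05):
    total = 0
    changed = 0
    for row_prev, row_curr in zip(prev, curr):
        for p, c in zip(row_prev, row_curr):
            total += 1
            if abs(p - c) > pixel_delta:
                changed += 1
    return changed / total >= changed_fraction

# An operator's hand entering the view changes a large block of pixels at
# once, so the frame is flagged and captured for the record.
```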
Now the readout extraction and digitization algorithms will be described, with reference to
Referring to
The image-M2M system can also add additional functions to the image processing, such as perspective de-warping in the case where a camera is not mounted head-on to a meter or indicator, or where a large panel is imaged, without departing from the invention. Libraries of common meters may be maintained for easy manual and/or automated initial setup.
The image-M2M invention presented has the advantages of being able to adapt to a wide variety of gauges, meters and indicators, including many which may not be easily amenable to digitization with traditional embedded telemetry measurement and analog-to-digital electrical or mechanical techniques. In addition, through the use of the camera and the image system, many meters or indicators may be captured and digitized at once. Furthermore, the image-M2M system can provide additional information not normally available from traditional telemetry, such as when an employee operated a dial, inspected a machine, or similar environmental or incidental events which would otherwise go unrecorded. In addition, a camera may be focused on a motor or engine or set of pulleys or belts, and may make a frame-to-frame comparison to determine the amount of vibration or torque movement in the motor or engine, providing information which would otherwise be very difficult to ascertain remotely, even with sophisticated telemetry.
According to one exemplary embodiment, the system is configured with one or more cameras which are focused on one or more aspects of a motor or engine, including connected or associated components, such as, for example, drive or driven mechanisms. For example, a camera is positioned to have its focus directed to the engine or motor, or an operating portion or component thereof. The camera images the field of view and provides an image frame comprising an image area A. The image area A is illustrated in
The motor image area of an operating motor, A0, may be set to image area coordinates within which the motor 310 is imaged when operating, and more preferably, when operating within acceptable ranges. In this example, the range represents an acceptable vibration level. The motor vibration is imaged by the motor positioning within the image area A. The motor image area may represent a number of separate motor images Am1, Am2, Am3, . . . AmN, where each separate image corresponds with a set of image coordinates or pixels of the image frame capture (at different times), which, in this example, may be represented by the image area A. The image coordinates of each image may be used to determine an operation parameter, which, in this example, is a vibration level, and an acceptable vibration level. Referring to
According to some implementations, the system is configured to process the motor image AmT (where, for example, the motor image Am represents an image at a particular time T), and compare the pixels or coordinates of the motor image location on the image field to determine whether the motor image parameters have been breached. The breach may be confirmed by a determination that the processed image reveals the motor (e.g., portion thereof) being detected at a position within the frame that is outside of a designated motor image boundary area, which in this example is the motor image boundary area A0. For example, where A0 defines a set of coordinates within which the motor imaging is acceptable for the motor position or location, and a motor image breaches the coordinate boundary, a positive detection result may be recorded. According to some embodiments, pixel locations and values of the motor images may be compared (for example, to an absolute value or reference value), to determine whether a breach of an acceptable operating condition has occurred.
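The boundary test described above can be sketched as a rectangle comparison: the acceptable area A0 is a set of frame coordinates, and a breach is recorded when the motor's detected bounding box extends outside it. The coordinate convention and function name below are assumptions for illustration.

```python
# Minimal sketch of the boundary-breach test: each box is given as
# (x_min, y_min, x_max, y_max) in image frame coordinates. A breach occurs
# when any edge of the detected motor box lies outside the A0 box.
def breaches_boundary(motor_box, a0_box):
    mx0, my0, mx1, my1 = motor_box
    ax0, ay0, ax1, ay1 = a0_box
    return mx0 < ax0 or my0 < ay0 or mx1 > ax1 or my1 > ay1

# A motor imaged entirely within A0 yields no breach; a motor image shifted
# past A0's edge yields a positive detection result.
```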
Alternately, the motor vibration may be determined with reference to a deviation from the static image position Ams. For example, the separate images Am1, Am2, Am3, . . . AmN of the motor during operation of the motor may provide image coordinates that are different than the static image coordinates Ams (although it is possible that some of the images Am1, . . . AmN may correspond with the static image Ams as the motor is operating). A variance level may be determined for the image area of an operating motor, and the camera may operate as shown and described herein, with continuous imaging or frame-rate imaging, which ascertains an image of the motor. Referring to
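The variance-based alternative can be sketched as follows: given the centroid positions of the motor in frames Am1 . . . AmN, measure the root-mean-square displacement from the static position Ams and compare it against an acceptable level. The threshold and function names are illustrative assumptions.

```python
import math

# Sketch of vibration measured as deviation from the static position Ams:
# positions is a list of (x, y) motor centroids from frames Am1..AmN, and
# static_pos is the centroid of the static image Ams. The 3-pixel RMS
# threshold is an assumed value for illustration.
def rms_displacement(positions, static_pos):
    sx, sy = static_pos
    total = 0.0
    for x, y in positions:
        total += (x - sx) ** 2 + (y - sy) ** 2
    return math.sqrt(total / len(positions))

def vibration_excessive(positions, static_pos, max_rms_pixels=3.0):
    return rms_displacement(positions, static_pos) > max_rms_pixels
```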
According to some alternate embodiments, the system may be configured to image a particular portion of a component, such as the motor. For example, a pulley wheel, or a portion thereof, such as the top portion, may be imaged, and when its radial portion being imaged is detected to have breached an arc range (e.g., as defined by the frame coordinates), which previously was determined to be an acceptable operating range, a positive detection result may be initiated. In addition, an alert or other operation may be generated.
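For the pulley-wheel variant, the imaged position of a tracked point could be reduced to an angle about the wheel centre and tested against the acceptable arc range established at setup. The geometry convention and arc limits below are assumptions for illustration.

```python
import math

# Hypothetical sketch: reduce a tracked point's frame coordinates to an
# angle about the wheel centre (cx, cy), with y growing downward as in
# typical image coordinates, then test against an assumed acceptable arc.
def point_angle_deg(x, y, cx, cy):
    """Angle of the imaged point about the wheel centre, in [0, 360)."""
    return math.degrees(math.atan2(cy - y, x - cx)) % 360.0

def outside_arc_range(angle_deg, arc_min=80.0, arc_max=100.0):
    """True when the point has breached the acceptable arc range."""
    return not (arc_min <= angle_deg <= arc_max)

# A point imaged directly above the centre sits at 90 degrees, inside the
# assumed 80-100 degree arc; a point at 120 degrees breaches it.
```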
The system may be configured to generate a response to a positive detection (where the motor vibration is detected and determined to be operating outside of acceptable parameters), which may range from storing the result for later use in maintenance, to relaying an instruction to shut down the motor, to any other response, such as sending an alert to an operator or technician (while allowing the motor to continue). In addition, the vibration level image data may be processed and stored, as images or as processed information. The vibration levels may be provided to recreate motor movements that were imaged. In addition, the vibration image data, such as, for example, the position of the motor within a frame, or processed vibration data ascertained from the images, may be time-stamped to provide an indication of the time of operation. This may be useful to determine whether there is a particular load, time of day, or operation that produces an associated motor operation.
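The time-stamped recording of detections could be sketched as below; the record fields, action names and function signature are assumptions, not details from the source.

```python
import time

# Hypothetical sketch of the response step: each positive detection is
# recorded with a timestamp so load- or time-of-day correlations can be
# examined later, together with the response taken. Field and action names
# ("store", "alert", "shutdown") are illustrative assumptions.
def record_detection(log, rms_level, timestamp=None, action="alert"):
    entry = {
        "time": timestamp if timestamp is not None else time.time(),
        "rms_pixels": rms_level,
        "action": action,
    }
    log.append(entry)
    return entry
```

A maintenance review could then scan the log for clusters of entries at a particular hour or under a particular load, as the passage suggests.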
Although the example is provided in conjunction with a motor, other operating components may be used and imaged, and the image information processed to determine whether the component operation is within an acceptable operation parameter or condition. In addition, a plurality of cameras may be provided to image a respective plurality of components, and the images of multiple components may be determined and detected by the system.
Furthermore, the camera system may be augmented with audio information or an audio track which can record the sounds of buzzers, alarms or other audible signaling which is otherwise unobservable by the image system. Also, the sounds of motors or engines may be recorded so that if they begin to make unusual sounds, a local action can be taken for what might otherwise be an unobserved remote phenomenon.
These and other advantages may be realized with the present invention. While the invention has been described with reference to specific embodiments, the description is illustrative and is not to be construed as limiting the scope of the invention. Although a plurality of motor images are shown on the image frame A of
This patent application claims the benefit under 35 U.S.C. 119 and 35 U.S.C. 120 of U.S. provisional application Ser. No. 62/343,430 entitled “System for Transmission and Digitization of Machine Telemetry”, filed May 31, 2016, the complete contents of which are herein incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
4568972 | Arents | Feb 1986 | A |
5673331 | Lewis et al. | Sep 1997 | A |
5805813 | Schweitzer, III | Sep 1998 | A |
5870140 | Gillberry | Feb 1999 | A |
6208266 | Lyons et al. | Mar 2001 | B1 |
6961445 | Jensen et al. | Nov 2005 | B1 |
7697028 | Johnson | Apr 2010 | B1 |
8786706 | Kennedy et al. | Jul 2014 | B2 |
9088699 | An et al. | Jul 2015 | B2 |
9546002 | Azcuenaga et al. | Jan 2017 | B1 |
20030138146 | Johnson et al. | Jul 2003 | A1 |
20050246295 | Cameron | Nov 2005 | A1 |
20070130599 | Monroe | Jun 2007 | A1 |
20070236366 | Gur et al. | Oct 2007 | A1 |
20080089666 | Aman | Apr 2008 | A1 |
20090190795 | Derkalousdian et al. | Jul 2009 | A1 |
20090251542 | Cohen et al. | Oct 2009 | A1 |
20090322884 | Bolick et al. | Dec 2009 | A1 |
20110012989 | Tseng et al. | Jan 2011 | A1 |
20110149067 | Lewis et al. | Jun 2011 | A1 |
20130070099 | Gellaboina et al. | Mar 2013 | A1 |
20130115050 | Twerdochlib | May 2013 | A1 |
20140347482 | Weinmann et al. | Nov 2014 | A1 |
20150003665 | Kumar | Jan 2015 | A1 |
20150109136 | Capozella et al. | Apr 2015 | A1 |
20160086031 | Shigeno et al. | Mar 2016 | A1 |
20160086034 | Kennedy et al. | Mar 2016 | A1 |
20160104046 | Doettling et al. | Apr 2016 | A1 |
20160109263 | Dubs | Apr 2016 | A1 |
20160314367 | Chmiel et al. | Oct 2016 | A1 |
20170088048 | Iwamoto | Mar 2017 | A1 |
20170116725 | Stuart et al. | Apr 2017 | A1 |
20170163944 | Jeong | Jun 2017 | A1 |
20170169593 | Leigh et al. | Jun 2017 | A1 |
20170249731 | Van Gorp et al. | Aug 2017 | A1 |
20180253619 | Petruk | Sep 2018 | A1 |
Number | Date | Country |
---|---|---|
105049786 | Nov 2015 | CN |
1419964 | May 2004 | EP |
1990271699 | Mar 1990 | JP |
2002008181 | Jan 2002 | JP |
2002188939 | Jul 2002 | JP |
2002342870 | Nov 2002 | JP |
2003242587 | Aug 2003 | JP |
2003331376 | Nov 2003 | JP |
2010176202 | Aug 2010 | JP |
Entry |
---|
Klosterman, “Vision helps perform predictive maintenance”. Vision Systems Design, Apr. 11, 2016, pp. 1-8, [online], [retrieved on Aug. 16, 2017], URL: http://www.vision-systems.com/articles/print/volume-21/issue-4/features/vision-helps-perform-predictive-maintenance.html. |
Guler, Puren, et al., “Real-time multi-camera video analytics system on GPU”, Journal of Real-Time Image Processing, Springer DE, vol. 11, No. 3, Mar. 27, 2013, pp. 457-472, XP035643528. |
Jörg Barrho et al., “Visual Tracking of Human Hands for a Hazard Analysis based on Particle Filtering”, Proc. 11th IPMU Int. Conf., Jan. 1, 2006, pp. 1-4, XP055698840. |
Gil-Jimenez, P. et al., “Automatic Control of Video Surveillance camera Sabotage”, Jun. 18, 2007, Nature Inspired Problem-Solving Methods in Knowledge Engineering; Springer Berlin Heidelberg, pp. 222-231, XP019095486, ISBN 978-3-540-73054-5. |
Zainul Abdin Jaffrey et al., “Architecture of Noninvasive Real Time Visual Monitoring System for Dial Type Measuring Instrument”, IEEE Sensors Journal, IEEE Service Center, NY, NY, US, vol. 13, No. 4, Apr. 1, 2013, pp. 1236-1244, XP011493885. |
Mark Curtis, “Handbook of Dimensional Measurement, 5th ed.”, Jan. 1, 2013, Industrial Press XP055654382, p. 86, Figures 5-3. |
Number | Date | Country | |
---|---|---|---|
20230052760 A1 | Feb 2023 | US |
Number | Date | Country | |
---|---|---|---|
62343430 | May 2016 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 15610057 | May 2017 | US |
Child | 17739587 | US |