The present invention relates generally to video imaging and video recording systems and specifically to video imaging and video recording systems including one or more analog composite video cameras or other video sources, and digital display, enhancement, or recording equipment that incorporates analog to digital conversion means.
Video imaging systems are used in a broad variety of applications, including telecommunications, entertainment, security monitoring, and the like. Video imaging systems are generally placed in one of two categories, namely computer and television. In television applications, baseband video signals can follow a number of different formats, including the National Television Systems Committee or NTSC standard (the U.S. and Japan standard), the Phase Alternating Line or PAL standard (the European standard), and the High Definition/Standard Definition Digital Television format.
In television, there are three basic levels of baseband signal interfaces. In order of increasing quality, they are composite video (or Color, Video, Blanking, and Sync or CVBS), which uses one wire pair, Y/C (or S-video), which uses two wire pairs, and component, which uses three wire pairs. Each wire pair includes a signal and a ground. Composite signals are the most commonly used analog video interface and combine the brightness information (luma) and the color information (chroma) and the synchronizing signals on just one cable.
In video imaging and video recording systems, analog composite video signals are often converted to a digital video signal for subsequent enhancement, display, and/or recording. While it is desirable that the analog composite video signal input to an analog-to-digital (A/D) converter fully meets the appropriate composite video standard (either the NTSC or the PAL standard), in practice the composite video signal is often degraded and does not meet the desired standard due to some combination of low signal output from the video camera, signal loss, interference, or improper AC or DC signal levels. When the analog composite video signal fed to the A/D converter is degraded, the resulting digital video signal may be distorted. For example, the sync tip 204 may be disfigured. This distortion may be very slight or severe, depending on the type and level of degradation to the analog video signal.
The most common causes of composite video signal distortion in video imaging and video recording systems are as follows:
Low output from the video source. If the signal output from the video camera or other source is not of sufficient amplitude, the video signal at the input to the A/D converter may not meet the desired standard even if the transmission line is virtually lossless.
Interference due to ground loops. Ground loops result when the ground potential differs between two components within a video system. This difference in ground potential can produce "hum bars" on the video picture caused by 60 Hz commercial power, herringbone interference caused by AM broadcast signals, or a combination of the two, resulting in hum bars, herringbone, blanking, color distortion, or other degradation of the video picture.
One exemplary application for the present invention is video security systems, such as that shown in
When the input video signal is even slightly degraded and does not meet the desired standard, the A/D conversion process may distort the digitized video signal, causing the resultant video picture to be significantly worse in terms of picture quality than would be the case with an analog video recorder. Whereas an analog video recorder might record a usable, though degraded, picture, the picture recorded by the DVR may be distorted to such an extent that much or all of the information is lost. Additionally, many DVRs are designed to detect motion within the video picture based on an algorithm performed on the digitized video signal. If the digitized video signal is not an accurate and true representation of the video picture captured by the video camera, then the DVR algorithm may falsely detect motion when no motion is present, or conversely not detect motion when motion is actually present. In the first instance, the DVR will rapidly fill its available digital storage medium, resulting in loss of data due to either insufficient storage capacity or earlier data being overwritten; in the second instance, the DVR will not record the desired data at all.
The common causes of video signal degradation within a video security system are well understood by video engineers and other experienced technical people. However, video security system installers typically lack the training to recognize the causes of video signal degradation, the test equipment (or the training to use it) needed to identify the specific cause of degradation in a given installation, and the knowledge to determine the most cost-effective solution even when the cause of a problem is identified. Additionally, in many installations the level of video signal degradation would be acceptable with an analog video recorder but results in severe picture degradation, continuous recording, or no recording when a DVR is installed. There is currently no system able to simultaneously correct many of the common causes of video signal degradation within a video security system. With the increasing use of digital video recorders in video security systems, the need for such a solution is immediate and growing.
Another application for the present invention is video imaging systems. Analog composite video cameras are commonly used to capture video pictures that are converted to a digital video signal, then digitally manipulated or enhanced, displayed and/or recorded. Such applications, which include medical, satellite and airborne imaging systems, utilize an A/D conversion process as described above, and these systems may suffer from the same distortion effects described above if the analog video signal input is degraded. Since imaging systems typically require the highest possible picture resolution, any loss of intelligence due to degradation of the analog composite video system is highly undesirable.
Another application for the present invention is radar system displays. In many radar systems, the output from the radar receiver is a composite video signal. Many radar systems digitally enhance the displayed signal. Such systems utilize an A/D conversion process as described above, and these systems may suffer from the same distortion effects described above if the analog video signal input is degraded. Such display systems are utilized to produce the highest possible picture resolution, and the loss of intelligence due to degradation of the analog composite video system is highly undesirable.
These and other needs are addressed by the various embodiments and configurations of the present invention. The present invention is generally directed to a signal correction unit that removes noise and/or corrects various signal errors prior to conversion of the analog video signal into a digital video signal.
In one embodiment of the present invention, a signal processing method is provided that includes the steps of:
(a) receiving a first analog video signal that includes a fixed reference;
(b) removing the fixed reference (e.g., DC voltage offset component) from the video signal to form an adjusted analog video signal; and
(c) setting an average voltage of a first portion of the adjusted analog video signal (the first portion can, for example, be one or more of the front porch, breezeway, color burst, and back porch) to a ground reference voltage to produce a restored video signal.
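Steps (a) through (c) can be sketched as a simple software simulation on a synthetic scanline. This is an illustrative model only, not part of the claimed circuitry; the sample values, blanking-window position, and DC offset below are assumptions for demonstration.

```python
def restore_line(samples, blank_start, blank_end):
    """Remove the fixed DC reference, then clamp the average of a
    chosen blanking portion to the 0 V ground reference."""
    # (b) AC-couple: subtract the overall mean (the fixed reference).
    mean = sum(samples) / len(samples)
    adjusted = [s - mean for s in samples]
    # (c) measure the average of the chosen blanking portion...
    blank = adjusted[blank_start:blank_end]
    correction = sum(blank) / len(blank)
    # ...and shift the whole line so that portion sits at 0 V.
    return [s - correction for s in adjusted]

# Synthetic line: 0.3 V offset, blanking at samples 0-9, video after.
line = [0.3] * 10 + [1.0] * 30
restored = restore_line(line, 0, 10)
print(round(sum(restored[:10]) / 10, 6))  # blanking average → 0.0
```

After restoration, the blanking portion averages 0 V and the active video retains its amplitude relative to that reference.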
In one configuration, the restored video signal is converted into a digital video signal without the problems caused by unwanted DC offsets and loss of the DC reference voltage from the video signal. After removal of the DC offsets and before DC reference voltage restoration, the DC-adjusted analog video signal may further be passed through a differential input to remove ground loop interference.
To compensate for signal losses in the transmission line from the video source and low power output from the video source, the analog video signal, after being passed through the differential input, may be further processed by adjustable gain and equalizer circuitry. The adjustable gain circuitry amplifies all components of the video signal equally, while the equalizer circuitry amplifies only higher frequency components.
Applications for this embodiment of the present invention include, but are not limited to, security systems, video imaging, and radar display systems.
Until the present invention, there has been no single device that specifically corrects for the common causes of composite video signal degradation to ensure that the corrected analog video signal nearly meets, or most preferably fully meets, the desired composite video standard prior to an analog-to-digital or A/D conversion process in which the resulting digital video signal is enhanced or otherwise digitally manipulated, displayed, and/or recorded. The digital replica of the analog video signal can be relatively free of distortion, causing the resultant video picture to be of high quality and providing a high degree of accuracy in motion detection (since motion is detected as pixel changes between frames). Because of the high quality of the analog video signal, the A/D converter can create fewer artifacts (noise) and the compression scheme can work more efficiently. As a result, the video file size can be reduced significantly. As will be appreciated, significant reductions in video file size equate to lower memory requirements, thereby avoiding the need for costly hard drives or limits on the time that the data can be stored. While the absolute resolution of the system is constrained by the camera resolution and the recording parameters of the Digital Video Recorder or DVR (which are typically user-defined), the resolution can also be impacted by the A/D conversion process. By providing a compliant video signal to the A/D converter, the resolution will not be adversely impacted by the digitization of the video signal.
These and other advantages will be apparent from the disclosure of the invention(s) contained herein.
As used herein, “at least one . . . and”, “at least one . . . or”, “one or more of . . . and”, “one or more of . . . or”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, and A, B and C together.
The above-described embodiments and configurations are neither complete nor exhaustive. As will be appreciated, other embodiments of the invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
As depicted in
Although the preferred embodiment is discussed with reference to NTSC-type analog composite video signals, it is to be understood that the embodiment can operate equally with other types of video signals, such as PAL-type video signals, HDTV component video signals, Red Green Blue or RGB component video signals, Video Graphics Array or VGA-type video signals, and Extended Graphics Array or XGA-type signals over any suitable video interface, including not only composite signals but also Y/C and component video interfaces.
The subcomponents of the signal correction unit 300 are depicted in
As will be appreciated, AC coupling 400 is implemented by locating capacitive couplers on each video input/channel. Stated another way, in each incoming coax transmission line a capacitive coupler is connected in line with each of the center conductor and shield. Because the capacitors will not pass DC voltage components, such as DC offsets, capacitive coupling will remove from the composite analog signal any DC offset (or fixed reference) on the transmission line 104. In a preferred configuration, AC coupling removes all of the DC offset in the input composite analog video signal. Any suitable AC coupling capacitors may be employed.
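The behavior of a capacitive coupler can be approximated in software as a first-order RC high-pass filter. This is an illustrative model, not the circuit implementation; the RC constant and sample interval are assumptions.

```python
def ac_couple(samples, rc=1e-3, dt=1e-6):
    """Series-capacitor (first-order high-pass) difference equation:
    y[n] = a * (y[n-1] + x[n] - x[n-1]), with a = RC / (RC + dt)."""
    a = rc / (rc + dt)
    out = [0.0]
    for n in range(1, len(samples)):
        out.append(a * (out[-1] + samples[n] - samples[n - 1]))
    return out

# A constant DC offset is blocked entirely; video-band AC content
# (here, a step edge) passes through nearly unattenuated.
coupled = ac_couple([0.5] * 5000)
print(abs(coupled[-1]) < 1e-3)  # → True
```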
The differential input 404 results in a high impedance path between the unit's input and output signal ground references. This high impedance path greatly reduces, and most preferably eliminates, undesirable ground loop currents. Specifically, the differential input references the input video signal 420 between the center conductor and shield of the coax cable. The input connector is not grounded, and the input signal is not referenced to the ground. By isolating the input signal shield from the output signal ground, the ground segment of the signal path is broken. The signal input and output impedances are independent of the impedance between the input shield and the output signal ground and are selected to match the impedance of the transmission lines (typically coax cable) 104 and 304 respectively. Preferably, the differential input impedance ranges from about 50 to about 90Ω, with an impedance of about 75Ω being most preferred.
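The ground-loop rejection of the differential input can be illustrated with a small simulation: interference common to both the center conductor and shield cancels in the difference, leaving only the video. The waveform amplitudes and sample timing below are illustrative assumptions.

```python
import math

def differential_receive(center, shield):
    """Reference the video as the difference between the center
    conductor and the shield; common-mode interference cancels."""
    return [c - s for c, s in zip(center, shield)]

n = 1000
# Illustrative video content near the NTSC subcarrier frequency.
video = [0.5 * math.sin(2 * math.pi * 3.58e6 * t * 1e-8) for t in range(n)]
# 60 Hz ground-loop hum rides equally on both conductors.
hum = [0.2 * math.sin(2 * math.pi * 60 * t * 1e-8) for t in range(n)]
center = [v + h for v, h in zip(video, hum)]
shield = hum[:]

out = differential_receive(center, shield)
print(max(abs(o - v) for o, v in zip(out, video)) < 1e-12)  # → True
```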
The adjustable gain circuitry 408 allows for adjustment of the video signal amplitude (or IRE levels). The adjustable gain circuitry 408 can be any amplifier that amplifies all of the video signal frequency components equally by a desired gain. This is intended to restore the sync tips 204 to the proper magnitude (or IRE level). Preferably, the signal power is adjusted by an amount ranging from about 0.5× to about 4× (or from about −3 dB to about 6 dB) and even more preferably from about 0.5× to about 2.5× (or from about −3 dB to about 4 dB).
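As a simple numeric sketch of flat gain, the gain factor can be chosen so that a measured sync-tip level reaches the NTSC −40 IRE target; the measured level used below is an illustrative assumption.

```python
def flat_gain(samples, measured_tip_ire, target_tip_ire=-40.0):
    """Scale every component equally so the sync tip lands on the
    target IRE level."""
    g = target_tip_ire / measured_tip_ire
    return g, [s * g for s in samples]

# Degraded signal: sync tip at only -25 IRE instead of -40 IRE.
g, out = flat_gain([-25.0, 0.0, 50.0], measured_tip_ire=-25.0)
print(g)       # → 1.6
print(out[0])  # → -40.0
```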
The adjustable EQ circuitry 412 allows for frequency-dependent adjustment of the video signal amplitude and is used to compensate for the high-frequency roll-off that is inherent in many types of transmission lines, particularly, but not limited to, coaxial cable and twisted pair. The adjustable EQ 412 may be any amplifier that differentially amplifies only the higher frequency video signal components (e.g., components having a frequency greater than about 1 MHz). In one configuration, the amount of amplification is frequency independent (or provides fixed or constant gain) for frequencies greater than a selected frequency. In another configuration, the amount of amplification is frequency dependent (or provides variable gain) for frequencies greater than the selected frequency. As will be appreciated, the losses along certain transmission lines, such as a coax, are frequency dependent. For example, 1,000 feet of coaxial cable will provide relatively low losses for 2 MHz signals but much higher losses for 40 MHz signal components. The latter amplifier configuration is intended to provide a frequency-dependent gain that matches the signal losses up to about 100 MHz. With reference to NTSC signals, the adjustable EQ 412 is intended to adjust the high frequency components of the video signal to the appropriate magnitude (or IRE level), and the amount of amplification is based, at least in part, on the previously adjusted gain of the adjustable gain circuitry. The adjustable gain is set so that the sync tip 204 is at the correct amplitude. For signal components having a frequency above about 1 MHz, the signal amplification preferably ranges from about 0.5× to about 4× (or from about −3 dB to about 6 dB) and even more preferably from about 0.75× to about 2× (or from about −1.3 dB to about 3 dB).
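A crude software analogue of such an equalizer adds back a scaled high-pass residue, so only components above a cutoff are boosted. The filter form, cutoff, and boost factor are illustrative assumptions, not values from the circuit.

```python
def equalize(samples, boost=1.0, alpha=0.9):
    """y[n] = x[n] + boost * hp[n], where hp is a one-pole high-pass
    residue of the input; low frequencies pass unchanged."""
    out, hp, prev_x = [], 0.0, samples[0]
    for x in samples:
        hp = alpha * (hp + x - prev_x)  # high-pass residue
        prev_x = x
        out.append(x + boost * hp)
    return out

# A DC level passes unchanged; a rapidly alternating (high-frequency)
# pattern is boosted well beyond its original +/-1 swing.
print(equalize([0.3, 0.3, 0.3]))
print(max(abs(y) for y in equalize([1.0, -1.0] * 8)) > 1.5)  # → True
```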
The differential input 404 and adjustable gain and EQ 408 and 412 may each be any suitable circuitry, such as an integrated circuit. In a preferred implementation, the differential input 404 and adjustable gain and EQ 408 and 412, respectively, are embodied in the EL9110™ integrated circuit manufactured by Intersil Corporation. The integrated circuit is a single channel differential receiver and equalizer. It contains a high speed differential receiver with five programmable poles. The outputs of the pole blocks are summed into an output buffer. The equalization length can be set with the voltage on a single pin. The gain can be adjusted up or down by 6 dB using a VGAIN control signal. Additionally, a further 6 dB of gain can be switched in to provide a matched drive into a cable. The circuit has a bandwidth of about 150 MHz. A single input voltage can be used to set the compensation levels for the required length of transmission line.
The equivalent circuitry of this integrated circuit is shown in
The components of chip 800 are shown in
The DC restore circuitry 416 measures the average DC voltage of the burst interval (or the voltage level of the front porch 200, breezeway 208, and/or back porch 216) as compared to a ground reference and generates a correction voltage to ensure that the blanking portion (or the voltage level of the front porch 200, breezeway 208, and/or back porch 216) of the video signal is at DC ground potential. Preferably, the DC restore circuitry 416 restores the front porch 200 (and the breezeway 208 and back porch 216) (or the 0 volt reference point(s)) to the proper position. When the AC coupling 400 removes any DC offset from the video signal, it effectively removes the DC reference voltage (represented by the front porch 200) from the video signal. DC restore circuitry 416 restores the true zero volt DC reference to the signal. Thus, the presence of the DC restore circuitry 416 is preferred when AC coupling 400 is employed.
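The line-by-line clamping action of the DC restore circuitry can be sketched as follows; the gated-window position and the per-line drift values are illustrative assumptions.

```python
def dc_restore(lines, gate_start, gate_end):
    """For each line, measure the average of a gated blanking window
    against ground and apply a correction voltage so the blanking
    portion sits at 0 V."""
    restored = []
    for line in lines:
        err = sum(line[gate_start:gate_end]) / (gate_end - gate_start)
        restored.append([s - err for s in line])
    return restored

lines = [[0.12] * 8 + [0.82] * 24,    # line drifted +0.12 V
         [-0.05] * 8 + [0.65] * 24]   # line drifted -0.05 V
out = dc_restore(lines, 0, 8)
# Both lines now have their blanking level clamped to ~0 V, with the
# active video restored to the same amplitude above it.
```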
In a preferred configuration shown in
The sample-hold switch circuit 704 is time gated by a sync separator 708 that is capacitively coupled to the line 432 carrying the amplified and equalized video signal. The sync separator 708 samples the amplified and equalized video signal and generates and sends the timing pulse (or the hold signal) when the sync tip 204 is detected. Although the sync tip may be detected using any voltage level of the sync pulse at or below the 50% level 250 (−20 IRE), the sync tip is preferably detected when the sync pulse voltage hits the 50% level 250 (or −20 IRE). The sync separator 708 sends the hold signal to pin 4. After a predetermined time (or duration of the hold signal), the hold signal goes low as noted above. The predetermined time is measured from the time that the sync tip is detected and is selected so that the hold signal goes low before or during the color burst. The predetermined time is typically about 5.06 microseconds. The capacitor 750 is a DC blocking capacitor (blocking all of the DC voltage components in the video signal), as the sync separator may be damaged by a negative DC voltage. After passing through the capacitor, the signal is suitably adjusted so its voltage does not go negative.
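A software stand-in for the sync separator's gating behavior is sketched below: it flags a sync event when the signal falls through the 50% sync level (−20 IRE) and raises a "hold" gate for a fixed duration afterward. The sample-based hold duration is an illustrative assumption standing in for the roughly 5.06 microsecond interval described above.

```python
def sync_gate(samples, threshold=-20.0, hold_samples=5):
    """Return a boolean gate that goes high when the signal crosses
    the sync threshold and stays high for hold_samples samples."""
    gate = [False] * len(samples)
    n = 0
    while n < len(samples):
        if samples[n] <= threshold:
            for k in range(n, min(n + hold_samples, len(samples))):
                gate[k] = True
            n += hold_samples
        else:
            n += 1
    return gate

ire = [0, 0, -40, -40, -40, 0, 0, 0, 0, 0]  # sync pulse at samples 2-4
print(sync_gate(ire))
# → [False, False, True, True, True, True, True, False, False, False]
```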
In a preferred implementation, the AC coupling 400, differential input circuitry 404, adjustable gain circuitry 408, adjustable EQ circuitry 412, and DC restore circuitry 416 are located relatively close to one another. Preferably, these circuit elements are electrically adjacent, and even more preferably are integrated onto a single circuit board to minimize signal losses, simplify impedance matching between circuit elements, and minimize undesirable radiation and grounding problems.
As shown in
Depending on the video system configuration, the output of the DC restore circuitry 416 may either be the signal 448 outputted by the video signal correction unit 300, or may be the signal 452 inputted to a distribution amplifier (DA) 456. The DA 456 generates a multiplicity of nearly identical, and preferably identical, amplified output signals so that the output signal is replicated on a multiplicity of output connectors, enabling the user to send the identical signal to multiple locations or video system components. In one configuration, the amplitudes of the multiple amplified output signals 460a,b are each within about 98% of the amplitude of the input signal 452. Preferably, the distribution amplifier 456 is configured to provide a gain of from about 1.0× to about 1.05× (or of from about 0 dB to about 0.2 dB) with unity gain being most preferred. In a typical implementation, a 1×2 DA is often required so that the signal from the video source can be routed to two different devices, such as a video monitor and a DVR, simultaneously and without any loss of signal.
As shown in
In one configuration, the signal processing unit 500 measures the sync tip voltage level and adjusts the gain, typically with a control voltage, so that the sync tip level meets the signal standard (for example, −40 IRE for an NTSC video signal). The unit 500 also measures the color burst amplitude of the signal level and adjusts the EQ, typically with a control voltage, so that the color burst level meets the signal standard (for example, +/−20 IRE for an NTSC video signal). The signal processing unit 500 determines the gain and EQ correction voltages and may incorporate a microprocessor that measures signal timing and generates one or more control voltages that are sent to the adjustable gain and EQ 408 and 412. The signal processing unit and monitoring device may be, but are not required to be, combined and controlled by a single microprocessor.
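The feedback applied by the signal processing unit can be sketched as a proportional control loop that measures the sync-tip level, compares it against the −40 IRE NTSC target, and nudges the gain control value. The proportional step and loop structure below are illustrative assumptions, not the claimed control law.

```python
def gain_control_step(measured_tip_ire, gain, target=-40.0, k=0.5):
    """Proportional update: push the gain until the measured sync tip
    reaches the target level."""
    error = target - measured_tip_ire
    return gain * (1.0 + k * error / target)

raw_tip = -25.0   # sync tip from a degraded source: too shallow
gain = 1.0
for _ in range(50):
    gain = gain_control_step(raw_tip * gain, gain)
print(round(raw_tip * gain, 3))  # → -40.0
```

The loop converges to the gain (here 1.6×) that brings the sync tip onto the standard level, mirroring the control-voltage adjustment described above.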
The video signal sync amplitude comparison module 520 measures the amplitude of the sync pulse 204 and sends a correction signal, if needed, to the gain control circuitry 408 and the monitoring device 504. The video signal burst amplitude comparison module 524 measures the amplitude (or peaks) of the black burst portion of the video signal, and sends a correction signal, if needed, to the EQ control circuitry 412 and the monitoring device 504. The timing verification microprocessor 528 measures the vertical timing intervals and sync tip timing, and compares these measured times to a pre-determined time standard. An error signal, that is proportional to the deviation from the desired time standard, is generated and sent to the monitoring device 504.
In any of the above configurations, the unit 300 can include redundant power supply circuitry for increased reliability. This unit may also include a monitoring and control device to switch to or activate the redundant power supply circuitry when the primary power supply means does not meet predetermined performance specifications. Alternatively, the unit may include an uninterruptible power supply (UPS) with alarm means to indicate when the UPS is operating from battery backup.
The above figures are not intended to limit the functions of the signal processing unit to those elements shown, or to indicate that all of these functions are required in every embodiment of the present invention.
A number of variations and modifications of the invention can be used. It would be possible to provide for some features of the invention without providing others.
For example, in one alternative embodiment, the various modules or components in the unit 300 are implemented in hardware (such as a logic circuit) and/or in software.
In another alternative embodiment, the signal flow is not restricted to the sequence shown. Other configurations of the same functions are possible, and such variations will be recognized by video equipment design engineers.
The present invention, in various embodiments, includes components, methods, processes, systems and/or apparatus substantially as depicted and described herein, including various embodiments, subcombinations, and subsets thereof. Those of skill in the art will understand how to make and use the present invention after understanding the present disclosure. The present invention, in various embodiments, includes providing devices and processes in the absence of items not depicted and/or described herein or in various embodiments hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease of implementation, and/or reducing cost of implementation.
The foregoing discussion of the invention has been presented for purposes of illustration and description. The foregoing is not intended to limit the invention to the form or forms disclosed herein. In the foregoing Detailed Description for example, various features of the invention are grouped together in one or more embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the invention.
Moreover, though the description of the invention has included description of one or more embodiments and certain variations and modifications, other variations and modifications are within the scope of the invention, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative embodiments to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
The present application claims the benefits of U.S. Provisional Application Serial Nos. 60/563,751, filed Apr. 19, 2004, and 60/662,914 filed Mar. 15, 2005, of the same title and to Patton, et al., each of which is incorporated herein by this reference.