The present invention relates generally to video systems. Merely by way of example, the methods, systems, and apparatuses described herein have been applied to video processing and delivery of video streams in video cameras, including thermal imaging cameras. The invention has wide applicability to video data and the delivery thereof.
Although video interfaces have been demonstrated, embodiments of the present invention provide functionality not available using conventional techniques. As described herein, a single interface is utilized to provide two video streams representing different levels of video processing applied to the same sensor content. In other words, embodiments provide a single video stream that contains video imagery with two different levels of video processing from the same sensor.
According to an embodiment of the present invention, a method of operating a video camera is provided. The method includes capturing a scene of imaging data using the video camera, wherein the imaging data is characterized by a first bit depth, and processing the imaging data to provide display data characterized by a second bit depth less than the first bit depth. The method also includes framing the imaging data and the display data and outputting the framed imaging and display data.
According to another embodiment of the present invention, a method of operating a thermal imaging system is provided. The method includes capturing a scene of imaging data using a thermal imager, wherein the imaging data is characterized by a first bit depth, and processing the imaging data to provide display data characterized by a second bit depth less than the first bit depth. The method also includes processing the imaging data to provide radiometric data characterized by a third bit depth greater than the first bit depth and framing the radiometric data and the display data. The method further includes outputting the framed radiometric and display data.
According to a specific embodiment of the present invention, a thermal imaging system is provided. The thermal imaging system includes one or more optical elements operable to collect infrared light and a camera core optically coupled to the one or more optical elements. The camera core includes an FPA module providing imaging data at a first bit depth, a color conversion module coupled to the FPA module and operable to process the imaging data to provide display data, and a framer coupled to the FPA module and the color conversion module and operable to form a super frame including the imaging data and the display data. The thermal imaging system also includes a communications module coupled to the camera core and an input/output module coupled to the communications module.
Numerous benefits are achieved by way of these techniques over conventional methods. For example, embodiments provide a method to output two video streams from a camera core in a single super frame or video stream without adding any significant cost, power, or cabling/pins to the core. Some applications include processing one of the two streams through a video analytics algorithm while the other stream is used as the “viewable” (e.g., human viewable) stream. Using embodiments of the present invention, a single video stream that contains video imagery with two different levels of video processing is provided. This contrasts with conventional methods, in which two video streams, in the form of two separate packets of data with two different destination ports, are used to communicate signals with different levels of processing. In embodiments of the present invention, a single stream is used that is output at a single destination port. These and other details of embodiments, along with many of their advantages and features, are described in the following description, claims, and figures.
Embodiments of the present invention provide methods and systems to output a ‘super frame’ that includes both ‘raw’ video (e.g., 14-bit ‘raw’ video, which can be utilized by an analytics engine) and contrast enhanced video (e.g., 8-bit contrast enhanced video suitable for use with imaging displays) from a thermal camera core using a single, parallel digital video interface. In other embodiments as described herein, the super frame can include 16-bit radiometric data along with 8-bit gray scale data represented, for example, in YUV format. Embodiments of the present invention utilize a ‘super frame’ format in which the number of pixels per line is doubled so that both video streams can be carried in a single frame. Thus, embodiments of the present invention can transmit two different images or representations of the same scene in a single frame. In other words, the method provides a clean solution for outputting two (or more) representations of the same sensor output data with different levels of video processing applied.
Thus, embodiments of the present invention utilize a camera core that outputs two video streams of different bit depth resolution and provides them, for example, to a video board that outputs an Ethernet IP stream. The sensor data obtained by the camera core is processed at multiple levels, as described herein, to provide the two different video streams that are based on the same sensor data. In contrast with conventional techniques, the super frame enables a single physical interface to carry both video streams provided by the processing engine in the camera core.
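As a minimal sketch of this line-doubling arrangement (in Python, with hypothetical array names, and assuming both streams are carried in 16-bit container words), the framer's operation can be modeled as follows:

```python
import numpy as np

def build_super_frame(display: np.ndarray, raw: np.ndarray) -> np.ndarray:
    """Concatenate the display stream and the raw stream line by line,
    doubling the number of pixels per line (e.g., 640 -> 1280)."""
    assert display.shape == raw.shape  # both represent the same scene
    return np.concatenate([display, raw], axis=1)

# 480 x 640 streams derived from the same sensor data: 8-bit display
# values widened to 16-bit words, and raw 14-bit values in 16-bit words.
display = np.zeros((480, 640), dtype=np.uint16)
raw = np.zeros((480, 640), dtype=np.uint16)
print(build_super_frame(display, raw).shape)  # (480, 1280)
```

Both representations then travel over the single physical interface as one frame of doubled line width.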
The video signal from the FPA 110 is provided to a non-uniformity correction (NUC) module 112 that corrects for pixel-to-pixel non-uniformities present across the sensor and performs optional temperature compensation. The NUC module 112 provides a 14-bit non-uniformity corrected output to multiple modules in the illustrated embodiment.
If the raw 14-bit image after NUC were displayed to a user, the quality would be very poor: the image would appear gray and washed out. Accordingly, to improve the display experience, the raw data is provided at the original bit depth to the automatic gain control (AGC)/local area processing (LAP) module 120, in which AGC and/or LAP is performed. This module can also be referred to as an image contrast enhancement module. The AGC/LAP module 120 performs contrast enhancement processing, edge detection, and the like to provide 8-bit video imagery that is suitable for use in user displays. This 8-bit video stream can be displayed to users, providing the desired contrast.
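The description does not mandate a particular AGC algorithm; as an illustrative sketch only, a simple global percentile stretch converting 14-bit data to 8-bit display data might look like the following (the actual AGC/LAP module can additionally perform local area processing and edge detection):

```python
import numpy as np

def simple_agc(raw14: np.ndarray, low_pct: float = 1.0,
               high_pct: float = 99.0) -> np.ndarray:
    """Stretch the central portion of the 14-bit histogram across the
    full 8-bit display range, discarding outlier tails."""
    lo, hi = np.percentile(raw14, [low_pct, high_pct])
    scaled = (raw14.astype(np.float64) - lo) / max(hi - lo, 1.0)
    return np.clip(scaled * 255.0, 0.0, 255.0).astype(np.uint8)

raw14 = np.random.randint(0, 2**14, size=(480, 640), dtype=np.uint16)
display8 = simple_agc(raw14)  # 8-bit imagery with usable contrast
```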
Although 8-bit data is illustrated for the display stream, embodiments of the present invention are not limited to this particular bit depth; other bit depths less than the first bit depth can be utilized for the display data.
The 8-bit video stream is also provided to the colorizer module 122, which converts the gray scale imagery into color video, for example, in YUV 4:2:2 format.
In one implementation, the 4:2:2 data can be treated as an array of unsigned char values, where the first byte contains the first Y sample, the second byte contains the first U (Cb) sample, the third byte contains the second Y sample, and the fourth byte contains the first V (Cr) sample, as shown in Table 1, with increasing memory addresses proceeding to the right.
If the image is addressed as an array of little-endian WORD values, the first WORD contains the first Y sample in the least significant bits (LSBs) and the first U (Cb) sample in the most significant bits (MSBs). The second WORD contains the second Y sample in the LSBs and the first V (Cr) sample in the MSBs.
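A short sketch illustrating this byte ordering and the little-endian WORD view (function and variable names are illustrative):

```python
import numpy as np

def pack_yuv422(y: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Interleave samples in the byte order described above: Y0 U0 Y1 V0.
    y holds one sample per pixel; u and v hold one sample per pixel pair."""
    out = np.empty(y.size * 2, dtype=np.uint8)
    out[0::4] = y[0::2]  # first Y sample of each pair
    out[1::4] = u        # shared U (Cb) sample
    out[2::4] = y[1::2]  # second Y sample of each pair
    out[3::4] = v        # shared V (Cr) sample
    return out

y = np.array([16, 32, 48, 64], dtype=np.uint8)   # luma, 4 pixels
u = np.array([128, 130], dtype=np.uint8)         # Cb, one per pixel pair
v = np.array([126, 124], dtype=np.uint8)         # Cr, one per pixel pair
packed = pack_yuv422(y, u, v)                    # Y0 U0 Y1 V0 Y2 U1 Y3 V1

# Read back as little-endian WORDs: Y in the LSBs, chroma in the MSBs.
words = packed.view('<u2')
assert int(words[0]) == (int(u[0]) << 8) | int(y[0])  # first WORD: U0|Y0
assert int(words[1]) == (int(v[0]) << 8) | int(y[1])  # second WORD: V0|Y1
```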
The output of the colorizer module 122 is a 16-bit per pixel color video stream that is provided to the framer 130 for framing into the super frame. In other embodiments, other colorization protocols can be used to provide colorized data for framing and eventual display. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.
Because some processing devices, for example, video analytics systems, can benefit from receiving the 14-bit video stream rather than the 8-bit video stream, embodiments of the present invention provide a dual video stream output from the camera core, as described herein, that is suitable both for processing of the 14-bit video stream and for display.
The output of the framer 130 is provided to multiplexer 140, which can select one of the inputs to be provided as an output to the parallel interface 150. The framer is able to buffer up to a line of data in one implementation and more than one line of data in other implementations.
The multiplexer 140 receives a plurality of inputs, for example, the raw 14-bit video data (input 145), the contrast enhanced 8-bit video data (input 142), the colorized 16-bit video data (input 143), and the super frame video data (input 144), and selects one of these inputs as the output that is provided to the parallel interface 150 based on the input provided at the select line SEL. Although framing of the raw data and the colorized data into a super frame is illustrated, embodiments are not limited to this particular combination of streams, as described below.
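A minimal model of this selection logic (the enumeration names below are hypothetical; the hardware performs the selection via the SEL line of multiplexer 140, with the numeric values taken from the input labels above):

```python
from enum import Enum

class Sel(Enum):
    """Hypothetical SEL settings, numbered after the mux inputs 142-145."""
    RAW_14 = 145      # raw 14-bit video data
    DISPLAY_8 = 142   # contrast enhanced 8-bit video data
    COLOR_16 = 143    # colorized 16-bit video data
    SUPER = 144       # super frame video data

def mux(sel: Sel, inputs: dict) -> object:
    """Route exactly one of the inputs to the single parallel interface."""
    return inputs[sel]

streams = {Sel.RAW_14: "raw", Sel.DISPLAY_8: "agc",
           Sel.COLOR_16: "yuv", Sel.SUPER: "super_frame"}
print(mux(Sel.SUPER, streams))  # one stream, one output port
```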
The radiometry module 210 converts the 14-bit intensity data into a 16-bit temperature value, represented by 11 bits to define the integer portion of the temperature value in Kelvin and 5 bits to define the fractional portion of the temperature value in Kelvin. Of course, other configurations can be utilized in which fewer or more bits are used to define the integer and fractional portions of the value. The temperature data is provided to the multiplexer as input 212 and is framed together with either the colorized data (input 143) or the contrast enhanced data (not illustrated but available by providing input 142 to the framer 130) to form the super frame featuring radiometric temperature information. The radiometry module 210 can utilize radiometric lookup tables or other radiometric processing to convert the 14-bit video stream to produce a 16-bit radiometric video stream using the camera temperature, calibration data, and the like.
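A minimal sketch of the 11.5 fixed-point temperature encoding described above (assuming simple rounding; the actual intensity-to-temperature conversion relies on lookup tables and calibration data):

```python
import numpy as np

def encode_kelvin(temp_k: np.ndarray) -> np.ndarray:
    """Pack Kelvin temperatures into the 16-bit layout described above:
    11 integer bits and 5 fractional bits (1/32 K resolution)."""
    fixed = np.round(np.asarray(temp_k, dtype=np.float64) * 32.0)
    return np.clip(fixed, 0, 2**16 - 1).astype(np.uint16)

def decode_kelvin(fixed: np.ndarray) -> np.ndarray:
    """Recover Kelvin values from the 11.5 fixed-point representation."""
    return fixed.astype(np.float64) / 32.0

temps = np.array([293.15, 310.0])        # roughly 20 C and 37 C
encoded = encode_kelvin(temps)
print(decode_kelvin(encoded))            # [293.15625 310.] in 1/32 K steps
```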
The framer 130 combines the 16-bit radiometric video stream with one of the other video signals (e.g., the contrast enhanced 8-bit video data (input 142) or the colorized 16-bit video data (input 143)) to form the super frame.
The framer performs an interleaving function. The 14-bit data is illustrated as two bytes of the following format: PxBy, where x is the pixel number and y is the byte number (either 0 or 1).
For example:
P0B0 = pixel 0, byte 0 = bits 7-0
P0B1 = pixel 0, byte 1 = bits 15-8 (bits 13-8 carry valid data; bits 14 and 15 are set to zero)
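A short sketch of this byte-splitting scheme (names are illustrative):

```python
import numpy as np

def split_14bit_pixels(pixels: np.ndarray) -> np.ndarray:
    """Split 14-bit pixel values into the two-byte PxBy format above:
    byte 0 carries bits 7-0; byte 1 carries bits 15-8, with bits 14 and
    15 forced to zero so only bits 13-8 hold valid data."""
    valid = pixels & 0x3FFF                          # keep the 14 valid bits
    b0 = (valid & 0xFF).astype(np.uint8)             # PxB0: bits 7-0
    b1 = ((valid >> 8) & 0x3F).astype(np.uint8)      # PxB1: bits 13-8
    return np.stack([b0, b1], axis=-1).reshape(-1)   # P0B0 P0B1 P1B0 P1B1 ...

pix = np.array([0x1ABC, 0x3FFF], dtype=np.uint16)
print([hex(b) for b in split_14bit_pixels(pix)])     # ['0xbc','0x1a','0xff','0x3f']
```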
The data format illustrated in Table 1 is utilized for each row of the super frame.
As illustrated in the corresponding flowchart, the method includes capturing a scene of imaging data using the video camera, wherein the imaging data is characterized by a first bit depth, and processing the imaging data to provide display data characterized by a second bit depth less than the first bit depth.
The method also includes framing the imaging data and the display data (614) and outputting the framed imaging and display data (616). In some embodiments, the imaging data, the display data, and the framed imaging and display data are provided to a multiplexer that is able to select one of the inputs as an output that is provided to a parallel interface. Accordingly, the imaging data (i.e., the raw data from the FPA) can be provided as an output of the system.
It should be appreciated that the specific steps described above provide a particular method of operating a video camera according to an embodiment of the present invention. Other sequences of steps may also be performed according to alternative embodiments.
The method further includes processing the imaging data to provide radiometric data characterized by a third bit depth greater than the first bit depth (654). In some embodiments, the 14-bit intensity data is converted to 16-bit radiometric data providing information on the temperature of a pixel rather than the intensity measured for the pixel. As an example, a lookup table can be used in performing the radiometric conversion from intensity data to temperature data. Additionally, the method includes framing the radiometric data and the display data (656) and outputting the framed radiometric and display data (658). In addition to the framed data, the imaging data and the display data can be provided to a multiplexer that can be used to select the desired output.
It should be appreciated that the specific steps described above provide a particular method of operating a thermal imaging system according to an embodiment of the present invention. Other sequences of steps may also be performed according to alternative embodiments.
The thermal imaging system also includes a communications module 1130 that is coupled to the camera core. The communications module is operable to interact with network 1160, which can be used to receive thermal imagery from the system, provide control inputs for the system, and the like. Memory 1140 is provided that is able to store data from the camera core, store settings and calibration data used by the camera core, and the like.
The thermal imaging system also includes an input/output module 1150 in the illustrated embodiment.
Embodiments of the present invention can utilize an LVDS interface that supports two or more modes of operation: a Camera Link® mode and a YUV Super frame mode, the latter being a serialized version of the parallel output discussed above. The Camera Link® mode is typically used to interface with Camera Link® frame grabbers. The LVDS video interface supports four LVDS data pairs and the LVDS clock pair as outputs. The LVDS timing is shown in Table 2, and the corresponding timing diagram is provided in the accompanying figure.
Blanking time is inserted between frames while FVAL is low. LVAL is high (valid) for the duration of each active line, and blanking time is inserted between lines while LVAL is low. The amount of horizontal and vertical blanking can change based on operating mode and camera revision.
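As a hedged sketch, the FVAL/LVAL framing can be modeled as follows; the line count, blanking durations, and one-sample-per-line simplification are illustrative placeholders, with the actual values given by Table 2:

```python
from typing import Iterator, Tuple

def lvds_frame(lines: int = 480, line_blank: int = 32,
               frame_blank: int = 2048) -> Iterator[Tuple[int, int]]:
    """Yield (FVAL, LVAL) pairs for one frame. FVAL stays high for the
    whole frame; LVAL goes high for each active line and low during
    horizontal blanking."""
    for _ in range(lines):
        yield (1, 1)              # active line (collapsed to one sample)
        for _ in range(line_blank):
            yield (1, 0)          # horizontal blanking: LVAL low, FVAL high
    for _ in range(frame_blank):
        yield (0, 0)              # vertical blanking between frames

# 480 active line periods in one frame of this illustrative timing.
print(sum(lval for _, lval in lvds_frame()))
```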
The LVDS interface supports three interface formats in the illustrated embodiment: a 14-bit gray scale format, a 24-bit RGB format, and the YUV Super frame format.
The 14-bit Gray Scale format is used to support the 14-bit and 8-bit gray scale data modes. The 14-bit and 8-bit Gray Scale mapping follows the Camera Link® standard and maps as shown in Table 3. The 24-bit RGB format is used to support the colorization data mode and uses the standard Camera Link® 24-bit RGB format. The 24-bit RGB format can be utilized as an alternative implementation compared to the 4:2:2 color mode discussed previously. As will be evident to one of skill in the art, the 4:2:2 color mode uses 16 bits per pixel, which is less than the 24 bits per pixel used in the 24-bit RGB format. Accordingly, the 4:2:2 color mode can be utilized in place of the 24-bit RGB format. Thus, the super frame can be sent through the LVDS Camera Link® interface using the bit mapping illustrated in Table 3.
In YUV Super frame mode, a 16-bit video stream is mapped into the Camera Link® interface as shown in Table 3. The YUV Super frame consists of 480 lines, with each line containing 1280 values. The first 640 values contain the YCbCr values generated for the pixels of that line, and the second 640 values contain the pre-AGC values for that line. The pre-AGC values are taken from the frame before the current YCbCr frame, which allows time for customer analytics to analyze the pre-AGC data so that additional overlays can be added to the YCbCr data stream.
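A minimal sketch of this line layout, including the one-frame delay on the pre-AGC data (array contents are placeholders):

```python
import numpy as np

def super_frame_stream(frames):
    """For each (ycbcr, pre_agc) frame pair, emit a 480 x 1280 super frame:
    the first 640 values per line are the current frame's YCbCr data, the
    second 640 values are the pre-AGC data from the previous frame."""
    prev = np.zeros((480, 640), dtype=np.uint16)   # placeholder before frame 0
    for ycbcr, pre_agc in frames:
        yield np.concatenate([ycbcr, prev], axis=1)
        prev = pre_agc                             # one frame of analytics lead

frames = [(np.full((480, 640), n, dtype=np.uint16),          # YCbCr stand-in
           np.full((480, 640), 100 + n, dtype=np.uint16))    # pre-AGC stand-in
          for n in range(3)]
out = list(super_frame_stream(frames))
assert out[1][0, 640] == 100   # frame 1 carries frame 0's pre-AGC values
```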
Table 4 illustrates timing for several modes of operation according to an embodiment of the present invention. The modes of operation are associated with the four inputs provided to the multiplexer 140 described above.
It is also understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested to persons skilled in the art and are to be included within the spirit and purview of this application and scope of the appended claims.
This application is a divisional of U.S. patent application Ser. No. 15/988,924, filed May 24, 2018, which is a continuation of U.S. patent application Ser. No. 15/439,831, filed on Feb. 22, 2017, now U.S. Pat. No. 10,009,553, which is a divisional of U.S. patent application Ser. No. 14/536,439, filed on Nov. 7, 2014, now U.S. Pat. No. 9,615,037, which claims priority to U.S. Provisional Patent Application No. 61/901,817, filed on Nov. 8, 2013, entitled “Method and System for Output of Dual Video Stream via a Single Parallel Digital Video Interface,” the disclosures of which are hereby incorporated by reference in their entirety for all purposes.