This application claims priority under 35 U.S.C. §119(a) from United Kingdom Patent Application No. 1119488.3, filed on Nov. 11, 2011, the contents of which are incorporated herein by reference.
The present invention relates generally to data compression techniques, and more particularly, to compressing a multivariate dataset.
In computer systems, data compression techniques are commonly applied to datasets so as to reduce the size of the dataset to facilitate its storage, use, visualization or transmission. Some datasets, such as image datasets, may be particularly large and comprise multiple variates.
Compression techniques are commonly used to reduce the dynamic range of the data values of large datasets, such as image datasets. Such compression techniques may be applied, for example, to a dataset comprising image data having dynamic ranges outside those which can be perceived by human sight or displayed on a computer screen. A given compression technique can be arranged to reduce the dynamic range of the image data so that the compressed image data can be displayed while preserving at least some of the information from the original dataset that is outside the displayable range.
Existing compression techniques are applied to image data but are either restricted in the number of variates to which they may be applied or may result in loss of information, saturation of image regions, or smoothing out of significant details.
In one embodiment of the present invention, a method for compressing a multivariate dataset comprises selecting a dataset comprising a plurality of variates. The method further comprises applying a first compression method to values of a first variate of the dataset. In addition, the method comprises applying, by a processor, a second compression method to values of a second variate of the dataset, where the second compression method is arranged to compress the second variate values relative to a variation of corresponding first variate values.
Other forms of the embodiment of the method described above are in a system and in a computer program product.
The foregoing has outlined rather generally the features and technical advantages of one or more embodiments of the present invention in order that the detailed description of the present invention that follows may be better understood. Additional features and advantages of the present invention will be described hereinafter which may form the subject of the claims of the present invention.
A better understanding of the present invention can be obtained when the following detailed description is considered in conjunction with the following drawings, in which:
Embodiments of the present invention provide a method for compressing a multivariate dataset, the method comprising the steps of: selecting a dataset comprising a plurality of variates; applying a first compression method to the values of a first variate of the dataset; applying a second compression method to the values of a second variate of the dataset, wherein the second compression method is arranged to compress the second variate values relative to the variation of the corresponding first variate values.
The compression of the second variate values may be inversely related to the variation of the corresponding first variate values. The second variate values may be compressed relative to the uncompressed first variate values. The second variate values may be compressed relative to the compressed first variate values. The second variate values may be compressed to a predetermined range. The compressed second variate values may be interpolated to a predetermined set of value ranges. The interpolation may be linear. The dataset may represent image data and the first and second variates comprise pixel amplitude and frequency. The pixel amplitude may represent luminance. The first compression method may comprise a tone mapping method. The tone mapping method may be arranged to perform spatially uniform compression. The tone mapping method may be arranged to perform spatially varying compression.
Referring now to the Figures,
In one embodiment, the HDR data 110 comprises image data having a high dynamic range of at least two variates in the form of the pixel luminance and pixel frequency. The ranges of these two variates exceed the range of human perception and also the displayable range of the display 104. Image data of such high dynamic range may therefore be referred to as hyper-spectral image data. With reference to
With reference to
Secondly, the dynamic range of the frequency values f of the HDR image data 110 is compressed relative to the compression of the corresponding luminance values L. For each pixel p 203, the luminance difference ΔL between a given pixel p 203 in a given image In 202 and the corresponding pixel p 203 in the subsequent image In+1 is defined as follows:
ΔL(x, y, fn) = |L(x, y, fn + Δf) − L(x, y, fn)|
where fn is the value range in the frequency band of a given image In 202 in the original uncompressed image stack 201; p(x, y, fn) is a pixel p 203 in image In 202; and Δf = fn+1 − fn is the frequency shift, or difference, between any two successive images I 202 in the image stack 201.
A mapping function M(x, y, fn) is then defined for each pixel p 203 as follows:
In the present embodiment, the mapping function for a given pixel p is inversely related to the compression of the luminance value for that pixel p.
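By way of illustration only, the per-pixel luminance difference ΔL and an inverse mapping of the kind described may be sketched as follows. The description does not give a closed form for M, so the specific inverse relation below (M = 1/(ε + ΔL)), the function names, and the parameter ε are all illustrative assumptions:

```python
import numpy as np

def luminance_difference(stack):
    """DeltaL(x, y, fn) = |L(x, y, fn + df) - L(x, y, fn)| between each
    pair of successive frequency-band images (axis 0 is frequency)."""
    return np.abs(np.diff(stack, axis=0))

def mapping_function(delta_L, eps=1.0):
    """Hypothetical mapping M, inversely related to the per-pixel
    luminance variation: a large luminance change yields a small
    mapping value, and vice versa. The closed form is an assumption."""
    return 1.0 / (eps + delta_L)
```

For a stack of n frequency-band images, `luminance_difference` returns n − 1 difference planes, one per pair of adjacent images.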
With reference to
As a result of the mapping function M, the compression of the second variate for a given pixel is relative to the compression of the first variate for that pixel. In the present embodiment, the degree of compression of the frequency variate is inversely related to the variation of the luminance variate after compression.
The compressed frequency values f′n of the pixels p′ will no longer fall into the frequency-defined images In of the original uncompressed image stack 201. Each output compressed image will be associated with various frequencies of the original uncompressed images In because the compression of the frequency variate of a given pixel has been performed on a pixel-by-pixel basis in relation to the luminance compression for the given pixel. In the present embodiment, the compressed frequency values f′n are linearly interpolated into a predetermined number n′ of compressed frequency images In′ for the output compressed image stack 204. In addition to the upper and lower frequency limits f′min and f′max, the user may also determine the number n′ of images In′ in the compressed image stack 204.
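The linear interpolation of one pixel's compressed frequency values onto a uniform grid of n′ output bands can be sketched as follows; this is a minimal sketch, and the function name and signature are assumptions (the samples for one pixel are taken to be sorted by compressed frequency):

```python
import numpy as np

def interpolate_to_bands(f_compressed, values, f_min, f_max, n_out):
    """Linearly interpolate one pixel's (compressed frequency, value)
    samples onto n_out uniformly spaced output bands in [f_min, f_max].
    f_compressed must be sorted ascending."""
    f_out = np.linspace(f_min, f_max, n_out)
    return f_out, np.interp(f_out, f_compressed, values)
```

Applying this at every pixel position would yield the n′ images In′ of the compressed output stack.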
In the present embodiment, the number n′ of compressed frequency images In′ is less than the number n of images In in the uncompressed image stack 201. For example, given an uncompressed image stack 201 with a pixel luminance dynamic range of 10^6 comprising 1000 images I (n=1000), the lowest frequency image I1 representing a frequency fmin of 1 THz and the highest frequency image I1000 representing a frequency fmax of 1000 THz, the separation between each image I will be 1 THz assuming equal frequency band differences between images. In other words, each image I covers a 1 THz frequency band. Compressing the dynamic range of the luminance values L to 8 bits for use with 8-bit displays and performing the relative frequency value compression described above with a lower frequency limit f′min specified as 400 THz and an upper frequency limit f′max specified as 800 THz will thus result in a reduction in the number of images n′, from 1000 for the uncompressed image stack 201 to 400 for the compressed image stack 204 if the image I′ bandwidth is maintained at 1 THz.
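The arithmetic of this worked example can be checked directly; the variable names below are illustrative, and the values are those given in the passage:

```python
# Values from the worked example above.
f_min_out = 400.0   # THz, user-specified lower limit f'min
f_max_out = 800.0   # THz, user-specified upper limit f'max
bandwidth = 1.0     # THz per output image, maintained from the input stack

# Number of output images n' = (f'max - f'min) / bandwidth.
n_out = int((f_max_out - f_min_out) / bandwidth)
```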
The processing performed by the HDR compression module 109 (
With reference to
As will be understood by those in the art, embodiments of the present invention may be applied to any suitable dataset comprising two or more variates for relative compression. The relation between the compression of the variates is dependent on the mapping function, which may define any suitable relation between the two or more variates being compressed.
As will be understood by those skilled in the art, any suitable tone mapping or other applicable compression technique may be employed for compressing the value of the first variates in embodiments of the present invention. Local or global tone-mapping operators, that is, spatially varying or spatially uniform operators, may be used depending on the given application.
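As one example of a spatially uniform (global) operator that could serve as the first compression method, the sketch below applies the well-known Reinhard global tone-mapping curve Ld = Lm / (1 + Lm); the key value and the normalization by the log-average luminance are conventional choices for this operator, not requirements of the embodiments:

```python
import numpy as np

def reinhard_global(L, key=0.18, eps=1e-6):
    """Spatially uniform tone mapping: scale luminance by the scene
    key over the log-average luminance, then compress with
    Ld = Lm / (1 + Lm), mapping [0, inf) into [0, 1)."""
    log_avg = np.exp(np.mean(np.log(L + eps)))
    Lm = key * L / log_avg
    return Lm / (1.0 + Lm)
```

Because the curve is applied identically at every pixel, this is a global operator; a local (spatially varying) operator would instead adapt the compression to each pixel's neighborhood.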
In another embodiment, while the frequency range of the compressed image stack is reduced, the number n′ of compressed frequency images In′ is user-determined so as to be equal to or larger than the number n of images In in the uncompressed image stack 201 (
In another embodiment, no interpolation of the compressed frequency values is performed, instead leaving the compressed frequency values in their raw state in the compressed image stack.
In a further embodiment, the compressed frequency values f′n are non-linearly interpolated into a predetermined number n′ of compressed frequency images In′ for the output compressed image stack. Such non-linear interpolation may be performed in accordance with any suitable predetermined function.
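One concrete, purely illustrative choice of predetermined non-linear function is a logarithmic output frequency grid, i.e. geometrically rather than linearly spaced bands; the function name and signature below are assumptions:

```python
import numpy as np

def interpolate_log_bands(f_compressed, values, f_min, f_max, n_out):
    """Resample one pixel's (compressed frequency, value) samples onto
    n_out logarithmically spaced output frequency bands -- one example
    of a predetermined non-linear grid."""
    f_out = np.geomspace(f_min, f_max, n_out)
    return f_out, np.interp(f_out, f_compressed, values)
```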
Embodiments of the present invention extend the use of tone mapping from single variable datasets to multi-variable datasets by compressing frequency coordinates relative to the change in the three dimensional tone mapped values. Each output tone mapped compressed plane is thus associated with various frequencies of the original planes in the uncompressed image stack.
Embodiments of the present invention may be further arranged to enable multi-variate tone mapping for applications, such as the tone mapping of colored standard high dynamic range images where both the luminance and color fidelity need to be preserved on significantly lower dynamic range displays. Embodiments of the present invention may therefore be arranged to provide compression for both color and luminance that mitigates the over-saturation or cartooning effects of the tone mapped images. Embodiments of the present invention may be applied to 3D MRI scans, di-tonic MRI or colored MRI.
It will be understood by those skilled in the art that the apparatus that embodies a part or all of the present invention may be a general purpose device having software arranged to provide a part or all of an embodiment of the present invention. The device could be a single device or a group of devices and the software could be a single program or a set of programs.
While the present invention has been illustrated by the description of the embodiments thereof, and while the embodiments have been described in considerable detail, it is not the intention of the applicant to restrict or in any way limit the scope of the appended claims to such detail. Additional advantages and modifications will readily appear to those skilled in the art. Therefore, the present invention in its broader aspects is not limited to the specific details of the representative apparatus and method, and illustrative examples shown and described. Accordingly, departures may be made from such details without departure from the spirit or scope of applicant's general inventive concept.
Referring again to
Computer system 102 may further include a communications adapter 509 coupled to bus 502. Communications adapter 509 may interconnect bus 502 with an outside network thereby allowing computer system 102 to communicate with other similar devices.
I/O devices may also be connected to computer system 102 via a user interface adapter 510 and a display adapter 511. Keyboard 512, mouse 513 and speaker 514 may all be interconnected to bus 502 through user interface adapter 510. A display monitor 515 (e.g., display 104 of
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the C programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the function/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the function/acts specified in the flowchart and/or block diagram block or blocks.
The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
| Number | Date | Country |
|---|---|---|
| 20130124489 A1 | May 2013 | US |