In-vivo imaging device providing data compression

Information

  • Patent Grant
  • 9113846
  • Patent Number
    9,113,846
  • Date Filed
    Thursday, November 18, 2004
  • Date Issued
    Tuesday, August 25, 2015
Abstract
A device, system and method may enable the obtaining of in vivo images from within body lumens or cavities, such as images of the gastrointestinal (GI) tract, where the data such as mosaic image data may be transmitted or otherwise sent to a receiving system in a compressed format.
Description
FIELD OF THE INVENTION

The present invention relates to an in-vivo device, system, and method such as for imaging the digestive tract; more specifically, to an in-vivo device, system, and method where information transmitted or sent may be compressed.


BACKGROUND OF THE INVENTION

Devices, systems, and methods for performing in-vivo imaging, for example, of passages or cavities within a body, and for gathering information other than or in addition to image information (e.g. temperature information, pressure information, etc.), are known in the art. Such devices may include, inter alia, various endoscopic imaging systems and various autonomous imaging devices for performing imaging in various internal body cavities.


An in-vivo imaging device may, for example, obtain images from inside a body cavity or lumen, such as the gastrointestinal (GI) tract. Such an imaging device may include, for example, an illumination unit, such as a plurality of light emitting diodes (LEDs) or other suitable light sources, an imager, and an optical system, which may focus images onto the imager. A transmitter and antenna may be included for transmitting the image and/or other data signals. An external receiver/recorder, for example, worn by the patient, may record and store images and other data. Images and other data may be displayed and/or analyzed on a computer or workstation after downloading the recorded data. The number and size of image data frames and/or other data to be transmitted may be limited by the time period, power, and/or bandwidth that may be required to transmit each image frame and/or other data. Transmission may be wireless, for example, by RF communication, or via wire.


Methods for compressing image or video data are known; for example, compression algorithms such as JPEG and MPEG may be used to compress image and video data.


SUMMARY OF THE INVENTION

An embodiment of the device, system and method of the present invention enables the obtaining of in-vivo images from within body lumens or cavities, such as images of the gastrointestinal tract, where the data, such as mosaic image data, may typically be transmitted or otherwise sent to a receiving system in a compressed format. According to embodiments of the present invention, the mosaic data may be directly compressed, e.g. compressed without first completing and/or partially completing the RGB image data. According to other embodiments of the present invention, the compressed data may be transmitted on the fly.





BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:



FIG. 1 shows a schematic diagram of an in-vivo imaging system according to embodiments of the present invention;



FIG. 2A shows an exemplary mosaic pixel arrangement according to embodiments of the present invention;



FIG. 2B shows an exemplary red pixel plane of a mosaic pixel arrangement according to embodiments of the present invention;



FIG. 2C shows an exemplary blue pixel plane of a mosaic pixel arrangement according to embodiments of the present invention;



FIG. 2D shows an exemplary first green pixel plane of a mosaic pixel arrangement according to embodiments of the present invention;



FIG. 2E shows an exemplary second green pixel plane of a mosaic pixel arrangement according to embodiments of the present invention;



FIG. 3 shows a flow chart describing a method of compressing image data according to an embodiment of the present invention;



FIG. 4 shows a transformation of mosaic data pixel arrangement from an [R, G1, G2, B] color space to an alternate color space according to an embodiment of the present invention;



FIG. 5 shows a flow chart describing a method of compressing image data according to other embodiments of the present invention; and



FIG. 6 shows a flow chart describing a decoding method for decoding compressed data according to an embodiment of the present invention.





It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn accurately or to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity, or several physical components may be included in one functional block or element. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.


DETAILED DESCRIPTION OF THE INVENTION

In the following description, various aspects of the present invention will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention may be practiced without the specific details presented herein. Furthermore, well-known features may be omitted or simplified in order not to obscure the present invention.


Embodiments of the system and method of the present invention may be used, for example, in conjunction with an imaging system or device such as may be described in U.S. Pat. No. 5,604,531 to Iddan et al. and/or in application number WO 01/65995 entitled “A Device And System For In Vivo Imaging”, published on 13 Sep. 2001, both of which are hereby incorporated by reference. However, the device, system and method according to the present invention may be used with any device that may provide image and/or other data from within a body lumen or cavity. In alternate embodiments, the system and method of the present invention may be used with devices capturing information other than image information within the human body; for example, temperature, pressure or pH information, information on the location of the transmitting device, or other information.


Reference is made to FIG. 1, showing a schematic diagram of an in-vivo imaging system according to embodiments of the present invention. In an exemplary embodiment, a device 40 may be a swallowable capsule capturing images, for example, images of the gastrointestinal tract. Typically, device 40 may be an autonomous device and may include at least one sensor such as an imager 46 for capturing images, a processing chip or circuit 47 that may process signals generated by the imager 46, one or more illumination sources 42, an optical system 22, a transmitter 41 and a power source 45. In one embodiment of the present invention, the imager 46 may be and/or contain a CMOS imager. In other embodiments, other imagers may be used, e.g. a CCD imager or other imagers. In some embodiments of the present invention, the imager 46 may incorporate dark reference pixels that may enable the imager 46 to subtract dark pixel noise recorded intermittently between image captures using known methods. In other embodiments of the present invention, this feature may not be available. Processing chip 47 and/or imager 46 may incorporate circuitry, firmware, and/or software for compressing images and/or other data, e.g. control data. In other embodiments, a compression module 100 and a buffer 49 may be incorporated in the imager 46, the processing chip 47, the transmitter 41, and/or a separate component. In some embodiments of the present invention, the buffer 49 may have a capacity that may be substantially smaller than the size of a frame of image data captured, for example, from imager 46. Processing chip 47 need not be a separate component; for example, processing circuitry 47 may be integral to the imager 46, integral to a transmitter 41, and/or other suitable components of device 40. The buffer 49 may, for example, have a capacity of 0-20 kilobytes and may serve to facilitate, for example, a constant bit rate of data from the compression module 100 to the transmitter 41. Other uses of buffer 49 and/or other sizes of buffer 49 may be implemented. The transmitter 41 may, for example, transmit compressed data, for example, compressed image data and possibly other information (e.g. control information) to a receiving device, for example a receiver 12. The transmitter 41 may typically be an ultra low power radio frequency (RF) transmitter with high bandwidth input, possibly provided in chip scale packaging. The transmitter may transmit, for example, via an antenna 48. The transmitter 41 may, for example, include circuitry and functionality for controlling the device 40.


Typically, device 40 may be an autonomous wireless device that may be, for example, swallowed by a patient and may traverse a patient's GI tract. However, other body lumens or cavities may be imaged or examined with device 40. Device 40 may transmit image and possibly other data in a compressed format to, for example, components located outside the patient's body, which may, for example, receive and process, e.g. decode, the transmitted data. Preferably, located outside the patient's body in one or more locations may be a receiver 12, preferably including an antenna or antenna array 15, for receiving image and possibly other data from device 40; a receiver storage unit 16, for storing image and other data; a data processor 14 with CPU 13; a data processor storage unit 19; a data decoding module 150 for decompressing data; and an image monitor and/or display 18, for displaying, inter alia, the images transmitted by the device 40 and recorded by the receiver 12. In one embodiment of the present invention, receiver 12 may be small and portable. In other embodiments, receiver 12 may be integral to data processor 14, and antenna array 15 may communicate with receiver 12 by, for example, wireless connections. Other suitable configurations of receiver 12, antenna 15 and data processor 14 may be used. Preferably, data processor 14, data processor storage unit 19 and monitor 18 may be part of a personal computer, workstation, or a Personal Digital Assistant (PDA) device, or a device substantially similar to a PDA device. In alternate embodiments, the data reception and storage components may be of other suitable configurations. Further, image and other data may be received in other suitable manners and by other sets of suitable components.


Embodiments of the system and method of the present invention may provide transmission of compressed image data and possibly other data from, for example, in-vivo device 40. Compression as described herein may facilitate transmission of images at a higher frame rate without increasing, for example, the bandwidth of transmitter 41, may facilitate transmission with a lower bandwidth, and/or may facilitate increasing transmission gain per bit without increasing the overall energy required from the power source 45. Data other than or in addition to image data may be compressed and transmitted. For example, control information may be compressed. In-vivo autonomous devices, for example device 40, may typically have limited space and power, and therefore it may be desirable to minimize the processing power, buffer size, and/or memory that may be required for compressing data. In some embodiments of the present invention, it may be desirable to accomplish compression of image and other data with minimal memory capability.


Typically, the image data recorded and transmitted may be digital color image data, although other image formats (e.g. black and white image data) may be used. In some embodiments of the present invention, each frame of uncompressed image data may include, for example, 256 rows of 256 pixels each, and each pixel may include data for color and brightness, according to known methods. In some embodiments of the present invention, the image data may be mosaic image data; for example, a pixel group may be represented by a mosaic of, for example, four pixels, where each pixel in the group may correspond to a primary such as red, green, or blue. One primary, e.g. green, may be represented twice. Other pixel arrangements and sizes may be used. Other sizes of pixel groups, including other numbers of pixels, may be used.


Reference is now made to FIG. 2A showing an exemplary mosaic pixel arrangement according to an embodiment of the present invention. In FIG. 2A, every pixel group 250 may be represented by a mosaic of four pixels, for example, a red pixel 210, a blue pixel 220, a first green pixel 230, and a second green pixel 240. The corresponding pixel planes (red, blue, first green and second green planes) according to the pixel arrangement shown in FIG. 2A are shown in FIGS. 2B, 2C, 2D and 2E, respectively. Known methods, e.g. known interpolation methods, may be used to create a complete Red, Green and Blue (RGB) pixel image that may include an RGB value in each of the pixel positions 299. A mosaic may include other numbers of pixels.
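For illustration only, the following sketch (in Python, which the patent itself does not use) shows how a raw mosaic frame of the kind shown in FIG. 2A might be split into the four mono-color planes of FIGS. 2B-2E. The particular 2×2 sample positions and the function name split_mosaic are assumptions made for this sketch, not part of the patent text.

```python
import numpy as np

def split_mosaic(mosaic):
    """Split a raw mosaic frame into four mono-color planes.

    Assumes a repeating 2x2 pixel group laid out as
        R  G1
        G2 B
    (the actual positions in FIG. 2A may differ).  Each returned
    plane is a quarter-size array holding only the samples of one
    color, as in FIGS. 2B-2E.
    """
    r  = mosaic[0::2, 0::2]   # red samples
    g1 = mosaic[0::2, 1::2]   # first green samples
    g2 = mosaic[1::2, 0::2]   # second green samples
    b  = mosaic[1::2, 1::2]   # blue samples
    return r, g1, g2, b

# Example: a 256x256 mosaic frame yields four 128x128 planes.
frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
r, g1, g2, b = split_mosaic(frame)
```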


The data compression module 100 and decoder module 150 may use various data compression methods and systems. The data compression methods used may be lossless or lossy. Lossless data compression may enable precise (typically, with no distortion) decoding of the compressed data. The compression ratio of lossless methods may, however, be limited. Lossy compression methods may not enable precise decoding of the compressed information. However, the compression ratio of lossy methods may typically be much higher than that of lossless methods; for example, lossy compression may result in compression ratios greater than two. In many cases the data distortion of lossy methods may be insignificant and/or not discernable by the human eye. Typically, known compression algorithms may compress or decrease the original size of the data by storing and/or transmitting only differences between one or more neighboring values, for example, pixels. In general, differences in color and intensity of image data may typically occur gradually over a number of pixels, and therefore the difference between neighboring pixels may be a smaller quantity as compared to the value of each pixel. In some embodiments of the present invention, a compression ratio of at least four may be desired.
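As a minimal, hypothetical illustration of this difference-based idea (not the patent's specific coder), each pixel of one mono-color plane can be predicted from its left neighbor and only the residual kept; for slowly varying in-vivo images the residuals cluster near zero and so need fewer bits than the raw samples.

```python
import numpy as np

def row_residuals(plane):
    """Predictive residuals along each row of one color plane.

    Each pixel is predicted by its left neighbor (the first pixel in
    a row is kept as-is).  Small residuals can then be entropy-coded
    in far fewer bits than the original 8-bit samples.
    """
    plane = plane.astype(np.int16)       # widen to avoid overflow
    residuals = plane.copy()
    residuals[:, 1:] = plane[:, 1:] - plane[:, :-1]
    return residuals

def reconstruct(residuals):
    """Invert row_residuals exactly (a lossless round trip)."""
    return np.cumsum(residuals, axis=1).astype(np.uint8)
```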


Known compression algorithms may include JPEG and/or MPEG compression, which may typically be used for image and video compression and may have an option to operate according to either a lossless or lossy scheme. However, such algorithms may typically require substantial processing power and memory, and may typically require full RGB data (as opposed to mosaic data) as input. Other known compression algorithms (that may typically be lossless), e.g., Binary Tree Predictive Coding (BTPC), the Fast Efficient Lossless Image Compression System (FELICS), and the Low Complexity Context-Based Lossless Image Compression Algorithm (LOCO-I), etc., may require relatively lower processing power and memory but may still require memory to, for example, store tables and other data. In some embodiments of the present invention, known lossy and/or lossless methods may be revised and/or implemented with pre-processing of data to enable lossy compression to be implemented with reduced processing power and memory requirements, and to be compatible with mosaic image data input.


In some embodiments of the present invention, the performance of known compression algorithms may be improved by considering known characteristics of the system and the environment that may be imaged. In one example, the resolution of an image may be limited by, for example, optical system 22. Knowledge of the known limitations in the resolution may be, for example, incorporated into compression algorithms, e.g. by defining one or more parameters, so that higher performance may be achieved for the specific system being used. In other embodiments, a priori knowledge of the color scheme or other characteristics of the image data may be incorporated in the compression algorithm. In other embodiments, pre-processing may be implemented to increase performance of the compression algorithm. Performance may be based on compression ratio, processing, and/or memory requirements. Embodiments of the present invention describe a system and method for compression of image and other data at high compression rates, e.g. compression ratios of 4-8, with relatively little processing power and memory requirements.


Reference is now made to FIG. 3 showing a method for compressing data according to one embodiment of the present invention. In block 300 image data may be obtained. In other embodiments, data other than and/or in addition to image data may be obtained and compressed. Image data 300 may be mosaic image data, for example, as is shown in FIG. 2A, complete RGB color image data, or other suitable formats of image data. Typically, for in-vivo devices, image data may be mosaic image data as may be described herein. In block 310 dark reference pixels may be subtracted from corresponding pixels. In other embodiments of the present invention, dark reference pixels may not be provided for each captured image. In some embodiments of the present invention, dark image information may be obtained using other known methods. For example, a dark image may be captured for a plurality of captured images. Subtraction of the dark image may be performed during or after decoding of the captured image using suitable methods. Dark images may be, for example, interpolated using suitable methods to estimate the dark image noise corresponding to each of the captured images. In some embodiments of the present invention, compression and/or processing of data may result in shifting or distortion of image data. Compressing the dark images using similar steps and parameters as the compression of the captured images may maintain the correspondence between the captured and dark images so that proper subtraction of dark image noise may be achieved. In block 320, the pixel plane may be divided, for example, into mono-color pixel planes, for example, the planes shown in FIGS. 2B-2E. In one embodiment of the present invention, compression algorithms, for example, known predictive types of compression, may be performed on each of the mono-color pixel planes after filling in the missing information on each of the planes. Other suitable compression algorithms besides known predictive type algorithms may be implemented. In some embodiments of the present invention, the mono-color plane may be filled using known algorithms, e.g. known interpolation methods, to obtain data corresponding to a full RGB image. In some embodiments, compression may be performed directly on the mosaic plane (FIG. 2A) or the mono-color planes (FIGS. 2B-2E) without completing the RGB image. Known image compression algorithms, for example JPEG, may require a full RGB image before compressing the data. When applying such an algorithm to mosaic image data, compression may require extra processing to fill and/or complete missing data as well as extra storage capability to store the larger-size data. In some embodiments of the present invention, a compression method for directly compressing mosaic image data, e.g. data that is not full RGB data, without requiring a complete or substantially complete RGB image may be provided. In block 330 neighboring pixels required for pixel comparison may be defined and/or located (e.g. with pointers) as the closest pixels available having the same color. In block 340, a transformation may be performed to transform, for example, the RGB plane and/or coordinates (or [R, G1, G2, B] coordinates) of the image into alternate coordinates, according to embodiments of the present invention. Sample coordinates and/or dimensions may include, for example, Hue, Saturation, and Value [H, S, V] coordinates, the [Y, I, Q] planes commonly used for television monitors, or [Y, U, V] commonly used in JPEG compression.
In other embodiments, other dimensions and/or coordinates suitable for images captured in-vivo may be used, e.g. images of in-vivo tissues. In some embodiments of the invention, data may be a combination of image and non-image data, for example one or more dimensions of the data may represent non-image data.
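A minimal sketch of the dark-reference subtraction of block 310 is given below, assuming a dark frame (or an interpolated estimate of one) with the same geometry as the captured frame; the clipping to an 8-bit range is an illustrative assumption rather than a detail stated in the text.

```python
import numpy as np

def subtract_dark(frame, dark):
    """Subtract dark reference noise from a captured frame (block 310).

    `dark` is assumed to be a dark reference frame, or an interpolated
    estimate of one, with the same geometry as `frame`.  The result is
    clipped so the subtraction cannot underflow the 8-bit range.
    """
    diff = frame.astype(np.int16) - dark.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```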


Reference is now made to FIG. 4 showing an exemplary transformation according to an embodiment of the present invention using, for example, four coordinates, e.g. [Y, Cr, Cb, Gdiff], corresponding to, for example, the four pixels in the pixel group 250 shown in FIG. 2A. Other coordinates, transformations, and defined pixels and/or pixel groups may be used. Y may be, for example, representative of the intensity of an image, Cr may be, for example, representative of the color red, Cb may be, for example, representative of the color blue, and Gdiff may be, for example, representative of the difference between the first and second green pixels. Other transformations and/or other coordinates may be used in other embodiments of the present invention. Referring back to FIG. 3, pre-processing of data (block 345) may be performed; for example, one or more of the dimensions and/or coordinates may be discarded. Discarding a dimension may enable, for example, simplification of the computations subsequently required and/or a reduction in the quantity of data to be handled and transmitted, e.g. an increase in the compression ratio. In block 350, a compression algorithm may be implemented. In some embodiments of the present invention, compression may be performed on the fly, for example, in units of a few lines at a time, e.g. four lines of pixels. In other examples, compression on the fly may be performed with more or fewer than four lines of pixels. Typically, compression performed by compression module 100 may be based on a variety of known predictive codes, e.g. BTPC, FELICS, or LOCO-I, that may typically use information from, for example, two or more neighboring pixels, e.g. the pixel above and the pixel to the left. Other suitable methods of lossless and/or lossy compression may be implemented. To increase the performance of the compression algorithm used, e.g. to increase the compression ratio, pre-processing (block 345) and/or post-processing (block 355) of the data may be performed. For example, one or more Least Significant Bits (LSBs) in one or more dimensions and/or parameters of the data may be discarded, e.g. during pre-processing, to increase the compression ratio in one or more dimensions of the data. In other examples, a priori knowledge of, for example, typical color may be used to, for example, emphasize and/or de-emphasize particular dimensions of an image. In yet other examples, a priori knowledge of the characteristics of the typical object imaged may be used to emphasize and/or de-emphasize details or local changes that may be known to typically occur or not to occur. For example, for imaging of in-vivo tissue, certain sharp changes in color may not be typical, and if encountered in image data they may, for example, be de-emphasized. In still other examples, a priori knowledge of the particular optical system used may be used to emphasize or de-emphasize details encountered. For example, if a sharp detail is encountered and the optical system, for example optical system 22, is known not to provide the capacity to discern such sharp details, this detail may be de-emphasized by, for example, discarding the LSB, performing smoothing, decimation, etc. Emphasis and/or de-emphasis may be implemented by, for example, pre-processing (prior to compression), post-processing, and/or defining parameters of the compression algorithm. In one embodiment of the present invention, one or more dimensions of the image may, for example, be discarded.
For example, if one or more of the dimensions is known to produce a relatively flat image in the environment being imaged, that dimension may, for example, be discarded and no computations may be performed in that dimension. This may serve to, for example, increase the compression ratio, decrease the memory required to perform compression, and simplify the coding so that less processing may be needed. In other embodiments of the present invention, the processing required as well as the memory required for the coding may be reduced by, for example, customizing known algorithms, for example, by eliminating the adaptive part of the coding that may require large tables of accumulated data. By implementing a priori knowledge of typical images and/or data that may normally be captured with, for example, a particular sensing device capturing images, for example, in a particular environment, the adaptive part of the coding may be reduced to a minimum. Data other than or in addition to image data may be compressed.
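The text does not reproduce the exact transform of FIG. 4, so the sketch below assumes one plausible choice: Y as the mean of the four mosaic samples, Cr and Cb as the red and blue offsets from Y, and Gdiff as the difference between the two green samples, followed by the LSB-discard pre-processing of block 345. The coefficients and function names are assumptions made for illustration.

```python
import numpy as np

def to_ycrcb_gdiff(r, g1, g2, b):
    """Transform [R, G1, G2, B] planes to [Y, Cr, Cb, Gdiff] (block 340).

    This is not the exact FIG. 4 transform: it assumes Y is the mean of
    the four samples, Cr and Cb are the red and blue offsets from Y,
    and Gdiff is the difference between the two green samples.
    """
    r, g1, g2, b = (p.astype(np.int16) for p in (r, g1, g2, b))
    y = (r + g1 + g2 + b) // 4
    cr = r - y
    cb = b - y
    gdiff = g1 - g2
    return y, cr, cb, gdiff

def drop_lsb(plane, n_bits=1):
    """Pre-processing (block 345): discard n least significant bits."""
    return plane >> n_bits
```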


In block 360, compressed data may be transmitted. Compressed data as well as coding for the decoder and/or control data may be transmitted, for example, in a compressed format. Due to compression, each line may have a variable bit length. In some embodiments of the present invention, a buffer 49 may stall data until a predetermined quantity of data may be accumulated for transmission. In some embodiments of the present invention, a buffer 49 may temporarily stall data to adapt the output from the compression, which may have a variable bit rate, to a constant bit rate transmission. Transmission may be in portions substantially smaller than an image frame; for example, a portion may be one or more lines of data, e.g. 512 bits. Typically, entire images need not be stalled and/or stored. Data transmitted may be stored and subsequently decoded (block 370). In other embodiments of the present invention, decoding may be performed on the fly directly upon transmission. If dark reference pixels were not subtracted during image capture, dark image information may be subtracted, for example, after decoding. The decoded mosaic image may be completed (block 380), e.g. by using known methods, e.g. known interpolation methods. The image may be displayed (block 390).
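As an illustration of block 360, the hypothetical buffer below accumulates variable-length compressed lines and releases fixed-size portions (e.g. 512 bits) so that a constant-bit-rate transmitter can be fed; the class name, portion size, and send callback are assumptions for this sketch, not the device's actual interface.

```python
class TransmitBuffer:
    """Accumulate variable-length compressed lines and release
    fixed-size portions (e.g. 512 bits) for constant-bit-rate
    transmission (an illustrative model of buffer 49)."""

    def __init__(self, send, portion_bits=512):
        self.send = send                  # callback to the transmitter
        self.portion_bits = portion_bits
        self.bits = []                    # queued bits awaiting transmission

    def push_line(self, line_bits):
        """Queue one compressed line (a list of 0/1 values)."""
        self.bits.extend(line_bits)
        # Emit as many full portions as are currently available.
        while len(self.bits) >= self.portion_bits:
            portion = self.bits[:self.portion_bits]
            del self.bits[:self.portion_bits]
            self.send(portion)
```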


Typically, the data compression module 100 and decompression module 150 may include circuitry and/or software to perform data compression. For example, if the data compression module 100 or decompression module 150 is implemented as a computer on a chip or ASIC, data compression module 100 or decompression module 150 may include a processor operating on firmware which includes instructions for a data compression algorithm. If data decompression module 150 is implemented as part of data processor 14 and/or CPU 13, the decompression may be implemented as part of a software program. Other suitable methods or elements may be implemented.


In some embodiments of the present invention, the rate of transmission may be, for example, approximately 1.35 Megabits per second. Other suitable rates may be implemented. Compression may significantly reduce the rate of transmission required or significantly increase the quantity of information that may be transmitted at this rate. After compression, and before transmission, randomization may occur (performed, for example, by the transmitter 41). Namely, the occurrence of the digital signals (“0” and “1”) may be randomized so that transmission may not, for example, be impeded by a recurring signal of one type. In some embodiments of the present invention, an Error Correction Code (ECC) may be implemented before transmission of data to protect the data transmitted; for example, a Bose-Chaudhuri-Hocquenghem (BCH) code may be used.
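The patent does not specify the randomization scheme; the sketch below shows one generic possibility, XOR-ing the bit stream with a reproducible pseudo-random sequence so that long runs of identical symbols are broken up. The seed and generator are assumptions; the receiver can undo the scrambling by applying the same function with the same seed.

```python
import random

def scramble(bits, seed=0x1D):
    """Randomize a bit stream by XOR-ing it with a reproducible
    pseudo-random sequence, breaking up recurring signals of one type
    before transmission.  Applying the same function again with the
    same seed restores the original bits (XOR is its own inverse)."""
    rng = random.Random(seed)
    return [b ^ rng.getrandbits(1) for b in bits]
```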


Reference is now made to FIG. 5 showing a flow chart describing a method of compression for in-vivo data according to other embodiments of the present invention. In block 500 data, e.g. mosaic type data, may be obtained. The mosaic type data may be, for example, a 256×256 pixel image. Other size data and data other than image data may be used; for example, a 512×512 pixel image may be obtained. In block 510 the mosaic image may be divided into its separate planes, for example four separate planes. The planes may be separated as shown in FIGS. 2B-2E according to the pixel arrangement shown in FIG. 2A. Other pixel arrangements may be used, for example, a mosaic pixel arrangement with two blue pixels for every red and green pixel or two red pixels for every blue and green pixel, or other colors may be included in the pixel arrangement. In other examples, there may be other pixel arrangements based on more or fewer than four pixels. The planes need not actually be separated; instead, pointers may be defined, or reference may be made to where the pixels of the same color are located. In block 520 a transformation may be implemented to transform, for example, the four planes to other dimensions and/or coordinates. In one embodiment of the present invention an exemplary transformation shown in FIG. 4 may be used. In other embodiments of the present invention, other dimensions may be defined and/or used, for example, one of the dimensions described herein or other known dimensions. In yet other embodiments of the present invention, a transformation need not be implemented; for example, compression may be performed on the original coordinates. In FIG. 4 the Y dimension may be representative of the intensity and in some embodiments may be regarded as containing a large part of the information. As such, the pre-processing of the data in, for example, the Y dimension may be different from that of the other dimensions present. One of the methods that may be implemented to decrease the size of the data may be to discard one or more LSB(s), for example one LSB, as is shown in block 530. Other suitable methods may be employed to decrease the size of the data, e.g. increase the resultant compression ratio. In one embodiment of the present invention, the Y dimension may be further compressed using known lossless compression algorithms, for example, FELICS (block 540). Other suitable compression algorithms may be used. In one embodiment of the present invention, for one or more of the other dimensions, a decision (block 550) may be made to determine the pre-processing to be implemented. In one example, the decision may be based on the current size of the data stalled in the buffer 49, or based on other parameters, or may be predetermined. In one example, if the decision is to compress while maintaining high quality data, the data may be compressed and/or preprocessed less aggressively, e.g. resulting in a lower compression ratio. For high quality data compression, the data may, for example, be offset as in block 560, one or more LSB(s) may be discarded, and the data may be compressed using any known compression algorithm, for example, FELICS (block 540). Other compression algorithms may be implemented. For lower quality data compression, the data may, for example, first be smoothed (block 570) using known methods. In one example, a smoothing filter with a 2×2 window may be implemented. In other examples other smoothing methods may be used. Subsequent to smoothing, the data may be decimated (block 580) to reduce its size.
For example, decimation may be used to reduce data in one dimension that may originally be 128×128 bytes to, for example, 64×64 bytes. In other examples, the data may be decimated to other suitable sizes. In block 560 data may be offset and truncated. One or more LSB(s) may be discarded (block 530) to decrease the size of the data before implementing a compression algorithm, for example, FELICS. In other embodiments of the present invention, one or more blocks may not be implemented. For example, one or more of blocks 530, 560, 570, 580 may or may not be implemented. In other embodiments, other pre-processing may be implemented. In some embodiments of the present invention it may be desired to further reduce the processing power required to compress transmitted data. In one example, one or more of the dimensions defined by the transform (block 520) may be discarded. Discarding a dimension of the defined data may serve to reduce the processing power required for compression and may increase the resultant compression ratio. In one example, the dimension defined by Gdiff may be, for example, discarded. In other examples, other dimensions or more than one dimension may be discarded. Other methods of increasing the compression ratio, decreasing the required processing power, and/or decreasing the required memory capacity may be implemented.
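A minimal sketch of the lower-quality pre-processing path (blocks 570 and 580) follows, assuming the 2×2 smoothing and factor-of-two decimation are combined into a single block-averaging step that turns a 128×128 plane into a 64×64 plane; keeping the two steps separate, as the flow chart does, would be equally valid.

```python
import numpy as np

def smooth_and_decimate(plane):
    """Lower-quality pre-processing path (blocks 570 and 580).

    Averages each non-overlapping 2x2 window (a simple smoothing
    filter) and keeps one value per window, so a 128x128 chroma plane
    becomes 64x64 before compression.
    """
    h, w = plane.shape
    blocks = plane.astype(np.uint16).reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)
```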


Known compression algorithms, for example FELICS, may require tables of accumulated data to define parameters that may, for example, be used to adapt the algorithm, for example, by selecting the optimal parameter for Golomb and Rice codes, thus producing minimal-length compressed data. Sustaining tables of accumulated data may require memory to store the tables of accumulated data as well as extra processing power to define the required adaptation, e.g. compute the required parameter. In some embodiments of the present invention, known compression algorithms may be revised to do away with tables of accumulated data. In one embodiment of the present invention, constant values may be defined for parameters that may otherwise be determined based on one or more tables of accumulated data, e.g. a two-dimensional array of data. The predetermined value of the parameter may be based on, for example, a priori information on the quality of the data expected due to the optical system used and due to the characteristics of the objects being imaged, for example, color, level of detail, etc. Other considerations may be used to define the parameters. In one embodiment of the present invention, when implementing a FELICS algorithm, the known parameter K in Golomb and Rice codes may be defined, for example, as K=1 for the Y dimension and K=0 for the other dimensions. Other suitable values of K for one or more dimensions may be defined. In other embodiments of the present invention, tables of accumulated data may be used but limited to a predetermined size so as to limit the memory and the processing required.
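To illustrate the fixed-parameter idea, the sketch below encodes a single residual with a Rice code of constant parameter K (e.g. K=1 for the Y dimension, K=0 otherwise), avoiding the accumulated-statistics tables an adaptive coder would need. The zig-zag mapping of signed residuals to non-negative integers is an assumption made for this sketch; the patent does not detail that step.

```python
def zigzag(v):
    """Map a signed residual to a non-negative integer
    (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...), a common prerequisite
    for Rice coding."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(value, k):
    """Encode one non-negative value with a Rice code of fixed
    parameter k.  Returns a string of '0'/'1' bits: the quotient in
    unary followed by the remainder in k binary bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    unary = '1' * q + '0'
    binary = format(r, f'0{k}b') if k else ''
    return unary + binary

# Example: encode residual -3 for the Y plane (K = 1).
bits = rice_encode(zigzag(-3), k=1)   # zigzag(-3) = 5 -> '1101'
```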


Compressed data may be transmitted to, for example, an external receiver 12, in portions. Compressed data that may be of variable size may pass through a buffer 49 before transmission. Once the portion defined for transmission is filled, data accommodated in the buffer 49 may be transmitted. The data size of the defined portions may be the size of one or more images or may be the size of a few lines of data. The buffer 49 may provide for use of a constant bit rate transmitter for transmitting data of variable size. In one embodiment of the invention, compressed data may be transmitted on the fly by transmitting portions equaling, for example, approximately two lines of pixel data while the buffer size may be on the order of magnitude of, for example, four lines of pixel data. Other suitable buffer sizes and transmission portions may be used. In some embodiments of the present invention, overflowing of the buffer 49 may be avoided by implementing a feedback mechanism between the buffer 49 and the compression algorithm. The feedback mechanism may control the compression ratio of the algorithm based on the rate at which the buffer 49 may be filling up. Other methods of transmitting and using buffers may be implemented.
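A hypothetical sketch of the buffer-to-compressor feedback follows: when the buffer occupancy crosses a watermark, the compressor switches to the more aggressive pre-processing path. The 75% threshold and the function name are assumptions made for illustration; the patent only states that the compression ratio is controlled by how fast the buffer fills.

```python
def choose_quality(fill_ratio, high_watermark=0.75):
    """Feedback between buffer occupancy and the compressor.

    If the buffer is filling faster than the constant-bit-rate link
    drains it (occupancy above the watermark), select the more
    aggressive (smooth + decimate) path; otherwise keep the
    higher-quality path.
    """
    return 'low_quality' if fill_ratio > high_watermark else 'high_quality'
```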


Data transmitted may be received by an external receiver, for example receiver 12. Decoding and/or decompression may be performed on receiver 12 or on data processor 14, to which data from the receiver may be subsequently downloaded. In an exemplary embodiment, data decompression module 150 may be a microprocessor or other micro-computing device and may be part of the receiver 12. In alternate embodiments the functions of the data decompression (decoding) module 150 may be taken up by other structures and may be disposed in different parts of the system; for example, data decompression module 150 may be implemented in software and/or be part of data processor 14. The receiver 12 may receive compressed data without decompressing the data and store the compressed data in the receiver storage unit 16. The data may be later decompressed by, for example, data processor 14.


Preferably, compression module 100 may be integral to imager 46. In other embodiments of the present invention, the data compression module 100 may be external to the imager 46 and interface with the transmitter 41 to receive and compress image data; other units may provide other data to data compression module 100. In addition, the data compression module 100 may provide the transmitter 41 with information such as, for example, start or stop time for the transfer of image data from the data compression module 100 to the transmitter 41, the length or size of each block of such image data, and the rate of frame data transfer. The interface between the data compression module 100 and the transmitter 41 may be handled, for example, by the data compression module 100.


In alternate embodiments, the data exchanged between the data compression module 100 and the transmitter 41 may be different, and in different forms. For example, size information need not be transferred. Furthermore, in embodiments having alternate arrangements of components, the interface and protocol between the various components may also differ. For example, in an embodiment where a data compression capability is included in the transmitter 41 and the imager 46 may transfer un-compressed data to the transmitter 41, no start/stop or size information may be transferred.


Reference is now made to FIG. 6. Typically the method of decoding compressed data may be based on the method of compression, by generally reversing the steps taken for compression of the data, and the pre-processing and/or post-processing of the data. For example, when using FELICS compression, the known decoding of FELICS may be implemented to decode the compressed data (block 640). For decoding of the Y plane and/or dimension described in FIG. 5, one or more random bit(s) of noise may be added (block 630) to replace the previously discarded one or more LSB(s). The random bits may be generated by any of the known algorithms for generating random bits. In some embodiments of the present invention, known pseudo-random bits may be used, for example, pseudo-random noise quantization using a dithering method. Other methods of producing pseudo-random bits may be used. In still other embodiments of the present invention, bits generated from a suitable algorithm may be used. Decoding of the other dimensions and/or planes, e.g. [Cr, Cb, Gdiff], may be accomplished in a similar manner. Decoding based on the implemented compression algorithm, for example FELICS (block 640), may be performed on each of the dimensions. Any LSBs that may have been discarded may be replaced (block 635) as described herein. Offsetting may be reversed (block 660) and interpolation (block 680), for example, linear interpolation, may be performed to restore, for example, decimated data to its original size. In other embodiments of the present invention, one or more of the planes, e.g. Gdiff, may not be used, e.g. transmitted, and as such not decoded. In block 620, the data from all the dimensions may be transformed to, for example, their original coordinates, for example to [R, G1, G2, B] coordinates. In block 610 the image may be restored, e.g. the individual color planes may be combined onto one plane (FIG. 2A), and filled (block 690) to a true RGB image by known interpolation and/or other suitable methods so that, for example, there may be an RGB value for each pixel. Compression and decoding of the mosaic image as opposed to the true RGB image may serve to minimize and/or reduce the processing power, rate, and memory required to compress image data. Subsequent to decoding, dark reference image frames may be subtracted from corresponding images if required.
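For illustration, the following sketch covers two of the decoding steps: refilling a discarded LSB with pseudo-random dither bits (blocks 630/635) and restoring a decimated plane to its original size (block 680). Nearest-neighbor upsampling is used here for brevity, whereas the text suggests linear interpolation; the seed and function names are assumptions for this sketch.

```python
import numpy as np

def restore_lsb(plane, n_bits=1, seed=7):
    """Blocks 630/635: shift decoded values back up and fill the
    previously discarded LSB(s) with reproducible pseudo-random
    dither bits instead of zeros."""
    rng = np.random.default_rng(seed)
    noise = rng.integers(0, 1 << n_bits, size=plane.shape)
    return (plane.astype(np.int32) << n_bits) | noise

def upsample(plane):
    """Block 680: restore a decimated plane (e.g. 64x64) to its
    original size (e.g. 128x128) by nearest-neighbor repetition;
    linear interpolation would give a smoother result."""
    return np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
```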


Embodiments of the present invention may include apparatuses for performing the operations herein. Such apparatuses may be specially constructed for the desired purposes (e.g., a “computer on a chip” or an ASIC), or may comprise general purpose computers selectively activated or reconfigured by a computer program stored in the computers. Such computer programs may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.


The processes presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the desired method. The desired structure for a variety of these systems appears from the description herein. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.


Unless specifically stated otherwise, as apparent from the discussions herein, it is appreciated that throughout the specification discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, or the like, typically refer to the action and/or processes of a computer or computing system, or similar electronic computing device (e.g., a “computer on a chip” or ASIC), that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.


It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims which follow:

Claims
  • 1. A method for transmitting in-vivo data comprising: obtaining mosaic image data in a first set of coordinates comprising a plurality of colors wherein each color defines a plane of the image data; defining a location of neighboring pixels for each plane; transforming, using a processor, each plane of the image data to an alternate plane in a second set of coordinates; selecting a compression ratio to be used; preprocessing each alternate plane by discarding a least significant bit of the transformed data, to increase a compression ratio; if a high compression ratio is selected, further preprocessing the data by smoothing and decimating; compressing each alternate plane of the image data subsequent to preprocessing; storing the compressed image data in a buffer; and transmitting the compressed image data.
  • 2. The method according to claim 1 wherein the first set of coordinates comprises: [R, G1, G2, B].
  • 3. The method according to claim 1 wherein the second set of coordinates comprises [Y, Cr, Cb, Gdiff].
  • 4. The method according to claim 1 wherein defining the location of neighboring pixels is by defining pointers.
  • 5. The method according to claim 1 wherein compressing is by FELICS.
  • 6. The method according to claim 1 further comprising: determining a second pre-process to perform on a plane of the image data before compression.
  • 7. The method according to claim 6 wherein said second pre-process performed on the image data is selected from a group consisting of: smoothing, decimating, and offsetting and truncating.
  • 8. The method according to claim 6 wherein the pre-process is determined for each image based on current size of data stalled in the buffer.
  • 9. The method according to claim 6 wherein the pre-process is predetermined for each plane in the second set of coordinates.
  • 10. The method according to claim 1 comprising subtracting a reference dark pixel from a corresponding pixel.
  • 11. The method according to claim 1 wherein the transmitting is by constant bit rate.
  • 12. The method according to claim 1 wherein said preprocessing comprises discarding one or more coordinates of the alternate plane of the image data.
  • 13. The method according to claim 1 wherein said preprocessing comprises discarding two or more Least Significant Bits (LSBs) of the image data.
  • 14. The method according to claim 1 comprising controlling the compression ratio based on a rate that the buffer is filling up.
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation-in-part of U.S. patent application Ser. No. 10/202,626, filed Jul. 25, 2002, now abandoned, entitled “Diagnostic Device Using Data Compression”, which in turn claims benefit from prior provisional application No. 60/307,605, entitled “Imaging Device Using Data Compression” and filed on Jul. 26, 2001, each of which is incorporated by reference herein in its entirety.

US Referenced Citations (181)
Number Name Date Kind
3971362 Pope et al. Jul 1976 A
3984628 Sharp Oct 1976 A
4243652 Francis Jan 1981 A
4273431 Farmer et al. Jun 1981 A
4278077 Mizumoto Jul 1981 A
4310228 Terada Jan 1982 A
4323916 Dischert et al. Apr 1982 A
4428005 Kubo Jan 1984 A
4532918 Wheeler Aug 1985 A
4539603 Takeuchi et al. Sep 1985 A
4631582 Nagasaki Dec 1986 A
4642678 Cok Feb 1987 A
4646724 Sato et al. Mar 1987 A
4668860 Anthon May 1987 A
4689621 Kleinberg Aug 1987 A
4689689 Saito et al. Aug 1987 A
4698664 Nichols et al. Oct 1987 A
4786982 Wakahara et al. Nov 1988 A
4834070 Saitou May 1989 A
4841291 Swix et al. Jun 1989 A
4844076 Lesho et al. Jul 1989 A
4936823 Colvin Jun 1990 A
5032913 Hattori et al. Jul 1991 A
5042494 Alfano Aug 1991 A
5126842 Andrews et al. Jun 1992 A
5187572 Nakamura et al. Feb 1993 A
5202961 Mills et al. Apr 1993 A
5209220 Hiyama et al. May 1993 A
5241170 Field Aug 1993 A
5279607 Schentag et al. Jan 1994 A
5319471 Takei et al. Jun 1994 A
5351161 MacKay et al. Sep 1994 A
5355450 Garmon et al. Oct 1994 A
5373322 Laroche Dec 1994 A
5381784 Adair Jan 1995 A
5382976 Hibbard Jan 1995 A
5406938 Mersch et al. Apr 1995 A
5418565 Smith May 1995 A
5467413 Barrett Nov 1995 A
5486861 Miyamoto et al. Jan 1996 A
5493335 Parulski et al. Feb 1996 A
5506619 Adams Apr 1996 A
5519828 Rayner May 1996 A
5523786 Parulski Jun 1996 A
5594497 Ahem et al. Jan 1997 A
5603687 Hori et al. Feb 1997 A
5604531 Iddan et al. Feb 1997 A
5629734 Hamilton May 1997 A
5643175 Adair Jul 1997 A
5652621 Adams Jul 1997 A
5678568 Uchikubo et al. Oct 1997 A
5697384 Miyawaki Dec 1997 A
5697885 Konomura et al. Dec 1997 A
5730702 Tanaka et al. Mar 1998 A
5751340 Strobl et al. May 1998 A
5798846 Tretter Aug 1998 A
5812187 Watanabe Sep 1998 A
5819736 Avny et al. Oct 1998 A
5819740 Muhlenberg Oct 1998 A
5827190 Palcic et al. Oct 1998 A
5830141 Makram-Ebeid et al. Nov 1998 A
5833603 Kovacs et al. Nov 1998 A
5875280 Takaiwa et al. Feb 1999 A
5908294 Schick et al. Jun 1999 A
5929901 Adair et al. Jul 1999 A
5956467 Rabbani et al. Sep 1999 A
5987179 Riek et al. Nov 1999 A
5993378 Lemelson Nov 1999 A
5999662 Burt et al. Dec 1999 A
6014727 Creemer Jan 2000 A
6054943 Lawrence Apr 2000 A
6088606 Ignotz et al. Jul 2000 A
6095989 Hay et al. Aug 2000 A
6124888 Terada et al. Sep 2000 A
6125201 Zador Sep 2000 A
6167084 Wang et al. Dec 2000 A
6173317 Chaddha et al. Jan 2001 B1
6175757 Watkins Jan 2001 B1
6177984 Jacques Jan 2001 B1
6184922 Saito et al. Feb 2001 B1
6208354 Porter Mar 2001 B1
6229578 Acharya et al. May 2001 B1
6233476 Strommer May 2001 B1
6240312 Alfano et al. May 2001 B1
6266454 Kondo Jul 2001 B1
6289165 Abecassis Sep 2001 B1
6304284 Dunton et al. Oct 2001 B1
6310642 Adair et al. Oct 2001 B1
6314211 Kim et al. Nov 2001 B1
6328212 Metlitasky et al. Dec 2001 B1
6351606 Yamakazi Feb 2002 B1
6356276 Acharya Mar 2002 B1
6364829 Fulgham Apr 2002 B1
6389176 Hsu et al. May 2002 B1
6414996 Owen et al. Jul 2002 B1
6452633 Merrill et al. Sep 2002 B1
6492982 Matsuzaki et al. Dec 2002 B1
6498948 Ozawa et al. Dec 2002 B1
6501862 Fukuhara et al. Dec 2002 B1
6504990 Abecassis Jan 2003 B1
6560309 Becker et al. May 2003 B1
6600517 He et al. Jul 2003 B1
6607301 Glukhovsky et al. Aug 2003 B1
6636263 Oda Oct 2003 B2
6661463 Geshwind Dec 2003 B1
6667765 Tanaka Dec 2003 B1
6690412 Higo Feb 2004 B1
6697109 Daly Feb 2004 B1
6697540 Chen Feb 2004 B1
6709387 Glukhovsky et al. Mar 2004 B1
6772003 Kaneko et al. Aug 2004 B2
6847392 House Jan 2005 B1
6847736 Itokawa Jan 2005 B2
6865718 Montalcini Mar 2005 B2
6904308 Frisch et al. Jun 2005 B2
6937291 Gryskiewicz Aug 2005 B1
6939290 Iddan Sep 2005 B2
6944316 Glukhovsky et al. Sep 2005 B2
6950690 Meron Sep 2005 B1
6972791 Yomeyama Dec 2005 B1
6976229 Balabanovic Dec 2005 B1
7009634 Iddan et al. Mar 2006 B2
7039453 Mullick et al. May 2006 B2
7044908 Montalbo et al. May 2006 B1
7057664 Law et al. Jun 2006 B2
7116352 Yaron Oct 2006 B2
7209170 Nishino et al. Apr 2007 B2
7236191 Kalevo et al. Jun 2007 B2
7319781 Chen et al. Jan 2008 B2
7486981 Davidson Feb 2009 B2
7495993 Wang Feb 2009 B2
7505062 Davidson et al. Mar 2009 B2
20010017649 Yaron Aug 2001 A1
20010019364 Kawahara Sep 2001 A1
20010035902 Iddan et al. Nov 2001 A1
20020042562 Meron et al. Apr 2002 A1
20020054290 Vurens May 2002 A1
20020068853 Adler Jun 2002 A1
20020087047 Remijan Jul 2002 A1
20020093484 Skala et al. Jul 2002 A1
20020103417 Gazdzinski Aug 2002 A1
20020109774 Meron et al. Aug 2002 A1
20020171669 Meron et al. Nov 2002 A1
20020173718 Frisch et al. Nov 2002 A1
20020177779 Adler et al. Nov 2002 A1
20020193664 Ross et al. Dec 2002 A1
20020198439 Mizuno Dec 2002 A1
20030020810 Takizawa et al. Jan 2003 A1
20030023150 Yokoi et al. Jan 2003 A1
20030043263 Glukhovsky et al. Mar 2003 A1
20030060734 Yokoi et al. Mar 2003 A1
20030085994 Fujita et al. May 2003 A1
20030117491 Avni et al. Jun 2003 A1
20030139661 Kimchy et al. Jul 2003 A1
20030151661 Davidson Aug 2003 A1
20030156188 Abrams, Jr. Aug 2003 A1
20030158503 Matsumoto Aug 2003 A1
20030167000 Mullick Sep 2003 A1
20030174208 Glukhovsky et al. Sep 2003 A1
20030181788 Yokoi et al. Sep 2003 A1
20030208107 Refael Nov 2003 A1
20030213495 Fujita et al. Nov 2003 A1
20030216670 Beggs Nov 2003 A1
20030229268 Uchiyama et al. Dec 2003 A1
20040027500 Davidson et al. Feb 2004 A1
20040111031 Alfano et al. Jun 2004 A1
20040215059 Homan et al. Oct 2004 A1
20040225190 Kimoto et al. Nov 2004 A1
20040225223 Honda et al. Nov 2004 A1
20040242962 Uchiyama Dec 2004 A1
20040249274 Yaroslavsky Dec 2004 A1
20040249291 Honda et al. Dec 2004 A1
20050038321 Fujita Feb 2005 A1
20050159643 Zinaty et al. Jul 2005 A1
20050165279 Adler et al. Jul 2005 A1
20050182295 Soper et al. Aug 2005 A1
20050283065 Babayoff Dec 2005 A1
20060082648 Iddan et al. Apr 2006 A1
20060158512 Iddan et al. Jul 2006 A1
20060164511 Krupnik Jul 2006 A1
20060195014 Seibel et al. Aug 2006 A1
Foreign Referenced Citations (65)
Number Date Country
34 40 177 May 1986 DE
S47-4376 Feb 1972 JP
S47-41473 Dec 1972 JP
S 55-121779 Sep 1980 JP
S 57-45833 Mar 1982 JP
S 63-70820 Mar 1988 JP
S 63-226615 Sep 1988 JP
03-220865 Sep 1991 JP
4109927 Apr 1992 JP
1992-144533 May 1992 JP
5015515 Jan 1993 JP
6-038201 Feb 1994 JP
07-274164 Oct 1995 JP
7-275200 Oct 1995 JP
09-037243 Feb 1997 JP
H 10-65131 Mar 1998 JP
10-134187 May 1998 JP
11-211997 Aug 1999 JP
H 11-290269 Oct 1999 JP
2000-278693 Oct 2000 JP
2000-295667 Oct 2000 JP
2000-324513 Nov 2000 JP
2000358242 Dec 2000 JP
2001-025004 Jan 2001 JP
2001-028709 Jan 2001 JP
20010017649 Jan 2001 JP
2001-112740 Apr 2001 JP
2001-170002 Jun 2001 JP
2001-298366 Oct 2001 JP
2001-299692 Oct 2001 JP
2002-165227 Jun 2002 JP
2002-247365 Aug 2002 JP
2002-290744 Oct 2002 JP
2002-320235 Oct 2002 JP
2002-369142 Dec 2002 JP
2003-038424 Feb 2003 JP
2004536644 Feb 2003 JP
2003070728 Mar 2003 JP
2004-153744 May 2004 JP
2004-167163 Jun 2004 JP
2004-326991 Nov 2004 JP
2005026807 Jan 2005 JP
2005-143668 Jun 2005 JP
2005-156217 Jun 2005 JP
2006-140642 Jun 2006 JP
2008-079746 Apr 2008 JP
WO 9733513 Sep 1997 WO
WO 9940587 Aug 1999 WO
WO 9960353 Nov 1999 WO
WO 0022975 Apr 2000 WO
WO 0076391 Dec 2000 WO
WO 0106926 Feb 2001 WO
WO 0135813 May 2001 WO
WO 0150180 Jul 2001 WO
WO 0150941 Jul 2001 WO
WO 0165995 Sep 2001 WO
WO 0187377 Nov 2001 WO
WO 02054932 Jul 2002 WO
WO 02080753 Oct 2002 WO
WO 03009739 Feb 2003 WO
WO 03010967 Feb 2003 WO
WO03010967 Feb 2003 WO
WO 2004082472 Sep 2004 WO
WO 2004088448 Oct 2004 WO
WO 2005062715 Jul 2005 WO
Non-Patent Literature Citations (32)
Entry
Japanese Office Action of Application No. 2003-516219 mailed on Aug. 12, 2008.
Korean Office Action of Application No. 10-2004-7001173 mailed on Nov. 26, 2008.
International Search Report for PCT/IL2004/000287 dated Mar. 16, 2005.
International Search Report for PCT/IL02/00621 dated Dec. 6 2002.
The Radio Pill, Rowlands, et al., British Communications and Electronics, Aug. 1960, pp. 598-601.
Wellesley company sends body monitors into space—Crum, Apr. 1998.
Wireless transmission of a color television moving image from the stomach using a miniature CCD camera, light source and microwave transmitter, Swain CP, Gong F, Mills TN, Gastrointest Endosc 1997;45:AB40.
BBC News Online—Pill camera to ‘broadcast from the gut’, Feb. 21, 2000, www.news.bbc.co.uk.
Biomedical Telemetry, R. Stewart McKay, published by John Wiley and Sons, 1970.
Office Action for U.S. Appl. No. 11/087,606 mailed on Aug. 28, 2009.
Office Action for U.S. Appl. No. 11/087,606 mailed Jul. 12, 2013.
Office Action for Japanese Patent Application No. 2006-507593 dated Jun. 28, 2011.
Keren et al., “Restoring Subsampled Color Images”, Machine Vision and Applications, vol. 11, pp. 197-202, 1999.
Liao et al., “Interpolation Filter Architecture for Subsampled Image”, Consumer Electronics, Digest of Technical Papers, International Conference on Rosemont, pp. 376-377, Jun. 2-4, 1992.
Supplementary European Search Report for European Application No. 04 72 4102 mailed on Mar. 3, 2010.
Japanese Office Action for Japanese Patent Application No. 2008-502556 mailed Oct. 18, 2011.
Office Action for Japanese Patent Application No. 2006-507593 dated Mar. 31, 2010.
Office Action for U.S. Appl. No. 11/087,606 mailed on Mar. 31, 2009.
Office Action for U.S. Appl. No. 10/551,436 mailed Mar. 3, 2009.
Final Office Action for U.S. Appl. No. 10/551,436 mailed Aug. 4, 2008.
Office Action for U.S. Appl. No. 10/551,436 mailed Feb. 22, 2008.
Final Office Action for U.S. Appl. No. 10/551,436 mailed Sep. 12, 2007.
Office Action for U.S. Appl. No. 10/551,436 mailed Mar. 13, 2007.
Video Camera to “TAKE”—RF System Lab, Dec. 25, 2001.
“Synchronized nQUAD Technology” www.cartesiantech.com, 1998-2000.
Yang et al., “Two Image Photometric Stereo Method”, SPIE, vol. 1826, Intelligent Robots and Computer Vision XI, 1992.
www.dynapel.com, Motion Perfect® product literature, printed Jul. 22, 2003.
www.zdnet.co.uk/pcmag/trends/2001/04/06.html, “Perfect motion on the Net”—Cliff Joseph, printed Dec. 25, 2001.
Muresan, D.D. and T.W. Parks, “Optimal recovery approach to image interpolation”, IEEE Proc. ICIP., vol. 3, 2001, pp. 7-10.
Gunturk, B. et al., “Color plane interpolation using alternating projections,” IEEE Transactions on Image Processing, vol. 11, 2002, pp. 997-1013.
Kimmel, “Demosaicing: Image reconstruction from color ccd samples”, IEEE Transactions onImage Processing, vol. 8, 1999, pp. 1221-1228.
Office Action for corresponding U.S. Appl. No. 11/087,606, mailed Mar. 24, 2015.
Related Publications (1)
Number Date Country
20050159643 A1 Jul 2005 US
Provisional Applications (1)
Number Date Country
60307605 Jul 2001 US
Continuation in Parts (1)
Number Date Country
Parent 10202626 Jul 2002 US
Child 10991098 US