Claims
- 1. A method of processing image data for display on a pixelated imaging device, the method comprising:
pre-compensation filtering an image input to produce pre-compensation filtered pixel values, the pre-compensation filtering being performed with a filter having a transfer function that approximates the function that equals one divided by a pixel transfer function, at least over a frequency range whose upper limit does not exceed the Nyquist frequency of the pixelated imaging device; and displaying the pre-compensation filtered pixel values on the pixelated imaging device.
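The pre-compensation filtering recited in claim 1 can be sketched, purely for illustration and not as the claimed method, by dividing the image spectrum by an assumed pixel transfer function. The sketch below assumes a separable sinc pixel response; the function name, gain limit, and frequency grid are illustrative assumptions.

```python
import numpy as np

def precompensate(image, gain_limit=4.0):
    # Hypothetical sketch: filter whose transfer function approximates
    # 1 / (assumed separable sinc pixel transfer function), applied
    # over frequencies up to the Nyquist frequency (0.5 cycles/pixel).
    h, w = image.shape
    fu = np.fft.fftfreq(h)[:, None]   # vertical frequencies, cycles/pixel
    fv = np.fft.fftfreq(w)[None, :]   # horizontal frequencies
    pixel_tf = np.sinc(fu) * np.sinc(fv)          # assumed pixel response
    inv = np.minimum(1.0 / pixel_tf, gain_limit)  # gain-limited inverse
    inv[np.hypot(fu, fv) > 0.5] = 0.0             # clip beyond Nyquist radius
    return np.real(np.fft.ifft2(np.fft.fft2(image) * inv))
```

Because the inverse filter has unit gain at DC, a constant image passes through unchanged; only frequencies attenuated by the assumed pixel aperture are boosted.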
- 2. A method according to claim 1, in which the method further comprises:
pre-compensation filtering an image input for each of a plurality of superposed pixelated imaging devices, at least two of which are unaligned, to produce multiple sets of pre-compensation filtered pixel values; and displaying the multiple pre-compensation filtered pixel values on the plurality of superposed pixelated imaging devices.
- 3. A method according to claim 2, in which the method further comprises:
displaying the multiple pre-compensation filtered pixel values on six imaging devices, the six imaging devices being positioned in four spatial phase families, the first and third spatial phase families each corresponding to a separate imaging device, each of which is fed green chrominance values, the second and fourth spatial phase families each corresponding to a pair of aligned imaging devices, each pair having an imaging device that is fed blue chrominance values and an imaging device that is fed red chrominance values.
- 4. A method according to claim 3, wherein the four spatial phase families are diagonally offset from each other by one-quarter of a diagonal pixel dimension of the pixelated imaging device.
- 5. A method according to claim 2, in which the method further comprises:
displaying the multiple pre-compensation filtered pixel values on three imaging devices, the three imaging devices being positioned in two spatial phase families, the first spatial phase family corresponding to an imaging device that is fed green chrominance values, the second spatial phase family corresponding to an imaging device that is fed blue chrominance values and to an aligned imaging device that is fed red chrominance values.
- 6. A method according to claim 5, wherein the two spatial phase families are diagonally offset from each other by one-half of a diagonal pixel dimension of the pixelated imaging device.
- 7. A method according to claim 1, wherein the step of pre-compensation filtering comprises:
filtering the image input with a pre-compensation filter having a transfer function that equals the result of gain-limiting and clipping the function that equals one divided by a pixel transfer function.
- 8. A method according to claim 7, wherein the pre-compensation filter transfer function is clipped at a frequency that does not exceed the Nyquist frequency of the pixelated imaging device.
- 9. A method according to claim 7, wherein the pre-compensation filter transfer function is clipped at a frequency that exceeds the Nyquist frequency of the pixelated imaging device.
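Claims 7 through 9 describe gain-limiting and clipping the function one-divided-by-the-pixel-transfer-function. A minimal one-dimensional sketch, assuming a sinc pixel response (the function and parameter names are illustrative, not from the claims):

```python
import numpy as np

def inverse_response(freqs, gain_limit=3.0, clip_freq=0.5):
    # Assumed 1-D pixel transfer function: sinc (zero-order-hold aperture).
    pixel_tf = np.sinc(freqs)
    # Gain-limit: clamp the denominator so 1/pixel_tf never exceeds
    # gain_limit (this also guards against division by zero).
    inv = 1.0 / np.clip(pixel_tf, 1.0 / gain_limit, None)
    # Clip: zero the response above clip_freq, which per claims 8 and 9
    # may sit at/below or above the Nyquist frequency (0.5 cycles/pixel).
    return np.where(np.abs(freqs) <= clip_freq, inv, 0.0)
```

With `clip_freq` above Nyquist (claim 9), the gain limit is what keeps the boost bounded as the sinc response falls toward zero.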
- 10. A method according to claim 7, wherein the method comprises:
pre-compensation filtering image input data that is in a perception-based format to yield a filtered perception-based pixel value for each pixel of the pixelated imaging device; and converting each filtered perception-based pixel value to a corresponding color value, for each pixel of the pixelated imaging device.
- 11. A method according to claim 1, wherein the method further comprises:
calculating the coefficients of a finite impulse response filter; and pre-compensation filtering the image input with the finite impulse response filter.
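Claim 11's coefficient calculation can be illustrated by frequency sampling: evaluate the gain-limited inverse response on a DFT grid and take its inverse transform. The sinc pixel model and tap count below are assumptions for the sketch, not the claimed design.

```python
import numpy as np

def fir_precomp_coeffs(n_taps=9, gain_limit=3.0):
    # Frequency grid within Nyquist (fftfreq yields |f| <= 0.5).
    freqs = np.fft.fftfreq(n_taps)
    # Desired response: gain-limited inverse of an assumed sinc pixel response.
    desired = 1.0 / np.clip(np.sinc(freqs), 1.0 / gain_limit, None)
    taps = np.real(np.fft.ifft(desired))   # impulse response
    return np.fft.fftshift(taps)           # center the taps (linear phase)
```

Since the desired response is real and even, the resulting taps are symmetric, and their sum equals the unit DC gain.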
- 12. A method according to claim 1, wherein the step of pre-compensation filtering comprises:
filtering the image input with a pre-compensation filter having a transfer function that equals the result of gain-limiting and clipping a function equal to:

H[u, v] = 1 / {Sinc[u] * Sinc[v]}

where the function Sinc[x] is defined as:

Sinc[x] = 1, for x = 0; Sinc[x] = sin[x] / x, for x ≠ 0

and "*" denotes convolution, and u, v are spatial frequency variables.
- 13. A method according to claim 2, wherein the step of pre-compensation filtering comprises:
filtering the image input with a pre-compensation filter having a transfer function that equals the result of gain-limiting and clipping a function that equals one divided by a pixel transfer function.
- 14. A method according to claim 13, wherein the pre-compensation filter transfer function is clipped at a frequency that does not exceed the Nyquist frequency of the pixelated imaging device.
- 15. A method according to claim 13, wherein the pre-compensation filter transfer function is clipped at a frequency that exceeds the Nyquist frequency of the pixelated imaging device.
- 16. A method according to claim 13, wherein the method comprises: pre-compensation filtering image input data that is in a perception-based format to yield a filtered perception-based pixel value for each pixel of each pixelated imaging device of the plurality of pixelated imaging devices; and
converting each filtered perception-based pixel value to a corresponding color value, for each pixel of each pixelated imaging device of the plurality of superposed pixelated imaging devices.
- 17. A method according to claim 2, wherein the method further comprises:
calculating the coefficients of a finite impulse response filter, the coefficient calculations being adjusted for a spatial phase shift between the at least two unaligned superposed pixelated imaging devices; and pre-compensation filtering the image input with the finite impulse response filter.
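One way to realize claim 17's adjustment for a spatial phase shift, sketched here in one dimension under the same assumed sinc pixel model, is to add a linear phase term for the sub-pixel offset before inverse-transforming; all names and defaults are illustrative.

```python
import numpy as np

def shifted_fir_coeffs(n_taps=9, shift=0.5, gain_limit=3.0):
    # shift is the sub-pixel offset (in pixels) between unaligned devices.
    freqs = np.fft.fftfreq(n_taps)
    mag = 1.0 / np.clip(np.sinc(freqs), 1.0 / gain_limit, None)
    # Linear phase term encodes the spatial phase shift of the device.
    desired = mag * np.exp(-2j * np.pi * freqs * shift)
    return np.real(np.fft.ifft(desired))
```

The DC term is unaffected by the phase shift, so the taps still sum to one regardless of the offset.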
- 18. A method according to claim 2, wherein the step of pre-compensation filtering comprises:
filtering the image input with a pre-compensation filter having a transfer function that equals the result of gain-limiting and clipping a function equal to:

H[u, v] = 1 / {Sinc[u] * Sinc[v]}

where the function Sinc[x] is defined as:

Sinc[x] = 1, for x = 0; Sinc[x] = sin[x] / x, for x ≠ 0

and "*" denotes convolution, and u, v are spatial frequency variables.
- 19. A method according to claim 2, wherein the method further comprises:
unaligning at least two of the superposed pixelated imaging devices by spatially phase-shifting the at least two imaging devices with respect to each other in two spatial dimensions.
- 20. A method according to claim 19, wherein the step of unaligning the at least two imaging devices comprises:
spatially phase-shifting at least two arrays of square pixels with respect to each other, the spatial phase-shift having equal horizontal and vertical components.
- 21. A method according to claim 19, wherein the step of unaligning the at least two imaging devices comprises:
spatially phase-shifting at least two arrays of rectangular pixels with respect to each other, the spatial phase-shift having horizontal and vertical components of differing magnitudes.
- 22. A method according to claim 19, wherein the method further comprises:
spatially aligning at least two imaging devices of the superposed pixelated imaging devices.
- 23. A method according to claim 2, wherein the method further comprises:
unaligning at least two of the superposed pixelated imaging devices with respect to each other, by using imaging devices having differing properties chosen from the group consisting of: optical properties, physical properties, scale, rotational angle, aspect ratio, and relative linear offset.
- 24. A method of displaying an image, the method comprising:
feeding a plurality of image input data sets to a time-multiplexing optical display device, each image input data set comprising pixel values, the image input data sets corresponding to at least two superposed unaligned display data sets; using the time-multiplexing optical display device at a first time to display a pixel value corresponding to a first display data set of the at least two superposed unaligned display data sets; and using the time-multiplexing optical display device at a second time to display a pixel value corresponding to a second display data set of the at least two superposed unaligned display data sets.
- 25. A method according to claim 24, wherein the time-multiplexing optical display device moves its optics between the first time and the second time.
- 26. A method according to claim 25, the method further comprising:
using the time-multiplexing optical display device to display pixel values corresponding to six display data sets, the six display data sets being positioned in four spatial phase families, the first and third spatial phase families each corresponding to a separate display data set composed of green chrominance values, the second and fourth spatial phase families each corresponding to a pair of aligned display data sets, each pair having a display data set composed of blue chrominance values and a display data set composed of red chrominance values.
- 27. A method according to claim 26, wherein the four spatial phase families are diagonally offset from each other by one-quarter of a diagonal pixel dimension of the display data sets.
- 28. A method according to claim 25, the method further comprising:
using the time-multiplexing optical display device to display pixel values corresponding to three display data sets, the three display data sets being positioned in two spatial phase families, the first spatial phase family corresponding to a display data set composed of green chrominance values, the second spatial phase family corresponding to a display data set composed of blue chrominance values and to an aligned display data set composed of red chrominance values.
- 29. A method according to claim 28, wherein the two spatial phase families are diagonally offset from each other by one-half of a diagonal pixel dimension of the display data sets.
- 30. A method according to claim 25, the method further comprising:
pre-compensation filtering each of the plurality of image input data sets to produce pre-compensation filtered pixel values, the pre-compensation filtering being performed with a filter having a transfer function that equals the result of gain-limiting and clipping a function that equals one divided by a pixel transfer function.
- 31. A method according to claim 30, wherein the pre-compensation filter transfer function is clipped at a frequency that does not exceed the Nyquist frequency of the display data sets.
- 32. A method according to claim 30, wherein the pre-compensation filter transfer function is clipped at a frequency that exceeds the Nyquist frequency of the display data sets.
- 33. A method according to claim 30, wherein the method comprises:
pre-compensation filtering image input data sets that are in a perception-based format to yield a filtered perception-based pixel value for each pixel of each image input data set; and converting each filtered perception-based pixel value to a corresponding color value, for each pixel of each image input data set.
- 34. A method according to claim 30, wherein the step of pre-compensation filtering comprises filtering each of the image input data sets with a pre-compensation filter having a transfer function that equals the result of gain-limiting and clipping a function equal to:

H[u, v] = 1 / {Sinc[u] * Sinc[v]}

where the function Sinc[x] is defined as:

Sinc[x] = 1, for x = 0; Sinc[x] = sin[x] / x, for x ≠ 0

and "*" denotes convolution, and u, v are spatial frequency variables.
- 35. A method according to claim 25, wherein the at least two superposed unaligned display data sets are square pixel arrays, spatially phase-shifted from each other by equal amounts in the horizontal and vertical directions.
- 36. A method of displaying an image, the method comprising:
processing an image input data set to produce pixel values for display on a plurality of superposed pixelated imaging devices, at least two of which are unaligned; and displaying the pixel values on the plurality of superposed pixelated imaging devices.
- 37. A method according to claim 36, the method further comprising:
displaying the pixel values on six imaging devices, the six imaging devices being positioned in four spatial phase families, the first and third spatial phase families each corresponding to a separate imaging device, each of which is fed green chrominance values, the second and fourth spatial phase families each corresponding to a pair of aligned imaging devices, each pair having an imaging device that is fed blue chrominance values and an imaging device that is fed red chrominance values.
- 38. A method according to claim 37, wherein the four spatial phase families are diagonally offset from each other by one-quarter of a diagonal pixel dimension of the pixelated imaging device.
- 39. A method according to claim 36, the method further comprising:
displaying the pixel values on three imaging devices, the three imaging devices being positioned in two spatial phase families, the first spatial phase family corresponding to an imaging device that is fed green chrominance values, the second spatial phase family corresponding to an imaging device that is fed blue chrominance values and to an aligned imaging device that is fed red chrominance values.
- 40. A method according to claim 39, wherein the two spatial phase families are diagonally offset from each other by one-half of a diagonal pixel dimension of the pixelated imaging device.
- 41. A method according to claim 36, wherein the method further comprises:
unaligning at least two of the superposed pixelated imaging devices by spatially phase-shifting the at least two imaging devices with respect to each other in two spatial dimensions.
- 42. A method according to claim 41, wherein the step of unaligning the at least two imaging devices comprises:
spatially phase-shifting at least two arrays of square pixels with respect to each other, the spatial phase-shift having equal horizontal and vertical components.
- 43. A method according to claim 41, wherein the step of unaligning the at least two imaging devices comprises:
spatially phase-shifting at least two arrays of rectangular pixels with respect to each other, the spatial phase-shift having horizontal and vertical components of differing magnitudes.
- 44. A method according to claim 41, wherein the method further comprises:
spatially aligning at least two imaging devices of the superposed pixelated imaging devices.
- 45. A method according to claim 36, wherein the method further comprises:
unaligning at least two of the superposed pixelated imaging devices with respect to each other, by using imaging devices having differing properties chosen from the group consisting of: optical properties, physical properties, scale, rotational angle, aspect ratio, and relative linear offset.
- 46. A method of image sensing, the method comprising:
sensing light from the image with a set of superposed pixelated imaging devices, at least two of which are unaligned.
- 47. A method according to claim 46, the method further comprising:
splitting light from the image into components using a beam splitter; and directing each component for reception by one of the superposed pixelated imaging devices.
- 48. A method according to claim 47, the method further comprising:
splitting the light into components using a dichroic prism, each component corresponding to a separate color frequency band.
- 49. A method according to claim 48, the method further comprising:
directing each separate color component for reception by a different one of the superposed pixelated imaging devices.
- 50. A method according to claim 49, the method further comprising:
splitting the light from the image into six color frequency ranges.
- 51. A method according to claim 49, the method further comprising:
processing the received components by solving for a lowest energy signal, for a whole sensed image, that satisfies constraints provided by color component values received by each of the superposed pixelated imaging devices.
- 52. A method according to claim 51, the method further comprising:
solving for a lowest energy luminance and color difference signal.
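Under a linear sensing model (an assumption made here for illustration), the lowest-energy signal satisfying the sensor constraints of claims 51 and 52 is the minimum-norm least-squares solution, obtainable via the Moore-Penrose pseudoinverse:

```python
import numpy as np

def lowest_energy_signal(A, b):
    # A @ x = b encodes the constraints: each row is one color-component
    # sample received by one of the superposed pixelated imaging devices.
    # pinv returns the minimum-norm (lowest-energy) solution when the
    # system is underdetermined.
    return np.linalg.pinv(A) @ b
```

For example, with two constraints on three unknowns, the pseudoinverse picks the solution of smallest energy among the infinitely many that satisfy both samples.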
- 53. A method according to claim 49, the method further comprising:
processing received color component values from each superposed pixelated imaging device by adjusting for the sagittal and tangential frequency response of each device at its associated color frequency.
- 54. A method according to claim 49, the method further comprising:
processing received color component values by weighting each color component based on human perception of luminance, Cb, and Cr signals.
- 55. A method according to claim 49, the method further comprising:
processing each received color component to adjust for the two-dimensional frequency response and spatial phase of the superposed imaging device by which it was received.
- 56. A method of image sensing, the method comprising:
sensing light from the image with a time-multiplexing pixelated imaging device, at a first time and a first location; moving the time-multiplexing device to a second location such that its pixelated sensors are spatially phase-shifted from, and superposed with, the spatial location they occupied when the time-multiplexing device was at the first location; and sensing light from the image with the time-multiplexing pixelated imaging device at a second time, at the second location.
- 57. A method according to claim 56, the method further comprising:
splitting light from the image into components using a beam-splitter; and directing the components for separate reception by the time-multiplexing imaging device at different locations, including at least the first and second locations.
- 58. A method according to claim 57, the method further comprising:
processing the received components by solving for a lowest energy signal, for a whole sensed image, that satisfies constraints provided by color component values received by the time-multiplexing imaging device at each of the different locations.
- 59. A method according to claim 57, the method further comprising:
processing received color component values by weighting each color component based on human perception of luminance, Cb, and Cr signals.
- 60. A method of recording a motion picture image, the method comprising:
splitting light from the image into components using a beam splitter; directing each component for reception by one of a set of superposed pixelated imaging devices, at least two of which are unaligned; and separately recording a component value received by each superposed pixelated imaging device.
- 61. A method of recording a motion picture image, the method comprising:
splitting light from the image into components using a beam splitter; directing each component for reception by one of a set of superposed pixelated imaging devices, at least two of which are unaligned; recording a luminance signal combining component values received by the superposed pixelated imaging devices; and recording two color difference signals combining component values received by the superposed pixelated imaging devices.
- 62. A method according to claim 61, the method further comprising:
recording the luminance signal with a resolution that is twice a resolution, in both dimensions, of the color difference signals.
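The recording format of claims 61 and 62 resembles a 4:2:0-style layout: full-resolution luminance with color-difference signals at half the resolution in both dimensions. A sketch assuming Rec. 601 luma weights (the weights and simple decimation are assumptions, not recited in the claims):

```python
import numpy as np

def record_ycbcr(rgb):
    # rgb is an (H, W, 3) array of color component values.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # assumed Rec. 601 luma weights
    cb = (b - y)[::2, ::2]                  # color difference, half resolution
    cr = (r - y)[::2, ::2]                  # in both dimensions (claim 62)
    return y, cb, cr
```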
- 63. A method according to claim 62, the method further comprising:
recording signals obtained by three superposed pixelated imaging devices.
- 64. A method according to claim 62, the method further comprising:
recording signals obtained by six superposed pixelated imaging devices.
- 65. A method of playing back a recorded motion picture image, the method comprising:
filtering and interpolating the recorded image; and displaying the filtered and interpolated image on a set of superposed pixelated imaging devices, at least two of which are unaligned.
- 66. A method according to claim 65, the method further comprising:
dividing the recorded image's energy amongst the superposed pixelated imaging devices, the division being weighted amongst the imaging devices in accordance with human color perception.
- 67. A method of playing back a recorded motion picture image, the method comprising:
filtering and interpolating the recorded image; and displaying the filtered and interpolated image using a time-multiplexing imaging device, the time-multiplexing device moving between at least two display positions to create a set of superposed pixelated displays, at least two of the displays being unaligned.
- 68. A method according to claim 67, the method further comprising:
dividing the recorded image's energy amongst the superposed pixelated displays, the division being weighted amongst the displays in accordance with human color perception.
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of our application Ser. No. 09/775,884, filed Feb. 2, 2001 under attorney docket number 2418/121, and claims the benefit of our provisional application serial No. 60/179,762, filed Feb. 2, 2000. The disclosure of both of these related applications is hereby incorporated herein by reference.
Provisional Applications (1)

| Number | Date | Country |
| --- | --- | --- |
| 60179762 | Feb. 2000 | US |

Continuations (1)

| Relation | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09775884 | Feb. 2001 | US |
| Child | 10228627 | Aug. 2002 | US |