Images from satellites and other remote sensors (“remote sensors”) are used in decision making in a variety of applications and fields, including: Agriculture, Cartography, Conservation, Disaster Planning, Education, Electric/Gas, Environmental, Geology, Health and Human Services, Law Enforcement, Local Government, Minerals, Military, Natural Resources, Oceans/Seas, Petroleum, Pipeline, Planning, Public Safety, Telecommunications, Tourism, Transportation, Water/Wastewater, and Weather. Applications can range from mapping terrain in three-dimensional models to tracking the growth of agricultural crops. As these applications expand, more and more algorithms are being developed and software is proliferating to manipulate the data. Also, more and more spectrum data is being used in combined form according to a matrix, thereby delivering knowledge, rather than mere information, to answer questions posed by both civilian and military decision makers. Decision makers need to have the information compiled and presented in a clear, concise manner. Remote sensor information, including satellite images, aerial images, balloon images, or any other remotely sensed spectral or non-spectral information (e.g., infrared images), is today incomplete: the remote sensor information either portrays flat, two-dimensional scenes or stereographic images that require stereo-glasses to view. In either case, an “expert” is required to give his or her opinion as to the content of the remote sensor information. Because of these limits, decision makers do not have the complete visual information to personally make an informed decision. The coup d'oeil, or power of the glance, is for many decision makers the difference between success and failure.
To overcome the problem of decision makers not having complete visual information from remote sensor information to personally make an informed decision, the principles of the present invention provide for a system and method that produce clear, concise images from multiple sets of remote sensor information, giving the decision maker integral visual information that can be rapidly and readily understood. Images produced by the system may be (i) spatially fused images (e.g., three dimensional images), (ii) time-fused images (e.g., a time sequence of images), and (iii) spectrally fused images (e.g., images obtained at different bands of the electromagnetic spectrum). These images are collectively considered “fused images.”
To generate the fused images, the system may combine computer algorithms with optics, material science, remote sensing technology (e.g., satellite technology), printing technology, and advanced knowledge of the interpretation of different electromagnetic spectrum data. A fused image obtained from the system can answer many of today's leading questions posed to satellite experts, thereby enabling non-remote sensing experts to better understand the image without the assistance of a remote sensing expert. The fused images are formatted so that a decision maker can look at the remote sensor information in the fused image and make an intelligent decision without relying on reams of expert reports, as is commonly required today. In essence, the expert reports are built into the fused image as visual information. Objects, elements, etc., captured in remote sensor images may have three spatial dimensions to enable non-remote sensing experts to more readily understand the information contained in the images.
One embodiment of a system for producing remote sensed images may include an input/output unit platform configured to receive multiple sets of remotely sensed information. A digital processing platform may be configured to execute one or more algorithms to process the sets of remotely sensed information to generate a single image including at least two sets of the processed remotely sensed information. A printer may be in communication with the digital processing platform and be configured to receive and print the single image, where the printed single image is configured to enable a viewer to individually view each set of the processed remotely sensed information. Each set of the processed remotely sensed information may be from a different wavelength band. The remotely sensed information may be three dimensional. In addition or alternatively, the remotely sensed information may be time sequenced. In one embodiment, the remotely sensed information is printed on or adhered to a micro-lens array that enables a viewer to see each of the remotely sensed images by altering an angle through which the viewer looks at the single image through the micro-lens array.
One embodiment of a method for generating visual information from remotely sensed information may include collecting a first remotely sensed image. A second remotely sensed image may be collected. The first and second images may be processed to orient the images in substantially the same orientation. The first and second images may be printed onto a single material, and the single material may be configured to enable a viewer to individually view each remotely sensed image at a different angle when viewing the material. In one embodiment, the remotely sensed information is printed on or adhered to a micro-lens array that enables a viewer to see each of the remotely sensed images by altering an angle through which the viewer looks at the single image through the micro-lens array.
The disclosed invention will be described with reference to the accompanying drawings, which show exemplary embodiments of the invention and which are incorporated in the specification hereof by reference, wherein:
Human cognitive processing of three spatial dimensions increases visual reasoning. Data fusion or image fusion is a combination of sensed information. A fused image may include (i) spatially fused images (e.g., three dimensional images), (ii) time sequence or time-fused images, or (iii) spectral fusion, i.e., a combination of images from different regions of the electro-magnetic spectrum (“EMS”) (e.g., visual and non-visual spectrum images), as described in U.S. Pat. No. 6,781,707, which is incorporated herein by reference in its entirety. Image fusion may make visual data collected over a time period, such as over many years, available to a viewer. Image fusion is defined as two or more images interphased together so that each image can be distinctly viewed at a different viewing angle. The images may be the same view at different times (“time fusion”) or the same view captured at different electromagnetic spectrum wavelengths (“EMS fusion”). Three spatial dimensions is a phrase used to describe true view three dimensions. In satellite imagery, the term “3D” commonly denotes a perspective view of an image on a computer screen, which differs from what the eye would actually see when capturing a scene; the same applies to other forms of remotely sensed information. A three spatial dimension image captures the true view. By using image fusion, the viewer may see the visual data in a way that is comfortable for a human to visually interpret. By providing a viewer with both quantitative and qualitative visual data, the viewer is given greater clarity to assist with decision-making.
Satellite technology is well established in terms of sensors, hardware, and data collection.
The satellite image data may be transmitted via satellite signals 106 to ground stations located around the world. The satellite signals 106 may include single or multi-image electromagnetic spectrum data collected using remote sensors. The satellite signals 106 may be analog or digital and be communicated using any communications protocol as understood in the art. A ground station 108 may receive the satellite signals 106 and collect and store the satellite image data communicated therein. The satellite image data can then be relayed to a remote sensor information processing system 110 via a network 112. In one embodiment, the satellite image data is communicated via the network 112 in the form of data packets 114, where the network 112 may be a local area network (LAN) or wide area network (WAN) (e.g., the Internet), or otherwise.
Although
The remote sensor information may include single point or multi-point image data and can be a collection of spectrum data from a single point in time or a collection of spectrum data over time. The remote sensor information may be any or all of spatially, time, or spectrally fused data, where each set of data may be considered an image. The remote sensor information may be transformed or manipulated into usable form by internal algorithms. For example, a data set may be of a non-visual spectral region of the electromagnetic spectrum, and the internal algorithms may generate a visual representation of that non-visual data set. It should be noted that a usable form is relative to formats compatible with the language of the system algorithms in accordance with the principles of the present invention. Mathematical adjustment of images may also be performed to ensure overlap and orthogonal correctness. Such mathematical adjustment algorithms to orient or otherwise align images are known in the art and within the public domain. The algorithms used may vary depending upon (i) the satellite from which the data is taken, (ii) the wavelengths used, and (iii) the data manipulation performed for the resulting visual data presentation. The “compatibilized” data may be turned into a pixel array prescribed by the internal algorithms. Inherent within the system are algorithms described by U.S. Pat. Nos. 6,781,707, 6,894,809, and 7,019,865, which are incorporated herein by reference in their entirety.
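By way of illustration only, the following is a minimal sketch of one public-domain alignment technique of the kind referred to above: FFT-based phase correlation, which estimates the translational offset between two overlapping images. The function names are illustrative assumptions, and the sketch is a stand-in rather than the registration or orthorectification algorithms of the incorporated patents; alignment in practice also corrects rotation, scale, and terrain distortion.

```python
import numpy as np

def estimate_shift(reference: np.ndarray, target: np.ndarray) -> tuple:
    """Estimate the (row, col) shift to apply to `target` to align it with `reference`."""
    f_ref = np.fft.fft2(reference)
    f_tgt = np.fft.fft2(target)
    cross_power = f_ref * np.conj(f_tgt)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase information only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the midpoint represent negative shifts (FFT wrap-around).
    shifts = [int(p) if p <= s // 2 else int(p) - s
              for p, s in zip(peak, correlation.shape)]
    return shifts[0], shifts[1]

def align(reference: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Return `target` rolled into the frame of `reference`."""
    dr, dc = estimate_shift(reference, target)
    return np.roll(np.roll(target, dr, axis=0), dc, axis=1)
```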
The manipulated data, which may be interphased, may then be printed onto paper and overlaid with a specially designed micro-optic lens array, or printed directly onto a micro-optic array through the use of a specially designed printer, as described in U.S. Pat. No. 6,709,080, which is incorporated herein by reference in its entirety. The term “interphased” means a manipulation of images, generally by a computer, whereby the images are segmented into lines and then interphased into a single image. A single image may be composed of individual lines in a prescribed array such that, when the micro-optical lens array is overlaid, the prescribed multi-dimensional image may be viewed. The system 200 is referred to as Level One, which is a descriptive name for the level of technical sophistication used in producing the described image. Inherent in this system are the appropriate algorithms, as described above, to format the data and manipulate the data to the desired results.
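For purposes of illustration only, the following sketch shows the general idea of interphasing: slicing each source image into columns and interleaving the columns so that, under a cylindrical micro-lens array, each image is presented at its own viewing angle. The column width, frame ordering, and helper names are illustrative assumptions; the production interphasing algorithms are those of the patents incorporated by reference.

```python
import numpy as np

def interphase(images: list) -> np.ndarray:
    """Interleave same-sized grayscale images column by column (a sketch)."""
    n = len(images)
    h, w = images[0].shape
    out = np.zeros((h, w * n), dtype=images[0].dtype)
    for col in range(w):
        for frame, img in enumerate(images):
            # One column strip per frame, in sequence behind each lenticule.
            out[:, col * n + frame] = img[:, col]
    return out

# Example: two 4x4 test frames fused into one 4x8 interphased image.
dark = np.zeros((4, 4), dtype=np.uint8)
bright = np.full((4, 4), 255, dtype=np.uint8)
fused = interphase([dark, bright])   # alternating dark/bright columns
```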
Output data 214 may be presented as multi-dimensional imagery and/or fused data. In remote sensing imaging, the ability to display depth of field in hardcopy format is not otherwise commercially available. The ability to reach beyond visual images into the spectrum not visible to the human eye (e.g., gamma, infra-red, sonar/radar, and x-ray) further allows data streams collected from remote sensors to be incorporated into a single image on a single sheet, where the image includes multiple pages of information. The single image may be uniquely meaningful to decision maker(s), who may or may not be experts in interpreting remote sensor images. The system 200 enhances expert analysis by incorporating the experts' knowledge into the algorithms presented above and into the associated software platform. In addition, the system 200 provides knowledge as to data collected from different EMS wavelengths. Analytical knowledge is incorporated in that the EMS and time sequence images can be viewed and, based on the images, decisions can be made. The knowledge of what images to use comes from the expert, and the meaning of each individual image comes from the expert, but the collective conclusion comes from the decision maker. Thus, expert knowledge embedded into the system 200 enables decision makers to interpret the information without the assistance of an expert, thereby improving understanding and efficiency for the decision maker.
The system 200 provides for both immediacy and ease of use: it delivers real-time feedback in the form of a hardcopy and requires little or no training to operate, as compared to current technology and availability. The power of the view of a fused image cannot be overstated, where the viewer is the ultimate decision maker. Below are examples where the decision maker may readily understand and interpret satellite images in accordance with the principles of the present invention.
As previously described, data or image fusion may involve spatial information (e.g., 3D), a combination of visual information as a time sequence, or spectrally diverse information, such as a combination of images from the same or different regions of the electro-magnetic spectrum collected by remote sensors. Thus, data fusion can answer such questions as, “Where is the best topographical structure to explore for oil, gas, and water?” Again, remote sensing experts can interpret remote sensor images and reports to provide an answer, but non-experts generally do not have the ability to fully interpret the remote sensor images and, thus, cannot make educated decisions without the assistance of one or more remote sensor experts.
The system 200 and software 204 may compile remote sensor information, such as images or satellite images over a time sequence or in one or more frequency bands, and provide an output format, such as a single fused image, that enables a decision maker, who may or may not be an expert in interpreting satellite images, to make informed decisions. A single, fused image may reinforce reports and enhance conclusions for the decision maker. The fused image by itself may even answer specific questions that are otherwise difficult to answer using multiple satellite images. Furthermore, spatially accurate real-time hardcopy maps can be generated with highlighted sites to explore for teams going to a remote area. While a computer can simulate multi-dimensional images on a monitor, a team working in areas without power, or one that desires to insure against a computer crash in the field, may find a hardcopy map far more reliable. Multi-dimensional models can be generated from sonic or MRI data at a site to further pinpoint exact locations where, for instance, oil or gas may be found. The use of multi-dimensional image maps may lessen the number of exploratory wells and map the field for the best location to establish a well, thereby saving time and money for the user.
A multi-dimensional hardcopy is better than conventional maps and reports due to the spectral and reflective color details inherent in the images. By way of the system described herein, not only are two channels included in a satellite image, but a multiplicity of channels may be added. The number of channels used may be determined by the complexity of the problem being solved and the design of the micro-optical material used. Typically, the more complex the problem, the more information is used to reach an answer. The greater the amount of information, the larger the number of channels of information used in the interphasing program, where each image may be broken into discrete lines and the lines then interphased to line up behind the micro-optical material. To obtain the best fidelity of the image, the information may be presented off the array to the viewer's eyes in the appropriate sequence and in the right frames. Optical ray tracing techniques and knowledge of the printer resolution are utilized to design the optimum lens array configuration to present the information to the viewer appropriately.
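As a simple illustration of the relationship between printer resolution and lens design mentioned above, the number of channels (frames) that can sit behind each lenticule is bounded by how many printed dots fit under one lens. The figures below are assumed for illustration only, not values mandated by this description.

```python
# Assumed example values, for illustration only.
printer_dpi = 600       # printer output resolution, dots per inch
lenses_per_inch = 60    # lens pitch of the micro-optical material

# Each lenticule covers printer_dpi / lenses_per_inch printed dots, so at
# most that many distinct frames (channels) can sit behind one lens.
frames_per_lens = printer_dpi // lenses_per_inch
print(frames_per_lens)  # -> 10 frames, each viewable at a distinct angle
```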
The system 200 provides the ability to time fuse data. The ability to time fuse data allows sequences of events to be shown as motion, which enables the decision maker to gain a better grasp of the direction of events or to anticipate the evolution of a sequence. Also, using time fusion, before-and-after sequences can be observed for information such as insurance damage, progress of projects, and the impact of man-made structures on the environment. Within the algorithms described above is the ability to tie multi-views into multi-dimensional imagery with controlled parallax. The algorithms control the parallax relative to the lens system being used, the pixels per inch output by a special printer, and rules developed from prior depth of field work, as further described in U.S. Pat. Nos. 4,086,585 and 4,124,291, which are incorporated herein by reference in their entirety. The ability to control the parallax and depth of field means the resultant imagery is accurate and capable of being used for measurement calculations. The image overlay also allows accurate depth maps to be created, which can lead to further refinement and views by creating on-screen virtual three-dimensional imagery. The software may be used to create a hardcopy of the images. The system 200 provides the ability to use ray tracing techniques to create multi-dimensional images from different views of the same scene.
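For illustration only, the sketch below shows one simple way controlled parallax can be imposed on a multi-view sequence: each frame is shifted in proportion to its view index so the apparent depth stays within a budget the lens system can resolve. The linear shift model and the helper name are assumptions, not the depth-of-field rules of the incorporated patents.

```python
import numpy as np

def apply_parallax(frames: list, max_shift_px: int) -> list:
    """Shift views linearly from -max_shift_px to +max_shift_px (a sketch)."""
    n = len(frames)
    if n < 2:
        return list(frames)
    shifted = []
    for i, frame in enumerate(frames):
        # Center view gets ~zero shift; the outermost views get the largest.
        offset = round((2 * i / (n - 1) - 1) * max_shift_px)
        shifted.append(np.roll(frame, offset, axis=1))
    return shifted
```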
Thus, if a specific solution for a stated problem is sought, then the matrix may be used to determine which bandwidth(s) may be used to formulate a solution. For example, for a given region, water is rationed and the water authorities want to know which sub-regions need to receive water for irrigation. (Stated Problem) From the above exemplary band matrix, the first three bands may be combined into a visual image. (Image one) Then, the image produced from Band 4 may be used to see what type of plants are in the sub-regions and to gain information about the moisture in the soil. (Image two) Image three may be taken from Band 7 to assess plant moisture and combined with an image (Image four) from Band 5 or 6 to view vegetation stress. These four images may then be fused into one multi-spectral image by the system described herein. The decision maker may then view the information on the multi-spectral image by rotating the multi-dimensional image to change the viewing angle. As questions arise in the decision maker's mind, further rotation back and forth provides a quick answer. This image may be combined with a time fusion of the region from any of the bands, such as Band 7, to determine the rate of loss of moisture in the vegetation. This represents one example of the use of the system in accordance with the principles of the present invention. Different matrices may be formulated depending on the stated problem. Each stated problem may have a different set of multi-dimensional images for solving the stated problem.
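For illustration only, the following sketch assembles the four frames of the irrigation example from a dictionary of raw band arrays: a visual composite from the first three bands plus single-band views for vegetation, plant moisture, and vegetation stress. The band numbering follows the example above; the normalization, the RGB ordering, and the choice of Band 5 for the stress view are illustrative assumptions.

```python
import numpy as np

def normalize(band: np.ndarray) -> np.ndarray:
    """Scale raw band values to the 0..255 display range (a sketch)."""
    lo, hi = float(band.min()), float(band.max())
    return ((band - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)

def gray3(band: np.ndarray) -> np.ndarray:
    """Render a single band as a 3-channel grayscale frame."""
    g = normalize(band)
    return np.dstack([g, g, g])

def build_frames(bands: dict) -> list:
    """`bands` maps band number -> 2-D array of raw sensor values."""
    # Image one: visual composite of Bands 1-3 (stacked here as R, G, B).
    visual = np.dstack([normalize(bands[3]),
                        normalize(bands[2]),
                        normalize(bands[1])])
    return [
        visual,
        gray3(bands[4]),   # Image two: vegetation / soil moisture view
        gray3(bands[7]),   # Image three: plant moisture view
        gray3(bands[5]),   # Image four: vegetation stress view (Band 5 assumed)
    ]
```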
In one embodiment, the processing system 110 (
Once the appropriate bandwidths are chosen, data from remote sensors can be downloaded into the system 300 and fused visual information can be generated so a person can gather information and proceed to determine a solution to a current problem. The system can also generate a multi-dimensional hardcopy of the area in question for further review and study. The multi-dimensional hardcopy of the area in question may be used to solve the given problem by decision makers with little or no technical support. The hyper-spectral (i.e., multi-spectral) display may act as the written output of an analytical device. The system, in accordance with the principles of the present invention, provides for intermediate steps that may be performed, such as forming the matrix of available bands and then choosing the bands, to answer a specific problem.
A systematic approach may be employed in accordance with the principles of the present invention, from the posing of a question, to the selection of the remotely sensed information to answer the posed question, to the manipulation of the data to create a hyper-spectral display, and finally to the output of the display. The systematic approach may be performed by combining the remote sensor software program or platform, which delivers the image data to a special printer that uses a special MicrOptical™ material (as described herein and within the patents incorporated by reference) with special inks, to create the finished visual display of data. The special ink may be formulated to adhere to the lens array formed of the MicrOptical™ material, for example. Alternatively, the ink may be printed on another material and adhered to the lens array. Thus, the system, as described herein, simplifies the use of satellite data. Each platform within the system may be tailored to answer a problem based on the expert's input, where the expert's input is embedded in the software such that the decision maker may receive input data from a variety of different and well specified source(s), such as an existing satellite or a new satellite.
The system 300 or platform that includes internal algorithms may compile and fuse remotely sensed information such as image data, and send the compiled and fused remotely sensed information to a specially designed output device that prints on specially designed material to create a visual display that contains “fused quantitative data that visually answers the question.” Based on the expert's input, which is embedded into the software program created to answer the posed question and the Level Two system of
1. A multi-variant optical medium 402 represents an optical base that allows the system to be decoupled by a viewer's eye. Two types of material may generally be utilized. One material is a high fidelity, low attenuation angle material used for multi-dimensional displays. The second material is a high fidelity, high angle material used for fused systems. The materials can be either adhesively backed for lamination or coated for ink receptivity. In one embodiment, a 60 lens per inch wide angle lens design may be used for the time fusion and EMS fusion. A 100 lens per inch wide angle lens design with approximately a 34 to 36 degree attenuation angle may be used for the “True View” data display. Both of these lenses may be cylindrical. However, other lens designs that perform the same or equivalent functionality may be utilized. The system variables may include the dots per inch (DPI) of the printer, the number of frames being viewed, the thickness of the material, the index of refraction of the lens, the angle of attenuation of the lens, and the shape of the lens. These parameters are mathematically related, as known to those skilled in the art. One embodiment of the material may be manufactured using one of the processes described in U.S. Pat. No. 5,362,351 or U.S. Pat. No. 6,060,003, the contents of which are incorporated herein by reference in their entirety. High fidelity refers to material that has an attenuation angle between approximately 32 and approximately 38 degrees. This angle range is well suited for “True View” data display and yields sharp, in-focus images. Higher attenuation angles tend to distort the boundaries of objects and are not as well focused. A wider angle material works well for fused images, as the cross-over between images tends to be more controllable.
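For illustration only, the sketch below shows one common small-geometry approximation relating lens pitch, material thickness, and index of refraction to the attenuation (viewing) angle of a cylindrical lens array. The formula and the example values are assumptions for this sketch, not the design equation or dimensions of the material described above.

```python
import math

def attenuation_angle_deg(pitch_in: float, thickness_in: float, refr_index: float) -> float:
    """Approximate full viewing angle of a cylindrical lens array (a sketch)."""
    # Rays spanning one lens pitch, refracted through material whose reduced
    # (optical) thickness is thickness / index: half-angle ~ atan(n*p / (2*t)).
    half = math.atan(refr_index * pitch_in / (2.0 * thickness_in))
    return math.degrees(2.0 * half)

# Example: 100 lens-per-inch material, 0.025 in thick, index 1.56 lands
# near the approximately 34-36 degree range cited above (values assumed).
print(round(attenuation_angle_deg(1 / 100, 0.025, 1.56), 1))   # ~34.7
```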
2. A special printer may be utilized to optically align the medium to the print head. The printer may be designed to use either light or ultrasound to detect the lens pattern. By sensing the peaks of the lenses and providing feedback to the print head, the printer aligns and registers the micro-optical material. This registration enables controlling the placement of the dots to maximize the fidelity (i.e., sharpness). Utilizing light and sensors, the lens spacing or pitch of the medium is sensed and fed back to the print head so a rasterized image is aligned to the medium. Special dot patterns are used to yield the highest fidelity to the image. The printer may include arrays that are designed for plane (X-Y) images. Other arrays may be utilized in accordance with the principles of the present invention.
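For illustration only, the following sketch shows the feedback idea in its simplest form: estimating the lens pitch from a simulated optical-sensor trace by locating the reflection peaks at the lens crowns. The sensor model and helper name are assumptions; an actual printer would feed the measured pitch and phase back to the print head in real time.

```python
import numpy as np

def estimate_pitch(trace: np.ndarray) -> float:
    """Return the mean spacing, in samples, between lens-peak reflections."""
    interior = trace[1:-1]
    # A sample is a peak if it exceeds both of its neighbors.
    peaks = np.where((interior > trace[:-2]) & (interior > trace[2:]))[0] + 1
    return float(np.mean(np.diff(peaks)))

# Simulated sensor trace: one bright reflection at the crown of each lens.
x = np.arange(1000)
trace = np.cos(2 * np.pi * x / 16.7)    # ~16.7 samples per lens pitch
print(round(estimate_pitch(trace), 1))  # -> approximately 16.7
```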
3. Special inks may be used with the system. These inks give high fidelity, low spread, and high saturation to the print. The inks are formulated to work well with a coating on the back of a print medium. The inks also are durable and waterproof. Several factors may be involved in an ink system operating in accordance with the principles of the present invention. First, a coating may be applied to plastic as opposed to paper. Second, printing may be performed on the back of the plastic such that light travels through the plastic and then travels back out to the viewer to form the image; normal printing is on the surface, where the eyes receive the reflected light directly. In one embodiment, a total system of coating is printed or otherwise deposited on the back of the micro-optical material so the ink sticks to the coating. Plastic that has low surface energy causes very little ink to stick without first treating or coating the plastic surface, and the ink should have stronger pigmentation to overcome the transmittance through the plastic lens sheet twice. The stronger pigmentation may be held within the smallest dot size possible, which is equivalent to standard dot sizes.
At step 610, an expert system containing algorithms for visual data conversion may be provided. As previously described, the expert system may be created based on a problem posed by a customer. One or more remote sensor experts may provide the system with information to convert visual data. At step 612, an interphase algorithm may be utilized to generate a fused image of the remotely sensed data. The interphase algorithm may be found in U.S. Pat. Nos. 6,781,707, 6,894,904, and 7,019,865, which are herein incorporated by reference in their entirety. At step 614, printer algorithms may be utilized to print the fused image. The printer algorithms may be found in U.S. Pat. No. 6,709,080, which is incorporated herein by reference in its entirety. At each step shown, an algorithm may be performed, thereby resulting in remotely sensed data 602 being processed and printed so that a decision maker is able to view multiple images combined and printed onto a single sheet. It should be understood that the process shown is not limiting, but merely sets forth one embodiment in accordance with the principles of the present invention.
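For illustration only, the sketch below strings the flow of steps 610-614 together as a function that accepts pluggable conversion and interphasing routines, such as the hypothetical helpers sketched earlier. All names are assumptions, and the final save is a stand-in for the printer step; the production algorithms are those of the incorporated patents.

```python
import numpy as np
from typing import Callable, Dict, List

def pipeline(bands: Dict[int, np.ndarray],
             convert: Callable[[Dict[int, np.ndarray]], List[np.ndarray]],
             interphase: Callable[[List[np.ndarray]], np.ndarray],
             out_path: str) -> None:
    """A sketch of steps 610-614: convert, interphase, then output."""
    frames = convert(bands)        # step 610: expert-system data conversion
    fused = interphase(frames)     # step 612: interphase into a single image
    np.save(out_path, fused)       # step 614: stand-in for the printer step
```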
The above description has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the illustrative embodiments in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art.
Number | Date | Country | Kind
---|---|---|---
TO2007/A000620 | Aug 2007 | IT | national
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/US08/10036 | 8/25/2008 | WO | 00 | 7/27/2010