Light behaves as a wave, which can be characterized by its amplitude and phase. However, current imaging sensors, such as complementary metal oxide semiconductor (CMOS) sensors, completely lose the phase information, which limits the design of conventional imaging systems to mapping all information onto only the amplitude of the incoming field. This mapping is not always feasible and results in many limitations. In contrast, the goal of wavefront sensing is to simultaneously measure the amplitude and phase of an incoming optical field. The combination of these two pieces of information enables the retrieval of the optical field at any plane, which provides greater latitude and flexibility in the design of imaging systems.
In one aspect, embodiments disclosed herein generally relate to a system for a wavefront imaging sensor with high resolution (WISH) that comprises a spatial light modulator (SLM), a plurality of image sensors, and a processor. The system uses the SLM together with a computational post-processing algorithm to recover an incident wavefront with a high spatial resolution and a fine phase estimation. In addition, the image sensors operate both within the visible electromagnetic (EM) spectrum and outside the visible EM spectrum.
In another aspect, embodiments disclosed herein relate to a method for WISH imaging that includes illuminating a target with a coherent light source, modulating an incident wavefront from the target by projecting multiple random phase patterns on an SLM, and capturing a corresponding plurality of intensity images using a plurality of image sensors. The method further includes acquiring sequential pairs of the phase patterns on the SLM and the captured intensity images, processing the acquired data using a computational post-processing algorithm, and recovering a high-resolution wavefront based on the computational post-processing algorithm.
In another aspect, embodiments disclosed herein relate to a non-transitory computer readable medium storing instructions. The instructions are executable by a processor and comprise functionality for illuminating a target with a coherent light source, modulating an incident wavefront from the target by projecting multiple random phase patterns on an SLM, and capturing a corresponding plurality of intensity images using a CMOS sensor. The instructions further include acquiring sequential pairs of the phase patterns on the SLM and the captured intensity images, processing the acquired data using a computational phase-retrieval algorithm, and recovering a high-resolution wavefront based on the computational phase-retrieval algorithm.
Other aspects and advantages of one or more embodiments disclosed herein will be apparent from the following description and the appended claims.
Specific embodiments will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
In the following detailed description of embodiments, numerous specific details are set forth in order to provide a more thorough understanding.
However, it will be apparent to one of ordinary skill in the art that embodiments may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
In the following description, any component described with regard to a figure, in various embodiments of the present disclosure, may be equivalent to one or more like-named components described with regard to any other figure.
For brevity, at least a portion of these components are implicitly identified based on various legends. Further, descriptions of these components will not be repeated with regard to each figure. Thus, each and every embodiment of the components of each figure is incorporated by reference and assumed optionally present within every other figure having one or more like-named components. Additionally, in accordance with various embodiments of the present disclosure, any description of the components of a figure is to be interpreted as an optional embodiment, which may be implemented in addition to, in conjunction with, or in place of the embodiments described with regard to a corresponding like-named component in any other figure. In the figures, black solid collinear dots indicate that additional components similar to the components before and/or after the solid collinear dots may optionally exist.
Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as by the use of the terms “before,” “after,” “single,” and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements, if an ordering exists.
The term data structure is understood to refer to a format for storing and organizing data.
Introduction
Traditional wavefront sensors fall into two groups. The first group is based on geometrical optics. The Shack-Hartmann wavefront sensor (SHWFS) is the most frequently used geometric design, which places an array of small lenses in front of a CMOS sensor. Each lens provides a measurement of the average phase slope (over the lensed area) based on the location of the focal spot on the sensor. To achieve high phase accuracy, many pixels are required per lens to precisely localize the spot. Thus, although the CMOS sensor has millions of pixels, the spatial resolution of the measured complex field is very low. Currently, commercial SHWFSs offer up to 73×45 measurement points, which is sufficient only to estimate smooth phase profiles such as air turbulence. The second group is designed based on diffractive optics. The phase information is encoded into interferometric fringes by introducing a reference beam. However, these interferometric systems have the following two limitations: (a) the systems are bulky and heavy due to the increased optical complexity, and (b) the systems are highly sensitive to micrometer-scale vibrations.
Our key insight is to capitalize upon the field of computational imaging, which provides an elegant framework for co-designing advanced computational algorithms and optics, making it possible to move beyond traditional imaging techniques and to design a non-interferometric, high-resolution (multimegapixel) wavefront sensor. Many limits once considered fundamental have been overcome by this kind of joint design approach. For example, superresolution microscopes such as PALM and STORM achieve subdiffraction-limit imaging by combining photoswitchable fluorophores with high-accuracy localization algorithms. Fourier ptychography offers a high space-bandwidth product using an LED-array microscope with phase retrieval algorithms. Non-line-of-sight imaging enables one to look around corners by utilizing time-of-flight setups and 3D reconstruction algorithms.
Traditional wavefront sensors that directly measure the phase are recognized to suffer from low spatial resolution and/or high sensitivity to vibration. The present disclosure may avoid these drawbacks by combining optical modulation and computational optimization. Specifically, two cutting-edge technologies are used. First, current high-performance CMOS technology enables the production of high-resolution, high-frame-rate image sensors and spatial light modulators (SLMs). Second, recent advances in phase retrieval algorithms and computational power make it possible to efficiently solve large-scale optimization problems. By combining these two technological advances, high-resolution intensity measurements may be recorded and the phase recovered indirectly using phase retrieval algorithms.
One or more embodiments are inspired by recent efforts of various research groups to measure the wavefront computationally using sequential captures with an SLM. However, the current techniques suffer from two limitations that the present disclosure aims to directly address. First, the spatial resolution of the acquired wavefronts is limited. Second, these systems are not optimized for acquisition speed, which makes them incapable of imaging dynamic scenes. On the other hand, while existing single-shot wavefront sensors achieve high-frame-rate recording, they typically rely on assumptions such as sparsity, which severely limits the applicability of these systems to generic applications.
One or more embodiments of the present disclosure may provide a wavefront imaging sensor with high resolution (WISH), which offers multimegapixel resolution, high frame rate, and robustness to vibrations as shown in a system 100 of
Results
To validate the proposed wavefront sensing technique, a table-top prototype is constructed as shown in a system 200 of
During the acquisition, multiple phase modulation patterns were projected onto the SLM 110. The SLM patterns modulated the incoming optical field before it propagated towards the sensor 104, which recorded 2D images that corresponded to the intensity of the field at the sensor plane. The phase information of the modulated field was not recorded. Multiple uncorrelated measurements were recorded with different SLM patterns to enable the algorithm to retrieve the phase. In an ideal setting, the SLM pattern should be fully random to diffract the light to all pixels of the sensor, which improves the convergence and accuracy of the iterative retrieval algorithm. However, the cross-talk effect from the SLM 110 becomes a serious issue, especially for high-frequency patterns, and deteriorates the quality of the recovered image. Moreover, due to the finite size of the sensor 104, the SLM 110 should only diffract light to the extent that the sensor can capture most of the signal. In one or more embodiments, for the experiment, the SLM patterns are first generated as low-resolution random matrices and subsequently interpolated to match the SLM resolution (see the Methods section for more details on SLM pattern design).
Mathematically, for each measurement Ii captured with the corresponding random phase modulation ΦSLMi, the forward model is as follows:
$$\sqrt{I_i} \;=\; \left| P_z\!\left(\Phi_{\mathrm{SLM}}^{i} \cdot u\right) \right| \qquad (1)$$
where u is the unknown field that falls on the SLM. The symbol "·" denotes the Hadamard product, i.e., the elementwise multiplication between the phase pattern on the SLM and the field. Pz is the propagation operator over the propagation distance z, which is modeled as Fresnel propagation (see Methods).
To estimate field u from K measurements, the following optimization problem is formed:
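A minimal statement of this problem, assuming the standard least-squares data-fidelity term and numbered (2) here to fit the surrounding equation sequence, is:

$$\hat{u} \;=\; \arg\min_{u} \sum_{i=1}^{K} \left\| \sqrt{I_i} - \left| P_z\!\left(\Phi_{\mathrm{SLM}}^{i} \cdot u\right) \right| \right\|_2^2 \qquad (2)$$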
This is a phase retrieval problem, which is nonlinear and nonconvex. There are many well-established algorithms to solve such a problem. Here, the Gerchberg-Saxton (GS) algorithm is applied to recover the field u by alternating projections between the SLM plane and the sensor plane, as illustrated in a system 214 of
To correctly recover the unknown field, a minimum number of measurements K is required for the algorithm to converge. Intuitively, a more complicated field u requires more measurements as an input. When prior information about the unknown object is available, such as sparsity or support, potentially far fewer measurements are required. In one or more embodiments, no constraint is applied to the unknown field, so that the sensor remains valid for objects with an arbitrary phase distribution. Further discussion and the number of measurements used for each experiment are given in the Methods section.
The resolution of the WISH is determined by the pixel size of the SLM δSLM, pixel size of the camera sensor δsensor, and distance z between them. As shown below, in most cases when δSLM is larger than δsensor, the resolution is limited by δsensor, as long as z is sufficiently large to enable each sensor pixel to receive the field from multiple SLM pixels. As a result, although smooth SLM patterns (i.e., large effective SLM pixel size) are used in our experiment, WISH offers the full sensor resolution.
To experimentally demonstrate how the WISH works, a fingerprint is imaged on a glass microscope slide with dusting powder, which is placed ˜76 mm from the sensor. As shown in
Since the WISH offers the unique capability to simultaneously measure the amplitude and phase in high resolution, it may become a powerful tool to solve inverse imaging problems with deterministic or even random transfer functions. In some embodiments, the WISH may be applied to the full gamut of spatial scales. In one or more embodiments, at the telescopic scale, diffraction-limited high-resolution imaging is demonstrated for the first time using a large-aperture but low-quality Fresnel lens. In one or more embodiments, at the macroscopic scale, the utility of the WISH is shown for obtaining high-resolution images of objects obscured by scattering. In one or more embodiments, at the microscopic scale, the WISH may be converted into a lensless microscope for biological imaging with high spatial and temporal resolution.
In some embodiments, in many optical imaging or computer vision applications such as astronomical observation, satellite imaging, and surveillance, the imaging device is located very far from the object. However, capturing a photograph of a person 1 km away using a conventional sensor, for example, requires a telephoto lens, which contains dozens of lens elements to compensate for diffraction blur and aberrations, as shown in a system 300 of
To demonstrate this capability, images of objects 1.5 meters away are captured using a 76.2-mm-diameter Fresnel lens, as shown in
$$u_3 \;=\; P_{z_3}\!\left(F \cdot P_{z_2}\!\left(L \cdot x\right)\right) \qquad (3)$$
After the complex field u3 has been retrieved by the WISH, object x can be obtained by backward propagation as follows:
$$x \;=\; L^{-1} \cdot P_{-z_2}\!\left(F^{-1} \cdot P_{-z_3}\!\left(u_3\right)\right) \qquad (4)$$
However, F contains unknown aberrations and must be calibrated beforehand. During the calibration, the object is removed so that the incident light directly shines on the Fresnel lens. In this case, x=1, and the corresponding field u30 is recovered by the WISH on the SLM plane. Then, based on Eq. 3, the lens field is calculated as follows:
$$F \;=\; \left(P_{z_2}(L)\right)^{-1} \cdot P_{-z_3}\!\left(u_{30}\right) \qquad (5)$$
The calibration process is required only once for a given lens and does not need to be repeated as the object or setup changes.
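These calibration and recovery steps (Eqs. (3)-(5)) reduce to element-wise divisions and numerical back-propagations. The sketch below illustrates the flow under stated assumptions: `prop` is a generic angular-spectrum propagator standing in for Pz, the distances `z2`/`z3`, wavelength, and pixel pitch are placeholder values rather than the experimental parameters, the illumination field `L_illum` is assumed collimated, and the WISH-recovered fields are replaced by random stand-ins.

```python
import numpy as np

def prop(field, z, wavelength=532e-9, dx=3.45e-6):
    """Angular-spectrum propagation of a square field over a distance z (meters)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

n = 512
z2, z3 = 1.25, 0.25                       # placeholder object-to-lens / lens-to-SLM distances (m)
L_illum = np.ones((n, n), dtype=complex)  # assumed collimated illumination field L

# --- Calibration (object removed, x = 1): recover u30 with the WISH, then apply Eq. (5) ---
u30 = np.exp(1j * np.random.uniform(0, 2 * np.pi, (n, n)))   # stand-in for the WISH output
F_lens = prop(u30, -z3) / prop(L_illum, z2)                  # F = (P_z2(L))^-1 . P_-z3(u30)

# --- Imaging: recover u3 with the WISH, then invert the forward model, Eq. (4) ---
u3 = np.exp(1j * np.random.uniform(0, 2 * np.pi, (n, n)))    # stand-in for the WISH output
x_est = prop(prop(u3, -z3) / F_lens, -z2) / L_illum          # x = L^-1 . P_-z2(F^-1 . P_-z3(u3))
```

In practice, u30 and u3 would be the outputs of the phase-retrieval recovery described above rather than random stand-ins.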
For the quantitative evaluation, a standard 1951 USAF resolution test chart is used as the object shown
Additionally, two prepared biological microscope slides from AmScope are tested to demonstrate that microfeatures can be recovered at 1.5 m. Compared to the USAF target, these samples are more challenging because they are not binary, which reduces the contrast between the foreground and the background. In
Seeing through fog or beneath the skin is an extremely challenging task due to scattering. As shown in a system 400 of
First, the diffuser is calibrated by illuminating it with collimated light from the far side. The WISH measures the scattered field v40 on the SLM plane, and the diffuser field can be calculated as follows:
$$D \;=\; P_{-z_2}\!\left(\Phi_{lens}^{-1} \cdot P_{-z_3}\!\left(v_{40}\right)\right) \qquad (6)$$
After the calibration, a hidden object is placed behind the diffuser. Based on the recovered field v4 from the WISH, the field of the object can be recovered by numerical backward propagation as follows:
$$x \;=\; P_{-z_1}\!\left(D^{-1} \cdot P_{-z_2}\!\left(\Phi_{lens}^{-1} \cdot P_{-z_3}\!\left(v_{4}\right)\right)\right) \qquad (7)$$
To test the system, various objects are imaged through a 25.4-mm diameter diffuser (Edmund Optics 47-988). The objects were placed 80 cm behind the diffuser and illuminated by a collimated laser beam. After light from the object passes through the diffuser, the wavefront is converged by a 50.8-mm-diameter lens with 180-mm focal length (Thorlabs AC508-180-A) and captured by the WISH (z3=18 cm).
The calibrated phase profile of the diffuser is plotted in
Images of the USAF resolution chart are captured to evaluate the performance of the reconstruction as shown in
Similar to Katz et al., it is straightforward to extend the method of one or more embodiments of the present disclosure from the transmissive mode to the reflective mode, which may be useful for applications such as looking around a corner using a diffuse wall.
By bringing samples near the sensor, the WISH can be converted into a lensless microscopy system. Lensless imaging techniques can result in extremely lightweight and compact microscopes. Pioneering work has demonstrated applications in holography, fluorescence, and 3D imaging. Whereas a lens-based microscope has a tradeoff between the field of view (FOV) and resolution, lensless microscopy offers high resolution while maintaining a large FOV.
In one or more embodiments, the WISH is tested as a lensless microscope by measuring a standard resolution target and biological samples. As shown in
The ability to observe a dynamic scene is also crucial to understanding the behavior of live samples. By optimizing the synchronization between the SLM and the sensor, acquisition speeds of up to 20 Hz are achieved (see Methods). During the reconstruction, eight frames are input to the algorithm with a sliding window of two frames, which results in a recovered video with a 10-Hz frame rate. Assuming that the change between neighboring frames is small, the converged reconstruction from the previous frame may be used as the initialization of the next frame, which significantly speeds up the reconstruction. As an illustration, a video of Caenorhabditis elegans living on agar is captured. Several frames from the reconstructed video are shown in
Discussion
In one or more embodiments, the computational-imaging-based method WISH is a high-resolution, non-interferometric wavefront sensor, which shifts the complexity from hardware to algorithm and offers the ability to measure highly variant optical fields at more than 10-megapixel resolution. Experimentally, it is shown that the WISH may recover both objects at high resolution and perform diffraction-limited reconstruction in highly distorted optical systems. The versatility of the sensor of one or more embodiments of the present disclosure may significantly improve the performance of existing technologies such as adaptive optics and microscopy while providing a new tool for emerging fields including imaging through scattering media and biomedical and scientific imaging. Designing an optimization framework to automatically separate the object and aberrations without calibration is of great interest to applications such as autonomous driving (in challenging weather) and imaging beneath the skin.
Materials and Methods
SLM pattern design. The SLM pattern should satisfy three requirements. First, to improve convergence and reduce noise, the fields from multiple SLM pixels should be able to randomly interfere. Second, to increase the signal-to-noise ratio, the field should not be scattered too much, to ensure that the sensor collects most of the scattered light. Third, to reduce the impact of the cross-talk effect, the pattern should be locally smooth. For each SLM pattern in the experiment of one or more embodiments of the present disclosure, a 192×108 random matrix with values uniformly distributed between 0 and 1 is first generated. Then, the matrix is upsampled by a factor of 10 using bicubic interpolation in MATLAB to create a grayscale image with a resolution of 1920×1080. These grayscale images were used as the SLM patterns.
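A minimal sketch of this pattern-generation step, assuming numpy and scipy are available; the 192×108 seed size and ×10 upsampling factor follow the text, while `scipy.ndimage.zoom` with cubic splines stands in for MATLAB's bicubic interpolation and the 8-bit gray-level mapping is an added assumption.

```python
import numpy as np
from scipy.ndimage import zoom

def make_slm_pattern(seed_shape=(108, 192), factor=10, rng=None):
    """Generate one locally smooth random SLM pattern.

    A low-resolution uniform random matrix is upsampled with cubic splines
    (order=3) to the full SLM resolution, which keeps the pattern locally
    smooth and limits the SLM cross-talk effect.
    """
    rng = np.random.default_rng() if rng is None else rng
    low_res = rng.uniform(0.0, 1.0, size=seed_shape)
    pattern = zoom(low_res, factor, order=3)      # -> 1080 x 1920
    return np.clip(pattern, 0.0, 1.0)

patterns = [make_slm_pattern() for _ in range(32)]
# Assumed mapping to 8-bit gray levels before display on the SLM (not specified in the text):
gray = np.uint8(np.round(patterns[0] * 255))
```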
Numerical Propagation Model. The numerical propagation is modeled as a Fresnel propagation (FP) as follows:
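(The original expression is not reproduced here; a standard single-Fourier-transform Fresnel form, consistent with the operator description in the next sentence and with the scaling constants stated as an assumption, is:)

$$U(\mathbf{r}_2) \;=\; \underbrace{e^{\frac{jk}{2z}\lvert \mathbf{r}_2\rvert^2}}_{Q}\;\underbrace{\frac{e^{jkz}}{j\lambda z}}_{V}\; \mathcal{F}\!\left\{ \underbrace{e^{\frac{jk}{2z}\lvert \mathbf{r}_1\rvert^2}}_{Q}\, U(\mathbf{r}_1) \right\}\Bigg|_{\mathbf{f}=\mathbf{r}_2/\lambda z} \qquad (8)$$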
The output field U(r2) is computed (from right to left) by multiplying the input field by a quadratic phase (Q), Fourier transforming (ℱ), scaling by a constant phase (V), and multiplying by another quadratic phase factor (Q). Although angular-spectrum propagation (ASP) is more accurate in theory, both FP and ASP gave nearly the same result in the current setup of one or more embodiments. Additionally, FP has two advantages: (1) there is only one Fourier transformation (FT) instead of two in ASP, which reduces the computation in the iterative algorithm, and (2) the grid spacing in the input and output planes must be identical for ASP, while FP may have different spacings in the input and output planes. Thus, FP may avoid unnecessary sampling when the input and output fields have notably different sizes (e.g., when recovering the wavefront of a large-aperture Fresnel lens from the WISH).
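A compact numerical version of this single-FT Fresnel propagation is sketched below, assuming numpy; the wavelength and pixel pitch are placeholders. Consistent with the discussion above, the output grid spacing differs from the input spacing (dx_out = λz/(N·dx)).

```python
import numpy as np

def fresnel_prop(u1, z, wavelength=532e-9, dx=3.45e-6):
    """Single-FFT Fresnel propagation of a square field u1 over a distance z.

    Returns the propagated field u2 and its output pixel pitch dx_out.
    Only one FFT is needed (vs. two for angular-spectrum propagation),
    and the output sampling dx_out = wavelength*z/(N*dx) may differ from dx.
    """
    n = u1.shape[0]
    k = 2 * np.pi / wavelength
    x1 = (np.arange(n) - n // 2) * dx
    X1, Y1 = np.meshgrid(x1, x1)
    dx_out = wavelength * z / (n * dx)
    x2 = (np.arange(n) - n // 2) * dx_out
    X2, Y2 = np.meshgrid(x2, x2)

    # Q: quadratic phase on the input plane
    q1 = np.exp(1j * k / (2 * z) * (X1**2 + Y1**2))
    # F: single Fourier transform (centered grids)
    spec = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u1 * q1)))
    # V and Q: constant scale and quadratic phase on the output plane
    q2 = np.exp(1j * k * z) / (1j * wavelength * z) * np.exp(1j * k / (2 * z) * (X2**2 + Y2**2))
    return q2 * spec * dx**2, dx_out
```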
Image acquisition and reconstruction. In the experiment, 32 patterns were used for the setup with the Fresnel lens (
During the reconstruction, the data was split into batches, where each batch contained four SLM patterns and their corresponding measurements. All batches were individually processed in an NVIDIA Tesla K80 GPU with 12 GB RAM and averaged in each iteration.
Video recording for dynamic scenes. The LETO SLM provides a synchronization signal at 60 Hz, which is used to trigger the CMOS sensor. Due to the delay between sending a phase pattern and refreshing it on the SLM, the SLM patterns were changed at 20 Hz, and only the last of every three frames captured from the sensor was kept to ensure that the captured image was stable.
WISH Algorithm
Without loss of generality, let us consider the 1D case and ignore boundary effects. The forward model is
$$\sqrt{I_i} \;=\; \left| P_z\,\Phi_{\mathrm{SLM}}^{i}\, u \right| \;=\; \left| A_i\, u \right| \qquad (9)$$
For multiple SLM patterns, the measurement matrices can be stacked together:
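One way to write this stacking, assuming K patterns and numbering the equation (10) to fit the surrounding sequence, is:

$$A = \begin{bmatrix} A_1 \\ \vdots \\ A_K \end{bmatrix} = \begin{bmatrix} P_z\,\Phi_{\mathrm{SLM}}^{1} \\ \vdots \\ P_z\,\Phi_{\mathrm{SLM}}^{K} \end{bmatrix}, \qquad \sqrt{I} = \begin{bmatrix} \sqrt{I_1} \\ \vdots \\ \sqrt{I_K} \end{bmatrix} \qquad (10)$$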
To estimate the field u, the optimization becomes
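A least-squares form consistent with the stacked notation above (numbered (11) as an assumption, to fit the sequence):

$$\hat{u} \;=\; \arg\min_{u}\, \big\| \sqrt{I} - \left| A u \right| \big\|_2^2 \qquad (11)$$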
Since A is not a square matrix, the pseudo-inverse A+ is used here:
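The standard Moore-Penrose form (numbered (12) to fit the sequence):

$$A^{+} \;=\; \left(A^{*}A\right)^{-1} A^{*} \qquad (12)$$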
A* is the conjugate transpose of A. Based on the definition of Ai, the conjugate transpose is
$$A_i^{*} \;=\; \left(\Phi_{\mathrm{SLM}}^{i}\right)^{-1} P_z^{-1} \;=\; \left(\Phi_{\mathrm{SLM}}^{i}\right)^{-1} P_{-z} \qquad (13)$$
A good property of the propagation operator is that its inverse is the backward-propagation operator.
Now, if the complex field on the sensor is defined as $y_i = \sqrt{I_i}\, e^{j\theta_i}$, the recovered field û is written as follows:
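Assuming each $A_i$ is unitary ($P_z$ is a unitary propagation operator and $\Phi_{\mathrm{SLM}}^{i}$ is a unit-modulus diagonal matrix), $A^{*}A = K\,I$ and the pseudo-inverse reduces to an average; the resulting expression (numbered (14) to fit the sequence) is:

$$\hat{u} \;=\; A^{+} y \;=\; \frac{1}{K}\sum_{i=1}^{K} A_i^{*}\, y_i \;=\; \frac{1}{K}\sum_{i=1}^{K} \left(\Phi_{\mathrm{SLM}}^{i}\right)^{-1} P_{-z}\!\left(\sqrt{I_i}\, e^{j\theta_i}\right) \qquad (14)$$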
This formula says that the estimated field is the average over all measurements, each backward-propagated to the SLM plane and demodulated by its corresponding SLM pattern.
The iterative algorithm works as explained below (
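A minimal sketch of this alternating-projection loop is shown below, assuming numpy; the angular-spectrum propagator, wavelength, pixel pitch, and synthetic test data are placeholder assumptions, and the batching and GPU acceleration used in the experiments are omitted.

```python
import numpy as np

def prop(field, z, wavelength=532e-9, dx=3.45e-6):
    """Angular-spectrum propagation (placeholder wavelength and pixel pitch)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def wish_recover(intensities, slm_phases, z, n_iter=100):
    """Gerchberg-Saxton-style recovery of the field u on the SLM plane.

    intensities : list of K measured sensor images I_i
    slm_phases  : list of K SLM phase patterns (radians), same shape as I_i
    """
    amps = [np.sqrt(I) for I in intensities]
    u = np.ones_like(amps[0], dtype=complex)              # flat-field initialization
    for _ in range(n_iter):
        estimates = []
        for a, phi in zip(amps, slm_phases):
            slm = np.exp(1j * phi)
            y = prop(slm * u, z)                          # forward: SLM plane -> sensor plane
            y = a * np.exp(1j * np.angle(y))              # replace amplitude with the measurement
            estimates.append(np.conj(slm) * prop(y, -z))  # back-propagate and remove the SLM phase
        u = np.mean(estimates, axis=0)                    # average over all K measurements (Eq. 14)
    return u

# Tiny usage example with synthetic data
n, K, z = 128, 8, 30e-3
rng = np.random.default_rng(0)
u_true = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
phases = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(K)]
meas = [np.abs(prop(np.exp(1j * p) * u_true, z)) ** 2 for p in phases]
u_hat = wish_recover(meas, phases, z)
```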
Required Number of Measurements
To estimate the field u correctly, it is critical to choose the number of measurements K properly. Here, a quantitative evaluation of how K affects the recovered results is shown using 2-D simulations. Specifically, the unknown field is a 64×64 random complex matrix with a 1 μm pixel size. Both the SLM and the sensor have 512×512 pixels with a 1 μm pixel size. The propagation distance between the SLM and the sensor is 500 μm, and the numerical propagation is calculated by the angular spectrum method. Gaussian noise with a standard deviation of 0.01 is added to all measurements. The error is defined as follows:
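(The exact form of Eq. (15) is not reproduced in the text; a common choice consistent with its later use is the relative error up to a global phase, stated here as an assumption:)

$$\mathrm{err} \;=\; \min_{\varphi \in [0,\,2\pi)} \frac{\left\| \hat{u}\, e^{j\varphi} - u \right\|_2}{\left\| u \right\|_2} \qquad (15)$$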
As shown in
Wavefront Sensor Resolution Analysis
In order to analyze the sensor resolution, the structure of the forward model (Eq. 9) is considered. When the propagation distance z is short, Pz is a band matrix, since one pixel of the incident field will fall on a local region of the sensor after propagation. As z increases, the width of the band increases. When z is large enough that the Fraunhofer approximation is satisfied, Pz becomes a Fourier transformation. To keep the setup compact, z is kept short (~30 mm). ΦSLM is a diagonal matrix that performs element-wise multiplication with the field u. The combined matrix PzΦSLMi is called the measurement matrix Ai, which is essentially a weighted propagation matrix in which each column is multiplied by the phase from the SLM. To be able to recover an unknown pixel of u, multiple uncorrelated measurements of this pixel are obtained as different random SLM patterns are projected.
Three different scenarios relating the pixel sizes of the unknown field δfield, the SLM δSLM, and the sensor δsensor are discussed below to determine the resolution limit. The measurement matrices for the different conditions are plotted in
1) δSLM=δfield=δsensor
In this case, each diagonal element of ΦSLM can be changed independently, which means that the weighting on the measurement matrix can also be adjusted freely (
2) δSLM=M δfield=M δsensor
Since the SLM has a larger pixel size, each SLM pixel modulates M field pixels in the same way: M adjacent elements on the diagonal of ΦSLM are the same, and the measurement matrix is weighted block-wise. When M is small, each row is still modulated by more than one SLM pixel (
3) δsensor=M δfield=M δSLM
When the sensor pixel size is large, the recorded signal is the sum over all sub-pixel regions. In the measurement matrix, M rows are added together, as shown in
Based on the discussion 1) to 3), the resolution is
$$\delta_{field} \;=\; \min\!\left(\delta_{sensor},\, \delta_{SLM}\right) \qquad (16)$$
Next, 2-D simulations supporting the resolution analysis of some embodiments are discussed. In the simulation, the unknown field is a 64×64 random complex matrix with δfield=1 μm. The entire sensor size is 512 μm×512 μm. Propagation is simulated by the angular spectrum method. Gaussian noise with a standard deviation of 0.01 is added to all measurements. The error is defined in Eq. 15.
First, to show how the SLM pixel size affects the reconstruction, δsensor is fixed at 1 μm and the number of SLM patterns is fixed at 16, while δSLM and the propagation distance z are varied to observe the reconstruction error. As shown in
Second, to show how the sensor pixel size affects the reconstruction, δSLM is fixed at 1 μm and z is fixed at 500 μm, while δsensor and the number of SLM patterns are varied to observe the reconstruction error. Given a fixed sensor size, a larger pixel size means fewer measurements, leading to a larger reconstruction error. To increase the number of measurements, more SLM patterns are necessary for accurate reconstruction. Results are shown in
In one or more embodiments, for both simulation and reconstruction, sampling requirements have to be satisfied for accurate calculation. One special case is the representation of the large-aperture Fresnel lens.
The phase of the Fresnel lens can be regarded as a perfect quadratic phase with large aberrations. As with a quadratic phase, the spatial frequency increases linearly with the radius, which means a large aperture needs an extremely high sampling rate. For example, supposing the Fresnel lens in our experiment (D=3 inch, f=10 inch) has a perfect quadratic phase, a 40,000×40,000 complex matrix is needed to represent it without aliasing artifacts. Storing such a matrix in MATLAB requires over 25 GB of memory (40,000² double-precision complex values at 16 bytes each is about 25.6 GB), and operating on the matrix (e.g., performing an FFT) requires even more. Also, our wavefront sensor only has about 10-megapixel resolution. It is impossible to measure a Fresnel lens field that has 1.6 billion unknowns.
Therefore, to recover the field of the unknown object falling on the Fresnel lens plane, one key insight is that, for long-distance imaging where the distance between the object and the lens is much larger than the focal length of the lens, the object field contains much lower spatial frequencies on the Fresnel lens plane than the Fresnel lens itself does. Based on Eq. 4, the modulation of the Fresnel lens F is removed from the entire field P−z
Model for Scattering Media
In one or more embodiments, in the section on imaging through scattering media, rather than regarding the diffuser as a transmission matrix, which blindly maps inputs to outputs, the diffuser is physically modeled as a thin plane with random aberrations. In this way, the number of unknowns is dramatically reduced from O(N²) to O(N), where N is the resolution of the input field (i.e., N=10⁷). As a result, the number of input fields required for calibration is reduced from 10⁷ to 1, making the calibration feasible in the experiment. This thin-diffuser assumption is valid for scattering media such as thin tissue and a diffuse wall, indicating exciting applications in imaging beneath the skin and looking around corners. For more complicated scattering material, the diffuser may be modeled as a series of 2D scattering slices between which light propagates, which has been proven useful for 3D reconstructions. Combining a multi-slice light-propagation model with the WISH is an interesting direction for future work.
Field of View Evaluation
In one or more embodiments, when the incident light rays hit the WISH at a large angle, the Fresnel propagation (FP) and the phase shift of the SLM are not accurate. In the experiment, the largest FOV is about ±6°. Currently, the SLM is the main limit. As for the propagation model, as stated in the Methods section, although angular-spectrum propagation (ASP) is more accurate in theory, both FP and ASP gave nearly the same result in our current setup. This means that for the current setup the Fresnel approximation is still satisfied. Specifically, how the error from large incident angles affects the three applications is evaluated below.
Long-distance, diffraction-limited imaging with a Fresnel lens: In this case, the largest F-number used is 0.1 (as the FOV is about ±6°). However, since the main objective is long-distance imaging, the resolution is decided by the size of the Fresnel lens instead of the F-number. By scaling up both the focal length and the size of the Fresnel lens equally, the spatial resolution of the target may improve.
Imaging through scattering media: Although the diffuser diffracts light in all directions, the size of the SLM limits the FOV. Thus, the portion of light measured by the WISH has a small incident angle. Because of the random nature of the diffuser, signals from the object at all frequencies are collected, and a reasonably good estimation is obtained.
Lensless microscopy: Ideally, the sample may be brought closer to the SLM for higher resolution. In such a microscopic setting, if the sample size is much smaller than the SLM, the angle of the light from the sample to a particular pixel on the SLM is fixed. The phase shift of the SLM for this angle is recalibrated before being fed into the algorithm. Combined with ASP, a high-resolution reconstruction may be obtained. However, in the current setup, due to the beam splitter, the incident angle is small.
Reconstruction of 3D Objects
Although the current results focus on 2D targets, the method of one or more embodiments may be able to achieve 3D reconstruction. There are two ways to achieve this. First, depth is estimated based on the recovered amplitude: the field is back-propagated to multiple depths, and the correct depth is identified based on the gradient of the recovered amplitude (details are discussed below). As an example, three 2D bars (labeled A, B, and C) are located at 49.8 mm, 50 mm, and 50.2 mm. The size of each bar is 840 μm×210 μm.
For out-of-focus images, the blurring effect spreads intensity variations (i.e., it smooths boundaries and introduces fringes in smooth regions), which reduces the variance of the gradients. As shown in
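Before turning to the second approach, a sketch of this amplitude-based depth search is shown below, assuming numpy; the variance-of-gradient focus metric follows the description above, while the propagator parameters, candidate depths, and all names are illustrative placeholders.

```python
import numpy as np

def prop(field, z, wavelength=532e-9, dx=3.45e-6):
    """Angular-spectrum propagation (placeholder wavelength and pixel pitch)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def focus_metric(amplitude):
    """Variance of the gradient magnitude; in-focus planes show sharp edges."""
    gy, gx = np.gradient(amplitude)
    return np.var(np.hypot(gx, gy))

def estimate_depth(u_recovered, depths):
    """Back-propagate the recovered field to candidate depths and pick the sharpest plane."""
    scores = [focus_metric(np.abs(prop(u_recovered, -z))) for z in depths]
    return depths[int(np.argmax(scores))], scores

# Example: search over candidate depths around 50 mm
u_rec = np.ones((256, 256), dtype=complex)      # stand-in for a WISH-recovered field
candidates = np.linspace(49.5e-3, 50.5e-3, 11)
best_z, scores = estimate_depth(u_rec, candidates)
```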
Second, depth is estimated based on the recovered phase. To do so, the current setup of some embodiments is changed from the transmissive mode to the reflective mode, meaning that the incident light bounces back from the object instead of passing through it. Under these circumstances, the depth map is estimated from the phase distribution.
Embodiments may be implemented on a computing system. Any combination of mobile, desktop, server, router, switch, embedded device, or other types of hardware may be used. For example, as shown in
The computer processor(s) (1202) may be an integrated circuit for processing instructions. For example, the computer processor(s) may be one or more cores or micro-cores of a processor. The computing system (1200) may also include one or more input devices (1210), such as a touchscreen, keyboard, mouse, microphone, touchpad, electronic pen, or any other type of input device.
The communication interface (1212) may include an integrated circuit for connecting the computing system (1200) to a network (not shown) (e.g., a local area network (LAN), a wide area network (WAN) such as the Internet, mobile network, or any other type of network) and/or to another device, such as another computing device.
Further, the computing system (1200) may include one or more output devices (1208), such as a screen (e.g., a liquid crystal display (LCD), a plasma display, touchscreen, cathode ray tube (CRT) monitor, projector, or other display device), a printer, external storage, or any other output device. One or more of the output devices may be the same or different from the input device(s). The input and output device(s) may be locally or remotely connected to the computer processor(s) (1202), non-persistent storage (1204), and persistent storage (1206). Many different types of computing systems exist, and the aforementioned input and output device(s) may take other forms.
Software instructions in the form of computer readable program code to perform embodiments of the disclosure may be stored, in whole or in part, temporarily or permanently, on a non-transitory computer readable medium such as a CD, DVD, storage device, a diskette, a tape, flash memory, physical memory, or any other computer readable storage medium. Specifically, the software instructions may correspond to computer readable program code that, when executed by a processor(s), is configured to perform one or more embodiments of the disclosure.
The computing system (1200) in
Although not shown in
The nodes (e.g., node X (1222), node Y (1224)) in the network (1220) may be configured to provide services for a client device (1226). For example, the nodes may be part of a cloud computing system. The nodes may include functionality to receive requests from the client device (1226) and transmit responses to the client device (1226). The client device (1226) may be a computing system, such as the computing system shown in
The computing system or group of computing systems described in
Based on the client-server networking model, sockets may serve as interfaces or communication channel end-points enabling bidirectional data transfer between processes on the same device. Foremost, following the client-server networking model, a server process (e.g., a process that provides data) may create a first socket object. Next, the server process binds the first socket object, thereby associating the first socket object with a unique name and/or address. After creating and binding the first socket object, the server process then waits and listens for incoming connection requests from one or more client processes (e.g., processes that seek data). At this point, when a client process wishes to obtain data from a server process, the client process starts by creating a second socket object. The client process then proceeds to generate a connection request that includes at least the second socket object and the unique name and/or address associated with the first socket object. The client process then transmits the connection request to the server process. Depending on availability, the server process may accept the connection request, establishing a communication channel with the client process, or the server process, busy in handling other operations, may queue the connection request in a buffer until the server process is ready. An established connection informs the client process that communications may commence. In response, the client process may generate a data request specifying the data that the client process wishes to obtain. The data request is subsequently transmitted to the server process. Upon receiving the data request, the server process analyzes the request and gathers the requested data. Finally, the server process then generates a reply including at least the requested data and transmits the reply to the client process. The data may be transferred, more commonly, as datagrams or a stream of characters (e.g., bytes).
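As an illustration of this client-server socket flow, a minimal Python sketch is shown below; the loopback address, port number, and message contents are arbitrary placeholders.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007          # placeholder loopback address and port

def server():
    # Server process: create the first socket object, bind it, and listen
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()        # accept an incoming connection request
        with conn:
            request = conn.recv(1024)     # read the client's data request
            conn.sendall(b"reply to: " + request)   # gather and return the requested data

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                           # give the server a moment to start listening

# Client process: create a second socket object, connect, send a request, read the reply
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"data request")
    print(cli.recv(1024).decode())
```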
Shared memory refers to the allocation of virtual memory space in order to substantiate a mechanism for which data may be communicated and/or accessed by multiple processes. In implementing shared memory, an initializing process first creates a shareable segment in persistent or non-persistent storage. Post creation, the initializing process then mounts the shareable segment, subsequently mapping the shareable segment into the address space associated with the initializing process. Following the mounting, the initializing process proceeds to identify and grant access permission to one or more authorized processes that may also write and read data to and from the shareable segment. Changes made to the data in the shareable segment by one process may immediately affect other processes, which are also linked to the shareable segment. Further, when one of the authorized processes accesses the shareable segment, the shareable segment maps to the address space of that authorized process. Often, only one authorized process, other than the initializing process, may mount the shareable segment at any given time.
Other techniques may be used to share data, such as the various data described in the present application, between processes without departing from the scope of the disclosure. The processes may be part of the same or different application and may execute on the same or different computing system.
Rather than or in addition to sharing data between processes, the computing system performing one or more embodiments of the disclosure may include functionality to receive data from a user. For example, in one or more embodiments, a user may submit data via a graphical user interface (GUI) on the user device. Data may be submitted via the graphical user interface by a user selecting one or more graphical user interface widgets or inserting text and other data into graphical user interface widgets using a touchpad, a keyboard, a mouse, or any other input device. In response to selecting a particular item, information regarding the particular item may be obtained from persistent or non-persistent storage by the computer processor. Upon selection of the item by the user, the contents of the obtained data regarding the particular item may be displayed on the user device in response to the user's selection.
By way of another example, a request to obtain data regarding the particular item may be sent to a server operatively connected to the user device through a network. For example, the user may select a uniform resource locator (URL) link within a web client of the user device, thereby initiating a Hypertext Transfer Protocol (HTTP) or other protocol request being sent to the network host associated with the URL. In response to the request, the server may extract the data regarding the particular selected item and send the data to the device that initiated the request. Once the user device has received the data regarding the particular item, the contents of the received data regarding the particular item may be displayed on the user device in response to the user's selection. Further to the above example, the data received from the server after selecting the URL link may provide a web page in Hyper Text Markup Language (HTML) that may be rendered by the web client and displayed on the user device.
Once data is obtained, such as by using techniques described above or from storage, the computing system, in performing one or more embodiments of the disclosure, may extract one or more data items from the obtained data. For example, the extraction may be performed as follows by the computing system (1200) in
Next, extraction criteria are used to extract one or more data items from the token stream or structure, where the extraction criteria are processed according to the organizing pattern to extract one or more tokens (or nodes from a layered structure). For position-based data, the token(s) at the position(s) identified by the extraction criteria are extracted. For attribute/value-based data, the token(s) and/or node(s) associated with the attribute(s) satisfying the extraction criteria are extracted. For hierarchical/layered data, the token(s) associated with the node(s) matching the extraction criteria are extracted. The extraction criteria may be as simple as an identifier string or may be a query presented to a structured data repository (where the data repository may be organized according to a database schema or data format, such as XML).
The extracted data may be used for further processing by the computing system. For example, the computing system of
The computing system in
The user, or software application, may submit a statement or query into the DBMS.
Then the DBMS interprets the statement. The statement may be a select statement to request information, an update statement, a create statement, a delete statement, etc. Moreover, the statement may include parameters that specify data, or a data container (database, table, record, column, view, etc.), identifier(s), conditions (comparison operators), functions (e.g., join, full join, count, average, etc.), sort (e.g., ascending, descending), or others. The DBMS may execute the statement. For example, the DBMS may access a memory buffer, or reference or index a file for reading, writing, deletion, or any combination thereof, in responding to the statement. The DBMS may load the data from persistent or non-persistent storage and perform computations to respond to the query. The DBMS may return the result(s) to the user or software application.
The computing system of
For example, a GUI may first obtain a notification from a software application requesting that a particular data object be presented within the GUI. Next, the GUI may determine a data object type associated with the particular data object, e.g., by obtaining data from a data attribute within the data object that identifies the data object type. Then, the GUI may determine any rules designated for displaying that data object type, e.g., rules specified by a software framework for a data object class or according to any local parameters defined by the GUI for presenting that data object type. Finally, the GUI may obtain data values from the particular data object and render a visual representation of the data values within a display device according to the designated rules for that data object type.
Data may also be presented through various audio methods. In particular, data may be rendered into an audio format and presented as sound through one or more speakers operably connected to a computing device.
Data may also be presented to a user through haptic methods. For example, haptic methods may include vibrations or other physical signals generated by the computing system. For example, data may be presented to a user using a vibration generated by a handheld computer device with a predefined duration and intensity of the vibration to communicate the data.
The above description of functions presents only a few examples of functions performed by the computing system of
While the disclosure has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the disclosure as disclosed herein. Accordingly, the scope of the disclosure should be limited only by the attached claims.
This Application claims the benefit of U.S. Provisional Application 62/840,965 filed on Apr. 30, 2019.
This invention was made with Government Support under Grant Numbers IIS-1652633 and IIS-1730574 awarded by the National Science Foundation and Grant Numbers HR0011-16-C-0028 and HR0011-17-C-0026 awarded by the Defense Advanced Research Projects Agency. The government has certain rights in this invention.