Embodiments of the invention relate to image and video compression. More specifically, embodiments of the invention relate to the compression of light field image data as input for light field imaging systems.
Depth perception in the human visual system (HVS) relies on several depth cues. These cues can be categorized as either psychological (e.g., perspective, shading, lighting, relative size, occlusion and texture gradient) or physiological (e.g., vergence, accommodation, motion parallax and binocular disparity). While psychological depth cues provide a relative understanding of the depth in a light field, physiological depth cues provide absolute depth information. Commercially available three-dimensional (3D) displays often use a subset of the physiological depth cues to enhance the light field viewing experience.
Glasses-based 3D displays have been gaining popularity since the introduction of glasses-based 3D televisions (TVs) sold by all major TV manufacturers. A shortcoming of the currently available technology is, paradoxically, the 3D glasses themselves, which can be categorized as either active or passive. In general, glasses-based technology is known to be uncomfortable for viewers to use for long periods and poses challenges for people who require prescription glasses.
Existing autostereoscopic displays use directional modulators (such as parallax barriers or lenticular sheets) attached to a display surface to create a 3D effect without requiring glasses. Commercially available autostereoscopic displays typically use horizontal parallax to present 3D information to the viewer. Deficiencies of this form of display technology include a limited viewing angle and a limited resolution per view, each of which results in a lower quality 3D image. In addition, within the viewing angle of such displays, the user is required to keep his or her head vertical; otherwise, the 3D effect disappears.
Long viewing times on both glasses-based 3D displays and horizontal parallax-only light field displays typically cause discomfort due to a physiological effect known as “vergence accommodation conflict” (VAC). See, e.g., Hoffman, D., Girshick, A., Akeley, K. & Banks, M. (2008), “Vergence-accommodation conflicts hinder visual performance and cause visual fatigue”, Journal of Vision 8 (3), 33. VAC arises because the viewer's eyes remain focused on the display surface plane yet must converge away from it to perceive objects depicted at different depths, and this mismatch causes viewer discomfort.
A more natural 3D effect is achieved using full parallax 3D display technology. In addition to horizontal parallax, full parallax 3D display technology includes vertical parallax such that a vertical movement of the viewer provides a different view of the 3D scene. Full parallax displays generally provide at least an order of magnitude more views than horizontal parallax-only displays. Arranging these views densely creates a very natural 3D image that does not change when a user moves or tilts his or her head, and also eliminates VAC by providing correct accommodation and vergence cues. 3D displays that eliminate the VAC may be referred to as “VAC-free” 3D displays.
The main challenge associated with the aforementioned full parallax 3D displays is that the increase in modulated image resolution required to render full parallax 3D images with wide viewing angles creates a new impairment for the display system, namely, a dramatically increased amount of image data. The generation, acquisition, transmission and modulation (or display) of the very large image data sets required for a VAC-free full parallax light field display require data rates in the tens of terabits per second (Tbps).
A brief inspection of light field input images reveals ample inherent correlation between the light field data elements (known as holographic elements or “hogels”), and compression algorithms have been proposed in the prior art to exploit this correlation. See, e.g., M. Lucente, “Diffraction-Specific Fringe Computation for Electro-Holography”, Doctoral Thesis Dissertation, MIT Depart. of Electrical Engineering and Computer Science, September 1994. However, as can be appreciated by those skilled in the art, only a limited number of the compression methods described in the prior art can practically be implemented in real time, and none of these methods can render and/or compress the amount of data required to drive a full parallax VAC-free display in real time.
For example, currently, the most advanced video compression format, H.264/AVC, can compress ultra-high resolution video frames (4,096×2,304 pixels at 56.3 frames per second, or approximately 0.5 Gpixels/sec) at a data bit rate of approximately 3 Gbits/sec. See, e.g., ISO/IEC 14496-10:2003, “Coding of Audiovisual Objects—Part 10: Advanced Video Coding,” 2003, also ITU-T Recommendation H.264 “Advanced video coding for generic audiovisual services”. H.264/AVC fails to achieve the compression needed for usable transmission of light field image data, let alone when the light field is refreshed in real time at a 60 Hz video rate, where data rates can reach up to 86 Tbps.
Current compression standards do not exploit the high correlation that exists in both the horizontal and vertical directions of a full parallax light field image. New compression standards targeting 3D displays are being developed; nevertheless, they target horizontal parallax only and a limited number of views, and usually require an increased amount of memory and related computational resources. Compression algorithms must balance image quality, compression ratio and computational load. As a general rule, a higher compression ratio in an encoder increases the computational load, making real-time implementation difficult. If both high compression and decreased computational load are required, then image quality is sacrificed. A compression solution that is able to simultaneously provide high image quality, a high compression ratio and relatively low computational load is therefore highly desired.
Embodiments of the invention are illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
Various embodiments and aspects of the inventions will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative of the invention and are not to be construed as limiting the invention. Numerous specific details are described to provide a thorough understanding of various embodiments of the present invention. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present inventions.
Reference in the specification to “one embodiment”, “an embodiment” or “some embodiments” means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment. Random access refers to access (read/write) to a random offset of a file at least once during a read/write input/output operation.
Aspects of the invention introduce light field compression methods that overcome the drawbacks of the prior art, thereby making it feasible to create VAC-free full parallax 3D displays. Compressed light field imaging systems that utilize the compression methods of this invention reduce the required data rate, the processing requirements for both encoding and decoding, and the power consumption of the entire imaging system. Additional advantages of the invention will become apparent from the following detailed description of various embodiments thereof that proceeds with reference to the accompanying drawings.
As is known, the transmission of large data sets can be facilitated with the use of a compressed data format. In conventional light field systems, the entire light field is first captured, and then it is compressed (or encoded) using either conventional image/video compression algorithms or light-field specific encoders. The compressed data can then be transmitted, stored and/or reconditioned for the light field display, where it is decompressed (or decoded) and modulated (examples of prior art light field compression systems are disclosed in, for instance, U.S. Pat. No. 8,401,316 B2, and U.S. Publication No. US2013/0077880).
Light fields can be compressed using a multi-view compression (MVC) standard. See, e.g., A. Vetro, T. Wiegand, G. Sullivan, “Overview of the stereo and multiview video coding extensions of the H.264/MPEG-4 AVC standard”, Proceedings of the IEEE, vol. 99, no. 4, April 2011. Using the MVC standard, the hogels are interpreted as frames of a multi-view sequence and the disparity between images is estimated and encoded. The block-based disparity estimation generates inaccuracies that are encoded by a block-based encoder, and the compression performance grows linearly with the number of images.
To improve multi-view coding, new coding standards are considering the adoption of techniques from the field of computer vision. See, e.g., ISO/IEC JTC1/SC29/WG11, Call for Proposals on 3D Video Coding Technology, Geneva, Switzerland, March 2011. With the use of per-pixel depth information, reference images can be projected to new views, and the synthesized images can be used instead of the costly transmission of new images. This technique requires increased computational resources and local memory on the decoder side, posing a challenge for its real-time implementation. Prior art compression tools also target horizontal-only multiview sequences and do not exploit the geometric arrangement of integral images.
Methods developed exclusively for light field image compression include a vector quantization method described by Levoy et al., “Light Field Rendering”, Computer Graphics, SIGGRAPH 96 Proceedings, pp. 31-42, 1996, and video compression-based methods described by Magnor et al., “Data Compression for Light-Field Rendering”, IEEE Transactions on Circuits and Systems for Video Technology, v. 10, n. 3, April 2000, pp. 338-343. Vector quantization is limited and cannot achieve compression performance as high as that presented by Magnor et al., whose methods resemble a multiview compression algorithm in which the geometrical regularity of the images is exploited for disparity estimation. However, these methods require an increased amount of local memory and are not well suited for real-time implementation.
Along with the problem of image data compression, there is a related issue of image data acquisition. The generation of the entire light field for encoding requires large amounts of processing throughput and memory, and many samples may be discarded at the compression stage. A recently developed technique referred to as “Compressed Sensing” (CS) attempts to address this problem. The underlying principle behind Compressed Sensing is that a signal that is highly compressible (or, equivalently, sparse) in some transform domain can be minimally sampled using an incoherent basis and still be reconstructed with acceptable quality. See, e.g., Candès, E., Romberg, J., Tao, T., “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information”, IEEE Trans. Inform. Theory 52 (2006) 489-509. See also, e.g., David Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, Volume 52, Issue 4, April 2006, Pages: 1289-1306.
This new paradigm shifts the complexity from the acquisition to the reconstruction process, which results in the need for more complex decoders. This tendency is aligned with the trend of computational displays, which place computational capability directly in the display devices. Displays that have computational capacity and are able to deal directly with compressed image data are known to those skilled in the art of image processing and light field technology as “compressive displays”. See, e.g., Wetzstein, G., Lanman, D., Hirsch, M., Heidrich, W., and Raskar, R., “Compressive Light Field Displays”, IEEE Computer Graphics and Applications, Volume 32, Issue 5, Pages: 6-11, 2012; Heide, F., Wetzstein, G., Raskar, R. and Heidrich, W., “Adaptive Image Synthesis for Compressive Displays”, Proc. of SIGGRAPH 2013 (ACM Transactions on Graphics 32, 4), 2013. See also, e.g., S. Guncer, U.S. Publication No. US2010/0007804, Image Construction Method Based Video Display System, Jan. 14, 2010; S. Guncer, U.S. Publication No. US2010/0225679, Multi-Pixel Addressing Method for Video Display System, Sep. 9, 2010.
In Graziosi et al., “Depth assisted compression of full parallax light fields”, IS&T/SPIE Electronic Imaging, International Society for Optics and Photonics (Mar. 17, 2015), a synthesis method that targets light fields and uses both horizontal and vertical information was introduced. The method adopts aspects of a technique called Multiple Reference Depth-Image Based Rendering (MR-DIBR) and utilizes multiple references with associated disparities to render the light field. In this approach, the reference disparities are first forward warped to the target position. Next, a filtering method is applied to the warped disparities to mitigate artifacts such as cracks caused by inaccurate pixel displacement. The third step is the merging of all of the filtered warped disparities, in which pixels with smaller depths (i.e., those closest to the viewer) are selected. Finally, the merged elemental image disparity is used to backward warp color from the reference images and to generate the final synthesized elemental image.
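For illustration only, the following Python sketch outlines the four MR-DIBR synthesis steps described above (forward warping of reference disparities, crack filtering, depth-ordered merging, and backward warping of color). The array layout, the linear shift model, and all function names are assumptions made for this sketch and are not taken from the cited reference.

```python
import numpy as np

def forward_warp(ref_disp, dx, dy):
    # Step 1: forward-warp a reference disparity map to the target hogel
    # position; (dx, dy) is the reference-to-target offset on the hogel grid.
    h, w = ref_disp.shape
    warped = np.zeros((h, w), dtype=ref_disp.dtype)   # zeros mark holes/cracks
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip(np.round(xs + dx * ref_disp), 0, w - 1).astype(int)
    ty = np.clip(np.round(ys + dy * ref_disp), 0, h - 1).astype(int)
    # Where several source pixels land on the same target pixel, keep the
    # larger disparity, assumed here to correspond to the smaller depth
    # (i.e., the pixel closest to the viewer).
    np.maximum.at(warped, (ty, tx), ref_disp)
    return warped

def fill_cracks(disp, radius=1):
    # Step 2: simple crack filter -- fill empty pixels from the maximum
    # disparity found in a small neighborhood.
    h, w = disp.shape
    out = disp.copy()
    holes = out == 0
    padded = np.pad(disp, radius, mode="edge")
    for oy in range(-radius, radius + 1):
        for ox in range(-radius, radius + 1):
            shifted = padded[radius + oy:radius + oy + h,
                             radius + ox:radius + ox + w]
            out[holes] = np.maximum(out[holes], shifted[holes])
    return out

def merge(warped_disps):
    # Step 3: per-pixel merge of all filtered warped disparities, keeping
    # the candidate with the smallest depth (largest disparity).
    return np.maximum.reduce(warped_disps)

def backward_warp(ref_color, merged_disp, dx, dy):
    # Step 4: use the merged elemental-image disparity to fetch color from
    # the reference image and form the synthesized elemental image.
    h, w = merged_disp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs - dx * merged_disp), 0, w - 1).astype(int)
    sy = np.clip(np.round(ys - dy * merged_disp), 0, h - 1).astype(int)
    return ref_color[sy, sx]
```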
Prior art light field compression methods using depth image-based rendering (DIBR), while efficient for compression of elemental images, are unable to incorporate occlusion and hole-filling functions necessary to provide high quality light field images at acceptable compression ratios. An example of such a prior art DIBR compression method is disclosed in, for instance, U.S. Publication No. 2016/0360177 entitled, “Methods for Full Parallax Compressed Light Field Synthesis Utilizing Depth Information”, the entire contents of which are incorporated herein by reference.
As detailed in U.S. Publication No. 2016/0021355, “Preprocessor for Full Parallax Light Field Compression”, the disclosure of which is incorporated herein by reference, MR-DIBR enables the reconstruction of other perspectives from reference images and from reference disparity maps. Reference images and reference disparity maps are initially selected via a “visibility test” in one embodiment. The visibility test makes use of: 1) the distance of the objects from the modulation surface, and 2) the display's field of view (“FOV”), to determine and define the reference images and disparity maps used by the method.
In general, a scene that contains objects that are farther from the modulation surface tends to result in a smaller number of reference images and reference disparity maps as compared to a scene that contains objects that are closer to the modulation surface. Smaller numbers of reference images and reference disparity maps result in a higher compression ratio. In general, however, higher compression ratios also mean greater degradation in the decoded image.
Accordingly, the prior art fails to adequately address the need for high compression ratio, high quality, low computational load light field data compression as is required for practical implementation of VAC-free, full parallax, wide viewing angle 3D display technologies.
Aspects of the invention improve upon light field compression methods, for example compressed rendering and MR-DIBR. The general concept is to further compress the output of the light field compression method (e.g., reference elemental images and depth or disparity maps) as well as the residuals of synthesized elemental images using video compression methods such as High Efficiency Video Coding (HEVC).
In the prior art, reference images, reference disparity maps, and residuals are generally converted to seed images and seed disparity maps, and then further compressed before being sent to a display. In one aspect of the invention, reference images, reference disparity maps, and residuals are directly encoded without being converted to seed images and seed disparity maps. In addition, aspects of the invention include various methods for adjusting the bit rate by adjusting the amount of compression at different stages of the process.
According to one aspect of the invention, the method receives pre-processing information that includes subimages associated with a scene. The method performs a first compression operation on the pre-processing information to generate reference information. The method further performs a second compression operation on the reference information and residual information to output compressed information. The compressed information includes compressed reference information and compressed residual information.
According to another aspect of the invention, the method receives pre-processing information that includes subimages associated with a scene. The method performs a first compression operation on the pre-processing information to generate reference information. The method performs a second compression operation on the reference information to output compressed reference information. The method performs a first decompression operation on the compressed reference information to output first decompressed reference information. The method performs a second decompression operation on the first decompressed reference information to generate a first set of synthesized images.
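A minimal sketch of this two-stage flow is shown below, assuming placeholder callables for the MR-DIBR stage and the image/video codec stage; the function names and dictionary keys are illustrative only and do not correspond to any particular embodiment.

```python
def encode_light_field(preproc_info, mr_dibr_encode, video_encode):
    # First compression operation: MR-DIBR selects reference elemental
    # images (EIs), their disparity maps and the residuals of synthesized EIs.
    # Second compression operation: an image/video codec (e.g., HEVC)
    # encodes the references and the residuals.
    ref_eis, ref_disps, residuals = mr_dibr_encode(preproc_info)
    return {"ref_eis": video_encode(ref_eis),
            "ref_disps": video_encode(ref_disps),
            "residuals": video_encode(residuals)}

def decode_light_field(compressed, video_decode, mr_dibr_decode):
    # First decompression operation: the codec recovers references and
    # residuals.  Second decompression operation: MR-DIBR synthesizes the
    # remaining EIs, which the decoded residuals then correct.
    ref_eis = video_decode(compressed["ref_eis"])
    ref_disps = video_decode(compressed["ref_disps"])
    residuals = video_decode(compressed["residuals"])
    synthesized = mr_dibr_decode(ref_eis, ref_disps)
    return [s + r for s, r in zip(synthesized, residuals)]
```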
Pre-processing engine 105 may capture, acquire, receive, create, format, store and/or provide light field input data (or scene/3D data) 101, which may represent an object or a scene, to be utilized at different stages of a compression operation (as discussed in more detail herein below). To do so, pre-processing engine 105 may generate a priori (or pre-processing) information associated with light field input data 101, for example object locations in the scene, bounding boxes, camera sensor information, target display information and/or motion vector information. Moreover, in some embodiments, pre-processing engine 105 may perform stereo matching and/or depth estimation on the light field input data 101 to obtain a representation of the spatial structure of a scene, for example one or more depth maps (or disparity maps) and/or subimages (or subaperture images) associated with the object or scene.
In one embodiment, pre-processing engine 105 may convert the light field input data 101 from data space to display space of light field display device 111. Conversion of the light field input data 101 from data space to display space may be needed for the light field display device 111 to show light field information in compliance with light field display characteristics and the user (viewer) preferences. When the light field input data 101 is based on camera input, for example, the light field capture space (or coordinates) and the camera space (or coordinates) are typically not the same, and as such, the pre-processing engine 105 may need to convert the data from any camera's (capture) data space to the display space. This is particularly the case when multiple cameras are used to capture the light field and only a portion of the captured light field is included in the viewer preference space. This data space to display space conversion is done by the pre-processing engine 105 by analyzing the characteristics of the light field display device 111 and, in some embodiments, the user (viewer) preferences. Characteristics of the light field display device 111 may include, but are not limited to, image processing capabilities, refresh rate, number of hogels and anglets, color gamut, and brightness. Viewer preferences may include, but are not limited to, object viewing preferences, interaction preferences, and display preferences.
In one embodiment, pre-processing engine 105 may take the display characteristics and the user preferences into account and convert the light field input data 101 from data space to display space. For example, if the light field input data 101 includes mesh objects, then pre-processing engine 105 may analyze the display characteristics (such as number of hogels, number of anglets, and FOV), analyze the user preferences (such as object placement and viewing preferences), calculate bounding boxes, motion vectors, etc., and report such information to the light field display system 107. In one embodiment, data space to display space conversion may include data format conversion and motion analysis in addition to coordinate transformation. In one embodiment, data space to display space conversion may involve taking into account the position of the light modulation surface (display surface) of the light field display device 111, and the object's position relative to the display surface.
Compression logic 109 may receive the a priori (or pre-processing) information from pre-processing engine 105 for compression. For example, compression logic 109 may execute one or more compression methods at different stages using the a priori information in order to generate compressed information (e.g., reference and/or residual information). In one embodiment, the compression methods may be based on image-based rendering (IBR), depth image-based rendering (DIBR), and/or multiple-reference depth image-based rendering (MR-DIBR). In one embodiment, the compression methods may, additionally or alternatively, be based on one or more image compression standards such as Joint Photographic Experts Group (JPEG), JPEG 2000, JPEG XS, or video compression standards (also referred to as video compression methods, video compression algorithms, or video compression codecs), such as Moving Picture Experts Group (MPEG), H.264, High Efficiency Video Coding (HEVC), Theora, RealVideo, RV40, VP9, AV1, Audio Video Interleaved (AVI), Flash Video (FLV), RealMedia, Ogg, QuickTime, and/or Matroska. Compression logic 109 may then communicate the compressed information, for example over a network (not shown), such as the Internet or cloud service, to decompression logic 113 to perform decompression operations. In one embodiment, the compressed information may be stored in a storage device (not shown) to be retrieved (or loaded) by decompression logic 113. The storage device, for example, may be a hard disk drive (HDD), solid state device (SSD), read only memory (ROM), random access memory (RAM), or optical storage media.
As further shown in
In one embodiment, decompression logic 113 may execute one or more decompression methods on the compressed information, which may be retrieved from the storage device, in order to generate decompressed information (e.g., reference and/or residual information). Additionally or alternatively, decompression logic 113 may further decompress some of the decompressed information (e.g., reference information) to produce synthesized images (e.g., elemental images or hogel images). Using the synthesized images and part of the decompressed information (e.g., residual information), decompression logic 113 may reconstruct the original object or scene represented by light field input data 101. The reconstructed images of the object or scene may be transmitted to display logic 115 to display, modulate or render on light field display device 111. As with the compression methods previously discussed, in one embodiment, the decompression operations may be based on IBR, DIBR, and/or MR-DIBR. In one embodiment, the decompression operations may, additionally or alternatively, be based on one or more image compression standards such as JPEG, JPEG 2000, JPEG XS, or one or more video compression standards, such as MPEG, H.264, HEVC, Theora, RealVideo, RV40, VP9, AV1, AVI, FLV, RealMedia, Ogg, QuickTime, and/or Matroska.
It should be appreciated that while
Referring to
At block 202, the processing logic performs a first compression operation on the pre-processing information. For example, using depth maps and/or subimages (or subaperture images) from the pre-processing information, one or more light field compression methods (e.g., IBR, DIBR, or MR-DIBR) may be performed to generate reference information (as shown at block 203). The reference information, in one embodiment, may include reference images (e.g., elemental images or hogel images) and corresponding reference disparity maps.
Because significant similarities remain among the reference elemental images in DIBR, for example, further compression is possible to improve bandwidth efficiency. The same logic also applies to the disparity maps. The elemental images and disparity maps from different spatial/angular locations can be rearranged into successive sequences and treated as temporal frames to be encoded by a video codec, as sketched below.
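One possible rearrangement is sketched below in Python, assuming the reference EIs are addressed on a two-dimensional hogel grid; the serpentine scan order and the array shapes are illustrative choices, not requirements of any embodiment.

```python
import numpy as np

def grid_to_sequence(ei_grid):
    # Rearrange a (rows, cols, height, width, channels) grid of reference
    # elemental images into a (rows*cols, height, width, channels) pseudo-
    # temporal sequence.  A serpentine scan keeps spatially neighboring EIs
    # adjacent in "time", so a video codec can exploit their similarity with
    # inter-frame prediction.  The same reordering applies to the
    # corresponding disparity maps.
    rows = ei_grid.shape[0]
    frames = []
    for r in range(rows):
        row = ei_grid[r] if r % 2 == 0 else ei_grid[r, ::-1]
        frames.extend(row)
    return np.stack(frames)
```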
One of the biggest issues of any DIBR algorithm, however, is the generation of holes and cracks due to inaccuracies in depth values, round-off errors and object disocclusion. MR-DIBR significantly reduces holes by using multiple references; however, synthesized images can still differ from the original images. The differences between the original and estimated values of the synthesized elemental images are defined as residual images, which can also be encoded by a video codec. By encoding the reference elemental images, disparity maps, and residual images with a video codec, the overall distortion can range from lossy to lossless with corresponding bit rate tradeoffs in fine-grained steps.
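As a simple illustration, the residual of a synthesized elemental image can be formed and reapplied as in the following sketch, which assumes 8-bit image samples; the data types and clipping are assumptions of the sketch.

```python
import numpy as np

def compute_residual(original_ei, synthesized_ei):
    # Signed difference between the original EI and its MR-DIBR estimate,
    # kept in a wider integer type so a codec can quantize it anywhere from
    # lossy to numerically lossless.
    return original_ei.astype(np.int16) - synthesized_ei.astype(np.int16)

def apply_residual(synthesized_ei, residual):
    # Decoder side: add the (decoded) residual back onto the synthesized EI.
    corrected = synthesized_ei.astype(np.int16) + residual
    return np.clip(corrected, 0, 255).astype(np.uint8)
```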
Accordingly, at block 204, the processing logic performs a second compression operation on the reference information and residual information, for example residuals of synthesized images, such as synthesized elemental or hogel images. As previously described, one or more image compression standards such as JPEG, JPEG 2000, JPEG XS, or one or more video compression standards, such as MPEG, H.264, HEVC, Theora, RealVideo, RV40, VP9, AV1, AVI, FLV, RealMedia, Ogg, QuickTime, and/or Matroska, may be executed to compress (or encode) the reference information and residual information, thereby outputting compressed information (as shown at block 205), which may include compressed reference and residual information.
Referring to
Referring to
As shown in
As further shown in
As shown in
MR-DIBR encoder 603 may receive and compress one or more depth maps 601 and subaperture images 602 at an adjustable bit rate in order to generate reference EIs 604 and reference disparity maps 605. Reference EIs 604 and reference disparity maps 605 may be encoded by video encoder 606 (e.g., JPEG, JPEG 2000, or JPEG XS encoder, or MPEG, H.264, HEVC, Theora, RealVideo, RV40, VP9, AV1, AVI, FLV, RealMedia, Ogg, QuickTime, or Matroska encoder) at the adjustable bit rate to produce encoded (or compressed) information (which may include encoded reference EIs and encoded reference disparity maps). In one embodiment, video encoder 606 may include multiple HEVC encoders (e.g., two HEVC encoders) to encode the reference EIs 604 and reference disparity maps 605. The encoded information is communicated to bit rate calculator 607, video decoder 610, and decoding stage 550. Bit rate calculator 607 may serve to calculate an overall bit rate. For example, in one embodiment, output from each of the HEVC encoders may be added together to calculate the overall bit rate. The overall bit rate and PSNR from PSNR calculator 618 may be provided to bit rate allocator 608 to allocate (or determine) a bit rate for MR-DIBR encoder 603, video encoder 606, and video encoder 627.
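The following sketch illustrates, under stated assumptions, how an overall bit rate could be accumulated from the encoded streams and how a rate allocator might react to it; the one-step quantization parameter (QP) adjustment is a deliberately simplified stand-in for the allocation performed by bit rate allocator 608 and is not the actual allocation algorithm.

```python
def overall_bit_rate(encoded_streams, duration_s):
    # Sum the sizes (in bytes) of every encoded stream -- reference EIs,
    # reference disparity maps and residuals -- over the refresh period to
    # obtain the overall rate in bits per second.
    total_bits = sum(8 * len(stream) for stream in encoded_streams)
    return total_bits / duration_s

def adjust_qp(target_bps, measured_bps, current_qp, qp_min=0, qp_max=51):
    # Simplified allocation step: raise the quantization parameter when the
    # measured rate exceeds the target, lower it otherwise.  A real allocator
    # would also trade off the number of reference EIs and the residual
    # budget against the measured PSNR.
    step = 1 if measured_bps > target_bps else -1
    return min(qp_max, max(qp_min, current_qp + step))
```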
Still referring to
Using the disclosed encoding scheme, an important system parameter is the rate distribution among the various encoding blocks, for example the MR-DIBR encoder 503 and video encoder 508. In one embodiment, adjustable parameters in MR-DIBR encoder 503 and MR-DIBR encoder 603 may include the number of reference EIs and corresponding disparity maps. In one embodiment, adjustable parameters in the video encoders 508, 606 and 627 are those intrinsic to an image encoding algorithm (e.g., JPEG, JPEG 2000, or JPEG XS) or video encoding algorithm (e.g., MPEG, H.264, HEVC, Theora, RealVideo, RV40, VP9, AV1, AVI, FLV, RealMedia, Ogg, QuickTime, or Matroska). In one embodiment, adjustable parameters in video encoder 508 for the synthesized EI residuals include the HEVC intrinsic parameters as well as the decision of whether to encode or ignore the residual errors. In theory, it may be desirable to allocate relatively more of the rate budget to encoding the MR-DIBR reference elemental images and corresponding disparity maps (e.g., in MR-DIBR encoders 503 and 603), because errors in the reference images and disparity maps propagate to the synthesized images. In practice, the optimal allocation varies by (types of) light field images. Moreover, different implementations of the proposed combined MR-DIBR and video (e.g., HEVC) encoding may exist and provide tradeoffs of complexity versus performance (in terms of rate-distortion) based on user-defined selections of performance metrics.
It should be appreciated that while the foregoing discussion may imply that the number of residuals is equal to the total number of images to be synthesized from the reference images and disparity maps, this is not necessarily the case. In some embodiments, the number of residuals can be smaller than the number of images that need to be synthesized. For example, if there is a quality threshold requiring all synthesized images to be above 30 dB, only the images that fall below the threshold may receive residuals to improve their quality to above 30 dB, while the synthesized images that already have a PSNR greater than 30 dB do not receive any residuals. The number of residuals can also be determined by other factors, for example, raising the quality of a certain number of the lowest quality synthesized images, such as 10 images or 10% of the number of synthesized images.
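A minimal sketch of such threshold-based residual selection follows, with the 30 dB threshold and optional cap given only as examples.

```python
def select_residual_targets(psnr_by_view, threshold_db=30.0, max_count=None):
    # Only synthesized views whose PSNR falls below the quality threshold
    # receive residuals; optionally cap the count (e.g., the N lowest-quality
    # views) to respect a bit rate budget.
    below = sorted(((psnr, view) for view, psnr in psnr_by_view.items()
                    if psnr < threshold_db), key=lambda item: item[0])
    chosen = [view for _, view in below]
    return chosen if max_count is None else chosen[:max_count]
```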
In another embodiment, the residuals are used in a manner similar to the reference images and reference disparity maps. Once the images that have reference residuals are synthesized, they may become reference images for the unsynthesized images (e.g., hogels, subimages, subaperture images, or elemental images) placed in close proximity to them.
In some embodiments, one way to select these reference residuals is to place them between the original reference images (e.g., hogels, elemental images). This is because the synthesis error tends to become larger as the synthesized images (e.g., hogels, elemental images) lie farther from the reference images. By placing a reference residual between every pair of reference images, the total synthesis error in the process may be reduced, for example, by about 50%. If two reference residuals are placed between any two reference images, the synthesis error may be reduced, for example, by about 66%.
In some embodiments, different methods can be used to determine the number of reference residuals required in the encoding operation. A first method is to add as many residuals as necessary to reduce the synthesis error in the scene to less than 0.5 pixels. A synthesis error of less than 0.5 pixels can be considered lossless. This is achieved by the following formula:
#residual references=[(maximum synthesis error in units of pixels)/0.5]−1
If the image distance between the two reference images is smaller than the number found by the above formula, then all the possible residuals should be used.
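A small worked sketch of this first method follows; reading the bracket in the formula as a ceiling is an assumption of this sketch, as is the clamping to the number of positions available between the two references.

```python
import math

def residual_reference_count(max_synthesis_error_px, positions_between_refs):
    # Number of residual references needed to bring the synthesis error
    # below 0.5 pixel (treated as lossless), clamped to the number of image
    # positions available between the two reference images.
    n = math.ceil(max_synthesis_error_px / 0.5) - 1
    return min(n, positions_between_refs)

# Example: a maximum synthesis error of 2 pixels gives ceil(2 / 0.5) - 1 = 3
# residual references, provided at least 3 positions lie between the references.
```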
A second method to determine the number of reference residuals uses the bit rate requirement. If the bit rate demands significant compression, the number of reference residuals can be kept relatively low, and if the bit rate demands a small amount of compression then the number of reference residuals may be increased.
In another embodiment, residuals for the synthesized images may include a disparity map component in addition to the image component. The disparity component of the residual is calculated by subtracting the warped disparity from the original disparity data.
As shown in
Typically, the input/output devices 1510 are coupled to the system through input/output controllers 1509. The volatile RAM 1505 is typically implemented as dynamic RAM (DRAM) which requires power continuously in order to refresh or maintain the data in the memory. The non-volatile memory 1506 is typically a magnetic hard drive, a magnetic optical drive, an optical drive, or a DVD RAM or other type of memory system which maintains data even after power is removed from the system. Typically, the non-volatile memory will also be a random access memory, although this is not required.
While
The processes or methods depicted in the preceding figures may be performed by processing logic that comprises hardware (e.g. circuitry, dedicated logic, etc.), software (e.g., embodied on a non-transitory computer readable medium), or a combination of both. Although the processes or methods are described above in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially.
Embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of embodiments of the invention as described herein.
In the foregoing specification, embodiments of the invention have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the invention as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
This application claims the benefit of U.S. Provisional Application No. 62/514,521 filed on Jun. 2, 2017, the disclosure of which is incorporated by reference herein.