The present disclosure relates to systems and methods for compressing and decompressing data, for example to increase an effective capacity of storage media or to decrease the bandwidth used for transmitting data over a communications medium.
As storage capacity and network bandwidth have increased, so has the demand for them. One approach to accommodating this increased demand is data compression.
Methods, systems and apparatuses for hybrid encoding and decoding of binary data are disclosed. In some embodiments, a data encoding system includes a non-transitory memory, a processor, a digital-to-analog converter (DAC) and a transmitter. The non-transitory memory stores a predetermined file size threshold. The processor is in operable communication with the memory, and is configured to receive a first data. The processor detects a file size associated with the first data. When the file size is below the predetermined file size threshold, the processor compresses the first data using a variable length codeword (VLC) encoder, to generate a second data. When the file size is not below the predetermined file size threshold, the processor compresses the first data using a hash table algorithm, to generate a second data. The DAC is configured to receive a digital representation of the second data from the processor and convert the digital representation of the second data into an analog representation of the second data. The transmitter is coupled to the DAC and is configured to transmit the analog representation of the second data.
In some embodiments, a method includes receiving a first data, and selecting one of a VLC encoder or a hash table algorithm from a memory, based on a size of the first data, the memory storing both the VLC encoder and the hash table. The method also includes transforming, using the selected one of the VLC encoder or the hash table algorithm, the first data into a second data including a compressed version of the first data. The method also includes sending a digital representation of the second data to a converter that causes the second data to be transmitted (e.g., via one or more of a wireless transmission, a wired transmission, or an optical transmission) after receiving the second data. When the VLC encoder is selected, the method can also include storing an uncompressed version of the first data.
Example features, structure and operation of various embodiments are described in detail below with reference to the accompanying drawings.
The more storage capacity and network bandwidth become available, the greater the demand for them, and the more useful data compression becomes. Data compression techniques can be divided into two major categories: lossy and lossless. Lossless data compression techniques are employed when it is particularly important that no information is lost in the compression/decompression process. Lossy data compression techniques are typically employed in processing applications, such as the transmission and storage of digital video and audio data, that can tolerate some information loss (e.g., since human vision is forgiving of potential artifacts). Lossy data compression techniques typically yield greater compression ratios than their lossless counterparts. Over the past 30 years, lossy data compression methods have gained tremendous importance for their use in video conferencing, streaming to a wide variety of devices, and home entertainment systems. Most other applications employ lossless data compression techniques.
For applications using data types such as video, it is possible to achieve compression ratios of 150:1 for Quarter Common Intermediate Format (QCIF) @15 fps over 64 Kbps (typically used in wireless video telephony applications) or 1080p High Definition (HD) @60 fps at 20 Mbps over broadband networks. These applications typically use the modern International Telecommunication Union (ITU) H.264 video compression standard, resulting in high quality video. However, for data types/files such as documents, spreadsheets, database files, etc., lossless data compression is generally strongly preferred. Compression ratios for lossless methods are typically much lower than those for lossy methods. For example, lossless compression ratios can range from 1.5:1 for arbitrary binary data files, to 3.0:1 for files such as text documents, in which there is substantially more redundancy.
Transmitting compressed data takes less time than transmitting the same data without first compressing it. In addition, compressed data uses less storage space than uncompressed data. Thus, for a device with a given storage capacity, more files can be stored on the device if the files are compressed. As such, two of the primary advantages for compressing data are increased storage capacity and decreased transmission time.
Embodiments of the present disclosure set forth novel methods for accomplishing data compression in lossless and/or lossy contexts. For example, methods, systems and apparatuses for hybrid encoding and decoding of binary data are disclosed. In some embodiments, a data encoding system includes a non-transitory memory, a processor, a digital-to-analog converter (DAC) and a transmitter. The non-transitory memory stores a predetermined file size threshold. The processor is in operable communication with the memory, and is configured to receive a first data. The processor detects a file size associated with the first data. When the file size is below the predetermined file size threshold, the processor compresses the first data using a variable length codeword (VLC) encoder, to generate a second data. When the file size is not below the predetermined file size threshold, the processor compresses the first data, using a hash table algorithm, to generate a second data. The DAC is configured to receive a digital representation of the second data from the processor and convert the digital representation of the second data into an analog representation of the second data. The transmitter is coupled to the DAC and is configured to transmit the analog representation of the second data.
Data compression techniques typically employ a branch of mathematics known as information theory. Data compression is linked to the field of information theory because of its concern with redundancy. If the information represented/encoded by a message is redundant (where redundant information is defined as information whose omission does not reduce the information encoded in the output file), the message can be shortened without losing the information it represents.
Entropy (or “Shannon entropy”) is a term that can be used to convey how much information is encoded in a message. A message having high entropy may be said to contain more information than a message of equal length/size having low entropy. The entropy of a symbol in a message can be defined as the negative logarithm of its probability of occurrence in the message. The information content of a character, in bits, is expressed as the entropy using base-two logarithms:
E_symbol(X) = −log2(probability_of_symbol(X))
The entropy of an entire message, which is equivalent to the average minimum number of bits (H(X)) used to represent a symbol, is the probability-weighted sum of the entropy of each symbol occurring in the message:
H(X) = −Σi [Pi × log2(Pi)]
Given a symbol set {A, B, C, D, E}, where the symbol occurrence frequencies (Pi) are:
{A=0.5, B=0.2, C=0.1, D=0.1, E=0.1},
the average minimum number of bits used to represent one of these symbols is:
H(X) = −[0.5 log2(0.5) + 0.2 log2(0.2) + (0.1 log2(0.1) × 3)]
H(X) = −[−0.5 + (−0.46439) + (−0.99658)]
H(X) = −[−1.96097]
H(X) ≈ 1.96
Rounding up gives 2 bits per symbol. Thus, as an example, a 10-character string, AAAAABBCDE, is optimally encoded using 20 bits. Such an encoding allocates shorter bit sequences to the more frequently occurring symbols (e.g., A and B) and longer bit sequences to the infrequent symbols (C, D, E).
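The calculation above can be reproduced directly. The following is an illustrative sketch (the function and variable names are ours, not from the disclosure):

```python
import math
from collections import Counter

def shannon_entropy_bits(message: str) -> float:
    """H(X) = -sum(Pi * log2(Pi)): average minimum bits per symbol."""
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in Counter(message).values())

# The 10-character example from the text, whose symbol frequencies
# match the probabilities {A=0.5, B=0.2, C=0.1, D=0.1, E=0.1}:
h = shannon_entropy_bits("AAAAABBCDE")   # ~1.96 bits/symbol
total_bits = math.ceil(h) * 10           # 2 bits/symbol * 10 symbols = 20 bits
```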
Although in the foregoing example, from A Guide to Data Compression Methods by Solomon (2013), the contents of which are incorporated by reference herein in their entirety for all purposes, the frequency of the symbols happens to match their frequency in the string, this will often not be the case in practice. Thus, there are two ways to apply the Shannon entropy equation (which provides a lower bound for the compression that can be achieved):
A variant on the above technique, known as dictionary coding, uses a slightly different approach to data compression. In approaches using dictionary coders (also referred to as "substitution coders"), one or more portions of the data to be compressed are first scanned to determine which characters, or character strings, occur most frequently. The identified characters and character strings are placed in a dictionary and assigned predetermined codes having code lengths that are inversely proportional to the probability of occurrence of the characters, or character strings. The characters and character strings are read from the data file, matched up with their appropriate dictionary entry, and coded with the appropriate code. A variant of the dictionary coding scheme adapts the dictionary based on changing frequencies of occurrence of characters and character strings in the data. A few of these dictionary-based algorithms are described in further detail below.
In addition to dictionary coding, two well-known, pioneering lossless data compression methods are Huffman coding and arithmetic coding. These methods are considered near-optimal according to Shannon's source coding theorem, with arithmetic coding typically having a slight edge over Huffman coding in terms of compression ratio. However, Huffman coding is significantly more efficient than arithmetic coding in terms of encoding/decoding times.
Huffman coding is based on the frequency of occurrence of a symbol within a given message. The principle is to use a lower number of bits to encode the data that occurs more frequently. The average length of a Huffman code depends on the statistical frequency with which the source produces each symbol from its alphabet. A Huffman code dictionary, which associates each data symbol with a codeword, has the property that no codeword in the dictionary is a prefix of any other codeword in the dictionary. The basis for this coding is a code tree that assigns short codewords to frequently occurring symbols and long codewords to symbols that are rarely used. An example Huffman tree is provided in
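The tree construction can be sketched briefly: repeatedly merge the two lightest subtrees, prepending a 0 bit to codewords on one side and a 1 bit on the other. This is a minimal illustration (using Python's heapq; names are ours), not the disclosure's implementation:

```python
import heapq
from collections import Counter

def huffman_codes(message: str) -> dict:
    """Build a prefix-free code: frequent symbols get short codewords."""
    freq = Counter(message)
    if len(freq) == 1:                      # degenerate one-symbol message
        return {next(iter(freq)): "0"}
    # Heap entries: (subtree weight, tiebreaker, {symbol: codeword-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # merge the two lightest subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in left.items()}
        merged.update({s: "1" + w for s, w in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("AAAAABBCDE")  # "A" gets the shortest codeword
```

Encoding the example string with these codes uses exactly 20 bits, matching the entropy bound computed earlier.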
Arithmetic coding bypasses the idea of replacing an input symbol with a specific code. Instead, it takes a stream of input symbols and replaces it with a single floating-point number in the range of 0 to 1. The number of bits used to encode each symbol varies according to the probability assigned to that symbol. Low probability symbols may use many bits, while high probability symbols use fewer bits. During arithmetic coding, each symbol is assigned to an interval. Starting with the interval [0 . . . 1), each interval is divided into several subintervals having sizes proportional to the probability of their corresponding symbols.
The subinterval from the coded symbol is then taken as the interval for the next symbol. The output is the interval of the last symbol. Arithmetic coding is model-based, in that it relies on a model to characterize the symbols it is processing (i.e., to tell the encoder what the probability of a symbol is in the message). If the model produces an accurate probability of the symbols in the message, the symbols will be encoded very close to optimally. If, however, the model produces an inaccurate probability of the symbols in the message, the encoder may actually increase the size of the message, rather than compress it.
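The interval-narrowing process above can be sketched with exact fractions. This is a toy model only (practical arithmetic coders use fixed-precision integer arithmetic with renormalization, and an adaptive model rather than the fixed probabilities assumed here):

```python
from fractions import Fraction

# Assumed fixed symbol probabilities for illustration
PROBS = {"A": Fraction(1, 2), "B": Fraction(1, 4), "C": Fraction(1, 4)}

def arithmetic_encode(message: str):
    """Narrow [0, 1) once per symbol; subinterval width is proportional
    to the symbol's probability."""
    starts, cum = {}, Fraction(0)
    for s, p in PROBS.items():        # cumulative start of each subinterval
        starts[s] = cum
        cum += p
    low, width = Fraction(0), Fraction(1)
    for s in message:
        low += width * starts[s]
        width *= PROBS[s]
    return low, low + width           # any number in [low, high) encodes the message

def arithmetic_decode(x: Fraction, n: int) -> str:
    out = []
    for _ in range(n):                # find the subinterval containing x, rescale
        cum = Fraction(0)
        for s, p in PROBS.items():
            if cum <= x < cum + p:
                out.append(s)
                x = (x - cum) / p
                break
            cum += p
    return "".join(out)

low, high = arithmetic_encode("AB")   # interval [1/4, 3/8)
```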
A popular dictionary coding algorithm, known as LZ77, was published in “A Universal Algorithm for Sequential Data Compression,” IEEE Transactions on Information Theory (May 2, 1977) by Abraham Lempel and Jacob Ziv, the content of which is incorporated by reference herein in its entirety for all purposes. The LZ77 algorithm uses a sliding window across the data to be compressed. The window contains a dictionary, a byte to be compressed and a “look ahead buffer” that slides to the right, as shown in
As shown in the first row of
One of many algorithms derived from LZ77 is known as the Lempel-Ziv-Welch (LZW) algorithm. It was originally developed by Ziv and Lempel, and was subsequently improved by Welch. Popular text compressors such as “Zip” and the Unix file compression utility “Compress” are based on LZW. LZW is also used in the popular GIF image format. Although the compression ratios achieved with LZW are lower than those for other compression algorithms, such as Huffman and arithmetic encoding discussed above, it remains popular due to its ease of implementation. LZW compression uses a code table, for example including 4096 codes. Codes 0-255 in the code table are assigned to represent single bytes from the input message. When encoding begins, the code table contains only the first 256 entries, with the remainder of the table being blank/empty. Compression is then accomplished using codes 256 through 4095, to represent the sequences of bytes. As the encoding proceeds, LZW identifies repeated sequences in the message, and adds them to the code table. Decoding is later performed by reading each code from the compressed file and translating it based on the code table, to identify the character or characters it represents.
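The LZW scheme described above can be sketched compactly. The following is an illustrative byte-oriented implementation with the 4096-entry (12-bit) code table mentioned in the text; it is a generic LZW sketch, not code from the disclosure:

```python
def lzw_compress(data: bytes) -> list:
    """Codes 0-255 represent single bytes; repeated sequences get codes >= 256."""
    table = {bytes([i]): i for i in range(256)}
    next_code, out, seq = 256, [], b""
    for b in data:
        candidate = seq + bytes([b])
        if candidate in table:
            seq = candidate                  # keep extending the match
        else:
            out.append(table[seq])
            if next_code < 4096:             # 12-bit code table, as in the text
                table[candidate] = next_code
                next_code += 1
            seq = bytes([b])
    if seq:
        out.append(table[seq])
    return out

def lzw_decompress(codes: list) -> bytes:
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = table[codes[0]]
    out = bytearray(prev)
    for code in codes[1:]:
        # The only code that can be missing is the one about to be defined
        entry = table[code] if code in table else prev + prev[:1]
        out += entry
        if next_code < 4096:
            table[next_code] = prev + entry[:1]
            next_code += 1
        prev = entry
    return bytes(out)

sample = b"TOBEORNOTTOBEORTOBEORNOT"
codes_out = lzw_compress(sample)             # fewer codes than input bytes
```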
A variety of other lossless data compression methods are set forth, for example, in History of Lossless Data Compression Algorithms, 2014, accessible at https://ethw.org/History_of_Lossless_Data_Compression_Algorithms, the content of which is incorporated by reference herein in its entirety for all purposes.
Encoders of the present disclosure compress data using a hybrid hash table/Variable Length Codeword (VLC) encoder, to achieve levels of data compression that are understood to have heretofore never been accomplished. The hybrid encoder invokes one of at least two algorithms, e.g., selected empirically based on the file size of the data to be compressed. Files determined to have a small size are compressed using the VLC algorithm, and files determined to have a large size are compressed using the hash table algorithm. Methods, systems and apparatus are disclosed herein for reducing the size of strings of binary data. In some embodiments, a method of removing redundancy from a stream of binary data includes parsing a predetermined number of bits from a received stream of binary data, and assigning either hash table or VLC codewords to segments extracted from the binary data. In other embodiments, a method of compressing a stream of binary data can include parsing a predetermined number of bits from a received stream of binary data, and assigning either fixed-length or variable-length codewords to symbols extracted from the binary data. In both such embodiments, the system is adaptive in that the hash table encoder's dictionary is updated for each codeword produced, and the VLC's table is tuned based on the statistics of the symbols in the stream of binary data. The hash table algorithm assigns a fixed length hash to a programmable number (e.g., four or more) of input bytes, thereby improving the compression ratio. The VLC encoder replaces input bytes with codewords, where short codewords are substituted for the most frequently occurring bytes, and longer codewords are substituted for less frequently occurring symbols. Systems of the present disclosure can also include decoders, and methods for decompressing and reproducing a copy of the original, uncompressed strings of binary data are also set forth herein.
In some embodiments, a system includes at least a processor, a memory, and a CODEC for compressing a raw data stream or decompressing a compressed data stream, in either case received from or originating at a file or network. The system is configured to receive a binary string of data and partition the binary string into one or more binary segments prior to compression. The system then compresses the binary segments using a hybrid hash table/VLC encoder. In other words, the hybrid encoder includes both a hash table encoder and a VLC encoder. The hash table encoder compresses data by assigning fixed length codewords to one or more bytes of the input (binary segment) data. The VLC encoder assigns short codewords to frequently occurring bytes of the input data, and assigns longer codewords to less frequently occurring bytes. The output of both the hash table encoder and the VLC encoder can be saved to a file and/or transmitted via wired or wireless network transmission. The system can also include a decoder configured to receive hybrid hash table/VLC encoded bitstreams from a file or network, and to reproduce a binary string of data identical to the binary string of data that was originally input to the encoder. The system can optionally interface with a machine learning platform.
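The size-based dispatch between the two encoders can be sketched as follows. The threshold value and the placeholder encoder bodies are illustrative assumptions only; the disclosure states that the threshold is predetermined and stored in memory, and that it may be selected empirically:

```python
SIZE_THRESHOLD = 64 * 1024   # bytes; an assumed value for illustration

def vlc_encode(data: bytes) -> bytes:
    return b"V" + data       # placeholder for the VLC encoder

def hash_table_encode(data: bytes) -> bytes:
    return b"H" + data       # placeholder for the hash table encoder

def hybrid_compress(data: bytes) -> bytes:
    """Files below the threshold take the VLC path; larger files take the
    hash table path, per the hybrid encoder described in the text."""
    encoder = vlc_encode if len(data) < SIZE_THRESHOLD else hash_table_encode
    return encoder(data)
```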
The VLC encoder 526 can define codewords for each byte to be encoded in accordance, for example, with a format defined by Table 1:
The prefix code and the VLC, collectively, define the codeword. The prefix code can be used to identify how many bits are in the VLC:
Note that only the first bit is used for the last entry in the table. It is used to tell the decoder which algorithm is being used.
In some embodiments, a first predetermined number (e.g., 20) of VLCs are defined in a dictionary, an example excerpt of which is provided in Table 3 below:
As shown in Table 3, each of the Prefix Code and the VLC Code is associated with a byte being coded. The bytes listed in Table 3 match the bytes shown in the Huffman Table of
In some embodiments, a pre-compiled VLC table can be used for encoding and/or decoding of data or data segments. Alternatively, instead of using a pre-compiled table, a table can be defined and populated by adding new bytes thereto as incoming bytes are processed. To maintain compression performance, the VLC encoder can dynamically update the table by shifting a table reference associated with the most recently encoded byte towards the top of the table (i.e., where codeword lengths are shorter). As shown in Tables 4-5 below, the “A” is coded (Table 4) and then moves up (or “bubbles” up) one row in the table (Table 5) by swapping the “A” with the “T” above it.
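The "bubble up" update described above can be sketched as a single swap per encoded symbol. The table contents and the use of row indices in place of actual codewords are illustrative simplifications:

```python
def vlc_encode_adaptive(message: str, table: list) -> list:
    """Emit each symbol's row index, then swap it one row toward the top,
    where codeword lengths are shorter."""
    out = []
    for sym in message:
        row = table.index(sym)
        out.append(row)            # stands in for the codeword at this row
        if row > 0:                # "bubble up": swap with the entry above
            table[row - 1], table[row] = table[row], table[row - 1]
    return out

table = list("TACG")                       # illustrative table, top row first
rows = vlc_encode_adaptive("AA", table)    # "A" encodes at row 1, then row 0
```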
In some embodiments, a VLC encoding process includes:
An example VLC decoder dataflow, according to some embodiments, is provided in
In some embodiments, a VLC decoding process includes:
(1) Select a decoder table specified in the header of the bitstream
(2) Receive or retrieve a prefix code from the input
(3) Detect, based on the prefix code, a number of bits to read
(4) Read the bits associated with the detected number of bits
(5) Query the VLC lookup table based on the bits
(6) Retrieve the decoded byte
(7) Save/store and/or transmit the decoded byte
(8) Update the VLC lookup table
(9) Repeat steps (2)-(8) until the bitstream is exhausted
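Steps (2)-(6) of the process above can be sketched as follows. The prefix-to-length and codeword lookup tables shown here are illustrative placeholders (the actual assignments of Tables 2-3 are not reproduced), and the table update of step (8) is omitted:

```python
# Illustrative stand-ins for the decoder tables:
PREFIX_BITS = {"0": 2, "10": 3}                  # prefix code -> VLC bit count
LOOKUP = {("0", "00"): "E", ("0", "01"): "T", ("10", "000"): "A"}

def vlc_decode(bitstream: str) -> str:
    out, i = [], 0
    while i < len(bitstream):
        prefix = bitstream[i]                    # (2) read the prefix code
        i += 1
        if prefix not in PREFIX_BITS:            # two-bit prefix
            prefix += bitstream[i]
            i += 1
        n = PREFIX_BITS[prefix]                  # (3) detect bits to read
        bits = bitstream[i:i + n]                # (4) read them
        i += n
        out.append(LOOKUP[(prefix, bits)])       # (5)-(6) look up the byte
        # (7)-(8): storing output and updating the table are omitted here
    return "".join(out)

decoded = vlc_decode("000" + "001" + "10000")    # "E", "T", "A"
```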
In some embodiments, a hash table algorithm performs compression by replacing long sequences of bytes with a hash value, where each hash value is shorter than the length of the associated byte sequence. The hash table forms part of the overall hybrid CODEC, in that the hash table is selectively used when (1) the length of the byte sequence to be encoded is above a preselected value, and (2) the hash is found in a hash table dictionary. When conditions (1) and (2) are not met, the VLC encoding described above (e.g., with reference to
As shown in Table 6, the first bit of the codeword indicates that the hash table algorithm is being used for this codeword. The next 4 bits of the codeword indicate the length of the segment being compressed. When a segment to be encoded meets the criteria described earlier, a 16 bit hash value is generated using the segment to be encoded. An example hash function is as follows:
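The disclosure's hash function itself is not reproduced above. As a hypothetical stand-in with the same shape (fixed-length 16-bit output over a multi-byte segment), a folded 32-bit FNV-1a hash can illustrate the idea:

```python
def hash16(segment: bytes) -> int:
    """Illustrative 16-bit hash: 32-bit FNV-1a folded to 16 bits.
    NOT the hash function from the disclosure."""
    h = 0x811C9DC5                            # FNV-1a offset basis
    for b in segment:
        h = ((h ^ b) * 0x01000193) & 0xFFFFFFFF   # FNV prime, 32-bit wrap
    return (h ^ (h >> 16)) & 0xFFFF           # fold the high half into the low

key = hash16(b"ABCD")   # a 16-bit key into the hash table dictionary
```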
In some embodiments, the hash becomes the key for the hash table (e.g., pre-set hash table 112D of
As with the VLC encoder, hash table matches occur in patterns, with some matches occurring more frequently than others. Greater compression ratios can be achieved by assigning smaller hash values to the more frequently-occurring matches, and larger hash values to the less frequently-occurring matches.
In some embodiments, the use of weighted frequencies in the hash table encoder yields a codeword having the format defined by Table 7.
The weighted format of Table 7 results in codeword lengths varying between 10 and 24 bits, as opposed to 21 bits with the unweighted format of Table 6. Since the most frequently-occurring hash values are the smaller ones, the overall compression ratio increases.
When the length of a codeword is 4 bits (e.g., as shown in Table 7 above), one might expect the range to be 1 to 15. Since the minimum match size is 4 bytes, however, the hash encoder uses the range shown in Table 8 below:
As shown in Table 8 above, to extend the length range beyond 18, the last 4 bits are reserved to indicate the use of tallying. The range for this nibble is 1 to 14, with 1111 indicating an instruction to begin reading individual bits, each 1 representing an extra 14, and a 0 indicating the end of tallying. An example is provided in Table 9 below.
In some embodiments, a hash encoding process includes:
(1) Receive or retrieve four bytes from the input
(2) Generate a hash value based on the 4 bytes
(3) Query the hash table based on the hash value
(4) If the hash value is returned/found:
(5) If the hash value is not returned/found:
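Steps (1)-(5) above can be sketched at the token level. The token formats, the collision check, and the use of CRC32 as a stand-in hash are simplifying assumptions, not the disclosure's design:

```python
import zlib

def hash_encode_step(data: bytes, pos: int, table: dict):
    """One pass through steps (1)-(5); returns (token, next position)."""
    segment = data[pos:pos + 4]                   # (1) take four input bytes
    if len(segment) < 4:
        return ("vlc", data[pos]), pos + 1        # tail too short: VLC path
    h = zlib.crc32(segment) & 0xFFFF              # (2) stand-in 16-bit hash
    if table.get(h) == segment:                   # (3)-(4) query; found: emit hash
        return ("hash", h), pos + 4
    table[h] = segment                            # (5) not found: remember segment,
    return ("vlc", data[pos]), pos + 1            #     emit a VLC-coded byte instead

def hash_encode(data: bytes) -> list:
    table, tokens, pos = {}, [], 0
    while pos < len(data):
        token, pos = hash_encode_step(data, pos, table)
        tokens.append(token)
    return tokens

tokens = hash_encode(b"ABCDABCD")   # the second "ABCD" hits the table
```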
A flow diagram illustrating a hash decoder, according to some embodiments, is provided in
The hash decoder 958 reads and saves the next 4 bits of the compressed data 956. These 4 bits represent the length of the data segment to be decoded. Another 4 bits are then read, these further 4 bits representing the length of the hash value. Finally, based on the value of the previous 4 bits, a number of bits (between 1 and 15) associated with the length of the hash value are read. These 1-15 bits represent the hash value that points to the offset of the data segment to be extracted from the decoder buffer. Note that, in most embodiments, both the VLC decoder and the hash decoder append/add a copy of the decoded byte(s) to their respective/associated decoder buffers. The hash key can then be applied to a hash table (e.g., dictionary 960). The value obtained from dictionary 960 is the offset into the decode buffer 962 which, along with the previously decoded length, is used to locate the indicated bytes from the decode buffer 962 and output them (e.g., transmitting and/or saving the decoded data).
In some embodiments, a hash table decoding process includes:
(1) Receive compressed data
(2) Determine, based on a first bit of the compressed data, whether the bitstream is hash table encoded or VLC encoded
(3) If the bitstream is not hash table encoded
(4) If the bitstream is hash table encoded
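The decode path above can be sketched at the token level. Bit-level parsing of the Table 6-7 formats is omitted, and the key and offset values used in the example are hypothetical; the sketch shows the shared decode buffer and dictionary lookup described in the text:

```python
def hybrid_decode_token(token, dictionary: dict, buffer: bytearray) -> None:
    """Apply one decoded token to the shared decode buffer."""
    kind, payload = token
    if kind == "vlc":
        buffer.append(payload)                    # VLC path: one literal byte
    else:                                         # hash path
        length, key = payload
        offset = dictionary[key]                  # key -> offset into the buffer
        buffer += buffer[offset:offset + length]  # copy the earlier segment

buf = bytearray()
for token in [("vlc", 65), ("vlc", 66), ("vlc", 67), ("vlc", 68)]:  # "ABCD"
    hybrid_decode_token(token, {}, buf)
hybrid_decode_token(("hash", (4, 7)), {7: 0}, buf)  # key 7 -> offset 0 (made up)
```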
In some embodiments, to achieve high compression ratios on small files (which can be characteristically more difficult to compress), encoders of the present disclosure can make use of pre-compiled dictionaries/VLC tables. These pre-compiled dictionaries/VLC tables are tailored to specific file/data types. The dictionary to be used for any given action may be specified at run-time, or may be selected in real time or near-real time by analyzing a "small" (e.g., ~4 kilobytes (kB)) portion of the file to be processed. In addition, the dictionary may be selected by an Artificial Intelligence (AI) powered dictionary selector algorithm. Note that, in some instances, a pre-compiled dictionary serves as a starting point dictionary that will subsequently be updated as described herein.
An example of a portion of an input bitstream segment is given below:
In some embodiments, a system includes a non-transitory memory, a processor, a DAC and a transmitter. The memory stores a predetermined file size threshold, a VLC encoder, and a hash table. The processor is in operable communication with the non-transitory memory, and is configured to receive a first data and to select one of the VLC encoder or the hash table based on a size of the first data. The processor is also configured to transform, using the selected one of the VLC encoder or the hash table, the first data into a second data including a compressed version of the first data. The DAC is configured to receive a digital representation of the second data from the processor and convert the digital representation of the second data into an analog representation of the second data. The transmitter is configured to transmit the analog representation of the second data. The transmitter can include an antenna such that the analog representation of the second data can be transmitted wirelessly. Alternatively or in addition, the transmitter can include a coaxial cable such that the analog representation of the second data can be transmitted over wire. Alternatively or in addition, the transmitter can include an optical fiber, such that the analog representation of the second data can be transmitted optically. The processor can be configured to store the digital representation of the second data in the memory.
All combinations of the foregoing concepts and additional concepts discussed here (provided such concepts are not mutually inconsistent) are contemplated as being part of the subject matter disclosed herein. The terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
The skilled artisan will understand that the drawings primarily are for illustrative purposes, and are not intended to limit the scope of the subject matter described herein. The drawings are not necessarily to scale; in some instances, various aspects of the subject matter disclosed herein may be shown exaggerated or enlarged in the drawings to facilitate an understanding of different features. In the drawings, like reference characters generally refer to like features (e.g., functionally similar and/or structurally similar elements).
To address various issues and advance the art, the entirety of this application (including the Cover Page, Title, Headings, Background, Summary, Brief Description of the Drawings, Detailed Description, Embodiments, Abstract, Figures, Appendices, and otherwise) shows, by way of illustration, various embodiments in which the embodiments may be practiced. The advantages and features of the application are of a representative sample of embodiments only, and are not exhaustive and/or exclusive. Rather, they are presented to assist in understanding and teach the embodiments, and are not representative of all embodiments. As such, certain aspects of the disclosure have not been discussed herein. That alternate embodiments may not have been presented for a specific portion of the innovations or that further undescribed alternate embodiments may be available for a portion is not to be considered to exclude such alternate embodiments from the scope of the disclosure. It will be appreciated that many of those undescribed embodiments incorporate the same principles of the innovations and others are equivalent. Thus, it is to be understood that other embodiments may be utilized and functional, logical, operational, organizational, structural and/or topological modifications may be made without departing from the scope and/or spirit of the disclosure. As such, all examples and/or embodiments are deemed to be non-limiting throughout this disclosure.
Also, no inference should be drawn regarding those embodiments discussed herein relative to those not discussed herein other than it is as such for purposes of reducing space and repetition. For instance, it is to be understood that the logical and/or topological structure of any combination of any program components (a component collection), other components and/or any present feature sets as described in the figures and/or throughout are not limited to a fixed operating order and/or arrangement, but rather, any disclosed order is exemplary and all equivalents, regardless of order, are contemplated by the disclosure.
Various concepts may be embodied as one or more methods, of which at least one example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments. Put differently, it is to be understood that such features may not necessarily be limited to a particular order of execution, but rather, any number of threads, processes, services, servers, and/or the like that may execute serially, asynchronously, concurrently, in parallel, simultaneously, synchronously, and/or the like in a manner consistent with the disclosure. As such, some of these features may be mutually contradictory, in that they cannot be simultaneously present in a single embodiment. Similarly, some features are applicable to one aspect of the innovations, and inapplicable to others.
In addition, the disclosure may include other innovations not presently described. Applicant reserves all rights in such innovations, including the right to embody such innovations, file additional applications, continuations, continuations-in-part, divisionals, and/or the like thereof. As such, it should be understood that advantages, embodiments, examples, functional, features, logical, operational, organizational, structural, topological, and/or other aspects of the disclosure are not to be considered limitations on the disclosure as defined by the embodiments or limitations on equivalents to the embodiments. Depending on the particular desires and/or characteristics of an individual and/or enterprise user, database configuration and/or relational model, data type, data transmission and/or network framework, syntax structure, and/or the like, various embodiments of the technology disclosed herein may be implemented in a manner that enables a great deal of flexibility and customization as described herein.
All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
As used herein, in particular embodiments, the terms “about” or “approximately” when preceding a numerical value indicates the value plus or minus a range of 10%. Where a range of values is provided, it is understood that each intervening value, to the tenth of the unit of the lower limit unless the context clearly dictates otherwise, between the upper and lower limit of that range and any other stated or intervening value in that stated range is encompassed within the disclosure. That the upper and lower limits of these smaller ranges can independently be included in the smaller ranges is also encompassed within the disclosure, subject to any specifically excluded limit in the stated range. Where the stated range includes one or both of the limits, ranges excluding either or both of those included limits are also included in the disclosure.
The indefinite articles “a” and “an,” as used herein in the specification and in the embodiments, unless clearly indicated to the contrary, should be understood to mean “at least one.”
The phrase “and/or,” as used herein in the specification and in the embodiments, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
As used herein in the specification and in the embodiments, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the embodiments, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the embodiments, shall have its ordinary meaning as used in the field of patent law.
As used herein in the specification and in the embodiments, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
In the embodiments, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03.
While specific embodiments of the present disclosure have been outlined above, many alternatives, modifications, and variations will be apparent to those skilled in the art. Accordingly, the embodiments set forth herein are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the disclosure.
This application is a continuation of U.S. patent application Ser. No. 16/250,345, filed Jan. 17, 2019 and titled “Systems and Methods for Variable Length Codeword Based, Hybrid Data Encoding and Decoding Using Dynamic Memory Allocation,” the entire contents of which are incorporated herein by reference in their entirety.
| | Number | Date | Country |
|---|---|---|---|
| Parent | 16250345 | Jan. 2019 | US |
| Child | 16691496 | | US |