The invention relates to storage systems generally and, more particularly, to a method and/or apparatus for implementing a system and/or methods for efficient caching of file system journals.
In modern file systems, typical meta-data operations are journal-based. The journal-based meta-data operations are committed to on-disk file system journal entries first, then final updates of the file system meta-data are committed to the disk at a later point in time. The caching characteristics of file system journaling are quite different from (in most cases orthogonal to) the cache characteristics implemented in conventional data caches. Because of this, the cache performance for journal I/Os under conventional caching schemes is poor.
It would be desirable to have a system and methods for efficient caching of file system journals.
The invention concerns an apparatus including a memory and a controller. The memory may be configured to implement a cache and store meta-data. The cache generally comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the plurality of cache-lines is associated with meta-data indicating one or more of a dirty state, an invalid state, and a partially dirty state. The controller is connected to the memory and may be configured to (i) detect an input/output (I/O) operation directed to a file system recovery log area, (ii) mark a corresponding I/O using a predefined hint value, and (iii) pass the corresponding I/O along with the predefined hint value to a caching layer.
Embodiments of the invention will be apparent from the following detailed description and the appended claims and drawings in which:
Embodiments of the invention include providing a system and methods for efficient caching of file system journals that may (i) provide global tracking structures suited to managing file system journal caching, (ii) provide sub-cache-line management, (iii) modify cache window replacement and retention policies, (iv) isolate caching characteristics of file system journal I/Os, (v) be used also with database transaction logs, and/or (vi) be used with existing cache devices.
Referring to
In various embodiments, the system 100 is configured to communicate with a host 110 using one or more communications interfaces and/or protocols. According to various embodiments, the one or more communications interfaces and/or protocols may comprise one or more of a serial advanced technology attachment (SATA) interface, a serial attached small computer system interface (serial SCSI or SAS) interface, a peripheral component interconnect express (PCIe) interface, a Fibre Channel interface, an Ethernet interface (such as 10 Gigabit Ethernet), a non-standard version of any of the preceding interfaces, a custom interface, and/or any other type of interface used to interconnect storage and/or communications and/or computing devices. For example, in some embodiments, the storage controller 102 includes a SATA interface and a PCIe interface. The host 110 generally sends data read/write commands (requests) and journal read/write commands (requests) to the system 100 and receives responses from the system 100 via the one or more communications interfaces and/or protocols. The read/write commands generally include logical block addresses (LBAs) associated with the particular data or journal input/output (I/O). The system 100 generally stores information associated with write commands based upon the included LBAs. The system 100 generally retrieves information associated with the LBAs contained in the read commands and transfers the retrieved information to the host 110.
In various embodiments, the block 102 comprises a block (or circuit) 120, a block (or circuit) 122, a block (or circuit) 124, and a block (or circuit) 126. The block 120 implements a host interface (I/F). The block 122 implements a cache manager. The block 124 implements a storage medium interface (I/F). The block 126 implements an optional random access memory (RAM) that may be configured to store images of cache management information (e.g., meta-data) in order to provide faster access. In some embodiments, the block 126 may be omitted. The blocks 104, 122 and 126 (when present) generally implement journal caching data structures and schemes in accordance with embodiments of the invention.
Referring to
In various embodiments, the meta-data 136 comprises a first (valid) bitmap 138, a second (dirty) bitmap 140, and cache-line information 142. The first bitmap 138 includes a first (valid) flag (or bit) associated with each cache-line 134a-134m. The second bitmap 140 includes a second (dirty) flag (or bit) associated with each cache-line 134a-134m. A state of the first flag indicates whether the corresponding cache-line is valid or invalid. A state of the second flag indicates whether the corresponding cache-line is dirty or clean. In some implementations, the cache-lines within a cache window are not physically contiguous. In that case, the per cache window meta-data 136 stores the information about the cache-lines (e.g., cache-line number) which are part of the cache window in the cache-line information 142. In various embodiments, a size of the cache-line information 142 is four bytes per cache-line. The meta-data 136 is stored persistently on the cache device 104 and, when available, also in the block 116 for faster access. For a very large cache memory, the cache-line size is typically large (e.g., >=64 KB) in order to reduce the size of the meta-data 136 on the cache device 104 and in the block 116.
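The per-window meta-data described above can be sketched as follows. This is an illustrative model only: the structure name, the window geometry, and the helper functions are assumptions for exposition, not the actual implementation. The flush-selection rule (pick every cache-line that is both valid and dirty) matches the behavior described in the surrounding text.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative per-cache-window meta-data: one valid bit and one dirty bit
 * per cache-line (bitmaps 138 and 140), plus a 4-byte record per cache-line
 * (cache-line information 142). Sizes and names are assumptions. */
#define CACHE_LINES_PER_WINDOW 16

struct cache_window_meta {
    uint32_t valid_bitmap;                      /* bit i: cache-line i is valid */
    uint32_t dirty_bitmap;                      /* bit i: cache-line i is dirty */
    uint32_t line_info[CACHE_LINES_PER_WINDOW]; /* e.g., physical cache-line number */
};

static bool line_is_valid(const struct cache_window_meta *m, int i)
{
    return (m->valid_bitmap >> i) & 1u;
}

static bool line_is_dirty(const struct cache_window_meta *m, int i)
{
    return (m->dirty_bitmap >> i) & 1u;
}

static void line_mark_valid(struct cache_window_meta *m, int i)
{
    m->valid_bitmap |= 1u << i;
}

static void line_mark_dirty(struct cache_window_meta *m, int i)
{
    m->dirty_bitmap |= 1u << i;
}

/* A flush picks all cache-lines that are both valid and marked dirty; after
 * a successful flush the dirty bits for those lines would be cleared. */
static uint32_t lines_to_flush(const struct cache_window_meta *m)
{
    return m->valid_bitmap & m->dirty_bitmap;
}
```

Note that a line marked dirty but not valid is deliberately excluded by `lines_to_flush`; the "partially valid" handling discussed later in this description refines exactly that case.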
Updates of the meta-data 136 are persisted on the cache device 104. Updating of the meta-data 136 is done at the end of each host I/O that modifies the meta-data 136. Updating of the meta-data 136 is also done during a shutdown process. Whenever a cache window 132a-132n is to be flushed (e.g., either during system recovery following a system reboot, or to free up active cache windows as part of a least recently used replacement or maintaining a minimum number of free cache windows in write back mode), the determination of which cache-lines to flush is based on picking all the valid cache-lines that are marked dirty. Usually, the flush is done by a background task. Once the flush is done successfully, the cache-lines are again indicated as being clean (e.g., the dirty bit for the corresponding cache-lines is cleared).
The block 104 generally supports existing caching approaches. For example, the block 104 may be used to implement a set of priority queues (in an example implementation, from 1 to 16, where 1 is the lowest priority and 16 is the highest priority), with more frequently accessed data in higher priority queues, and less frequently accessed data in lower priority queues. A cache window promotion, demotion and replacement scheme may be implemented that is based primarily on LRU (Least Recently Used) tracking. The data corresponding to the cache windows 132a-132n is both read and write intensive. A certain amount of data read/write to a cache window within a specified amount of time (or I/Os) makes the cache window “hot”. Until such time, a “heat index” needs to be tracked (e.g., via virtual cache windows). Once the heat index for a virtual cache window crosses a configured threshold, the virtual cache window is deemed hot, and a real cache window is allocated, indicating that the data is henceforth cached. While the heat index is being tracked, if sequential I/O occurs, the heat index is not incremented for regular data access. This is because caching sequential I/O access of data is counter-productive. Purely sequential I/O access of data is handled as pass-through I/O issued directly to the storage media 106 since these workloads are issued very rarely. These are usually deemed as one time occurrences. The above are processing steps done for non-journal I/O (read or write).
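The heat-index promotion rule for non-journal I/O described above can be sketched as follows. The threshold value, field names, and the exact sequential-detection rule are illustrative assumptions; the text only specifies that sequential access does not add heat and that crossing a configured threshold allocates a real cache window.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative "heat index" tracking for a virtual cache window. The
 * threshold is an assumption; a real implementation would make it
 * configurable and may also age the heat index over time. */
#define HEAT_THRESHOLD 3

struct virtual_window {
    uint32_t heat;          /* non-sequential accesses seen so far */
    uint64_t last_end_lba;  /* end of the previous access, for sequential detection */
    bool     promoted;      /* true once a real cache window is allocated */
};

/* Returns true once the virtual window has been promoted to a real one. */
static bool track_access(struct virtual_window *w, uint64_t start_lba,
                         uint32_t blocks)
{
    bool sequential = (start_lba == w->last_end_lba);
    w->last_end_lba = start_lba + blocks;
    if (!sequential)                 /* sequential I/O does not add heat */
        w->heat++;
    if (!w->promoted && w->heat >= HEAT_THRESHOLD)
        w->promoted = true;          /* allocate a real cache window here */
    return w->promoted;
}
```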
Once a real cache window is allocated, any non-journal I/O (read or write) on a cache-line that is invalid is preceded by a cache read-fill operation. The cache-line is made valid by first reading the data from the corresponding LBAs on the storage medium 106 and writing the same data to the corresponding cache device. Once a cache-line is valid, all writes to the corresponding LBAs are directly written only to the cache device 104 (since the cache is in write back mode), and not written to the storage media 106. Reads on a valid cache-line are fetched from the cache device 104.
When a user I/O request spans across two cache windows, the caching layer breaks the user I/O request into two I/O sub-requests corresponding to the I/O range covered by the respective windows. The caching layer internally tracks the two I/O sub-requests, and on completion of both I/O sub-requests, the original user I/O request is deemed completed. At that time, an I/O completion is signaled for the original user I/O request.
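The splitting of a user I/O request at a cache window boundary can be sketched as follows. The window size in blocks is an illustrative assumption; the mechanism (split at the boundary, track the sub-requests, complete the original request when both finish) is as described above.

```c
#include <stdint.h>

/* Illustrative split of a user I/O at the next cache window boundary.
 * WINDOW_BLOCKS is an assumed geometry (e.g., 1 MB window / 512-byte blocks). */
#define WINDOW_BLOCKS 2048

struct io_range { uint64_t start; uint32_t blocks; };

/* Splits [start, start+blocks) at the next window boundary and returns the
 * number of sub-requests produced (1 or 2). The caller would track both
 * sub-requests and signal completion only when both are done. */
static int split_at_window(uint64_t start, uint32_t blocks,
                           struct io_range out[2])
{
    uint64_t boundary = (start / WINDOW_BLOCKS + 1) * WINDOW_BLOCKS;
    if (start + blocks <= boundary) {
        out[0] = (struct io_range){ start, blocks };
        return 1;                            /* fits in one window */
    }
    out[0] = (struct io_range){ start, (uint32_t)(boundary - start) };
    out[1] = (struct io_range){ boundary, (uint32_t)(start + blocks - boundary) };
    return 2;
}
```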
In various embodiments, caching characteristics of file system recovery log I/Os (e.g., journal I/Os, transaction log I/Os, etc.) are isolated (separated) from regular data I/Os. The recovery log entries (e.g., journal entries, transaction log entries, etc.) are organized in a circular fashion. For example, either a circular array, or a circular buffer, may be used depending on the implementation. For journaling, the first cache-line 134 in the first cache window 132 of journal entries is accessed again (specifically, over-written) only after a complete wraparound of the journal. Hence, the set of priority queues used for data caching is inappropriate for maintaining and tracking the journal information. A cache window replacement of journal pages is primarily MRU (Most Recently Used) based, due to the circular fashion in which the journal entries are arranged.
In various embodiments, writes of the journal pages are implemented with a granularity of 4 KB. Hence, the granularity of the cache-lines and/or the cache windows for the journal pages needs to be handled differently from the cache windows corresponding to data pages. In general, the granularity of both the cache-line size and the cache window size for journal pages is considerably smaller than for the cache windows that hold data.
In various embodiments, methods are implemented to handle a difference between journal sizes and data sizes. In some embodiments, the cache-lines 134a-134m of each cache window 132a-132n that are used for journal entries are split into smaller sub-cache-lines. In some embodiments, sizes of both cache-lines and the corresponding cache windows used for journal entries are reduced with respect to cache-lines and cache windows used for data entries. In an example implementation, a data cache window size may be 1 MB with a cache-line size of 64 KB, while for journal entries, either one of two approaches may be used. In one approach, a journal cache window size of 1 MB is split into 16 cache-lines of 64 KB each, and each of the 16 cache-lines is further split into 16 sub-cache-lines of 4 KB each. In the other approach, a journal cache window size of 64 KB is split into 16 cache-lines of 4 KB each. A finer granularity for handling journal write I/Os by the cache device 104 generally improves the journal write performance.
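The two journal geometries from the example above can be expressed directly. The sizes come from the text (1 MB windows with 64 KB cache-lines and 4 KB sub-cache-lines in the first approach; 64 KB windows with 4 KB cache-lines in the second); the structure and helper names are illustrative.

```c
#include <stdint.h>

enum { KB = 1024 };

/* Illustrative encoding of the two journal cache geometries described in
 * the text. subline_bytes == 0 means sub-cache-lines are not used. */
struct journal_geometry {
    uint32_t window_bytes;
    uint32_t line_bytes;
    uint32_t subline_bytes;
};

static const struct journal_geometry approach_one = { 1024 * KB, 64 * KB, 4 * KB };
static const struct journal_geometry approach_two = {   64 * KB,  4 * KB, 0 };

static uint32_t lines_per_window(const struct journal_geometry *g)
{
    return g->window_bytes / g->line_bytes;
}

static uint32_t sublines_per_line(const struct journal_geometry *g)
{
    return g->subline_bytes ? g->line_bytes / g->subline_bytes : 1;
}
```

Both approaches yield 16 units per level, which is what allows a single 16-bit bitmap per cache-line (or per window) to track fill state at 4 KB granularity.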
Journals are generally write-only. A read is not issued on journals as long as the file system is mounted. A read is issued only to recover a file system (e.g., during file system mount time). Recovery of a file system generally happens only if the file system was not un-mounted cleanly or when a system crash occurs. The conventional scheme used for data windows, where a certain amount of data read/write to a cache window within a specified amount of time (or I/Os) makes the cache window hot, does not work for journal I/Os. Because of the circular nature of journal I/Os, journal I/Os would not cause a cache window to become hot using the conventional scheme for data windows. A journal write is a purely sequential write. However, the journal write is circular in nature and wraps around multiple times. Hence, a journal entry is going to be written many times, but later (e.g., after every wraparound). The conventional scheme used for data cache windows, where the heat index is not incremented for sequential I/O access, therefore does not work for journals, since that scheme would ensure journal pages are never cached.
The conventional scheme used for data I/O (read or write) where once a real cache window is allocated, a cache-line is made valid by first reading the data from the corresponding LBAs on the storage medium and writing the same data to the corresponding cache device (a so-called cache read-fill operation) is not suitable for journals. This is because of the pure write-only nature of journal pages. Writes on journal pages are guaranteed to arrive sequentially, and hence the cache-line which is read from the storage medium as part of the cache read-fill operation will get overwritten by subsequent writes from the host. So, the cache read-fill operation during journal write is clearly unnecessary. Reads on a valid cache-line are of course fetched from the cache device. But, more importantly, a read operation on a cache-line that is invalid should be directly serviced from the storage medium, and the cache window and/or cache-lines should not be updated in any manner. This is because, for journals, reads are issued only during journal recovery time. The workload is write-only in nature. Hence, trying to do a cache read-fill on a read of data from the storage medium is highly detrimental to the performance of journal I/O.
In various embodiments, the above characteristics of journal pages containing file system meta-data are taken into account and a separate set of global tracking structures best suited to tracking journal pages is implemented. The same methods are applicable to the management of transaction logs for databases. Database transaction logs are managed in a way that is almost identical to that of file system journals. Thus, the features provided in accordance with embodiments of the invention for file system journals may also be applied to transaction logs for databases.
In various embodiments, a journal I/O is detected by trapping the I/O and checking whether the associated LBA corresponds to a journal entry. The determination of whether the associated LBA corresponds to a journal entry can be done using existing facilities and services available from conventional file system implementations and, therefore, would be known to those of ordinary skill in the field of the invention and need not be covered in any more detail here. Once a journal I/O is detected, the corresponding I/O is marked (or tagged) as a journal I/O using suitable “hint values” and passed to a caching layer. The mechanisms for marking the I/Os already exist and hence are not covered in any more detail here. The caching layer looks at the I/Os that are marked and determines, based on the corresponding hint values, whether the I/Os are journal I/Os.
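The trap-and-tag step can be sketched as follows. The hint value, structure fields, and the way the journal area boundaries are supplied are illustrative assumptions; the text only specifies that the LBA is checked against the journal area and the I/O is marked with a predefined hint value before being passed to the caching layer.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative predefined hint value; the real value is implementation
 * defined. A hint of 0 means "no hint". */
#define HINT_JOURNAL_IO 0x4A

struct io_request {
    uint64_t start_lba;
    uint32_t blocks;
    uint8_t  hint;
};

/* The journal area boundaries would come from the file system; here they
 * are passed as parameters. Returns true when the I/O was tagged as a
 * journal I/O for the caching layer to recognize. */
static bool tag_if_journal(struct io_request *io,
                           uint64_t journal_start_lba,
                           uint64_t journal_end_lba)
{
    if (io->start_lba >= journal_start_lba &&
        io->start_lba + io->blocks <= journal_end_lba) {
        io->hint = HINT_JOURNAL_IO;
        return true;
    }
    return false;
}
```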
Referring to
Referring to
If in the step 206, the host journal I/O request is determined to be a write request, the process 200 moves to a step 212. In the step 212, the process 200 determines whether the last journal offset points to the end of the current journal window. If the last journal offset points to the end of the current journal window, the process 200 performs a step 214, a step 216, and a step 218. If the last journal offset does not point to the end of the current journal window, the process 200 moves directly to the step 218. In the step 214, a new journal window is allocated. In the step 216, the current journal window is set to point to the newly allocated cache window and the last journal offset is set to the beginning of the newly allocated cache window. In the step 218, the process 200 determines whether the last journal offset is equal to the start LBA of the current request.
In the step 218, the block number of the write request is compared with the journal cache-line offset. If the block number of the write request is not sequentially following the journal cache-line offset (e.g., the last journal offset is not equal to the start LBA of the current request), the process 200 moves to a step 220, followed by either a step 222 or steps 224 and 226. If the last journal offset is equal to the start LBA of the current request, the process 200 moves directly to the step 226. In the step 220, the process 200 determines whether the start LBA of the current request falls within the current journal window. If the start LBA of the current request does not fall within the current journal window, the process 200 moves to the step 222. If the start LBA of the current request falls within the current journal window, the process 200 performs the steps 224 and 226.
In the step 222, the process 200 read-fills all the cache-lines in the current journal window, starting from the cache-line on which the last journal offset falls to the last cache-line in the current journal window, then moves to the step 214. In the step 224, the process 200 read-fills all the cache-lines in the current journal window, starting from the cache-line on which the last journal offset falls to the cache-line corresponding to the start LBA of the current request, then moves to the step 226. In the step 226, the process 200 writes to the current journal cache window, then moves to a step 228. In the step 228, the process 200 determines whether the current request includes more writes beyond the current window. When there are more writes beyond the current window, the process 200 moves to the step 214. When there are no more writes beyond the current window, the process 200 moves to a step 230. In the step 230, the process 200 marks all cache-lines filled during the current operation as dirty in the meta-data, then moves to a step 232. In the step 232, the process 200 sets the last journal offset to one block after the last block of the current request. The process 200 then moves to a step 234 and terminates.
The allocation of cache windows can be done from a dedicated pool of cache windows for journal data as shown in
Whenever a cache window is to be flushed, the determination of which cache-lines to flush is based on picking all the valid cache-lines that are marked dirty. Using this scheme, the cache-line containing the journal cache-line offset may never get picked. This is because the cache-line containing the journal cache-line offset is still in the invalid state although the cache-line has been marked dirty. In conventional cache schemes, a read/write on invalid cache-lines is preceded by a cache read-fill operation to make the cache-lines valid. Hence, for a cache-line with an invalid state, the state of the dirty/clean flag has no meaning in the conventional schemes.
In various embodiments, an additional state is introduced. The additional state is referred to as a “partially valid” state. The partially valid state is implemented for each cache-line in a cache window, in addition to the valid and invalid states. In some embodiments, the state of the cache-line is set to “dirty” even if the cache-line is marked as invalid. The cache controller is configured to recognize the state of a cache-line marked both dirty and invalid as partially valid by correlating and ensuring that the journal cache-line offset falls on the particular cache-line. The latter approach is used as an example in the following explanation.
In various embodiments, because the writes to journal data do not involve prior read-fills, special processing is done for the cache-line containing the journal cache-line offset during cache flush scenarios. For example, a first processing step is performed to find out if the cache-line containing the journal cache-line offset is “partially valid” (e.g., both the “Dirty” and “Invalid” states are asserted). If so, a read-fill operation is performed for the “Invalid” portion of the cache-line from the storage medium, and then, the entire cache-line is written (flushed) to the storage medium as part of the steps that constitute a flush of a cache-line.
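The flush decision for journal cache-lines described above can be sketched as a small state check. The state encoding (dirty plus invalid meaning "partially valid") follows the text; the enum and function names are illustrative.

```c
#include <stdbool.h>

/* Illustrative flush decision for one journal cache-line. A line that is
 * dirty but invalid is "partially valid": its invalid portion must be
 * read-filled from the storage medium before the entire line is flushed. */
struct line_state { bool valid; bool dirty; };

enum flush_action {
    FLUSH_SKIP,                 /* clean line: nothing to flush */
    FLUSH_WRITE,                /* valid + dirty: flush the line as-is */
    FLUSH_READFILL_THEN_WRITE   /* partially valid: read-fill first */
};

static enum flush_action flush_decision(const struct line_state *s)
{
    if (!s->dirty)
        return FLUSH_SKIP;
    if (!s->valid)              /* dirty + invalid = partially valid */
        return FLUSH_READFILL_THEN_WRITE;
    return FLUSH_WRITE;
}
```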
Referring to
Since the size of a sub-cache-line is necessarily smaller than the size of a cache-line 134a-134m, the size of the extended meta-data 160 per cache window is large. Therefore, only a limited number of the cache windows 132a-132n are allowed to have corresponding extended meta-data 160. In various embodiments, the pool of memory holding the limited set of extended meta-data 160 is pre-allocated. Regions containing the per cache window extended meta-data 160 are associated with the respective cache windows 132a-132n on demand and returned back to a free pool of extended meta-data 160 when all the sub-cache-lines 150 within one of the cache-lines 134a-134m are filled up with journal writes.
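The pre-allocated pool of extended meta-data regions can be sketched as follows. The pool size, structure layout, and function names are illustrative assumptions; the behavior (attach on demand, return to the free pool when full, NULL on exhaustion so the caller can wake the background read-fill) follows the text.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative pre-allocated pool of extended meta-data regions. Each
 * region holds one sub-cache-line dirty bitmap per cache-line of a window. */
#define EXT_META_POOL 4
#define LINES_PER_WINDOW 16

struct ext_meta {
    uint16_t subline_dirty[LINES_PER_WINDOW]; /* 16 sub-cache-lines per line */
    int in_use;
};

static struct ext_meta ext_pool[EXT_META_POOL];

/* Attach a free region to a cache window on demand. Returns NULL when the
 * pool is exhausted, in which case the caller would wake the background
 * read-fill process and wait. */
static struct ext_meta *ext_meta_get(void)
{
    for (size_t i = 0; i < EXT_META_POOL; i++) {
        if (!ext_pool[i].in_use) {
            ext_pool[i].in_use = 1;
            return &ext_pool[i];
        }
    }
    return NULL;
}

/* Return a region to the free pool, e.g., when all sub-cache-lines within
 * its cache-lines have been filled by journal writes. */
static void ext_meta_put(struct ext_meta *m)
{
    m->in_use = 0;
}
```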
Referring to
If a cache window does not contain the requested block, the process 300 moves to a step 316 to determine whether the requested number of blocks is aligned with a cache-line boundary. If the requested number of blocks is cache-line aligned, the process 300 proceeds to the steps 312 and 314. If the requested number of blocks is not cache-line aligned, the process 300 moves to a step 318, where a determination is made whether the requested number of blocks and a start block are aligned with a sub-cache-line boundary. If the requested number of blocks and the start block are not sub-cache-line aligned, the process 300 proceeds to the steps 312 and 314. Otherwise, the process 300 moves to a step 320.
In the step 320, the process 300 determines whether the cache window corresponding to the start block is already allocated. If the cache window is already allocated, the process 300 moves to a step 322. If the cache window is not already allocated, the process 300 moves to a step 324. In the step 322, the process 300 determines whether extended meta-data is mapped to the cache window. If extended meta-data is not mapped to the cache window, the process 300 moves to a step 326. If extended meta-data is already mapped to the cache window, the process 300 moves to a step 328. In the step 324, the process 300 allocates a cache window, then moves to the step 326.
In the step 326, the process 300 allocates extended meta-data to the cache window, then moves to the step 328. In the step 328, the host write is transferred to the cache and the process 300 moves to a step 330. In the step 330, the sub-cache-line is marked as dirty in the extended meta-data copy in RAM and on the cache device. The process 300 then moves to the step 332. In the step 332, the process 300 determines whether all sub-cache-lines for a given cache-line are dirty. If all sub-cache-lines for a given cache-line are not dirty, the process 300 moves to a step 334 and terminates. If all sub-cache-lines for a given cache-line are dirty, the process 300 moves to a step 336 to mark the cache-line dirty in the cache meta-data copy in RAM and on the cache device, then moves to a step 338.
In the step 338, the process 300 determines whether all cache-lines with sub-cache-lines within the cache window are marked as dirty. If all the cache-lines with sub-cache-lines within the cache window are not marked as dirty, the process 300 moves to the step 334 and terminates. If all the cache-lines with sub-cache-lines within the cache window are marked as dirty, the process 300 moves to the step 340, frees the extended meta-data for the cache window, then moves to the step 334 and terminates.
When a host journal write request is received, the block number of the request is used to search the cache. If the data is already available in the cache (e.g., a cache-line HIT is found), the cache-line is updated with the host data and the cache-line is marked as dirty. If (i) a cache-line HIT is not found, (ii) the cache window corresponding to the start block of the journal write request is already in the cache, and (iii) the write request size is not a multiple of the cache-line size, an extended meta-data structure 160 is allocated and mapped to the cache window (if not already allocated and mapped). The host write is then completed and the sub-cache-line bitmap is updated in the extended meta-data 160 in RAM and on the cache device. If a cache-line HIT is not found and a cache window corresponding to the journal write request is not already present, a cache window is allocated. If the journal write request size is not a multiple of the cache-line size, an extended meta-data structure 160 is allocated and mapped to the cache window, the host journal write is completed, and the sub-cache-line bitmap is updated in the extended meta-data 160 in RAM and on the cache device.
In various embodiments, once the number of cache windows with extended meta-data exceeds a predefined threshold (e.g., defined as some percentage of the number of cache windows reserved for journal I/O), a background read-fill process (described below in connection with
In some embodiments, a timer may be implemented for each partially filled cache window the first time extended meta-data 160 is allocated for the cache window. After the timer expires, the partially filled cache-lines of the cache window are read-filled and the extended meta-data 160 for the cache window is freed.
Referring to
If, in the step 404, a free extended meta-data structure is not available, the process 400 moves to a step 414 and awakens the background read-fill process and moves to a step 416. In the step 416, the process 400 waits for a signal from the background read-fill process. Once the signal is received from the background read-fill process, the process 400 moves to a step 418 and allocates an extended meta-data structure. The extended meta-data structure is then mapped to the cache window and the process 400 moves to the step 412 and terminates.
It is possible for the number of available extended meta-data structures to become exhausted. When that happens, a background read-fill process (described below in connection with
Referring to
In some embodiments, when the host issues a read request for the journal data and there is a cache HIT, the read request is served from the cache. If, however, there is a MISS, the request is served from the storage medium (e.g., the backend disk), bypassing the cache device 104. If the read request is a partial HIT (e.g., the requested data is only partially available in the cache device), the portion present in the cache device is read from the cache and the remaining data is retrieved from the storage medium as shown in
Referring to
When the host issues a write request that has a size that is either a multiple of a cache-line size or which is unaligned to a sub-cache-line boundary, a check is made to determine if the requested data blocks are already in a cache window. If not, then the requested blocks are read in from the storage medium as shown in
Referring to
In various embodiments, for each of the storage devices 108a-108n containing a file system, a corresponding journal tracking structure 806 is identified by a device ID of the particular storage device (e.g., <Device ID>). The tracking structure 806 comprises fields for the following entries: Device ID 808, Cache Window size, Cache-Line size, Start LBA of the Journal area, End LBA of the Journal area, LRU Head pointer 802, MRU Head pointer 804, Current Journal Window pointer 810. For each storage device, the cache windows for the journals are arranged in the form of a doubly-linked list resulting in the LRU/MRU chain 800 pointed at by the LRU Head pointer 802 and MRU Head pointer 804, respectively (as shown in
Linear searching for an entry starting from the location pointed at by the MRU Head pointer 804 can be expensive in terms of time in some of the configurations. In such cases where search efficiency is important, the entries can additionally be placed on a Hash list 812 where the hashing is done based on logical block addresses (e.g., <LBA Number>). The <LBA Number> corresponds to the <Start LBA> of the I/O request for which a search is made for a matching entry.
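The per-device journal tracking structure 806 with the fields listed above can be sketched as follows. Pointer types are simplified to an opaque window descriptor, and the initializer is an illustrative assumption; the real implementation would additionally link window descriptors into the LRU/MRU chain and the optional hash list.

```c
#include <stdint.h>
#include <stddef.h>

struct cache_window;   /* opaque journal cache window descriptor */

/* Illustrative rendering of the journal tracking structure 806. */
struct journal_tracker {
    uint32_t device_id;            /* Device ID 808 */
    uint32_t window_bytes;         /* Cache Window size */
    uint32_t line_bytes;           /* Cache-Line size */
    uint64_t journal_start_lba;    /* Start LBA of the Journal area */
    uint64_t journal_end_lba;      /* End LBA of the Journal area */
    struct cache_window *lru_head; /* LRU Head pointer 802 */
    struct cache_window *mru_head; /* MRU Head pointer 804 */
    struct cache_window *current;  /* Current Journal Window pointer 810 */
};

/* Allocation on the first journal write: identifiers initialized, chain
 * heads empty, current window to be pointed at a fresh journal window. */
static struct journal_tracker tracker_init(uint32_t dev,
                                           uint64_t start_lba,
                                           uint64_t end_lba)
{
    struct journal_tracker t = {0};
    t.device_id = dev;
    t.journal_start_lba = start_lba;
    t.journal_end_lba = end_lba;
    return t;
}
```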
The Current Journal Window field 810 points to the most-recent journal entry that is being updated and is not full. Once this cache window is full (e.g., an update results in reaching the End LBA of the cache window pointed to by the Current Journal Window field 810), the cache window is inserted at the location pointed to by the MRU Head pointer 804 after setting the Current Journal Window field 810 to point to a newly allocated journal cache window.
In various embodiments, a separate free list 814 is maintained for journal I/Os. The free list 814 is used to control and provide an upper bound on how many cache windows journal I/Os claim. Even among the different journals, those serving meta-data intensive workloads should be allocated more journal cache windows. The free list 814 comes from the free list of (data) cache windows itself. However, managing a separate free list of journal cache windows gives more control over allocating, growing, and shrinking the resources dedicated to journal cache windows. Another characteristic of the MRU entries is that the entries are sorted by their respective LBAs in decreasing order.
Since the journal is circular, the journal can wrap around (as shown in
In various embodiments, once a file system is mounted from a storage device, the following steps are performed on the first journal write (e.g., when the first journal entry is written to a journal device): the journal tracking structure 806 is allocated; the Device ID field 808 is initialized to point to the journal device; the Cache Window size, Cache-Line size, Start LBA of the Journal area, and End LBA of the Journal area fields are initialized based on the file system; the LRU Head pointer 802 and MRU Head pointer 804 are initialized to empty; and the Current Journal Window field 810 is set to point to a newly allocated journal cache window (as described above in connection with
At least one active journal cache window is implemented for each storage device 108a-108n once the file system on the respective storage device 108a-108n is mounted and the first journal entry has been written. The at least one active journal cache window is pointed at by the Current Journal Window field 810 in the journal tracking structure 806, as explained above. For each journal tracking structure 806, the following parameters are tracked: min_size (in LBAs)=8 (e.g., 4 KB); max_size (in LBAs)=total journal size; curr_size=the current size (in LBAs) allocated for the journal. The total number of free cache windows for journals can be based on a small percentage of total data space (e.g., 1%), and may be programmable.
The free list of journal cache windows 814 can be managed either as a local pool for each device or as a global pool across all devices. Implementing a local pool is trivial, but is sub-optimal: if the I/O workload does not generate journal I/O entries, the corresponding cache remains unused and is hence wasted. Implementing a global pool is complex, but makes optimal use of the corresponding cache windows. In addition, the global pool allows for over allocation based on demand from file systems that have high journal I/O workload. Later, when there is pressure on journal pages (e.g., no free cache windows in the free list 814), the over allocated journal cache windows can be freed back. Since such global pool management techniques are well known and conventionally available, no further description is necessary.
Searching whether a journal page is cached may be implemented by scanning the journal cache windows, starting from the MRU end, for a window whose LBA range contains the requested block (or, where search efficiency is important, by consulting the hash list 812).
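One possible rendering of that search is sketched below. An array ordered MRU-first (i.e., by decreasing start LBA, as the entries are sorted) stands in for the doubly-linked MRU chain; a real implementation would follow the chain's next pointers or consult the hash list 812 keyed by start LBA. Names and types are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

struct journal_window { uint64_t start_lba; uint64_t end_lba; };

/* windows[] is ordered MRU-first, i.e., by decreasing start LBA. Returns
 * the index of the window containing lba, or -1 on a cache MISS. */
static int find_journal_window(const struct journal_window *windows, size_t n,
                               uint64_t lba)
{
    for (size_t i = 0; i < n; i++) {
        if (lba >= windows[i].start_lba && lba < windows[i].end_lba)
            return (int)i;                  /* cache HIT */
        if (lba >= windows[i].end_lba)
            break;   /* decreasing order: no older window can contain lba */
    }
    return -1;                              /* cache MISS */
}
```

The early `break` exploits the decreasing-LBA ordering of the MRU entries: once the requested block lies above the current window's range, no window further down the chain can contain it.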
The read I/O requests on the journal are handled in the manner described above in connection with
Referring to
In various embodiments, the host 902 comprises the cache manager 910, a block 912 and a block 914. The block 912 implements an optional random access memory (RAM) that may be configured to store images of cache management information (e.g., meta-data) in order to provide faster access. In some embodiments, the block 912 may be omitted. The block 914 implements a storage medium interface (I/F). The blocks 904, 910 and 912 (when present) generally implement journal caching data structures and schemes in accordance with embodiments of the invention.
The terms “may” and “generally” when used herein in conjunction with “is(are)” and verbs are meant to communicate the intention that the description is exemplary and believed to be broad enough to encompass both the specific examples presented in the disclosure as well as alternative examples that could be derived based on the disclosure. The terms “may” and “generally” as used herein should not be construed to necessarily imply the desirability or possibility of omitting a corresponding element.
The functions illustrated by the diagrams of
While the invention has been particularly shown and described with reference to embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made without departing from the scope of the invention.
This application relates to U.S. Provisional Application No. 61/888,736, filed Oct. 9, 2013 and U.S. Provisional Application No. 61/876,953, filed Sep. 12, 2013, each of which are hereby incorporated by reference in their entirety.