Embodiments of the invention relate generally to the caching of IO devices in an IO network, in which Host Bus Adapters (HBAs) and IO bridges/switches in the network use a combination of volatile and solid-state memories as cache. More particularly, the present invention relates to the distribution of the caching operation, and of its management, to each HBA/IO bridge-switch in the network.
A cache is a low-capacity, high-speed memory used as an aid to a high-capacity, but much slower, memory to speed up data transfer from one memory location to another. A cache can be a portion of the processor memory, or a collection of multiple external Random-Access Memories (RAMs). A Direct Memory Access Controller (DMAC) can be used, on behalf of the processor (CPU), to perform a sequence of data transfers from one memory location to another, or from an IO device to a memory location (and vice versa), when large amounts of data are involved. A DMAC may have multiple channels, each of which can be programmed to perform a sequence of DMA operations. The utilization of cache and DMAC in Host Bus Adapters (HBAs) and IO bridges-switches in an IO network, and the use of these HBA/IO bridge-switch caches in a modular manner, greatly improve the overall performance of the network in terms of data transfer rate and bandwidth.
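As an illustration only, the sketch below shows, in C, how a CPU might hand a chain of transfer descriptors to one DMA channel so that the DMAC can carry out the sequence of transfers on the CPU's behalf. The structure layout, field names, and the `channel_head_reg` register are hypothetical and are not taken from any particular DMAC.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical DMA descriptor: one entry in a channel's transfer chain. */
struct dma_descriptor {
    uint64_t src_addr;               /* source memory or IO address          */
    uint64_t dst_addr;               /* destination memory or IO address     */
    uint32_t length;                 /* bytes to move for this descriptor    */
    struct dma_descriptor *next;     /* next transfer in the chain, or NULL  */
};

/* Program a sequence of transfers on one DMA channel (sketch only):
 * the CPU builds the chain once, then the DMAC walks it without
 * further CPU involvement. */
static void dma_program_chain(struct dma_descriptor *chain,
                              volatile uint64_t *channel_head_reg)
{
    /* Hand the head of the descriptor chain to the channel's head register;
     * the hardware is assumed to fetch and execute descriptors in order. */
    *channel_head_reg = (uint64_t)(uintptr_t)chain;
}
```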
One current system ("Method, System, and Program for Maintaining Data in Distributed Caches," U.S. Pat. No. 6,973,546) describes the distribution of network IO data to the caches of multiple or all components in the network. The implementation has two main components: (1) a central cache directory, which lists the location of all cached IO data in the entire network, and (2) cache servers, each of which connects to a host/client, manages all its requests, and caches a copy of the requested data if it does not yet have one. Whenever a host requests IO data from a connected cache server and the requested data is not in that server's cache, the cache server sends a request to the component managing the central directory to determine the cache location of the requested IO data. Once the location is known, the cache server fetches the IO data by sending a request to that cache location. If the requested IO data is not yet cached anywhere, the cache server fetches it from the actual target, caches the data, and updates the central directory by sending a message to the component that handles it. This implementation makes the component that handles the central directory of caches a bottleneck, since all the cache servers in the network send requests to it to determine the cache location of IO data requested by the hosts/clients connected to them. The latency before the requested IO data returns to the requesting host/client is also longer because of the number of steps in fetching it: (1) request the cache location of the IO data from the central directory, (2) fetch the IO data from the location returned by the previous request, and (3) send the fetched data to the host/client. Note that additional steps are needed when the requested IO data is not found in the central directory. Another problem introduced by this implementation is that the network does not scale, because of the limit on the number of entries that can be placed in the central directory.
Another current system ("Scalable Distributed Caching System," U.S. Pat. No. 5,933,849) describes distributed caching of network data using a cache directory that is distributed to network components. The implementation has three main components: (1) a receive cache server, which is directly connected to a host/client and does not have a copy of the data requested by the host/client in its cache, (2) a directory cache server, which stores a directory list of the distributed cached data, and (3) an object cache server, which stores a copy of the data. This implementation is similar to the first current system discussed above, except that the directory cache server is not centralized and the directory list of the entire cached data is distributed to the network components. A receive cache server implements a locator function, which it uses to determine the directory cache that stores the directory list containing the requested data. The directory list includes the network address of the requested data as well as the network addresses of the object cache servers, each of which has a copy of the requested data. Once the location of the directory cache is known, the receive cache server requests the data by sending a message to the directory cache server. The directory cache server then polls the object cache servers on the directory list, which in turn send messages to the receive cache server indicating whether the requested data is cached. The receive cache server sends a data request message to the first object cache server that indicated it has a cached copy of the requested data. The object cache server sends the requested cached copy in response, and the receive cache server stores a copy of the data before forwarding it to the source of the request (the host/client). The directory list is then updated to add the network address of the receive cache server to the list of cache servers having a copy of the recently requested data. The coherency of cached copies is managed by the distributed deletion of old cached copies whenever a new copy from the original source of the data is sent to a cache server, and by a time-to-live parameter associated with each copy of data in cache. The time-to-live parameter specifies the date and time at which the cached data expires and is to be deleted from the cache. If the requested data is not found in the directory list determined by the locator function, the receive cache server sends the request to the actual source of the data. The response data sent by the actual source is then cached by the receive cache server before being forwarded to the host/client, and the receive cache server creates a directory entry for the newly cached data. This implementation offers scalability and requires less memory than the conventional art. However, the latency before the requested data returns to the host is still long because of the series of steps the receive cache server must perform to fetch the requested data from an object cache server or from the actual source of the data.
Other improvements over the previously discussed conventional arts are still possible. One is the utilization of solid-state memories as cache extensions. Solid-state memory, although it has a longer access time than a typical cache (SRAM, SDRAM, etc.), is non-volatile and can be used to back up cache content in the event of a power failure. Once power returns, the previous cache content can be reloaded from the cache back-ups, thus maintaining the performance of the component. One patent ("Hybrid Storage Device") describes the employment of solid-state memories as a level 2 cache. However, that implementation discusses the use of solid-state memories only as cache for locally-connected IO devices, particularly locally-connected storage devices. The possibility of extending the use of the cache to store copies of remote IO devices is not discussed. Rotational disk drives, being non-volatile, can also be used as cache back-ups; however, due to their longer access time, solid-state memories are preferred.
In an embodiment of the invention, a network of multiple IO devices, multiple hosts, and multiple HBA/IO bridges-switches with a multi-level cache comprised of volatile and solid-state memories is described. Distributing the caching of data from the IO devices to each HBA/IO bridge-switch within the network results in a high-performance, high-bandwidth IO network.
Different interfaces can be used to connect the HBA/IO bridge-switch to a host, to IO devices, and to other HBA/IO bridges-switches. PCI, PCI-X, PCI Express, and other memory-based interconnects can be used to directly connect the HBA/IO bridge-switch to a host. Standard IO interfaces such as IDE, ATA, SATA, USB, SCSI, Fibre Channel, Ethernet, etc. can be used to connect the HBA/IO bridge-switch to one or more IO devices. These IO interfaces can also be used to connect the HBA/IO bridge-switch indirectly to a host via a PCI/PCI-X/PCI Express bridge to these IO interfaces. Serial interfaces such as PCI Express, Fibre Channel, Ethernet, etc. can be used to connect the HBA/IO bridge-switch to other HBA/IO bridges-switches.
A multi-level cache composed of volatile and solid-state memories is used to store the latest IO data that passed through the HBA/IO bridge-switch. The cache content can come from the IO devices local to the HBA/IO bridge-switch or from remote IO devices connected to other HBA/IO bridges-switches. With the aid of a DMAC, the speed of data transfer from an IO device to the HBA/IO bridge-switch cache, or from the HBA/IO bridge-switch cache to an IO device, is increased. Furthermore, the use of solid-state memories as cache back-ups strengthens the performance of the entire network by bypassing the initial caching process after a power failure, since the cache contents from before the power failure are retrieved from the solid-state memories once power is reinstated.
Generic Switch Cache Remap Tables are maintained by each HBA/IO bridge-switch to identify the source device of specific cached data. The source IO device can be connected directly to the HBA/IO bridge-switch, or it can be a remote IO device connected to another HBA/IO bridge-switch.
A Remote Copies Table (inside each Generic Switch Cache Remap Table) is maintained by each HBA/IO bridge-switch to track multiple copies of its cached data, if any exist. This table indicates the device IDs of the HBA/IO bridges-switches that hold a copy of the local cache.
A Cache Control Information Table (inside each Generic Switch Cache Remap Table) is used by each HBA/IO bridge-switch to maintain cache coherency with the other HBA/IO bridges-switches. This information is used to synchronize the local cache with those of the other HBA/IO bridges-switches. Using the Remote Copies Table, remote copies of the local IO cache are monitored by the HBA/IO bridge-switch; in the event of a cache update, the other HBA/IO bridges-switches holding the remote copies are informed that their copies are stale, so they can either flush their copy or get the updated one. Copies of remote IO caches are likewise monitored by each HBA/IO bridge-switch; in the event of a cache update, received information on the validity of a remote IO cache is checked against the list of remote IO caches. Once a copy is determined to be stale, the HBA/IO bridge-switch may request an updated copy of the remote IO cache or wait until it is instructed to fetch it.
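For illustration, the following C sketch shows one possible in-memory representation of a Generic Switch Cache Remap Table entry together with its embedded Remote Copies and Cache Control Information entries. The field names, sizes, and the cache-state encoding are assumptions made for this example and are not mandated by the embodiments described above.

```c
#include <stdint.h>
#include <stdbool.h>

#define MAX_REMOTE_COPIES 32   /* assumed limit on tracked remote copies */

/* Assumed cache-line states used for coherency bookkeeping. */
enum cache_state { CACHE_INVALID, CACHE_CLEAN, CACHE_DIRTY, CACHE_STALE };

/* Remote Copies Table entry: which other HBA/IO bridge-switch holds a copy. */
struct remote_copy {
    uint16_t switch_device_id;   /* device ID of the remote bridge-switch */
    bool     valid;
};

/* Cache Control Information: per-entry state used to keep caches coherent. */
struct cache_control_info {
    enum cache_state state;
    uint32_t         generation;  /* bumped on each update, assumed scheme */
};

/* One Generic Switch Cache Remap Table entry: maps cached data back to the
 * source IO device, which may be local or behind another bridge-switch. */
struct cache_remap_entry {
    uint16_t source_switch_id;    /* switch that owns the source IO device  */
    uint16_t source_device_id;    /* the IO device itself                   */
    uint64_t lba;                 /* logical block address of the data      */
    uint64_t cache_addr;          /* where the data sits in the local cache */
    struct cache_control_info ctrl;
    struct remote_copy remote_copies[MAX_REMOTE_COPIES];
};
```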
Different cache operations can be used to enhance the performance of the IO network in terms of data throughput, transfer rate, and cache-hit percentage. Cache migration is used to move remote IO caches to HBA/IO bridges-switches near the requesting host for faster data transfer and cache expansion. Cache splitting is used to provide more bandwidth by routing different cached data at the same time through different paths toward the same destination. Cache splitting can also be used for cache redundancy by sending duplicate copies of cached data to different locations at the same time.
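A minimal sketch, assuming an implementation-specific split limit and equal-sized splits, of how cached data might be divided before being routed over different paths for cache splitting. The helper below is illustrative only; a real switch could size splits by path bandwidth, cache state, or other factors.

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_SPLITS 8   /* assumed implementation-specific split limit */

/* Divide a cached region into roughly equal splits, one per available path.
 * Returns the number of splits actually produced. */
static size_t compute_splits(uint64_t total_len, size_t paths_available,
                             uint64_t split_len[MAX_SPLITS])
{
    size_t n = paths_available < MAX_SPLITS ? paths_available : MAX_SPLITS;
    if (n == 0 || total_len == 0)
        return 0;

    uint64_t base = (total_len + n - 1) / n;   /* ceiling division */
    uint64_t remaining = total_len;
    size_t i;
    for (i = 0; i < n && remaining > 0; i++) {
        split_len[i] = remaining < base ? remaining : base;
        remaining -= split_len[i];
    }
    return i;   /* each split may still be chunked to the protocol's max size */
}
```

Each split returned here could still be chunked further to the network protocol's maximum transfer size before it is placed on its path.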
In the event of a power failure, a POWER GUARD™ can be used to temporarily supply power to the HBA/IO bridge-switch. During this time, the current cache content is transferred to the solid-state memories. Solid-state memories, being non-volatile, retain their contents even in the absence of power. Once power returns to normal, the cache content from before the power failure is retrieved from the solid-state memories, thus maintaining the performance of the HBA/IO bridges-switches as well as of the IO network.
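The following C sketch outlines, under assumed sizes and hypothetical memory regions, the back-up and restore sequence described above: the volatile cache image is copied to solid-state memory while the POWER GUARD™ holds the switch up, and reloaded at the next power-up.

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define CACHE_BYTES (4u * 1024u * 1024u)   /* assumed volatile cache size */

/* Hypothetical regions; on real hardware these would be device mappings. */
static uint8_t volatile_cache[CACHE_BYTES];
static uint8_t solid_state_backup[CACHE_BYTES];

/* Called while the backup power source still holds the switch up:
 * copy the live cache image into non-volatile memory. */
static void cache_backup_on_power_fail(void)
{
    memcpy(solid_state_backup, volatile_cache, CACHE_BYTES);
    /* a real device would also persist the remap tables and flush NVM queues */
}

/* Called at power-up: restore the pre-failure cache image so the switch
 * resumes with a warm cache instead of re-priming from the IO devices. */
static bool cache_restore_on_power_up(bool backup_valid)
{
    if (!backup_valid)
        return false;
    memcpy(volatile_cache, solid_state_backup, CACHE_BYTES);
    return true;
}
```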
So that the manner in which the above recited features, advantages and objects of the present invention are attained and can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments thereof which are illustrated in the appended drawings.
It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the present invention may admit to other equally effective embodiments.
In the following detailed description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the various embodiments of the present invention. Those of ordinary skill in the art will realize that these various embodiments of the present invention are illustrative only and are not intended to be limiting in any way. Other embodiments of the present invention will readily suggest themselves to such skilled persons having the benefit of this disclosure.
In addition, for clarity purposes, not all of the routine features of the embodiments described herein are shown or described. One of ordinary skill in the art would readily appreciate that in the development of any such actual implementation, numerous implementation-specific decisions may be required to achieve specific design objectives. These design objectives will vary from one implementation to another and from one developer to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming but would nevertheless be a routine engineering undertaking for those of ordinary skill in the art having the benefit of this disclosure. The various embodiments disclosed herein are not intended to limit the scope and spirit of the present disclosure.
Preferred embodiments for carrying out the principles of the present invention are described herein with reference to the drawings. However, the present invention is not limited to the specifically described and illustrated embodiments. A person skilled in the art will appreciate that many other embodiments are possible without deviating from the basic concept of the invention. Therefore, the principles of the present invention extend to any work that falls within the scope of the appended claims.
As used herein, the terms “a” and “an” do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items.
In
If both cache A (211) and cache D (231) have the data, and host C (222) requests the same data, host C (222) may get the data from either cache A (211) or cache D (231) without knowing it, depending on which cache transfer is more optimized. Using standard protocols (such as Advanced Switching, Ethernet, etc.), host C (222), through switch C (220), sends its request to IO device D (232) through switch D (230) via any of these four paths, depending on which is most optimized or has the least traffic: switch C (220) to switch A (210) to switch D (230); switch C (220) to switch B (213) to switch D (230); switch C (220) to switch A (210) to switch B (213) to switch D (230); or switch C (220) to switch D (230). If the path selected for host C's (222) request to IO D (232) passes through switch A (210), and switch A (210) is configured to snoop all passing requests, switch A (210) may respond to the request since the requested IO data is already in cache A (211). The data path from switch A (210) to switch C (220) is shown by the vertical line (202). If the request passes through another path and reaches switch D (230), then after processing the request and learning that a copy of the requested data is also stored in cache A (211), switch D (230) has two options for sending the requested data: either directly from its own cache, or by instructing switch A (210) to send the data from its cache on switch D's behalf. These two options are shown by the vertical arrow (202) traversing switch A (210) and switch C (220), and the L-shaped arrow (201) traversing switch D (230) and switch C (220). The system also has the option to cache the read data in cache C (221), making the data available in three possible locations in the system.
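As a hedged illustration of how one of the candidate paths might be chosen, the sketch below scores each route by hop count and queued traffic; the cost function is a placeholder, not a selection algorithm required by the embodiments.

```c
#include <stdint.h>
#include <stddef.h>

/* One candidate route through the switch fabric. */
struct route {
    uint8_t  hop_count;      /* number of intermediate switches        */
    uint32_t queued_bytes;   /* current traffic already queued on path */
};

/* Pick the "most optimized" path with a simple illustrative cost:
 * fewer hops and less queued traffic win.  A real switch could also weigh
 * link speed, cache locality, and other factors. */
static size_t select_path(const struct route *routes, size_t count)
{
    size_t best = 0;
    for (size_t i = 1; i < count; i++) {
        uint64_t cost_i    = (uint64_t)routes[i].hop_count * 1000000u
                           + routes[i].queued_bytes;
        uint64_t cost_best = (uint64_t)routes[best].hop_count * 1000000u
                           + routes[best].queued_bytes;
        if (cost_i < cost_best)
            best = i;
    }
    return best;   /* index of the selected route */
}
```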
This distributed cache arrangement allows redundancy and much flexibility in optimizing the IO access of the hosts. The general goal of this optimization is to cache the data as near to the requesting host as possible. Another benefit of this scheme is that if the data is too large for one switch's cache, there is the option of caching the data in the caches of nearby switches.
Later, when host C (322) needs to read the recently written data, assume host C (322) initially requests the data from IO device D (332) through switch C (320) and switch D (330). Switch D (330) has two options for sending the requested data to host C (322) via switch C (320): (1) Switch D (330) can supply the requested data by reading from cache D (331) and returning it to switch C (320). This read data path is shown by the L-shaped arrow (301) traversing switch D (330) and switch C (320). (2) Switch D (330) can inform switch C (320) that the requested data is in both cache A (311) and cache D (331). Switch C (320), in this case, now has the option to read from either of the two caches, via the vertical arrow (302) traversing switch A (310) and switch C (320) or the L-shaped arrow (301) traversing switch D (330) and switch C (320). The final chosen path is based on algorithms and on factors such as routing, bandwidth, and cache state considerations.
If the switch C (320) to switch D (330) path is not selected because of ongoing traffic through this path, and another path is selected instead such that the read request passes through switch A (310), switch A (310) may respond to the request since cache A (311) contains the requested data. As described in the
If host C (322) happens to write data to IO device D (332) and a cache hit occurs in cache C (321), messages are sent to both switch A (310) and switch D (330) informing them of the dirty cache condition. Switch A (310) and switch D (330) then have the option either to invalidate or to update their respective cache entries. This ensures that cache coherency is maintained among cache A (311), cache C (321), and cache D (331).
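A short sketch of the coherency notification described above: on a local write hit, the switch walks its Remote Copies entries and sends a message to every holder, which may then invalidate its copy or fetch the update. The message types and the `send_coherency_message` transport hook are assumptions for illustration.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_REMOTE_COPIES 32

/* Assumed message types used to keep remote copies coherent. */
enum coherency_msg { MSG_INVALIDATE_COPY, MSG_UPDATE_AVAILABLE };

struct remote_copy {
    uint16_t switch_device_id;
    bool     valid;
};

/* Assumed transport hook: sends one coherency message to a remote switch. */
extern void send_coherency_message(uint16_t dest_switch_id,
                                   enum coherency_msg msg,
                                   uint64_t lba);

/* After a local write hit makes an entry dirty, notify every switch that
 * holds a copy; each receiver may invalidate its copy or fetch the update. */
static void notify_remote_copies(const struct remote_copy *copies,
                                 size_t count, uint64_t lba)
{
    for (size_t i = 0; i < count && i < MAX_REMOTE_COPIES; i++) {
        if (copies[i].valid)
            send_coherency_message(copies[i].switch_device_id,
                                   MSG_INVALIDATE_COPY, lba);
    }
}
```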
This distributed cache procedure allows the requested data to be transferred three times faster than a normal transaction, since the data bandwidth is tripled. The data bandwidth can be increased further depending on the number of participating HBA/IO bridges-switches in the network.
This distributed caching technique is a variation of “cache splitting,” which is used not only to increase the data transfer speed but also to extend the cache by utilizing the caches of other HBA/IO bridges-switches. “Cache migration” is not limited to the scenario discussed above. It can also be used in other cases, for example when the source switch's cache itself is insufficient to cache all the requested data: after some data have been fetched and transferred to other caches, the source switch replaces that cache content with new IO data. It is also not limited in the number of remote IO caches across which data may be distributed. Depending on the requirement or the application, data can be distributed to as many as all the components in the network.
The selection of the IO switches nearest the host switch, switch A (502), and the selected paths from switch I (512) to these IO switches are assumptions only and may or may not be the most optimized selection for the sample scenario. Note that available resources, current traffic along each path, the number of intermediate IO switches to cross before the target is reached, and other factors must be considered when selecting the paths and cache extensions, because of the latencies and overhead that may be involved in performing the distributed caching operation. This scenario only illustrates how distributed caching with “cache migration” works; appropriate caching algorithms must be considered to implement this cache operation effectively.
Similar to its read counterpart, this split-shared caching technique is used not only to increase the data transfer speed but also to extend the cache by utilizing the caches of other HBA/IO bridges-switches. “Cache split-sharing” is not limited to the scenario discussed above. It can also be used in other cases: for example, the IO device switch's cache may store the entire write data before it is split and distributed to nearby switches; or the cache may be used only as a temporary buffer for a split, so that the actual write data are distributed only to the nearby switches, excluding the IO switch connected to the host; or any other possible case. Note that, depending on the requirement or the application, data can be distributed to as many as all the components in the network.
The selection of the IO switches nearest the host switch, switch A (603), and the selected paths to switch I (618) from these IO switches are assumptions only and may or may not be the most optimized selection for the sample scenario. Note that available resources, current traffic along each path, the number of intermediate IO switches to cross before the target is reached, and other factors must be considered when selecting the paths and cache extensions, because of the latencies and overhead that may be involved in performing the distributed caching operation. This scenario only illustrates how distributed caching with “cache split-sharing” works; appropriate caching algorithms must be considered to implement this cache operation effectively.
Note that the flushing method described is not limited to the IO network illustrated in
For case (1), after determining that a normal transfer will be performed, the read data is fetched from the cache and sent to the network (1207). Depending on the network's maximum allowable data size per transfer, the read data may be chunked before being sent to the requestor. If there are more data to be sent (1210), the remaining data are fetched from the cache and sent through the network. This procedure continues until all the read data are transferred to the requestor.
Case (2) is the process of splitting the read data before sending it to the requestor. Unlike case (1), which uses a single path to the requestor when sending the read data, this case utilizes multiple paths, thus increasing the speed and size of the data transfer. Before the IO switch sends the read data, it first divides the data into splits (1202). The maximum number of splits is implementation-specific, and the size of each split can be as large as the maximum data transfer size allowed by the network protocol. The maximum number of paths to use when sending the splits is also implementation-specific. After determining the number of splits, the IO switch checks a routing table for different paths to the requestor and selects the optimized paths (1205). Once the paths and the number of splits are selected, the IO switch starts sending the read data, in the form of splits, to the requestor using the selected paths. If there are more data to send (1211), the splits are fetched from the cache and sent through the paths. This procedure continues until all the read data are sent to the requestor.
Case (3), termed cache migration, is essentially the same as case (2) except that the targets of the splits are the IO switches near the requestor. After computing the number of splits and determining that migration is to be performed (1203), the IO switch determines the other IO switches near the requestor (1204). Once they are determined, the IO switch looks up the paths to these neighbors in a routing table (1206) and sends the read data, in splits, to these neighbors via the selected paths. The IO switch continues to fetch the read data from cache until all of it has been transmitted to the requestor's neighbors. Once all the data are transmitted, the IO switch creates a message (1215) informing the requestor of the locations of the split read data, from which the requestor can eventually fetch it.
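The sketch below consolidates the three read-return cases into one hypothetical routine: case (1) chunks the data over a single path, case (2) spreads splits over several paths to the requestor, and case (3) stages the splits at the requestor's neighbors and then sends the location message. All helper functions are assumed to exist elsewhere in the switch firmware and are named here only for illustration.

```c
#include <stdint.h>
#include <stddef.h>

enum read_return_mode { READ_NORMAL, READ_SPLIT, READ_MIGRATE };

/* Assumed helpers provided elsewhere in the switch firmware (hypothetical). */
extern size_t fetch_from_cache(uint64_t offset, uint8_t *buf, size_t max_len);
extern void   send_on_path(size_t path, const uint8_t *buf, size_t len,
                           uint16_t dest_switch_id);
extern size_t pick_paths(uint16_t dest_switch_id, size_t wanted, size_t paths_out[]);
extern void   send_migration_notice(uint16_t requestor_id,
                                    const uint16_t holders[], size_t n);

/* Return cached read data to the requestor using one of the three cases. */
static void return_read_data(enum read_return_mode mode, uint64_t cache_offset,
                             uint64_t total_len, uint64_t max_xfer,
                             uint16_t requestor_id,
                             const uint16_t neighbors[], size_t n_neighbors)
{
    uint8_t buf[4096];                          /* assumed staging buffer */
    size_t  paths[8];
    size_t  n_paths = pick_paths(requestor_id,
                                 mode == READ_NORMAL ? 1 : 8, paths);

    if (n_paths == 0 || (mode == READ_MIGRATE && n_neighbors == 0))
        return;

    for (uint64_t sent = 0, i = 0; sent < total_len; i++) {
        size_t chunk = (total_len - sent) < max_xfer
                     ? (size_t)(total_len - sent) : (size_t)max_xfer;
        if (chunk > sizeof buf)
            chunk = sizeof buf;
        chunk = fetch_from_cache(cache_offset + sent, buf, chunk);
        if (chunk == 0)
            break;                              /* nothing more to fetch */

        if (mode == READ_MIGRATE)               /* case 3: stage near requestor */
            send_on_path(paths[i % n_paths], buf, chunk,
                         neighbors[i % n_neighbors]);
        else                                    /* cases 1 and 2 */
            send_on_path(paths[i % n_paths], buf, chunk, requestor_id);
        sent += chunk;
    }

    if (mode == READ_MIGRATE)   /* tell the requestor where its data now sits */
        send_migration_notice(requestor_id, neighbors, n_neighbors);
}
```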
In case (1), the write data needs only the local HBA cache, and the host command requires write through. First, the CPU checks whether the write data needs the network distributed shared cache (1300). If not, the CPU checks whether the command requires write back (1301). If not, the command requires write through, and the CPU translates the command and sends it to the IO switch that has the target IO device (1302). In (1303) the CPU receives an (optional) write acknowledge, or some handshake from the IO switch (that has the target IO device), indicating that the target IO device is ready to receive the write data. After this, the CPU proceeds with transferring the write data to the target IO device (1304). After the write data has been transferred to the device, the HBA optionally (depending on the protocol) receives completion status information from the target IO device (1305). For this case, the HBA switch needs to talk directly only with the IO switch.
In case (2), the write data needs only the local HBA cache, and the host command requires write back. First, the CPU checks whether the write data needs the network distributed shared cache (1300). If not, the CPU checks whether the command requires write back (1301). If yes, the HBA switch already has the write data in its cache, and it optionally generates completion status for the host (depending on the protocol) (1306). For this case, the HBA switch does not need to talk with any other switch in the network immediately.
In case (3), the write data needs the distributed shared cache, and the host command requires write back. First, the CPU checks whether the write data needs the network distributed shared cache (1300). If yes, the CPU translates and splits the write command to be transmitted to the remote switches with shared cache (1307). Then the CPU checks whether the command requires write back (1308). If yes, the CPU sends the split commands to the remote switches that have the shared cache (1309). In (1310) the CPU receives an (optional) write acknowledge, or some handshake from the remote switches, indicating that they are ready to receive the write data. After this, the CPU proceeds with transferring the write data to the remote switches (1311). After the write data has been transferred (depending on the protocol), the HBA optionally waits for all the completion status information from the remote switches (1312). For this case, the HBA switch needs to talk directly only with the remote switches.
In case (4), the write data needs the distributed shared cache, and the host command requires write through. First, the CPU checks whether the write data needs the network distributed shared cache (1300). If yes, the CPU translates and splits the write command to be transmitted to the remote switches with shared cache (1307). Then the CPU checks whether the command requires write back (1308). If not, write through is required, and the CPU sends the split commands to the remote switches that have the shared cache, and to the IO switch (so the HBA switch can write its share of the split cache through to the target device) (1313). In (1314) the CPU receives an (optional) write acknowledge, or some handshake from the remote and IO switches, indicating that they are ready to receive the write data. After this, the CPU proceeds with transferring the write data to the remote and IO switches (1315). After the write data has been transferred (depending on the protocol), the HBA optionally waits for all the completion status information from the remote and IO switches (1316). For this case, the HBA switch needs to talk directly with the remote and IO switches.
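A minimal sketch of the four-way write decision walked through in cases (1) through (4): first test whether the distributed shared cache is needed (1300), then test write back versus write through (1301/1308). The helper names are hypothetical placeholders for the steps described above; acknowledges and completion handling are omitted for brevity.

```c
#include <stdbool.h>

/* Assumed helpers (hypothetical); each corresponds to a step in the flow. */
extern bool needs_shared_cache(const void *cmd);     /* step 1300            */
extern bool requires_write_back(const void *cmd);    /* steps 1301 and 1308  */
extern void send_to_io_switch(const void *cmd);      /* step 1302            */
extern void split_and_send_to_remote_switches(const void *cmd,
                                              bool include_io_switch); /* 1307/1309/1313 */
extern void transfer_write_data(void);               /* steps 1304/1311/1315 */
extern void complete_to_host(void);                  /* step 1306            */

/* Dispatch a host write command according to the four cases described above. */
static void handle_host_write(const void *cmd)
{
    if (!needs_shared_cache(cmd)) {
        if (requires_write_back(cmd)) {
            /* Case 2: data already in the local HBA cache; complete now. */
            complete_to_host();
        } else {
            /* Case 1: write through to the IO switch owning the target device. */
            send_to_io_switch(cmd);
            transfer_write_data();
        }
    } else {
        if (requires_write_back(cmd)) {
            /* Case 3: split the write across the remote switches' shared cache. */
            split_and_send_to_remote_switches(cmd, false);
        } else {
            /* Case 4: split across the remote switches and the IO switch so the
             * data also reaches the target device (write through). */
            split_and_send_to_remote_switches(cmd, true);
        }
        transfer_write_data();
    }
}
```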
If in (1504) the message is determined to have no associated payload, the IO switch CPU processes the message and determines its type (1505). In this embodiment of the invention, the defined message-without-payload types are: (1) invalidate cache, (2) cache migrated, and (3) allocate cache. In (1507), it is checked whether the message relates to the allocation of cache space. If the message is determined to be cache allocation-related, it is further checked whether it is a cache request or a response (1508). If it is a cache allocation request, the IO switch CPU allocates some cache space (1512). Allocation of cache space includes flushing or invalidating some cache contents to make space for the incoming data to be cached, in case the cache is full or the cache space being requested by the message is larger than the current free space. After freeing some cache space, the CPU may create a response message indicating whether the space was successfully allocated and send it to the originator of the request. If the cache allocate message is a completion or a response to a previous request, the IO switch CPU processes the message (1509) and determines whether it has successfully been allocated space in another IO switch's cache. If it has been allocated cache space, the IO switch can start sending the data to the remote cache. The IO switch CPU also has the option not to use the allocated remote cache space if the returned response indicates a much smaller space than it requested, or if it has already transferred the data to a different IO switch's cache. It is assumed that after allocating cache space for another IO switch, the local CPU also allocates a limited time frame during which the remote IO switch may use the allocated cache space. If the time frame has expired, the allocated cache space is invalidated and can be used by the IO switch itself to cache local IO, or by a different IO switch as its extended cache.
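For illustration, the sketch below handles an "allocate cache" request message along the lines described above: free space is made by eviction if necessary, and the grant is returned with an assumed lease time after which the space may be reclaimed. The helper functions and the lease value are assumptions, not part of the described embodiment.

```c
#include <stdint.h>

/* Assumed helpers (hypothetical). */
extern uint64_t cache_free_space(void);
extern uint64_t cache_evict(uint64_t bytes_needed);   /* flush/invalidate; returns bytes freed */
extern void     send_alloc_response(uint16_t requestor_id, uint64_t bytes_granted,
                                    uint32_t lease_ms);

/* Handle an "allocate cache" request message (no payload).  Space is granted
 * for a limited time only; if the lease expires unused, the local switch
 * reclaims it for its own IO or for a different remote switch. */
static void handle_cache_alloc_request(uint16_t requestor_id,
                                       uint64_t bytes_requested)
{
    uint64_t free_now = cache_free_space();
    if (free_now < bytes_requested)
        free_now += cache_evict(bytes_requested - free_now);

    uint64_t granted = free_now < bytes_requested ? free_now : bytes_requested;
    send_alloc_response(requestor_id, granted, /* lease_ms: assumed value */ 5000);
}
```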
The Local IOC Usage Table (1710) identifies which bridge/switch in the system is currently caching (or using) the data from the IO device that is locally attached to a particular bridge/switch. This table is found in every bridge/switch that manages a local IO device attached to it. The basic entries of this table are the Location (1709) of the attached local IO device and the Remote Usage Mapping (1711) of the bridges/switches that are currently caching the attached local IO device. The Location (1709) entries hold the Device ID and the LBA of the data in the locally attached IO device. The Remote Usage Mapping (1711) entries may consist of a basic bit map (1708), with each bit corresponding to a particular bridge/switch Device ID. Data entries in the Local IOC Usage Table (1710) are not necessarily cached locally in the bridge/switch and may not be found in the corresponding local Generic Switch Cache Remap Table (1700).
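One possible C representation of a Local IOC Usage Table entry, with the Remote Usage Mapping kept as a bit map indexed by bridge/switch Device ID. The table size and field widths are assumptions for this example only.

```c
#include <stdint.h>
#include <stdbool.h>

#define MAX_SWITCH_ID 256   /* assumed upper bound on bridge/switch device IDs */

/* One Local IOC Usage Table entry: records which remote bridges/switches
 * currently cache data from this locally attached IO device. */
struct local_ioc_usage_entry {
    uint16_t device_id;                       /* locally attached IO device   */
    uint64_t lba;                             /* block address of the data    */
    uint8_t  remote_usage[MAX_SWITCH_ID / 8]; /* one bit per bridge/switch ID */
};

/* Mark a remote bridge/switch as currently caching this entry's data.
 * Callers are assumed to pass IDs below MAX_SWITCH_ID. */
static void mark_remote_user(struct local_ioc_usage_entry *e, uint16_t switch_id)
{
    e->remote_usage[switch_id / 8] |= (uint8_t)(1u << (switch_id % 8));
}

/* Test whether a remote bridge/switch is recorded as caching this data. */
static bool is_remote_user(const struct local_ioc_usage_entry *e, uint16_t switch_id)
{
    return (e->remote_usage[switch_id / 8] >> (switch_id % 8)) & 1u;
}
```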
The foregoing described embodiments of the invention are provided as illustrations and descriptions. They are not intended to limit the invention to the precise form described. In particular, it is contemplated that functional implementations of the invention described herein may be implemented equivalently in hardware, software, firmware, and/or other available functional components or building blocks, and that networks may be wired, wireless, or a combination of wired and wireless.
It is also within the scope of the present invention to implement a program or code that can be stored in a machine-readable or computer-readable medium to permit a computer to perform any of the inventive techniques described above, or a program or code that can be stored in an article of manufacture that includes a computer readable medium on which computer-readable instructions for carrying out embodiments of the inventive techniques are stored. Other variations and modifications of the above-described embodiments and methods are possible in light of the teaching discussed herein.
The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize.
These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
This application claims the benefit of and priority to U.S. Provisional Application 61/799,362, filed 15 Mar. 2013. This U.S. Provisional Application 61/799,362 is hereby fully incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4752871 | Sparks | Jun 1988 | A |
5111058 | Martin | May 1992 | A |
RE34100 | Hartness | Oct 1992 | E |
5222046 | Kreifels et al. | Jun 1993 | A |
5297148 | Harari et al. | Mar 1994 | A |
5341339 | Wells | Aug 1994 | A |
5371709 | Fisher et al. | Dec 1994 | A |
5379401 | Robinson et al. | Jan 1995 | A |
5388083 | Assar et al. | Feb 1995 | A |
5396468 | Harari et al. | Mar 1995 | A |
5406529 | Asano | Apr 1995 | A |
5432748 | Hsu et al. | Jul 1995 | A |
5448577 | Wells et al. | Sep 1995 | A |
5459850 | Clay et al. | Oct 1995 | A |
5479638 | Assar et al. | Dec 1995 | A |
5485595 | Assar et al. | Jan 1996 | A |
5488711 | Hewitt et al. | Jan 1996 | A |
5500826 | Hsu et al. | Mar 1996 | A |
5509134 | Fandrich et al. | Apr 1996 | A |
5513138 | Manabe et al. | Apr 1996 | A |
5524231 | Brown | Jun 1996 | A |
5530828 | Kaki et al. | Jun 1996 | A |
5535328 | Harari et al. | Jul 1996 | A |
5535356 | Kim et al. | Jul 1996 | A |
5542042 | Manson | Jul 1996 | A |
5542082 | Solhjell | Jul 1996 | A |
5548741 | Watanabe | Aug 1996 | A |
5559956 | Sukegawa | Sep 1996 | A |
5568423 | Jou et al. | Oct 1996 | A |
5568439 | Harari | Oct 1996 | A |
5572466 | Sukegawa | Nov 1996 | A |
5594883 | Pricer | Jan 1997 | A |
5602987 | Harari et al. | Feb 1997 | A |
5603001 | Sukegawa et al. | Feb 1997 | A |
5606529 | Honma et al. | Feb 1997 | A |
5606532 | Lambrache et al. | Feb 1997 | A |
5619470 | Fukumoto | Apr 1997 | A |
5627783 | Miyauchi | May 1997 | A |
5640349 | Kakinuma et al. | Jun 1997 | A |
5644784 | Peek | Jul 1997 | A |
5682509 | Kabenjian | Oct 1997 | A |
5737742 | Achiwa et al. | Apr 1998 | A |
5787466 | Berliner | Jul 1998 | A |
5796182 | Martin | Aug 1998 | A |
5799200 | Brant et al. | Aug 1998 | A |
5802554 | Caceres et al. | Sep 1998 | A |
5819307 | Iwamoto et al. | Oct 1998 | A |
5822251 | Bruce et al. | Oct 1998 | A |
5875351 | Riley | Feb 1999 | A |
5881264 | Kurosawa | Mar 1999 | A |
5913215 | Rubinstein et al. | Jun 1999 | A |
5918033 | Heeb et al. | Jun 1999 | A |
5930481 | Benhase | Jul 1999 | A |
5933849 | Srbljic et al. | Aug 1999 | A |
5943421 | Grabon | Aug 1999 | A |
5956743 | Bruce et al. | Sep 1999 | A |
6000006 | Bruce et al. | Dec 1999 | A |
6014709 | Gulick | Jan 2000 | A |
6076137 | Asnaashari | Jun 2000 | A |
6098119 | Surugucchi et al. | Aug 2000 | A |
6128303 | Bergantino | Oct 2000 | A |
6151641 | Herbert | Nov 2000 | A |
6215875 | Nohda | Apr 2001 | B1 |
6230269 | Spies et al. | May 2001 | B1 |
6298071 | Taylor et al. | Oct 2001 | B1 |
6363441 | Bentz et al. | Mar 2002 | B1 |
6363444 | Platko et al. | Mar 2002 | B1 |
6397267 | Chong, Jr. | May 2002 | B1 |
6404772 | Beach et al. | Jun 2002 | B1 |
6496939 | Portman et al. | Dec 2002 | B2 |
6526506 | Lewis | Feb 2003 | B1 |
6529416 | Bruce et al. | Mar 2003 | B2 |
6557095 | Henstrom | Apr 2003 | B1 |
6601126 | Zaidi et al. | Jul 2003 | B1 |
6678754 | Soulier | Jan 2004 | B1 |
6744635 | Portman et al. | Jun 2004 | B2 |
6757845 | Bruce | Jun 2004 | B2 |
6857076 | Klein | Feb 2005 | B1 |
6901499 | Aasheim et al. | May 2005 | B2 |
6922391 | King | Jul 2005 | B1 |
6961805 | Lakhani et al. | Nov 2005 | B2 |
6970446 | Krischer et al. | Nov 2005 | B2 |
6970890 | Bruce et al. | Nov 2005 | B1 |
6973546 | Johnson | Dec 2005 | B2 |
6980795 | Hermann et al. | Dec 2005 | B1 |
7103684 | Chen et al. | Sep 2006 | B2 |
7174438 | Homma et al. | Feb 2007 | B2 |
7194766 | Noehring et al. | Mar 2007 | B2 |
7263006 | Aritome | Aug 2007 | B2 |
7283629 | Kaler et al. | Oct 2007 | B2 |
7305548 | Pierce et al. | Dec 2007 | B2 |
7330954 | Nangle | Feb 2008 | B2 |
7372962 | Fujimoto et al. | May 2008 | B2 |
7386662 | Kekre | Jun 2008 | B1 |
7415549 | Vemula et al. | Aug 2008 | B2 |
7424553 | Borrelli et al. | Sep 2008 | B1 |
7430650 | Ross | Sep 2008 | B1 |
7490177 | Kao | Feb 2009 | B2 |
7500063 | Zohar et al. | Mar 2009 | B2 |
7506098 | Arcedera et al. | Mar 2009 | B2 |
7613876 | Bruce et al. | Nov 2009 | B2 |
7620748 | Bruce et al. | Nov 2009 | B1 |
7624239 | Bennett et al. | Nov 2009 | B2 |
7636801 | Kekre | Dec 2009 | B1 |
7660941 | Lee et al. | Feb 2010 | B2 |
7676640 | Chow | Mar 2010 | B2 |
7681188 | Tirumalai et al. | Mar 2010 | B1 |
7716389 | Bruce et al. | May 2010 | B1 |
7729730 | Zuo et al. | Jun 2010 | B2 |
7743202 | Tsai et al. | Jun 2010 | B2 |
7765359 | Kang et al. | Jul 2010 | B2 |
7877639 | Hoang | Jan 2011 | B2 |
7913073 | Choi | Mar 2011 | B2 |
7921237 | Holland et al. | Apr 2011 | B1 |
7934052 | Prins et al. | Apr 2011 | B2 |
7979614 | Yang | Jul 2011 | B1 |
8010740 | Arcedera et al. | Aug 2011 | B2 |
8032700 | Bruce et al. | Oct 2011 | B2 |
8156320 | Borras | Apr 2012 | B2 |
8161223 | Chamseddine et al. | Apr 2012 | B1 |
8165301 | Bruce et al. | Apr 2012 | B1 |
8200879 | Falik et al. | Jun 2012 | B1 |
8225022 | Caulkins | Jul 2012 | B2 |
8341311 | Szewerenko et al. | Dec 2012 | B1 |
8375257 | Hong et al. | Feb 2013 | B2 |
8447908 | Bruce et al. | May 2013 | B2 |
8510631 | Wu et al. | Aug 2013 | B2 |
8560804 | Bruce et al. | Oct 2013 | B2 |
8707134 | Takahashi et al. | Apr 2014 | B2 |
8713417 | Jo | Apr 2014 | B2 |
8762609 | Lam et al. | Jun 2014 | B1 |
8788725 | Bruce et al. | Jul 2014 | B2 |
8959307 | Bruce et al. | Feb 2015 | B1 |
9043669 | Bruce et al. | May 2015 | B1 |
9099187 | Bruce et al. | Aug 2015 | B2 |
9135190 | Bruce et al. | Sep 2015 | B1 |
9147500 | Kim et al. | Sep 2015 | B2 |
20010010066 | Chin et al. | Jul 2001 | A1 |
20020044486 | Chan et al. | Apr 2002 | A1 |
20020073324 | Hsu et al. | Jun 2002 | A1 |
20020083262 | Fukuzumi | Jun 2002 | A1 |
20020083264 | Coulson | Jun 2002 | A1 |
20020141244 | Bruce et al. | Oct 2002 | A1 |
20030023817 | Rowlands | Jan 2003 | A1 |
20030065836 | Pecone | Apr 2003 | A1 |
20030120864 | Lee et al. | Jun 2003 | A1 |
20030126451 | Gorobets | Jul 2003 | A1 |
20030131201 | Khare | Jul 2003 | A1 |
20030161355 | Falcomato et al. | Aug 2003 | A1 |
20030163624 | Matsui et al. | Aug 2003 | A1 |
20030163647 | Cameron | Aug 2003 | A1 |
20030163649 | Kapur | Aug 2003 | A1 |
20030182576 | Morlang et al. | Sep 2003 | A1 |
20030188100 | Solomon et al. | Oct 2003 | A1 |
20030204675 | Dover et al. | Oct 2003 | A1 |
20030217202 | Zilberman et al. | Nov 2003 | A1 |
20030223585 | Tardo et al. | Dec 2003 | A1 |
20040073721 | Goff et al. | Apr 2004 | A1 |
20040128553 | Buer et al. | Jul 2004 | A1 |
20050050245 | Miller et al. | Mar 2005 | A1 |
20050078016 | Neff | Apr 2005 | A1 |
20050097368 | Peinado et al. | May 2005 | A1 |
20050120146 | Chen et al. | Jun 2005 | A1 |
20050210149 | Kimball | Sep 2005 | A1 |
20050226407 | Kasuya et al. | Oct 2005 | A1 |
20050243610 | Guha et al. | Nov 2005 | A1 |
20050289361 | Sutardja | Dec 2005 | A1 |
20060004957 | Hand, III et al. | Jan 2006 | A1 |
20060031450 | Unrau | Feb 2006 | A1 |
20060095709 | Achiwa | May 2006 | A1 |
20060112251 | Karr | May 2006 | A1 |
20060184723 | Sinclair et al. | Aug 2006 | A1 |
20070019573 | Nishimura | Jan 2007 | A1 |
20070028040 | Sinclair | Feb 2007 | A1 |
20070058478 | Murayama | Mar 2007 | A1 |
20070073922 | Go et al. | Mar 2007 | A1 |
20070079017 | Brink et al. | Apr 2007 | A1 |
20070083680 | King et al. | Apr 2007 | A1 |
20070088864 | Foster | Apr 2007 | A1 |
20070094450 | VanderWiel | Apr 2007 | A1 |
20070096785 | Maeda | May 2007 | A1 |
20070121499 | Pal | May 2007 | A1 |
20070130439 | Andersson et al. | Jun 2007 | A1 |
20070159885 | Gorobets | Jul 2007 | A1 |
20070168754 | Zohar | Jul 2007 | A1 |
20070174493 | Irish et al. | Jul 2007 | A1 |
20070174506 | Tsuruta | Jul 2007 | A1 |
20070195957 | Arulambalam et al. | Aug 2007 | A1 |
20070288686 | Arcedera et al. | Dec 2007 | A1 |
20070288692 | Bruce et al. | Dec 2007 | A1 |
20080052456 | Ash et al. | Feb 2008 | A1 |
20080072031 | Choi | Mar 2008 | A1 |
20080147963 | Tsai et al. | Jun 2008 | A1 |
20080189466 | Hemmi | Aug 2008 | A1 |
20080218230 | Shim | Sep 2008 | A1 |
20080228959 | Wang | Sep 2008 | A1 |
20080276037 | Chang et al. | Nov 2008 | A1 |
20090055573 | Ito | Feb 2009 | A1 |
20090077306 | Arcedera et al. | Mar 2009 | A1 |
20090083022 | Bin Mohd Nordin et al. | Mar 2009 | A1 |
20090094411 | Que | Apr 2009 | A1 |
20090158085 | Kern et al. | Jun 2009 | A1 |
20090172250 | Allen et al. | Jul 2009 | A1 |
20090172466 | Royer et al. | Jul 2009 | A1 |
20090240873 | Yu et al. | Sep 2009 | A1 |
20100058045 | Borras et al. | Mar 2010 | A1 |
20100095053 | Bruce et al. | Apr 2010 | A1 |
20100125695 | Wu et al. | May 2010 | A1 |
20100250806 | Devilla et al. | Sep 2010 | A1 |
20100299538 | Miller | Nov 2010 | A1 |
20110022778 | Schibilla et al. | Jan 2011 | A1 |
20110022783 | Moshayedi | Jan 2011 | A1 |
20110022801 | Flynn | Jan 2011 | A1 |
20110087833 | Jones | Apr 2011 | A1 |
20110093648 | Belluomini et al. | Apr 2011 | A1 |
20110113186 | Bruce et al. | May 2011 | A1 |
20110145479 | Talagala et al. | Jun 2011 | A1 |
20110161568 | Bruce et al. | Jun 2011 | A1 |
20110167204 | Estakhri et al. | Jul 2011 | A1 |
20110197011 | Suzuki | Aug 2011 | A1 |
20110202709 | Rychlik | Aug 2011 | A1 |
20110219150 | Piccirillo et al. | Sep 2011 | A1 |
20110258405 | Asaki et al. | Oct 2011 | A1 |
20110264884 | Kim | Oct 2011 | A1 |
20110264949 | Ikeuchi et al. | Oct 2011 | A1 |
20110270979 | Schlansker | Nov 2011 | A1 |
20120005405 | Wu et al. | Jan 2012 | A1 |
20120005410 | Ikeuchi | Jan 2012 | A1 |
20120017037 | Riddle | Jan 2012 | A1 |
20120079352 | Frost et al. | Mar 2012 | A1 |
20120102263 | Aswadhati | Apr 2012 | A1 |
20120102268 | Smith | Apr 2012 | A1 |
20120137050 | Wang et al. | May 2012 | A1 |
20120161568 | Umemoto et al. | Jun 2012 | A1 |
20120260102 | Zaks et al. | Oct 2012 | A1 |
20120271967 | Hirschman | Oct 2012 | A1 |
20120303924 | Ross | Nov 2012 | A1 |
20120311197 | Larson et al. | Dec 2012 | A1 |
20120324277 | Weston-Lewis et al. | Dec 2012 | A1 |
20130010058 | Pomeroy | Jan 2013 | A1 |
20130019053 | Somanache et al. | Jan 2013 | A1 |
20130073821 | Flynn et al. | Mar 2013 | A1 |
20130094312 | Jang et al. | Apr 2013 | A1 |
20130099838 | Kim et al. | Apr 2013 | A1 |
20130111135 | Bell, Jr. et al. | May 2013 | A1 |
20130124801 | Natrajan | May 2013 | A1 |
20130208546 | Kim et al. | Aug 2013 | A1 |
20130212337 | Maruyama | Aug 2013 | A1 |
20130212349 | Maruyama | Aug 2013 | A1 |
20130246694 | Bruce et al. | Sep 2013 | A1 |
20130254435 | Shapiro | Sep 2013 | A1 |
20130262750 | Yamasaki et al. | Oct 2013 | A1 |
20130282933 | Jokinen et al. | Oct 2013 | A1 |
20130304775 | Davis et al. | Nov 2013 | A1 |
20130339578 | Ohya et al. | Dec 2013 | A1 |
20130339582 | Olbrich et al. | Dec 2013 | A1 |
20130346672 | Sengupta et al. | Dec 2013 | A1 |
20140095803 | Kim et al. | Apr 2014 | A1 |
20140104949 | Bruce et al. | Apr 2014 | A1 |
20140108869 | Brewerton et al. | Apr 2014 | A1 |
20140189203 | Suzuki et al. | Jul 2014 | A1 |
20140258788 | Maruyama | Sep 2014 | A1 |
20140285211 | Raffinan | Sep 2014 | A1 |
20140331034 | Ponce et al. | Nov 2014 | A1 |
20150006766 | Ponce et al. | Jan 2015 | A1 |
20150012690 | Bruce et al. | Jan 2015 | A1 |
20150032937 | Salessi | Jan 2015 | A1 |
20150032938 | Salessi | Jan 2015 | A1 |
20150067243 | Salessi et al. | Mar 2015 | A1 |
20150149697 | Salessi et al. | May 2015 | A1 |
20150149706 | Salessi et al. | May 2015 | A1 |
20150153962 | Salessi et al. | Jun 2015 | A1 |
20150169021 | Salessi et al. | Jun 2015 | A1 |
20150261456 | Alcantara et al. | Sep 2015 | A1 |
20150261475 | Alcantara et al. | Sep 2015 | A1 |
20150261797 | Alcantara et al. | Sep 2015 | A1 |
20150370670 | Lu | Dec 2015 | A1 |
20150371684 | Mataya | Dec 2015 | A1 |
20150378932 | Souri et al. | Dec 2015 | A1 |
20160026402 | Alcantara et al. | Jan 2016 | A1 |
20160027521 | Lu | Jan 2016 | A1 |
20160041596 | Alcantara et al. | Feb 2016 | A1 |
Number | Date | Country |
---|---|---|
2005142859 | Jun 2005 | JP |
2005-309847 | Nov 2005 | JP |
489308 | Jun 2002 | TW |
200428219 | Dec 2004 | TW |
436689 | Dec 2005 | TW |
I420316 | Dec 2013 | TW |
WO 9406210 | Mar 1994 | WO |
WO 9838568 | Sep 1998 | WO |
Entry |
---|
Office Action for U.S. Appl. No. 13/475,878, mailed on Jun. 23, 2014. |
Office Action for U.S. Appl. No. 13/253,912 mailed on Jul. 16, 2014. |
Office Action for U.S. Appl. No. 12/876,113 mailed on Jul. 11, 2014. |
Office Action for U.S. Appl. No. 12/270,626 mailed on Feb. 3, 2012. |
Office Action for U.S. Appl. No. 12/270,626 mailed on Apr. 4, 2011. |
Office Action for U.S. Appl. No. 12/270,626 mailed on Mar. 15, 2013. |
Notice of Allowance/Allowability for U.S. Appl. No. 12/270,626 mailed on Oct. 3, 2014. |
Advisory Action for U.S. Appl. No. 12/876,113 mailed on Oct. 16, 2014. |
Office Action for U.S. Appl. No. 14/297,628 mailed on Jul. 17, 2015. |
Office Action for U.S. Appl. No. 13/475,878 mailed on Dec. 4, 2014. |
Office Action for U.S. Appl. No. 12/876,113 mailed on Mar. 13, 2014. |
Advisory Action for U.S. Appl. No. 12/876,113 mailed on Sep. 6, 2013. |
Office Action for U.S. Appl. No. 12/876,113 mailed on May 14, 2013. |
Office Action for U.S. Appl. No. 12/876,113 mailed on Dec. 21, 2012. |
William Stallings, Security Comes to SNMP: The New SNMPv3 Proposed Internet Standard, The Internet Protocol Journal, vol. 1, No. 3, Dec. 1998. |
Notice of Allowability for U.S. Appl. No. 12/882,059 mailed on May 30, 2013. |
Notice of Allowability for U.S. Appl. No. 12/882,059 mailed on Feb. 14, 2013. |
Office Action for U.S. Appl. No. 12/882,059 mailed on May 11, 2012. |
Notice of Allowability for U.S. Appl. No. 14/038,684 mailed on Aug. 1, 2014. |
Office Action for U.S. Appl. No. 14/038,684 mailed on Mar. 17, 2014. |
USPTO Notice of Allowability & attachment(s) mailed Jan. 7, 2013 for U.S. Appl. No. 12/876,247. |
Office Action mailed Sep. 14, 2012 for U.S. Appl. No. 12/876,247. |
Office Action mailed Feb. 1, 2012 for U.S. Appl. No. 12/876,247. |
Notice of Allowance/Allowability mailed Mar. 31, 2015 for U.S. Appl. No. 13/475,878. |
Office Action mailed May 22, 2015 for U.S. Appl. No. 13/253,912. |
Notice of Allowance/Allowability for U.S. Appl. No. 13/890,229 mailed on Feb. 20, 2014. |
Office Action for U.S. Appl. No. 13/890,229 mailed on Oct. 8, 2013. |
Office Action for U.S. Appl. No. 12/876,113 mailed on Dec. 5, 2014. |
Notice of Allowance/Allowability for U.S. Appl. No. 12/876,113 mailed on Jun. 22, 2015. |
Office Action for U.S. Appl. No. 14/217,249 mailed on Apr. 23, 2015. |
Office Action for U.S. Appl. No. 14/217,467 mailed on Apr. 27, 2015. |
Office Action for U.S. Appl. No. 14/616,700 mailed on Apr. 30, 2015. |
Office Action for U.S. Appl. No. 14/217,436 mailed on Sep. 11, 2015. |
Office Action for U.S. Appl. No. 13/475,878 mailed on Jun. 23, 2014. |
Office Action for U.S. Appl. No. 12/876,113 mailed on Oct. 16, 2014. |
Notice of Allowance for U.S. Appl. No. 12/270,626 mailed Oct. 3, 2014. |
Office Action for U.S. Appl. No. 12/270,626 mailed on May 23, 2014. |
Office Action for U.S. Appl. No. 12/270,626 mailed on Dec. 18, 2013. |
Office Action for U.S. Appl. No. 12/270,626 mailed on Aug. 23, 2012. |
Office Action mailed Dec. 5, 2014 for U.S. Appl. No. 14/038,684. |
Office Action mailed Oct. 8, 2015 for U.S. Appl. No. 14/217,291. |
Final Office Action mailed Nov. 19, 2015 for U.S. Appl. No. 14/217,249. |
Final Office Action mailed Nov. 18, 2015 for U.S. Appl. No. 14/217,467. |
Office Action mailed Nov. 25, 2015 for U.S. Appl. No. 14/217,041. |
Office Action mailed Dec. 15, 2015 for U.S. Appl. No. 13/253,912. |
Office Action mailed Dec. 17, 2015 for U.S. Appl. No. 14/214,216. |
Office Action mailed Dec. 17, 2015 for U.S. Appl. No. 14/215,414. |
Office Action mailed Dec. 17, 2015 for U.S. Appl. No. 14/803,107. |
Office Action mailed Jan. 15, 2016 for U.S. Appl. No. 14/866,946. |
Office Action mailed Jan. 11, 2016 for U.S. Appl. No. 14/217,399. |
Office Action mailed Jan. 15, 2016 for U.S. Appl. No. 14/216,937. |
Notice of Allowance and Examiner-Initiated Interview Summary, mailed Jan. 29, 2016 for U.S. Appl. No. 14/297,628. |
National Science Foundation, Award Abstract #1548968, SBIR Phase I: SSD In-Situ Processing, http://www.nsf.gov/awardsearch/showAward?AWD_ID=1548968 printed on Feb. 13, 2016. |
Design-Reuse, NxGn Data Emerges from Stealth Mode to provide a paradigm shift in enterprise storage solution (author(s) not indicated). |
http://www.design-reuse.com/news/35111/nxgn-data-intelligent-solutions.html, printed on Feb. 13, 2016 (author(s) not indicated). |
Office Action for U.S. Appl. No. 14/217,365 dated Feb. 18, 2016. |
Office Action for U.S. Appl. No. 14/217,365 dated Mar. 2, 2016. |
Office Action for U.S. Appl. No. 14/690,305 dated Feb. 25, 2016. |
Office Action for U.S. Appl. No. 14/217,436 dated Feb. 25, 2016. |
Office Action for U.S. Appl. No. 14/217,316 dated Feb. 26, 2016. |
Office Action for U.S. Appl. No. 14/215,414 dated Mar. 1, 2016. |
Office Action for U.S. Appl. No. 14/616,700 dated Mar. 8, 2016. |
Notice of allowance/allowability for U.S. Appl. No. 13/253,912 dated Mar. 21, 2016. |
Notice of allowance/allowability for U.S. Appl. No. 14/803,107 dated Mar. 28, 2016. |
Office Action for U.S. Appl. No. 14/217,334 dated Apr. 4, 2016. |
Notice of allowance/allowability for U.S. Appl. No. 14/217,041 dated Apr. 11, 2016. |
Office Action for U.S. Appl. No. 14/217,249 dated Apr. 21, 2016. |
Notice of allowance/allowability for U.S. Appl. No. 14/217,467 dated Apr. 20, 2016. |
Notice of allowance/allowability for U.S. Appl. No. 14/214,216 dated Apr. 27, 2016. |
Notice of allowance/allowability for U.S. Appl. No. 14/217,436 dated May 6, 2016. |
Office Action mailed Sep. 11, 2015 for U.S. Appl. No. 14/217,436. |
Office Action mailed Sep. 24, 2015 for U.S. Appl. No. 14/217,334. |
Office Action dated Sep. 18, 2015 for Taiwanese Patent Application No. 102144165. |
Office Action mailed Sep. 29, 2015 for U.S. Appl. No. 14/217,316. |
Office Action mailed Sep. 28, 2015 for U.S. Appl. No. 14/689,045. |
Office Action mailed Oct. 5, 2015 for Taiwanese Application No. 103105076. |
Office Action mailed Nov. 19, 2015 for U.S. Appl. No. 14/217,249. |
Office Action mailed Nov. 18, 2015 for U.S. Appl. No. 14/217,467. |
Office Action mailed Dec. 4, 2015 for U.S. Appl. No. 14/616,700. |
Office Action mailed Jun. 4, 2015 for U.S. Appl. No. 14/215,414. |
Office Action for U.S. Appl. No. 14/215,414 dated May 20, 2016. |
Office Action for U.S. Appl. No. 14/616,700 dated May 20, 2016. |
Office Action for U.S. Appl. No. 14/689,019 dated May 20, 2016. |
Advisory Action for U.S. Appl. No. 14/217,316 dated May 19, 2016. |
Advisory Action for U.S. Appl. No. 14/217,334 dated Jun. 13, 2016. |
Office Action for U.S. Appl. No. 14/217,291 dated Jun. 15, 2016. |
Number | Date | Country | |
---|---|---|---|
61799362 | Mar 2013 | US |