This invention pertains generally to systems, devices, and methods for processing data or other information in a multiple processor or multiple processor core environment using shared memory resources, and more particularly to systems, devices, and methods for processing data in such environments using a structured block transfer module, system architecture, and methodology.
Increasingly, multiple-processor-based systems as well as processors having multiple cores are being deployed for computer, information processing, communications, and other systems whose performance or throughput requirements cannot be met satisfactorily with single processors or single cores. For convenience of description, these multiple-processor and multiple-core devices and systems will be referred to interchangeably as multi-core systems or architectures, and the terms processors and cores will be used interchangeably.
When designing a multicore architecture, one of the most basic decisions that should be made by the designer is whether to use shared data storage or structure (such as is shown in the example in
In the exemplary shared memory architecture illustrated in
In the exemplary architecture illustrated in
These data storage or structures may commonly be or include a memory, such as but not limited to a solid-state memory. Conventionally, the benefit of shared memory is that multiple processors or cores can access it. By comparison, if a private data storage or memory is utilized, then only one processor can see and access it. It may be appreciated, however, that even in a shared storage or memory design, although multiple processors or cores can see and ultimately access the shared memory, only one processor or core is allowed access at a time. Some form of memory arbitration must be put in place in order to arbitrate or resolve situations where more than one processor or core needs to access shared memory. Processors or cores denied immediate memory access must wait their turn, which slows down processing and throughput.
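The arbitration bottleneck described above can be illustrated with a minimal sketch. The single grant flag, the function names, and the retry policy below are hypothetical and stand in for whatever arbiter a real design would use:

```c
/* Minimal model of memory arbitration: a one-bit grant serializes access
 * to a shared memory.  try_acquire() models the arbiter granting the
 * memory to one requester; any other requester is denied and must retry. */
static int memory_busy = 0;

int try_acquire(void) {
    if (memory_busy)
        return 0;          /* another processor holds the memory: wait */
    memory_busy = 1;
    return 1;
}

void release_memory(void) {
    memory_busy = 0;
}
```

A second core calling `try_acquire()` while the first holds the grant is denied and must retry, which is the waiting-their-turn penalty noted above.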
Private memory may frequently work well for data that is only required by a single processor or core. This may provide some guarantee of access by the single processor or core with predictable latency. However, many multi-core architectures, particularly architectures of the type including parallel pipeline architectures process a collection of data called a “context”. One example of a parallel pipeline architecture is illustrated in
In this architecture, a plurality of blocks 310, each comprising a memory 320 plus a processor 330, are arranged in parallel groups 340 and sequential sets 350. Context 360 flows through the blocks as indicated by the arrow 370, and is successively processed in each sequential set 350.
The context data is usually operated on in turn by various processors 330 in the pipeline. Typically, at any given time, only one processor needs access to or works on or processes the context data, so the context can be stored in private memory for fastest access. But when the processing of the context data by one processor is complete, the processor sends the context to another processor for continued processing. This means that when a private memory or storage architecture is used, the context data must be moved from the private memory of one processor into the private memory of the next processor. This is a specific example of a system problem where copying is required; other system situations may also require such copying, and the scope of the problem being addressed is not intended to be limited to this specific scenario.
There are a number of ways to copy the context between private memories in the architecture in
In the example approach diagrammed in
If some attempt is made to assure that a second memory associated with a second processor really is private, then the data must be placed in some shared holding area or intermediate memory and copied by both processors; that is, copied by the first processor from its first private memory to the shared holding area or intermediate memory, and then copied from the intermediate memory by the second processor into its own private memory, as shown in
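The double copy through the intermediate memory can be sketched as follows; the array names and the context size are hypothetical, and each `memcpy` stands in for one processor's copy operation:

```c
#include <string.h>

#define CONTEXT_WORDS 16

/* Hypothetical private memories and the shared holding area. */
static int private_mem_a[CONTEXT_WORDS];
static int holding_area[CONTEXT_WORDS];
static int private_mem_b[CONTEXT_WORDS];

/* First processor: copy out of its private memory into the shared area. */
void copy_out(void) {
    memcpy(holding_area, private_mem_a, sizeof holding_area);
}

/* Second processor: copy from the shared area into its private memory.
 * Every word is therefore moved twice, doubling the copy cost. */
void copy_in(void) {
    memcpy(private_mem_b, holding_area, sizeof private_mem_b);
}
```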
An alternative approach that relieves some of this copy operation time is to employ a dedicated Direct Memory Access (DMA) engine to do the actual copying as illustrated in the example of
Unfortunately, even this approach has some limitations and is not entirely satisfying. First, DMA 610 requires host control (in this case provided at least in part by first processor 670), so the processor still has, for example, to provide the memory source and destination addresses. Because there is no way for first processor 670 to access second memory 620, processor 670 can use a fixed destination address or processor 690 must communicate a destination address to processor 670 through some communication mechanism. The former solution removes a significant amount of flexibility for second processor 690 since it is not free to assign memory usage in the manner most advantageous to its functioning. The latter requires an explicit coordination between the two processors.
Second, the first processor 670, after having provided the DMA 610 with the source and destination addresses and the size of the memory to copy, must wait for the copy operation to complete in order to free up the occupied memory for new processing data. While less of a penalty than if the processor performed the actual copying, the wait for completion is still substantial and is usually unacceptable in high-performance embedded systems. Even if the processor can perform some background task while waiting for the completion, the required bookkeeping adds complexity to the processor program.
With reference to
In this way, it is possible to segment memory such that the processor may process its primary data stream using one segment while the DMA engine is copying to or from another memory segment, as
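A minimal sketch of such segmentation is a ping-pong arrangement, assuming two segments and an index recording which one the processor currently owns; all names and sizes are illustrative:

```c
/* Ping-pong segmentation: the processor works in one segment while a
 * DMA engine uses the other.  After the DMA completes, the roles swap. */
#define SEG_WORDS 8

static int segments[2][SEG_WORDS];
static int active = 0;             /* segment the processor currently owns */

int *processor_segment(void) { return segments[active]; }
int *dma_segment(void)       { return segments[1 - active]; }

/* On DMA completion, the freshly filled segment becomes the
 * processor's working segment and vice versa. */
void swap_segments(void) { active = 1 - active; }
```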
Yet another approach would be to put the code that handles copying into a different thread from the main application code. In systems and devices that have a multi-threading capability, a multi-threaded processor could swap threads during the copy operation and process a different context. However, low-end processing subsystems that are often used in embedded systems do not have multi-threading capability.
Therefore it may be appreciated that none of these various approaches provides an entirely suitable solution for copying a specified block of private memory from one processor into a location in the private memory pertaining to a second processor, and that there remains a need for a system for executing such a copy.
In one aspect, the invention provides a structured block transfer module, a system architecture, and method for transferring content or data.
In another aspect, the invention provides a circuit that allows content in one memory to be shifted or moved to another memory with no direction from a host, the circuit comprising: a connection manager with a plurality of pointer inputs, a plurality of upstream free list pointer outputs, and a plurality of pointer outputs; at least one copy engine with data input busses and data output busses; and a connection between the connection manager and the at least one copy engine.
In another aspect, the invention further provides that this circuit may be adapted to perform one or any combination of these operations: (a) identifying a particular source memory block as the source for a copy operation using any one or more identified source memory identification criteria; (b) identifying a particular destination memory block as the destination for the copy operation using any one or more identified destination memory selection criteria; (c) maintaining a record of available memory blocks and occupied memory blocks for each potential destination processor; and (d) copying or moving the contents of the source memory to the selected destination memory.
In another aspect, the invention provides a connection manager with a plurality of pointer inputs, a plurality of upstream free list pointer outputs, and a plurality of pointer outputs.
In another aspect, the invention provides a copy engine with data input busses and data output busses.
In another aspect, the invention provides a connection means and mechanism for connecting a connection manager and a copy engine.
In another aspect, the invention provides a method for transferring the contents of one of a number of blocks of source memory to one of a number of possible destination memories, the method comprising: selecting a source memory; selecting an available destination memory; marking the selected destination as no longer available; copying the contents of the selected source memory into the selected destination memory; and marking the selected source as available.
Various aspects, features, and embodiments of the invention are now described relative to the figures.
In one aspect, the invention provides a structure for a Structured Block Transfer Module (SBTM) block or circuit, such as shown in the embodiment of
The exemplary SBTM 800 includes a connection manager 805 and one or more copy engines 840. The number of copy engines is not critical to the invention. Connection manager 805 receives copy requests on pointer inputs 810, selects copy requests based on a criterion or criteria that is/are not critical to the invention, selects a copy destination based on a criterion or criteria that is/are not critical to the invention, selects one of the copy engines 840 based on a criterion or criteria that is/are not critical to the invention, and instructs the selected copy engine 840 via copy enable signal 820 to copy the data from the selected source via copy input bus 835 to the selected destination via copy output bus 845. Free list inputs 825 provide a selection of available memory locations for copy destinations. Selection of a copy engine 840 only occurs if there is more than one copy engine 840 present. The format of copy enable signal 820 and the method of instruction are not critical to the invention. Copy enable signal 820 could be implemented using a collection of inputs, one for each copy engine 840, or by a common bus with an instruction identifying the targeted copy engine 840, or by any other suitable means. After copying is complete, connection manager 805 informs the selected destination of the location of the copied data via one of pointer outputs 815.
Connection manager 805 also places the pointer value on the selected pointer input 810 onto the upstream free list output 850 for the selected source. The number of pointer inputs 810, copy inputs 835, and upstream free list outputs 850 is determined by the number of potential sources of data to be copied. Each such source contributes a pointer input 810, a copy input 835, and an upstream free list output 850. The number of pointer outputs 815, copy outputs 845, and free list inputs 825 is determined by the number of potential copy destinations. Each destination contributes a pointer output 815, a copy output 845, and a free list input 825. The format (signal, bus, serial connection, etc.) of pointer inputs 810, pointer outputs 815, upstream free list outputs 850, and free list inputs 825 may vary and is not critical to the invention. If the number of copy engines is different from the number of sources and/or destinations, then bussing, switching, or other well-understood methods can be used to connect the copy engines to the source and destination busses.
For the exemplary SBTM, given a number of potential source memory locations to be copied, the SBTM can provide any one or any combination of two or more of the following capabilities and features:
(1) Identify a suitable source memory as the source for a copy operation using any one or more of a number of criteria such as by way of example but not limitation, load balancing, and/or memory availability.
(2) Identify a suitable memory block as the destination using any one or more of a number of criteria such as by way of example but not limitation, specific direction, load balancing, and/or memory availability.
(3) Maintain a record of available and occupied memory blocks for each potential destination processor.
(4) Copy the contents of the selected source to the selected destination.
(5) Alter selected portions of the data during the copy process.
(6) Execute multiple block copies concurrently.
(7) Communicate back to the prior (upstream) SBTM (see
(8) Receive communication from the subsequent (downstream) SBTM that it has emptied a memory block and mark that block as now being available for re-use.
As used herein, the term copy may mean copying or duplicating content from one memory or storage location to another; or moving or shifting the contents from one storage or memory location to another location without retaining the contents at the original storage or memory location; or realizing content or data at a second storage or memory location without regard to whether the content or data was retained at or deleted from the first storage or memory location.
These SBTMs 800 can be arranged with processing elements as shown in the exemplary pipeline segment configuration 900 of
In this pipeline segment configuration 900, at least one processing unit 920 may be coupled with at least two SBTMs 905 by SBTM data output 930, SBTM pointer output 950, SBTM data input 940, and SBTM pointer input 960. Each SBTM 905 may be further coupled with an upstream SBTM via free list connection 910.
As illustrated in exemplary drawing
Comparing the structures illustrated in
Processing unit 920 is illustrated in the exemplary embodiment of
In the illustrated embodiment, Connection Manager 1105 has a plurality of pointer inputs 1120 and pointer outputs 1110 that are connected to a corresponding plurality of sources 1115. Connection Manager 1105 may also have a plurality of pointer outputs 1145 that connect to a corresponding plurality of destinations 1125. Connection Manager 1105 may also have a plurality of pointer inputs 1130 that connect to Free List 1135. Each Free List 1135 includes an input 1190 from a destination 1125. A control signal line or set of lines or interface 1140 may also be provided between the connection manager 1105 and the copy engine 1180 that provides a way for Connection Manager 1105 to control Copy Engine 1180. Copy Engine 1180 has a plurality of outputs 1170 to a corresponding plurality of Destinations 1125. The specific nature of connections 1110, 1120, 1130, 1140, 1150, 1160, and 1170 is not critical, and can be implemented in any number of ways well known to those skilled in the art. The number of Sources 1115 is at least one; the number of Destinations 1125 is at least one; and the number of Sources 1115 need not equal the number of Destinations 1125.
In the case where there is more than one Copy Engine 1180, the connections shown can be replicated for each Copy Engine 1180. Alternatively a Copy Engine 1180 could be associated with each Source 1115 with one set of dedicated connections between each Source 1115/Copy Engine 1180 pair. Alternatively a Copy Engine 1180 could be associated with each Destination 1125 with one set of dedicated connections between each Destination 1125/Copy Engine 1180 pair.
Free List 1135 can contain a list of destination memory blocks that are available to receive data. Input 1190 feeds Free List 1135 and can add pointers of available memory blocks to Free List 1135 as those blocks are freed up by a downstream SBTM. Pointer output 1110 can feed the Free List of an upstream SBTM. Input 1120 can provide the location of the block of data to be copied. Output 1160 provides a pointer to the block of data that has been copied to the destination. The specific implementation of Free List 1135 is not critical to the invention. In the preferred embodiment, it has been implemented using a queue, specifically a Fast Simplex Link (FSL), which is a means of implementing a queue known to users of certain Field Programmable Gate Arrays (FPGAs).
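Under the assumption that the Free List is a fixed-depth queue of block indices (the FSL named above is one hardware realization of such a queue), its behavior can be sketched as a small ring buffer; the depth and function names are hypothetical:

```c
/* Free List modeled as a ring-buffer queue of destination block indices. */
#define FREE_LIST_SLOTS 8

static int free_list[FREE_LIST_SLOTS];
static int fl_head = 0, fl_tail = 0, fl_count = 0;

/* A downstream consumer frees a block: push its index onto the list. */
int free_list_push(int block) {
    if (fl_count == FREE_LIST_SLOTS)
        return 0;                        /* list full */
    free_list[fl_tail] = block;
    fl_tail = (fl_tail + 1) % FREE_LIST_SLOTS;
    fl_count++;
    return 1;
}

/* The connection manager claims a destination block: pop an index. */
int free_list_pop(int *block) {
    if (fl_count == 0)
        return 0;                        /* no destination available */
    *block = free_list[fl_head];
    fl_head = (fl_head + 1) % FREE_LIST_SLOTS;
    fl_count--;
    return 1;
}
```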
It may be appreciated that this connectivity permits the connection or coupling of any source 1115 with any destination 1125 under the control of Connection Manager 1105 and as a result of these connections, provides an ability to copy data or other content or information between any of the sources and destinations. Any processor may be a source or a destination for a given copy operation, depending upon how the system is configured. It should be appreciated that the number of Copy Engines 1180 need not be the same as either the number of Sources or the number of Destinations.
Connection Manager 1105 first selects (step 1200) a source 1115. It then selects (step 1210) a destination 1125. The order of selection is not important and may be reversed or the selections may be concurrent. Once the source and destination have been selected, the next Structured Block Descriptor on Free List 1135 corresponding to the selected destination is removed (step 1220) from its Free List 1135 and held by the Connection Manager 1105. Connection Manager 1105 then instructs (step 1230) Copy Engine 1180 to copy (step 1240) the contents from data input bus 1150 corresponding to the selected source to data output bus 1170 corresponding to the selected destination 1125. If multiple Copy Engines 1180 are used and there is not a direct correspondence between each Copy Engine 1180 and either a Source 1115 or Destination 1125, then in addition to selecting a Source 1115 and a Destination 1125, a Copy Engine 1180 must also be selected.
The means of copying, moving, duplicating, or shifting may be any of the means or methods known to one skilled in the art. One non-limiting but advantageous embodiment uses a Direct Memory Access (DMA) copy means and method. During the copying process, selected portions of the data may optionally be altered en route so that the data at the destination may optionally be an altered version of the data from the source. Once the copying is complete, the Structured Block Descriptor that was previously removed (step 1220) from the Free List is sent (step 1250) to the selected destination 1125 on pointer output 1145. The Structured Block Descriptor at the selected source 1115 is taken from the source via pointer input 1120 and sent (step 1260) to output 1110 corresponding to the selected source 1115.
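The sequence of steps 1220 through 1260 can be sketched in software, modeling Structured Block Descriptors as block indices; the memory sizes, the free-list representation, and the function name are illustrative assumptions, not the patented circuit:

```c
#include <string.h>

#define BLOCK_WORDS 4
#define NUM_BLOCKS  4

/* Hypothetical source and destination memories, one block per descriptor. */
static int src_mem[NUM_BLOCKS][BLOCK_WORDS];
static int dst_mem[NUM_BLOCKS][BLOCK_WORDS];

static int dst_free[NUM_BLOCKS] = {0, 1, 2, 3};  /* destination Free List */
static int dst_free_count = NUM_BLOCKS;

static int upstream_free[NUM_BLOCKS];            /* returned source blocks */
static int upstream_free_count = 0;

/* Returns the destination block index, or -1 if no block is free. */
int transfer(int src_block) {
    if (dst_free_count == 0)
        return -1;
    int dst_block = dst_free[--dst_free_count];      /* step 1220: claim dest */
    memcpy(dst_mem[dst_block], src_mem[src_block],   /* steps 1230/1240: copy */
           sizeof dst_mem[dst_block]);
    /* step 1250: dst_block would now be handed to the destination */
    upstream_free[upstream_free_count++] = src_block; /* step 1260: free src  */
    return dst_block;
}
```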
There are a variety of means that can be used to select a source 1115 and destination 1125. Possible means for selecting the source and destination include queue depth and memory availability, alone or combined with round-robin or other such arbitration schemes. An additional means is available for selecting the destination 1125, referred to as Direct Routing. In the Direct Routing case, the Structured Block Descriptor includes an index number or some other identifier specifying a destination 1125, and the Connection Manager 1105 ensures that the specified destination 1125 is selected.
One non-limiting but preferred embodiment uses a non-obvious combination of queue depth and memory availability by creating a composite measure as shown in the embodiment of
With reference to
In the embodiment illustrated in
The preceding discussion allows the selection of the available queue with the greatest depth. This is appropriate when selecting an input from which to load-balance, since the goal is to unburden the fullest queue. However, when load balancing to an output, the intent is to pick the emptiest queue. Similar circuits can be used to achieve this, the difference being that the Availability signal is inverted in both circuits, and in the case of the latter circuit, the AND gates are replaced by OR gates. The selection process in either case is to select the queue with the lowest composite value. In these exemplary embodiments, it will be appreciated that various different or opposite signaling logic schemes may be utilized without deviating from the invention.
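One way to realize the composite measure described above is to place the availability bit above the depth bits, so that availability dominates the comparison; selecting an input then takes the greatest composite (fullest available queue), while selecting an output inverts availability and takes the lowest (emptiest available queue). The bit width and the linear scan below are illustrative assumptions:

```c
#define DEPTH_BITS 8   /* illustrative width of the depth field */

/* Input selection: available queue with the greatest depth wins. */
int select_input(const int *depth, const int *avail, int n) {
    int best = -1, best_val = -1;
    for (int i = 0; i < n; i++) {
        int composite = (avail[i] << DEPTH_BITS) | depth[i];
        if (composite > best_val) { best_val = composite; best = i; }
    }
    return best;
}

/* Output selection: availability inverted, lowest composite wins,
 * i.e. the emptiest available queue. */
int select_output(const int *depth, const int *avail, int n) {
    int best = -1, best_val = 1 << (DEPTH_BITS + 1);
    for (int i = 0; i < n; i++) {
        int composite = ((!avail[i]) << DEPTH_BITS) | depth[i];
        if (composite < best_val) { best_val = composite; best = i; }
    }
    return best;
}
```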
Yet another embodiment may alternatively be utilized and which can ensure that no Source Queue remains unselected for an extended period of time.
Copied data may optionally be altered during the copying process or operation. There are a number of means by which the copied data can be altered or undergo additional processing during the copy process; the means by which this processing is accomplished or the processing performed is not critical to the invention. In one non-limiting embodiment, a Direct Memory Access (DMA) engine may be used to provide the desired copy operation.
The specific workings of Replacement Engine 1750 may be implemented in a variety of ways and the specific way or means is not critical to the invention.
The two outputs 1870 and 1820 of each Replacement Module 1860 are logically ANDed together using AND gate 1830. If Replace signal 1870 for a given Replacement Module 1860 is deasserted 0, then the output of the corresponding AND gate 1830 will be deasserted 0. If the Replace signal 1870 for a given Replacement Module 1860 is asserted 1, then the output of the corresponding AND gate 1830 will be the same as the value of the corresponding New Data value 1820. If all of the Replacement Modules are designed with non-overlapping replacement criteria, then zero or one Replacement Module will have its Replace signal 1870 asserted 1. As a result, only one of the AND gates 1810 and 1830 will have a non-zero value. The outputs of all of the AND gates 1810 and 1830 are logically ORed together using OR gate 1840. If any Replace signal is asserted 1, then output 1850 will be the same as the New Data signal 1820 corresponding to the asserted Replace signal. If no Replace signal is asserted 1, then output 1850 will be the same as the original data.
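The AND/OR network just described can be modeled bitwise in software, assuming the Replace signals are one-hot or all deasserted (the non-overlapping criteria stated above); the module count and word width are illustrative:

```c
#include <stdint.h>

#define NUM_MODULES 3

/* Bitwise model of the replacement network: each module offers a Replace
 * flag and a New Data word.  AND gates 1830 gate each module's new data
 * with its flag; OR gate 1840 merges the results; the original data
 * passes through only when no module asserts Replace (AND gate 1810). */
uint32_t replace_mux(uint32_t original,
                     const int replace[NUM_MODULES],
                     const uint32_t new_data[NUM_MODULES]) {
    uint32_t any_replace = 0;
    uint32_t result = 0;
    for (int i = 0; i < NUM_MODULES; i++) {
        result |= replace[i] ? new_data[i] : 0;
        any_replace |= (uint32_t)replace[i];
    }
    return any_replace ? result : original;
}
```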
It can be appreciated that the effect of this replacement is to modify select portions (or even all portions) of the data being copied in a manner specific to the intent of a particular use. Such replacement may or may not be required in a given use, but the capability constitutes an optional aspect of the invention. Other implementations can be used, with arbitration capabilities in the case of overlapping replacement criteria, using techniques known to one skilled in the art in light of the description provided here.
A non-limiting embodiment of Replacement Module 1860 is illustrated in
The functioning of the example Replacement Module circuit in
As used herein, the term “embodiment” means an embodiment that serves to illustrate by way of example but not limitation.
It will be appreciated to those skilled in the art that the preceding examples and preferred embodiments are exemplary and not limiting to the scope of the present invention. It is intended that all permutations, enhancements, equivalents, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of the present invention.
This application is a continuation of U.S. application Ser. No. 11/607,474, filed on Dec. 1, 2006 (now issued as U.S. Pat. No. 8,706,987), entitled “Structured Block Transfer Module, System Architecture and Method for Transferring,” which is incorporated by reference herein in its entirety. This application is related to U.S. application Ser. No. 11/607,481, filed on Dec. 1, 2006 (now abandoned); U.S. application Ser. No. 11/607,429, filed on Dec. 1, 2006 (issued as U.S. Pat. No. 8,289,966 on Oct. 16, 2012); U.S. application Ser. No. 11/607,452, filed on Dec. 1, 2006 (issued as U.S. Pat. No. 8,127,113 on Feb. 28, 2012); U.S. application Ser. No. 13/358,407, filed on Jan. 25, 2012; and U.S. application Ser. No. 14/193,932, filed on Feb. 28, 2014; all of which are incorporated by reference herein in their entirety.
Number | Date | Country |
---|---|---|
20140181343 A1 | Jun 2014 | US |

 | Number | Date | Country |
---|---|---|---|
Parent | 11607474 | Dec 2006 | US |
Child | 14195457 | | US |