Claims
- 1. A method of analyzing resource utilization in an information management system comprising a distributed interconnect, said method comprising:
monitoring resource utilization information obtained from said information management system across said distributed interconnect; logging said monitored resource utilization information; and analyzing said logged resource utilization information.
- 2. The method of claim 1, wherein said resource utilization information comprises at least one of memory utilization, CPU utilization, IOPS utilization, or a combination thereof.
- 3. The method of claim 1, wherein said information management system comprises a plurality of individual processing engines coupled together by said distributed interconnect.
- 4. The method of claim 3, wherein said plurality of processing engines communicate as peers in a peer to peer environment.
- 5. The method of claim 3, wherein said distributed interconnect comprises a switch fabric.
- 6. The method of claim 3, wherein said distributed interconnect comprises a virtual distributed interconnect.
- 7. The method of claim 3, wherein said information management system comprises a content delivery system.
- 8. The method of claim 7, wherein said content delivery system comprises a network endpoint content delivery system.
- 9. The method of claim 3, wherein said plurality of processing engines comprise a system management engine; and wherein said method comprises using said system management engine to perform said monitoring, logging, and analyzing.
- 10. The method of claim 9, wherein said logging comprises communicating said monitored resource utilization information to a history repository coupled to said system management engine and maintaining said monitored resource utilization information in said history repository; and wherein said method further comprises retrieving said logged resource utilization information from said history repository prior to performing said analyzing.
- 11. The method of claim 10, wherein said history repository is implemented on a device external to said information management system.
- 12. The method of claim 1, wherein said analyzing comprises performing at least one of a peak time period analysis, a short term forecast analysis, a long term trend analysis, a load balancing analysis, a bottleneck analysis, or a combination thereof.
- 13. The method of claim 3, wherein said method further comprises automatically reallocating workload between said processing engines based at least in part on said analysis of said logged resource utilization information.
- 14. The method of claim 1, wherein said method further comprises dynamically managing system resources based on the results of said analyzing.
- 15. A method of analyzing resource utilization in an information management system, comprising:
monitoring resource utilization information obtained from said information management system; logging said monitored resource utilization information; and analyzing said logged resource utilization information; wherein said information management system comprises a plurality of individual processing engines coupled together by a distributed interconnect; wherein said resource utilization information is obtained from one or more of said individual processing engines; and wherein said method comprises monitoring and logging said resource utilization information on an individual processing engine basis.
- 16. The method of claim 15, wherein said resource utilization information comprises at least one of processing engine memory utilization, processing engine CPU utilization, processing engine IOPS utilization, or a combination thereof.
- 17. The method of claim 15, wherein said distributed interconnect comprises a switch fabric.
- 18. The method of claim 17, wherein said information management system comprises a content delivery system.
- 19. The method of claim 18, wherein said content delivery system comprises a network endpoint content delivery system.
- 20. The method of claim 15, wherein said plurality of processing engines comprise a system management engine; and wherein said method comprises using said system management engine to perform said monitoring, logging, and analyzing.
- 21. The method of claim 20, wherein said logging comprises communicating said monitored resource utilization information to a history repository coupled to said system management engine and maintaining said monitored resource utilization information in said history repository; and wherein said method further comprises retrieving said logged resource utilization information from said history repository prior to performing said analyzing.
- 22. The method of claim 21, wherein said history repository is implemented on a device external to said information management system.
- 23. The method of claim 15, wherein said analyzing comprises performing at least one of a peak time period analysis, a short term forecast analysis, a long term trend analysis, a load balancing analysis, a bottleneck analysis, or a combination thereof.
- 24. The method of claim 23, wherein said method further comprises specifying or sizing additional subsystem or system equipment based at least in part on the results of one or more of said analyses.
- 25. The method of claim 23, wherein said method further comprises identifying a condition of overutilization for at least one of said processing engines based on the results of at least one of said analyses; and addressing said identified condition of overutilization in response to said identification by at least one of downloading additional software functionality onto said overutilized processing engine, transferring workload from said overutilized processing engine to a hot spare processing engine, issuing a notification to add additional processing engine hardware, or a combination thereof.
- 26. The method of claim 23, wherein said method further comprises identifying an adverse workload condition for at least one of said processing engines based on the results of at least one of said analyses; and generating an alarm in response to said identification of said adverse workload condition.
- 27. The method of claim 23, wherein said method further comprises identifying an adverse workload condition for at least one of said processing engines based on the results of at least one of said analyses; and addressing said adverse workload condition by automatically reallocating workload between two or more of said processing engines based at least in part on said analysis of said logged resource utilization information.
- 28. The method of claim 23, wherein said method further comprises forecasting a future adverse workload condition for at least one of said processing engines based on the results of at least one of said analyses.
- 29. The method of claim 28, wherein said method further comprises providing a user with at least one suggested information management system reconfiguration to address said forecasted adverse workload condition, allowing a user to reconfigure said information management system to address said forecasted adverse workload condition, allowing a user to purchase additional information system equipment to address said forecasted adverse workload condition, or a combination thereof.
- 30. The method of claim 15, wherein said method further comprises dynamically managing system resources based on the results of said analyzing.
- 31. A method of analyzing resource utilization in a network connectable information management system that includes a system management processing engine coupled to at least one other processing engine by a distributed interconnect, said method comprising:
monitoring resource utilization information obtained across said distributed interconnect from said at least one other processing engine, wherein said monitoring is performed using a resource utilization monitor implemented on said system management processing engine; logging said monitored resource utilization information by communicating said monitored resource utilization information to a history repository, wherein said logging is performed using a resource utilization logger implemented on said system management processing engine and wherein said history repository is implemented on a server coupled to said system management processing engine; maintaining said logged resource utilization information on said history repository; retrieving said logged resource utilization information from said history repository; and analyzing said retrieved resource utilization information, wherein said retrieving and said analyzing are performed using a logging and analysis manager implemented on said system management processing engine; wherein said resource utilization information comprises at least one of memory utilization for said at least one other processing engine, CPU utilization for said at least one other processing engine, IOPS utilization for said at least one other processing engine, or a combination thereof.
- 32. The method of claim 31, wherein said at least one other processing engine comprises a plurality of other individual processing engines coupled to said system management processing engine by said distributed interconnect; wherein said resource utilization information is obtained from two or more of said plurality of individual processing engines; and wherein said steps of monitoring, logging, maintaining, retrieving and analyzing said resource utilization information are performed on an individual processing engine basis.
- 33. The method of claim 32, wherein said monitoring comprises using said resource utilization monitor to periodically poll each of said plurality of other processing engines across said distributed interconnect and to collect resource utilization information communicated from each of said plurality of other processing engines across said distributed interconnect in response to said periodic polling.
- 34. The method of claim 32, wherein said monitoring comprises using said resource utilization monitor to collect resource utilization information communicated in an asynchronous manner from each of said plurality of other processing engines to said resource utilization monitor across said distributed interconnect.
- 35. The method of claim 32, wherein said steps of retrieving and analyzing are initiated by user input into a user interface module implemented by said logging and analysis manager.
- 36. The method of claim 32, wherein said monitoring comprises using said resource utilization monitor to periodically poll each of said plurality of other processing engines across said distributed interconnect and to collect resource utilization information communicated from each of said plurality of other processing engines across said distributed interconnect in response to said periodic polling; wherein said steps of retrieving and analyzing are initiated by user input into a user interface module implemented by said logging and analysis manager; wherein said retrieving is performed using a data retrieval module implemented by said logging and analysis manager; and wherein said analyzing is performed using a data analysis module implemented by said logging and analysis manager.
- 37. The method of claim 36, wherein said distributed interconnect comprises a switch fabric.
- 38. The method of claim 36, wherein said distributed interconnect comprises a virtual distributed interconnect.
- 39. The method of claim 37, wherein said information management system comprises a content delivery system.
- 40. The method of claim 39, wherein said content delivery system comprises a network endpoint content delivery system.
- 41. The method of claim 31, wherein said method further comprises dynamically managing system resources based on the results of said analyzing.
- 42. A method of analyzing resource utilization in a network connectable content delivery system that includes a system management processing engine coupled to a plurality of other processing engines by a distributed interconnect, said method comprising:
monitoring resource utilization information obtained across said distributed interconnect from said plurality of other processing engines, wherein said monitoring is performed using a resource utilization monitor implemented on said system management processing engine; logging said monitored resource utilization information by communicating said monitored resource utilization information to a history repository, wherein said logging is performed using a resource utilization logger implemented on said system management processing engine and wherein said history repository is implemented on a server coupled to said system management processing engine; maintaining said logged resource utilization information on said history repository; retrieving said logged resource utilization information from said history repository; and analyzing said retrieved resource utilization information, wherein said retrieving and said analyzing are performed using a logging and analysis manager implemented on said system management processing engine; wherein said resource utilization information is obtained from two or more of said plurality of other processing engines; and wherein said steps of monitoring, logging, maintaining, retrieving and analyzing said resource utilization information are performed on an individual processing engine basis; and wherein said resource utilization information comprises at least one of memory utilization for said two or more other processing engines, CPU utilization for said two or more other processing engines, IOPS utilization for said two or more other processing engines, or a combination thereof.
- 43. The method of claim 42, wherein said distributed interconnect comprises a switch fabric.
- 44. The method of claim 43, wherein said plurality of processing engines comprise a network interface engine, a storage processing engine, and an application processing engine.
- 45. The method of claim 43, wherein said analyzing comprises performing a peak time period analysis.
- 46. The method of claim 43, wherein said analyzing comprises performing a short term forecast analysis.
- 47. The method of claim 43, wherein said analyzing comprises performing a long term trend analysis.
- 48. The method of claim 43, wherein said analyzing comprises performing a load balancing analysis.
- 49. The method of claim 43, wherein said analyzing comprises performing a bottleneck analysis.
- 50. The method of claim 43, wherein said analyzing comprises performing at least one of a peak time period analysis, a short term forecast analysis, a long term trend analysis, a load balancing analysis, a bottleneck analysis, or a combination thereof.
- 51. The method of claim 50, wherein said method further comprises specifying or sizing additional subsystem or system equipment based at least in part on the results of one or more of said analyses.
- 52. The method of claim 50, wherein said method further comprises identifying a condition of overutilization for at least one of said processing engines based on the results of at least one of said analyses; and addressing said identified condition of overutilization in response to said identification by at least one of downloading additional software functionality onto said overutilized processing engine, transferring workload from said overutilized processing engine to a hot spare processing engine, issuing a notification to add additional processing engine hardware, or a combination thereof.
- 53. The method of claim 50, wherein said method further comprises identifying an adverse workload condition for at least one of said processing engines based on the results of at least one of said analyses; and generating an alarm in response to said identification of said adverse workload condition.
- 54. The method of claim 53, wherein said identified adverse workload condition comprises at least one of an identified bottleneck, an identified unbalanced workload, an identified condition of overutilization, or a combination thereof.
- 55. The method of claim 50, wherein said method further comprises identifying an adverse workload condition for at least one of said processing engines based on the results of at least one of said analyses; wherein said identified adverse workload condition comprises at least one of an identified bottleneck, an identified unbalanced workload, an identified condition of overutilization, or a combination thereof; and automatically reallocating workload between two or more of said processing engines based at least in part on said analysis of said logged resource utilization information to lessen said condition of bottleneck, overutilization, imbalance, or combination thereof.
- 56. The method of claim 50, wherein said method further comprises forecasting a future adverse workload condition for at least one of said processing engines based on the results of at least one of said analyses.
- 57. The method of claim 56, wherein said method further comprises allowing a user to reconfigure said information management system to address said forecasted adverse workload condition by user input into a user interface module implemented by said logging and analysis manager.
- 58. The method of claim 57, wherein said method further comprises using said user interface module to provide a user with at least one suggested information management system reconfiguration to address said forecasted adverse workload condition.
- 59. The method of claim 58, wherein said method further comprises allowing a user to purchase additional information system equipment to address said forecasted adverse workload condition by user input into said user interface module.
- 60. The method of claim 42, wherein said method further comprises dynamically managing system resources based on the results of said analyzing.
- 61. A resource utilization analysis system for analyzing resource utilization in an information management system, said resource utilization analysis system comprising:
a distributed interconnect; a resource utilization monitor capable of monitoring resource utilization information obtained from said information management system across said distributed interconnect; a resource utilization logger in communication with said resource utilization monitor and capable of logging said monitored resource utilization information; and a logging and analysis manager capable of analyzing said logged resource utilization information.
- 62. The system of claim 61, wherein said information management system comprises a system management processing engine and at least one other processing engine coupled to said system management processing engine by said distributed interconnect; wherein said resource utilization information comprises resource utilization obtained from said at least one other processing engine; wherein each of said resource utilization monitor, said resource utilization logger, and said logging and analysis manager is implemented on said system management processing engine.
- 63. The system of claim 62, wherein said resource utilization logger is capable of logging said monitored resource utilization information by communicating said monitored resource utilization information to a history repository capable of maintaining said logged resource utilization information and being implemented on a server coupled to said system management processing engine; and wherein said logging and analysis manager is capable of retrieving said logged resource utilization information from said history repository and is further capable of analyzing said retrieved resource utilization information.
- 64. The system of claim 62, wherein said resource utilization information comprises at least one of memory utilization for said at least one other processing engine, CPU utilization for said at least one other processing engine, IOPS utilization for said at least one other processing engine, or a combination thereof.
- 65. The system of claim 63, wherein said at least one other processing engine comprises a plurality of other individual processing engines coupled to said system management processing engine by said distributed interconnect; wherein said resource utilization information is obtained from two or more of said plurality of individual processing engines; and wherein said monitoring, logging, maintaining, retrieving and analyzing of said resource utilization information is performed on an individual processing engine basis.
- 66. The system of claim 65, wherein said resource utilization monitor is capable of periodically polling each of said plurality of other processing engines across said distributed interconnect, and is further capable of collecting resource utilization information communicated from each of said plurality of other processing engines across said distributed interconnect in response to said periodic polling.
- 67. The system of claim 65, wherein said resource utilization monitor is capable of collecting resource utilization information communicated to said resource utilization monitor in an asynchronous manner from each of said plurality of other processing engines across said distributed interconnect.
- 68. The system of claim 65, wherein said logging and analysis manager comprises a user interface module capable of allowing a user to initiate said retrieving and analyzing of said resource utilization information.
- 69. The system of claim 65, wherein said resource utilization monitor is capable of periodically polling each of said plurality of other processing engines across said distributed interconnect, and is further capable of collecting resource utilization information communicated from each of said plurality of other processing engines across said distributed interconnect in response to said periodic polling; and wherein said logging and analysis manager comprises:
a user interface module capable of allowing a user to initiate said retrieving and analyzing of said resource utilization information, a data retrieval module capable of performing said retrieving of said logged resource utilization information from said history repository, and a data analysis module capable of performing said analysis of said retrieved resource utilization information.
- 70. The system of claim 69, wherein said logging and analysis manager further comprises a task initiation module capable of identifying an adverse workload condition for at least one of said processing engines based on the results of at least one of said analyses; wherein said identified adverse workload condition comprises at least one of an identified bottleneck, an identified unbalanced workload, an identified condition of overutilization, or a combination thereof; and wherein said task initiation module is capable of automatically reallocating workload between two or more of said processing engines based at least in part on said analysis of said logged resource utilization information to lessen said condition of bottleneck, overutilization, imbalance, or combination thereof.
- 71. The system of claim 69, wherein said distributed interconnect comprises a switch fabric.
- 72. The system of claim 69, wherein said distributed interconnect comprises a virtual distributed interconnect.
- 73. The system of claim 71, wherein said information management system comprises a content delivery system.
- 74. The system of claim 73, wherein said content delivery system comprises a network endpoint content delivery system.
- 75. The system of claim 61, wherein said logging and analysis manager further comprises a task initiation module capable of dynamically managing system resources based on the results of said analyzing.
- 76. A resource utilization analysis system for analyzing resource utilization in a network connectable content delivery system that includes a system management processing engine coupled to a plurality of other processing engines by a distributed interconnect, said resource utilization analysis system comprising:
a resource utilization monitor implemented on said system management processing engine and capable of monitoring resource utilization information on an individual processing engine basis that is obtained across said distributed interconnect from two or more of said plurality of other processing engines; a resource utilization logger implemented on said system management processing engine and capable of logging said monitored resource utilization information on an individual processing engine basis by communicating said monitored resource utilization information to a history repository capable of maintaining said logged resource utilization information on an individual processing engine basis, said history repository being implemented on a server coupled to said system management processing engine; and a logging and analysis manager implemented on said system management processing engine and capable of retrieving said logged resource utilization information on an individual processing engine basis from said history repository and capable of analyzing said logged resource utilization information on an individual processing engine basis; wherein said resource utilization information comprises at least one of memory utilization for said two or more other processing engines, CPU utilization for said two or more other processing engines, IOPS utilization for said two or more other processing engines, or a combination thereof.
- 77. The system of claim 76, wherein said distributed interconnect comprises a switch fabric.
- 78. The system of claim 77, wherein said plurality of processing engines comprise a network interface engine, a storage processing engine, and an application processing engine.
- 79. The system of claim 77, wherein said analyzing comprises performing a peak time period analysis.
- 80. The system of claim 77, wherein said analyzing comprises performing a short term forecast analysis.
- 81. The system of claim 77, wherein said analyzing comprises performing a long term trend analysis.
- 82. The system of claim 77, wherein said analyzing comprises performing a load balancing analysis.
- 83. The system of claim 77, wherein said analyzing comprises performing a bottleneck analysis.
- 84. The system of claim 77, wherein said analyzing comprises performing at least one of a peak time period analysis, a short term forecast analysis, a long term trend analysis, a load balancing analysis, a bottleneck analysis, or a combination thereof.
- 85. The system of claim 84, wherein said logging and analysis manager is further capable of specifying or sizing additional subsystem or system equipment based at least in part on the results of one or more of said analyses.
- 86. The system of claim 84, wherein said logging and analysis manager is further capable of identifying a condition of overutilization for at least one of said processing engines based on the results of at least one of said analyses; and is capable of addressing said identified condition of overutilization in response to said identification by at least one of downloading additional software functionality onto said overutilized processing engine, transferring workload from said overutilized processing engine to a hot spare processing engine, issuing a notification to add additional processing engine hardware, or a combination thereof.
- 87. The system of claim 84, wherein said logging and analysis manager is further capable of identifying an adverse workload condition for at least one of said processing engines based on the results of at least one of said analyses; and is capable of generating an alarm in response to said identification of said adverse workload condition.
- 88. The system of claim 87, wherein said identified adverse workload condition comprises at least one of an identified bottleneck, an identified unbalanced workload, an identified condition of overutilization, or a combination thereof.
- 89. The system of claim 84, wherein said logging and analysis manager is further capable of identifying an adverse workload condition for at least one of said processing engines based on the results of at least one of said analyses; wherein said identified adverse workload condition comprises at least one of an identified bottleneck, an identified unbalanced workload, an identified condition of overutilization, or a combination thereof; and wherein said logging and analysis manager is capable of automatically reallocating workload between two or more of said processing engines based at least in part on said analysis of said logged resource utilization information to lessen said condition of bottleneck, overutilization, imbalance, or combination thereof.
- 90. The system of claim 84, wherein said logging and analysis manager is further capable of forecasting a future adverse workload condition for at least one of said processing engines based on the results of at least one of said analyses.
- 91. The system of claim 90, wherein said logging and analysis manager comprises a user interface module; and wherein said user interface module is capable of allowing a user to reconfigure said information management system to address said forecasted adverse workload condition by user input into said user interface module.
- 92. The system of claim 91, wherein said user interface module is further capable of providing a user with at least one suggested information management system reconfiguration to address said forecasted adverse workload condition.
- 93. The system of claim 92, wherein said user interface module is further capable of allowing a user to purchase additional information system equipment to address said forecasted adverse workload condition by user input into said user interface module.
- 94. The system of claim 76, wherein said logging and analysis manager further comprises a task initiation module capable of dynamically managing system resources based on the results of said analyzing.
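The monitor, log, and analyze flow recited in claims 1, 31, 42, and 61 above can be pictured with a short sketch. The Python below is illustrative only and is not part of the claims; every name in it (UtilizationSample, HistoryRepository, SystemManagementEngine, fake_poller) and the 0.85 overutilization threshold are assumptions invented for this example rather than anything the application defines.

```python
# Illustrative sketch only -- not part of the claims. Under assumed names
# (UtilizationSample, HistoryRepository, SystemManagementEngine, fake_poller)
# it shows a system management engine polling other processing engines for
# CPU, memory, and IOPS utilization, logging each sample to a history
# repository on an individual-processing-engine basis, and analyzing the
# logged history for peaks, short-term trends, and overutilization.

import statistics
import time
from dataclasses import dataclass, field


@dataclass
class UtilizationSample:
    """One monitored reading from one processing engine (fractions of capacity)."""
    engine_id: str
    timestamp: float
    cpu: float
    memory: float
    iops: float


@dataclass
class HistoryRepository:
    """Logs and retrieves samples on an individual-processing-engine basis."""
    samples: dict = field(default_factory=dict)

    def log(self, sample: UtilizationSample) -> None:
        self.samples.setdefault(sample.engine_id, []).append(sample)

    def retrieve(self, engine_id: str) -> list:
        return self.samples.get(engine_id, [])


class SystemManagementEngine:
    """Monitors, logs, and analyzes utilization for a set of processing engines."""

    def __init__(self, engine_ids, poll_engine, repository, threshold=0.85):
        self.engine_ids = engine_ids
        # poll_engine(engine_id) -> UtilizationSample; stands in for a query
        # sent across the distributed interconnect.
        self.poll_engine = poll_engine
        self.repository = repository
        self.threshold = threshold  # assumed overutilization threshold

    def monitor_once(self) -> None:
        """Poll every engine once and log the returned samples."""
        for engine_id in self.engine_ids:
            self.repository.log(self.poll_engine(engine_id))

    def analyze(self, engine_id: str) -> dict:
        """Peak / mean analysis of the logged CPU history for one engine."""
        cpu = [s.cpu for s in self.repository.retrieve(engine_id)]
        if not cpu:
            return {"engine": engine_id, "overutilized": False}
        return {"engine": engine_id, "peak_cpu": max(cpu),
                "mean_cpu": statistics.fmean(cpu),
                "overutilized": statistics.fmean(cpu) > self.threshold}

    def forecast_cpu(self, engine_id: str, horizon_s: float = 3600.0) -> float:
        """Rough short-term forecast: linear CPU trend extrapolated horizon_s ahead."""
        history = self.repository.retrieve(engine_id)
        if len(history) < 2:
            return history[-1].cpu if history else 0.0
        xs = [s.timestamp - history[0].timestamp for s in history]
        ys = [s.cpu for s in history]
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (
            sum((x - mx) ** 2 for x in xs) or 1.0)
        return ys[-1] + slope * horizon_s

    def overutilized_engines(self) -> list:
        """Engines whose logged history indicates a condition to be addressed."""
        return [e for e in self.engine_ids if self.analyze(e)["overutilized"]]


if __name__ == "__main__":
    # Hypothetical demo: a fake poller returning fixed utilization figures.
    def fake_poller(engine_id):
        load = {"network": 0.40, "storage": 0.90, "application": 0.60}[engine_id]
        return UtilizationSample(engine_id, time.time(), load, load, load)

    sme = SystemManagementEngine(["network", "storage", "application"],
                                 fake_poller, HistoryRepository())
    for _ in range(3):
        sme.monitor_once()
    print(sme.overutilized_engines())  # -> ['storage']
```

In a deployed system the poller would correspond to queries issued across the distributed interconnect (e.g. the switch fabric of claims 5, 17, 37, 43, 71, and 77), the repository would sit on a server external to the system as in claims 11, 22, 31, and 63, and the fixed threshold and linear trend stand in for whatever peak time period, short term forecast, long term trend, load balancing, or bottleneck analysis the logging and analysis manager actually performs (claims 12, 23, 50, and 84).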
Parent Case Info
[0001] This application is a continuation-in-part of co-pending U.S. patent application Ser. No. 10/003,683 filed on Nov. 2, 2001 which is entitled “SYSTEMS AND METHODS FOR USING DISTRIBUTED INTERCONNECTS IN INFORMATION MANAGEMENT ENVIRONMENTS,” which itself is a continuation-in-part of co-pending U.S. patent application Ser. No. 09/879,810 filed on Jun. 12, 2001 which is entitled “SYSTEMS AND METHODS FOR PROVIDING DIFFERENTIATED SERVICE IN INFORMATION MANAGEMENT ENVIRONMENTS,” and which also claims priority from co-pending Provisional Application Serial No. 60/285,211 filed on Apr. 20, 2001 which is entitled “SYSTEMS AND METHODS FOR PROVIDING DIFFERENTIATED SERVICE IN A NETWORK ENVIRONMENT,” and which also claims priority from co-pending Provisional Application Serial No. 60/291,073 filed on May 15, 2001 which is entitled “SYSTEMS AND METHODS FOR PROVIDING DIFFERENTIATED SERVICE IN A NETWORK ENVIRONMENT,” and which also claims priority from co-pending Provisional Application Serial No. 60/246,401 filed on Nov. 7, 2000 which is entitled “SYSTEM AND METHOD FOR THE DETERMINISTIC DELIVERY OF DATA AND SERVICES,” and which also claims priority from co-pending U.S. patent application Ser. No. 09/797,200 filed on Mar. 1, 2001 which is entitled “SYSTEMS AND METHODS FOR THE DETERMINISTIC MANAGEMENT OF INFORMATION” which itself claims priority from Provisional Application Serial No. 60/187,211 filed on Mar. 3, 2000 which is entitled “SYSTEM AND APPARATUS FOR INCREASING FILE SERVER BANDWIDTH,” the disclosures of each of the foregoing applications being incorporated herein by reference. This application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/797,404 filed on Mar. 1, 2001 which is entitled “INTERPROCESS COMMUNICATIONS WITHIN A NETWORK NODE USING SWITCH FABRIC” which itself claims priority from Provisional Application Serial No. 60/246,373 filed on Nov. 7, 2000 which is entitled “INTERPROCESS COMMUNICATIONS WITHIN A NETWORK NODE USING SWITCH FABRIC,” and which also claims priority from Provisional Application Serial No. 60/187,211 filed on Mar. 3, 2000 which is entitled “SYSTEM AND APPARATUS FOR INCREASING FILE SERVER BANDWIDTH,” the disclosures of each being incorporated herein by reference. This application is also a continuation-in-part of co-pending U.S. patent application Ser. No. 09/797,413 filed on Mar. 1, 2001 which is entitled “NETWORK CONNECTED COMPUTING SYSTEM,” which itself claims priority from Provisional Application Serial No. 60/187,211, which was filed Mar. 3, 2000 and is entitled “SYSTEM AND APPARATUS FOR INCREASING FILE SERVER BANDWIDTH,” and which also claims priority from Provisional Application Serial No. 60/246,343, which was filed Nov. 7, 2000 and is entitled “NETWORK CONTENT DELIVERY SYSTEM WITH PEER TO PEER PROCESSING COMPONENTS,” and which also claims priority from Provisional Application Serial No. 60/246,335, which was filed Nov. 7, 2000 and is entitled “NETWORK SECURITY ACCELERATOR,” and which also claims priority from Provisional Application Serial No. 60/246,443, which was filed Nov. 7, 2000 and is entitled “METHODS AND SYSTEMS FOR THE ORDER SERIALIZATION OF INFORMATION IN A NETWORK PROCESSING ENVIRONMENT,” and which also claims priority from Provisional Application Serial No. 60/246,373, which was filed Nov. 7, 2000 and is entitled “INTERPROCESS COMMUNICATIONS WITHIN A NETWORK NODE USING SWITCH FABRIC,” and which also claims priority from Provisional Application Serial No. 60/246,444, which was filed Nov. 7, 2000 and is entitled “NETWORK TRANSPORT ACCELERATOR,” and which also claims priority from Provisional Application Serial No. 60/246,372, which was filed Nov. 7, 2000 and is entitled “SINGLE CHASSIS NETWORK ENDPOINT SYSTEM WITH NETWORK PROCESSOR FOR LOAD BALANCING,” the disclosures of each of the foregoing applications being incorporated herein by reference.
Provisional Applications (9)
| Number | Date | Country |
| --- | --- | --- |
| 60285211 | Apr 2001 | US |
| 60291073 | May 2001 | US |
| 60246401 | Nov 2000 | US |
| 60246444 | Nov 2000 | US |
| 60246372 | Nov 2000 | US |
| 60246343 | Nov 2000 | US |
| 60246335 | Nov 2000 | US |
| 60246373 | Nov 2000 | US |
| 60187211 | Mar 2000 | US |
Continuation in Parts (3)
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 10003683 | Nov 2001 | US |
| Child | 10060940 | Jan 2002 | US |
| Parent | 09879810 | Jun 2001 | US |
| Child | 10003683 | Nov 2001 | US |
| Parent | 09797404 | Mar 2001 | US |
| Child | 10060940 | Jan 2002 | US |