This Application claims priority to Chinese Patent Application 200910131892.8, filed Apr. 9, 2009.
The present invention generally relates to computer-aided design of integrated circuits and, more specifically, to the verification of a performance model of an integrated circuit in a memory latency simulation environment.
When a processor submits a request to a memory device, e.g., a Dynamic Random Access Memory (DRAM), the response from the memory device can be read by the processor only after a delay, referred to as a “latency.” For example, a processor may issue a read request to a cache memory system; after a period of time, the cache memory system responds by placing the requested data on the bus. The processor can then receive the data from the bus after the latency expires. If the processor attempts to receive the data from the bus before the latency expires, the processor is likely to receive inaccurate and invalid data. Therefore, when designing processors, it is crucial to take the memory latency into consideration.
In conventional computer-aided design of processors or other integrated circuits, many engineering and machine hours are needed to verify that the model of the integrated circuit is correct. For example, an architectural model of the integrated circuit, typically written by an engineering team, is used to define the functional requirements. A Register Transfer Level (RTL) model of the integrated circuit is then produced, typically by another engineering team, and the logic or functionality of the RTL model is verified against the architectural model. Conventionally, the verification is performed with a “fixed latency” model (or so-called simulation environment), in which the memory latency values are fixed. In reality, the amount of latency can vary depending on several factors, for example, the type of request; it can also vary among requests of the same type. Therefore, a “fixed latency” model is not accurate enough for the verification.
Based on the foregoing, there is a need for a more accurate and dynamic latency model to perform the verification of an integrated circuit.
It is an object of the present invention to provide a method of verifying a performance model of an integrated circuit and a method of designing an integrated circuit.
One aspect of the present invention is to adopt “dynamic latency” in the verification of an integrated circuit. Another aspect of the present invention is to create a new memory latency model for the verification of an integrated circuit. Still another aspect of the present invention is to assign the latency value(s) randomly for the verification of an integrated circuit. Yet another aspect of the present invention is to assign the latency value(s) in a manner related to statistical latency data.
In one embodiment, the method of verifying a performance model of an integrated circuit comprises the following steps: obtaining statistical request numbers and corresponding latency values of memory access; developing functions of latency value based on the statistical request numbers and the corresponding latency values; bringing a random value to one of the functions to retrieve a latency value; and verifying the logic of the performance model using the latency value retrieved in the step above.
In another embodiment, the method of designing an integrated circuit comprises: writing source code of the integrated circuit, and verifying a performance model, e.g., an RTL model, of the integrated circuit. The latter step may further include obtaining statistical request numbers and corresponding latency values of memory access; developing functions of latency value based on the statistical request numbers and the corresponding latency values; bringing a random value to one of the functions to retrieve a latency value; and verifying the logic of the performance model using the latency value retrieved in the step above.
In yet another embodiment, the method of dynamically verifying memory latency in an integrated circuit comprises: writing source code of the integrated circuit, and verifying a performance model of the integrated circuit. The latter step may further include obtaining statistical request numbers and corresponding latency values of memory access; developing functions of latency value based on the statistical request numbers and the corresponding latency values; bringing a random value to one of the functions to retrieve a latency value; bringing another random value to one of the functions to retrieve another latency value; and verifying the logic of the performance model using the latency values retrieved in the steps above.
The foregoing and other features of the invention will be apparent from the following more particular description of embodiments of the invention.
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
FIGS. 3a, 3b, 3c and 3d illustrate exemplary piecewise linear functions according to one or more embodiments of the present invention.
In the following descriptions, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one of skill in the art that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the present invention.
In step 102, statistical data such as request numbers and corresponding latency values of memory access are collected to create a memory latency distribution model.
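By way of non-limiting illustration only, the statistical data collected in step 102 could be represented in a common programming language such as C++ (mentioned below as one possible implementation language). The identifiers and the sample request counts in the following sketch are hypothetical; the counts merely mirror the 15%/35%/50% distribution used in the first example below.

```cpp
#include <vector>

// Hypothetical representation of the statistical data collected in step 102:
// each entry records a latency range (in clock cycles, "Clks") and the number
// of requests whose measured latency fell within that range.
struct LatencyBin {
    double lowClks;         // lower bound of the latency range, in Clks
    double highClks;        // upper bound of the latency range, in Clks
    unsigned long requests; // statistical request number for this range
};

// Illustrative data only: 1000 sampled requests distributed 15%/35%/50%
// over 600-700, 700-800 and 800-900 Clks, as in the first example below.
std::vector<LatencyBin> collectStatistics() {
    return {
        {600.0, 700.0, 150},
        {700.0, 800.0, 350},
        {800.0, 900.0, 500},
    };
}
```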
From the statistical data in this example, the latency values of memory access are divided into a number of ranges, e.g., 600-700 Clks, 700-800 Clks, and 800-900 Clks, and the request numbers falling within each range are accumulated.
In step 104, a predetermined function for each divided range (or sub-range) is developed based on the requests accumulated in that range, e.g., one of the ranges of 600-700 Clks, 700-800 Clks, or 800-900 Clks. It is preferable that the predetermined functions are piecewise linear functions, but other continuous or discontinuous, linear or nonlinear, functions that are able to describe the latency distribution could also be adopted in the present invention.
In this example, the ratios of the requests in each divided range to the total requests are further used to define the segments of the piecewise linear function. For example, since the requests in the first range of 600-700 Clks account for 15% of the total requests, the piecewise linear function for the first range of 600-700 Clks is established between the points 0 and 0.15. Next, the piecewise linear function for the second range of 700-800 Clks is established between 0.15 and 0.5, and the piecewise linear function for the third range of 800-900 Clks is established between 0.5 and 1. The exemplary piecewise linear functions are listed below and illustrated in the appended drawings:
Y=X*(700−600)/(0.15−0.0)+600, where X is between 0 and 0.15;
Y=(X−0.15)*(800−700)/(0.5−0.15)+700, where X is between 0.15 and 0.5;
Y=(X−0.5)*(900−800)/(1.0−0.5)+800, where X is between 0.5 and 1.
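A minimal sketch of step 104, assuming the hypothetical LatencyBin structure introduced above: the cumulative ratio of the request numbers defines the X interval of each segment, and the latency bounds of the range define its Y interval, which reproduces the three piecewise linear functions listed above.

```cpp
#include <vector>

// One piece of the piecewise linear function: X is the cumulative ratio of
// requests, Y is the latency in Clks.
struct Segment {
    double x0, x1;  // interval on the X-axis (cumulative ratio)
    double y0, y1;  // interval on the Y-axis (latency in Clks)
};

// Step 104 (sketch): turn the statistical bins into piecewise linear segments.
// For the 15%/35%/50% data above this produces the three functions listed
// above, e.g. X in [0, 0.15] is mapped onto 600-700 Clks.
std::vector<Segment> developFunctions(const std::vector<LatencyBin>& bins) {
    unsigned long total = 0;
    for (const LatencyBin& b : bins) total += b.requests;

    std::vector<Segment> segments;
    double x = 0.0;
    for (const LatencyBin& b : bins) {
        double ratio = static_cast<double>(b.requests) / total;
        segments.push_back({x, x + ratio, b.lowClks, b.highClks});
        x += ratio;
    }
    return segments;
}
```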
In step 106, one or more random values are selected from a range of 0 to 1 and are brought into the piecewise linear functions listed above as the ratio on the X-axis to retrieve the latency value(s) on the Y-axis. For example, if the random value is 0.1, which is in the range of 0 to 0.15, then the random value of 0.1 is brought into the first piecewise linear function and a latency value of 666.67 Clks is obtained; if the random value is 0.75, which is in the range of 0.5 to 1, then the random value of 0.75 is brought into the third piecewise linear function and a latency value of 850 Clks is obtained. In this manner, the memory latency for an access can be simulated dynamically, which more closely resembles the real case. In step 108, one or more retrieved latency values are used in the verification of the logic of the performance model.
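Continuing the sketch for steps 106 and 108, again with hypothetical identifiers only: a uniform random value in the range of 0 to 1 selects a segment and is linearly interpolated to a latency value, so that, with the segments above, 0.1 yields approximately 666.67 Clks and 0.75 yields 850 Clks, consistent with the example. How the retrieved latency is applied to the simulation environment is environment-specific and is only indicated by a comment.

```cpp
#include <random>
#include <vector>

// Step 106 (sketch): bring a random ratio X in the range of 0 to 1 into the
// piecewise linear functions to retrieve a latency value Y in Clks.
double retrieveLatency(const std::vector<Segment>& segments, double x) {
    for (const Segment& s : segments) {
        if (x >= s.x0 && x <= s.x1) {
            return (x - s.x0) * (s.y1 - s.y0) / (s.x1 - s.x0) + s.y0;
        }
    }
    return segments.back().y1;  // defensive fallback; X is expected in [0, 1)
}

// Step 108 (sketch): drive the simulation environment with dynamic latencies.
void verifyWithDynamicLatency(const std::vector<Segment>& segments,
                              unsigned numAccesses) {
    std::mt19937 rng(12345);  // any seed; fixed here only for repeatability
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    for (unsigned i = 0; i < numAccesses; ++i) {
        double latencyClks = retrieveLatency(segments, dist(rng));
        // Apply latencyClks to the memory response of this access in the
        // performance-model testbench (environment-specific, not shown here).
        (void)latencyClks;
    }
}
```

With the sketches above, a call such as verifyWithDynamicLatency(developFunctions(collectStatistics()), 1000) would, for instance, exercise 1000 accesses with dynamically chosen latencies.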
Note that in step 104 mentioned above, it is not necessary for the present invention to use the ratio of the requests to the total requests. In some embodiments, the segments of the piecewise linear function could be defined directly by the statistical request numbers, and accordingly the random value is selected in step 106 from the range spanned by the entire statistical request numbers.
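For this variant, a possible sketch (assuming the Segment and LatencyBin structures and the retrieveLatency() function from the sketches above): the segment boundaries on the X-axis are the cumulative request numbers themselves, and the random value is drawn from 0 up to the total number of requests instead of from 0 to 1.

```cpp
#include <random>
#include <vector>

// Variant sketch: segment boundaries are cumulative request numbers rather
// than ratios; the random value spans the entire statistical request numbers.
std::vector<Segment> developFunctionsByCount(const std::vector<LatencyBin>& bins) {
    std::vector<Segment> segments;
    double cumulative = 0.0;
    for (const LatencyBin& b : bins) {
        segments.push_back({cumulative, cumulative + b.requests,
                            b.lowClks, b.highClks});
        cumulative += b.requests;
    }
    return segments;
}

double retrieveLatencyByCount(const std::vector<Segment>& segments,
                              std::mt19937& rng) {
    std::uniform_real_distribution<double> dist(0.0, segments.back().x1);
    return retrieveLatency(segments, dist(rng));  // same interpolation as above
}
```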
Also note that the steps 102 to 106 described above may be embodied in a software product, which could be written in a common programming language such as “C++”, and could be executed on a personal computer or a workstation. The software product for performing the steps 102 to 106 could be a standalone product or a functional module to be combined with other module(s) for analyzing the queuing behavior, e.g., in an integrated software product for verification.
In another example, the statistical data discussed above are divided into two ranges, 600-750 Clks and 750-900 Clks.
In step 104, two piecewise linear functions are developed based on the requests accumulated in the ranges of 600-750 Clks and 750-900 Clks. Similar to what has been described above, the ratio of the requests in each divided range to the total requests is used to define the segment of the piecewise linear function. For example, since the requests in the first range of 600 Clks to 750 Clks account for 32.5% of the total request numbers, the piecewise linear function for the first range of 600 Clks to 750 Clks is established between the points 0 and 0.325. Accordingly, the piecewise linear function for the second range of 750 Clks to 900 Clks is established between 0.325 and 1. The exemplary piecewise linear functions are listed below and illustrated in the appended drawings:
Y=X*(750−600)/(0.325−0.0)+600, where X is between 0 and 0.325; and
Y=(X−0.325)*(900−750)/(1−0.325)+750, where X is between 0.325 and 1.
In step 106, if the random value is 0.1, which is in the range of 0 to 0.325, then the random value of 0.1 is brought into the first piecewise linear function and a latency value of 646.15 Clks is obtained; if the random value is 0.75, which is in the range of 0.325 to 1, then the random value of 0.75 is brought into the second piecewise linear function and a latency value of 844.44 Clks is obtained.
In still another example, the statistical data are divided into two ranges, 600-700 Clks and 700-900 Clks.
In step 104, two piecewise linear functions are developed based on the requests accumulated in the ranges of 600-700 Clks and 700-900 Clks. Likewise, the ratio of the requests in each divided range to the total requests is used to define the segment of the piecewise linear function. For example, since the requests in the first range of 600 Clks to 700 Clks account for 15% of the total requests, the piecewise linear function for the first range of 600 Clks to 700 Clks is established between the points 0 and 0.15. Accordingly, the piecewise linear function for the second range of 700 Clks to 900 Clks is established between 0.15 and 1. The exemplary piecewise linear functions are listed below and illustrated in the appended drawings:
Y=X*(700−600)/(0.15−0.0)+600, where X is between 0 and 0.15; and
Y=(X−0.15)*(900−700)/(1−0.15)+700, where X is between 0.15 and 1.
In step 106, if the random value is 0.1, which is in the range of 0 to 0.15, then the random value of 0.1 is brought into the first piecewise linear function and a latency value of 666.67 Clks is obtained; if the random value is 0.75, which is in the range of 0.15 to 1, then the random value of 0.75 is brought into the second piecewise linear function and a latency value of 841.18 Clks is obtained.
In yet another example, the statistical data are divided into two separated ranges, 600-700 Clks and 800-900 Clks.
In step 104, two piecewise linear functions are developed based on the requests accumulated in the ranges of 600-700 Clks and 800-900 Clks. Likewise, the ratio of the requests in each divided range to the total requests is used to define the segment of the piecewise linear function. For example, since the request numbers in the first range of 600 Clks to 700 Clks account for 23.1% of the total requests, the piecewise linear function for the first range of 600 Clks to 700 Clks is established between the points 0 and 0.231. Accordingly, the piecewise linear function for the second range of 800 Clks to 900 Clks is established between 0.231 and 1. The piecewise linear functions are listed below and illustrated in the appended drawings:
Y=X*(700−600)/(0.231−0)+600, where X is between 0 and 0.231; and
Y=(X−0.231)*(900−800)/(1−0.231)+800, where X is between 0.231 and 1.
Note that these two piecewise linear functions are discontinuous, and the first piecewise linear function may be predetermined to be inclusive at the point 0.231.
In step 106, if the random value is 0.1, which is in the range of 0 to 0.231, then the random value of 0.1 is brought into the first piecewise linear function and a latency value of 643.29 Clks is obtained; if the random value is 0.75, which is in the range of 0.231 to 1, then the random value of 0.75 is brought into the second piecewise linear function and a latency value of 867.49 Clks is obtained.
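For this discontinuous example, the generic sketch above could be configured as follows; the identifiers remain hypothetical, and the inclusive treatment of the point 0.231 follows from testing the segments in order with inclusive bounds.

```cpp
// Configuration of the sketch above for this discontinuous example
// (assuming the Segment structure and retrieveLatency() introduced earlier).
std::vector<Segment> gapSegments = {
    {0.0,   0.231, 600.0, 700.0},  // X in [0, 0.231] -> 600-700 Clks
    {0.231, 1.0,   800.0, 900.0},  // X in (0.231, 1] -> 800-900 Clks
};
// Because retrieveLatency() tests the segments in order with inclusive
// bounds, X = 0.231 falls into the first segment and maps to 700 Clks, i.e.
// the first function is inclusive at the point 0.231 as noted above.
// retrieveLatency(gapSegments, 0.1)  -> about 643.29 Clks
// retrieveLatency(gapSegments, 0.75) -> about 867.49 Clks
```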
While this invention has been described with reference to the illustrative embodiments, these descriptions should not be construed in a limiting sense. Various modifications of the illustrative embodiments, as well as other embodiments of the invention, will be apparent upon reference to these descriptions. It is therefore contemplated that the appended claims will cover any such modifications or embodiments as falling within the true scope of the invention and its legal equivalents.
Number | Date | Country | Kind
---|---|---|---
200910131892.8 | Apr. 9, 2009 | CN | national