Claims
- 1. A method of storing a method frame of a method call in a computing system comprising:
storing an execution environment of said method frame in a first memory circuit; and storing one or more parameters, one or more variables, and one or more operands of said method frame in a second memory circuit.
- 2. The method of claim 1, wherein said execution environment comprises a return program counter.
- 3. The method of claim 1, wherein said execution environment comprises a return frame.
- 4. The method of claim 1, wherein said execution environment comprises a return constant pool.
- 5. The method of claim 1, wherein said execution environment comprises a current method vector.
- 6. The method of claim 1, wherein said execution environment comprises a current monitor address.
- 7. The method of claim 1, wherein said one or more parameters and said one or more variables form a local variable area of said method frame.
- 8. The method of claim 7, wherein said one or more operands form an operand stack of said method frame.
- 9. The method of claim 1, further comprising removing said execution environment from said first memory circuit upon completion of said method call.
- 10. The method of claim 1, wherein said first memory circuit is a stack.
- 11. The method of claim 1, wherein said second memory circuit is a stack.
- 12. The method of claim 10, wherein said stack is cached by a stack cache comprising:
a stack cache having a plurality of memory locations; a frame pointer pointing to a top memory location of said stack cache; and a bottom pointer pointing to a bottom memory location of said stack cache.
- 13. The method of claim 12, further comprising:
writing a new execution environment on said stack at said frame memory location; incrementing said frame pointer; spilling a first execution environment from said stack cache to said stack if a spill condition exists; and filling a second execution environment from said stack to said stack cache if a fill condition exists.
- 14. The method of claim 13, wherein said spilling a first execution environment from said stack cache to said stack comprises:
transferring said first execution environment from said bottom memory location to said stack; and incrementing said bottom pointer.
- 15. The method of claim 13, wherein said filling a second execution environment from said stack to said stack cache comprises:
decrementing said bottom pointer; and transferring a second execution environment from said stack to said bottom memory location.
- 16. The method of claim 13, wherein said filling a second execution environment from said stack to said stack cache comprises:
transferring a second execution environment from said stack to a memory location preceding said bottom memory location; and decrementing said bottom pointer.
- 17. The method of claim 13, further comprising:
reading a first stacked execution environment from said stack cache at said top memory location; and decrementing said frame pointer.
- 18. The method of claim 13, further comprising determining if said spill condition exists.
- 19. The method of claim 18 wherein said determining if said spill condition exists comprises:
calculating a number of free memory locations; and comparing said number of free memory locations to a high cache threshold.
- 20. The method of claim 18 wherein said determining if said spill condition exists comprises:
comparing said frame pointer to a high water mark.
- 21. The method of claim 13, further comprising determining if said fill condition exists.
- 22. The method of claim 21, wherein said determining if said fill condition exists comprises:
calculating a number of used memory locations; and comparing said number of used memory locations to a low cache threshold.
- 23. The method of claim 21 wherein said determining if said fill condition exists comprises:
comparing said frame pointer to a low water mark.
- 24. The method of claim 11, wherein said stack is cached by a stack cache comprising:
a stack cache having a plurality of memory locations; an optop pointer pointing to a top memory location of said stack cache; and a bottom pointer pointing to a bottom memory location of said stack cache.
- 25. The method of claim 24, further comprising:
writing a new data word for said stack at said optop memory location; incrementing said optop pointer; spilling a first data word from said stack cache to said stack if a spill condition exists; and filling a second data word from said stack to said stack cache if a fill condition exists.
- 26. The method of claim 25, wherein said spilling a first data word from said stack cache to said stack comprises:
transferring said first data word from said bottom memory location to said stack; and incrementing said bottom pointer.
- 27. The method of claim 25, wherein said filling a second data word from said stack to said stack cache comprises:
decrementing said bottom pointer; and transferring a second data word from said stack to said bottom memory location.
- 28. The method of claim 25, wherein said filling a second data word from said stack to said stack cache comprises:
transferring a second data word from said stack to a memory location preceding said bottom memory location; and decrementing said bottom pointer.
- 29. The method of claim 25, further comprising:
reading a stacked data word from said stack cache at said top memory location; and decrementing said optop pointer.
- 30. The method of claim 25, further comprising:
reading a first stacked data word from said stack cache at said top memory location; reading a second stacked data word from said stack cache at a memory location preceding said top memory location; and decrementing said optop pointer by two.
- 31. A memory architecture of a computing system capable of executing a plurality of method calls, said memory architecture comprising:
a first memory circuit configured to store an execution environment for each of said method calls; and a second memory circuit configured to store parameters, variables, and operands of each of said method calls.
- 32. The memory architecture of claim 31, wherein said first memory circuit is a stack.
- 33. The memory architecture of claim 31, wherein said second memory circuit is a stack.
- 34. The memory architecture of claim 31, wherein said first memory circuit comprises:
a circular memory buffer having a plurality of memory locations; a frame pointer pointing to a top memory location in said circular memory buffer; a bottom pointer pointing to a bottom memory location in said circular memory buffer; a first read port coupled to said circular memory buffer; and a first write port coupled to said circular memory buffer.
- 35. The memory architecture of claim 34, wherein
said first read port is configured to read data from said top memory location; and said first write port is configured to write data above said top memory location.
- 36. The memory architecture of claim 34, wherein said frame pointer is incremented if said first write port writes data above said top memory location.
- 37. The memory architecture of claim 34, wherein said frame pointer is decremented if said first read port pops data from said top memory location.
- 38. The memory architecture of claim 35, wherein
said first read port is also configured to read data from said bottom memory location; and said first write port is also configured to write data below said bottom memory location.
- 39. The memory architecture of claim 38, wherein said bottom pointer is decremented if said first write port writes data below said bottom memory location.
- 40. The memory architecture of claim 38, wherein said bottom pointer is incremented if said first read port reads data from said bottom memory location.
- 41. The memory architecture of claim 35, further comprising:
a second read port coupled to said circular memory buffer; and a second write port coupled to said circular memory buffer.
- 42. The memory architecture of claim 41, wherein
said second read port is configured to read data from said bottom memory location; and said second write port is configured to write data below said bottom memory location.
- 43. The memory architecture of claim 42, wherein said bottom pointer is decremented if said second write port writes data below said bottom memory location.
- 44. The memory architecture of claim 42, wherein said bottom pointer is incremented if said second read port reads data from said bottom memory location.
- 45. The memory architecture of claim 42, wherein said first memory circuit comprises:
a stack; and a stack cache management unit for caching said stack.
- 46. The memory architecture of claim 45, wherein said stack cache management unit comprises:
a stack cache having a stack cache memory circuit coupled to said stack, said stack cache memory circuit having a plurality of memory locations; a cache bottom pointer pointing to and defining a bottom memory location within said stack cache memory circuit; a spill control unit coupled to transfer a first execution environment stored in said bottom memory location from said stack cache to said stack; and a fill control unit coupled to transfer a second execution environment from said stack to said bottom memory location or a memory location adjacent said bottom memory location.
- 47. The memory architecture of claim 46, wherein said stack cache further comprises:
a first read port coupled between said stack cache memory circuit and said stack, wherein said spill control unit controls said first read port; and a first write port coupled between said stack cache memory circuit and said stack, wherein said fill control unit controls said first write port.
- 48. The memory architecture of claim 47, further comprising a frame pointer pointing to and defining a top memory location of said stack cache memory circuit.
- 49. The memory architecture of claim 32, wherein said second memory circuit comprises:
a circular memory buffer having a plurality of memory locations; an optop pointer pointing to a top memory location in said circular memory buffer; a bottom pointer pointing to a bottom memory location in said circular memory buffer; a first read port coupled to said circular memory buffer; and a first write port coupled to said circular memory buffer.
- 50. The memory architecture of claim 49, wherein
said first read port is configured to read data from said top memory location; and said first write port is configured to write data above said top memory location.
- 51. The memory architecture of claim 49, wherein said optop pointer is incremented if said first write port writes data above said top memory location.
- 52. The memory architecture of claim 49, wherein said optop pointer is decremented if said first read port pops data from said top memory location.
- 53. The memory architecture of claim 32, wherein said second memory circuit comprises:
a stack; and a stack cache management unit for caching said stack.
- 54. The memory architecture of claim 53, wherein said stack cache management unit comprises:
a stack cache having a stack cache memory circuit coupled to said stack, said stack cache memory circuit having a plurality of memory locations; a cache bottom pointer pointing to and defining a bottom memory location within said stack cache memory circuit; a spill control unit coupled to transfer a first data word stored in said bottom memory location from said stack cache to said stack; and a fill control unit coupled to transfer a second data word from said stack to said bottom memory location or a memory location adjacent said bottom memory location.
- 55. The memory architecture of claim 54, wherein said stack cache further comprises:
a first read port coupled between said stack cache memory circuit and said stack, wherein said spill control unit controls said first read port; and a first write port coupled between said stack cache memory circuit and said stack, wherein said fill control unit controls said first write port.
- 56. The memory architecture of claim 55, further comprising an optop pointer pointing to and defining a top memory location of said stack cache memory circuit.
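Claims 1-11 and 31-33 above describe splitting a method frame so that its execution environment lives in one memory circuit while its parameters, local variables, and operands live in a second memory circuit. The following C sketch is only an illustration of that partitioning; the names, field widths, and array sizes (exec_env_t, env_stack, data_stack, 256, 4096) are invented for the example and are not the claimed hardware implementation.

```c
/*
 * A minimal sketch, not the patented implementation: the execution
 * environment of a method frame goes to one memory circuit (modeled here
 * as env_stack), while its parameters, variables, and operands go to a
 * second memory circuit (data_stack).  All names, field widths, and array
 * sizes are illustrative assumptions.
 */
#include <stdint.h>

typedef struct {                 /* execution environment (claims 2-6) */
    uint32_t return_pc;          /* return program counter             */
    uint32_t return_frame;       /* return frame                       */
    uint32_t return_const_pool;  /* return constant pool               */
    uint32_t method_vector;      /* current method vector              */
    uint32_t monitor_addr;       /* current monitor address            */
} exec_env_t;

/* First memory circuit: a stack of execution environments (claim 10). */
static exec_env_t env_stack[256];
static int        env_top;

/* Second memory circuit: a stack holding the local variable area
 * (parameters and variables, claim 7) followed by the operand stack
 * (claim 8) of each frame (claim 11). */
static uint32_t data_stack[4096];
static int      data_top;

/* Invoke a method: the execution environment is pushed on the first
 * stack, the parameters/variables on the second. */
static void push_frame(const exec_env_t *env,
                       const uint32_t *params, int nparams)
{
    env_stack[env_top++] = *env;
    for (int i = 0; i < nparams; i++)
        data_stack[data_top++] = params[i];
}

/* Return from a method: the execution environment is removed from the
 * first memory circuit (claim 9) and the frame's words from the second. */
static exec_env_t pop_frame(int frame_words)
{
    data_top -= frame_words;
    return env_stack[--env_top];
}
```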
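Claims 12-30 and 34-56 describe caching the top of each such stack in a small circular buffer: entries are spilled from the cache bottom to memory when a spill condition holds and filled back when a fill condition holds. The sketch below models that behavior in software under assumed sizes and thresholds (CACHE_SIZE, HIGH_WATER, and LOW_WATER are invented values); the spill and fill checks are one plausible reading of the high/low water mark comparisons in claims 19-20 and 22-23, not the claimed circuitry.

```c
/*
 * Minimal software model of a circular-buffer stack cache with spill and
 * fill, per the general scheme of claims 12-30 and 34-56.  All constants
 * and names are illustrative assumptions.
 */
#include <stdint.h>

#define CACHE_SIZE 64                 /* stack cache memory locations      */
#define HIGH_WATER (CACHE_SIZE - 8)   /* spill when more entries are used  */
#define LOW_WATER  8                  /* fill when fewer entries are used  */

static uint32_t backing_stack[1 << 16];   /* the stack in slower memory    */
static int      backing_top;

static uint32_t cache[CACHE_SIZE];        /* circular stack cache buffer   */
static int      optop;                    /* next free (top) cache slot    */
static int      bottom;                   /* bottom (oldest) cache slot    */
static int      used;                     /* entries currently cached      */

/* Spill: copy the bottom cache entry to the backing stack and advance the
 * cache-bottom pointer (claims 14 and 26). */
static void spill(void)
{
    backing_stack[backing_top++] = cache[bottom];
    bottom = (bottom + 1) % CACHE_SIZE;
    used--;
}

/* Fill: retreat the cache-bottom pointer and copy an entry back from the
 * backing stack into the freed bottom location (claims 15-16 and 27-28). */
static void fill(void)
{
    bottom = (bottom - 1 + CACHE_SIZE) % CACHE_SIZE;
    cache[bottom] = backing_stack[--backing_top];
    used++;
}

/* Push a data word: write at the optop location, increment optop, then
 * spill if the spill condition holds (claims 13, 25, 19-20). */
static void push_word(uint32_t w)
{
    cache[optop] = w;
    optop = (optop + 1) % CACHE_SIZE;
    used++;
    if (used > HIGH_WATER)                    /* spill condition */
        spill();
}

/* Pop a data word: decrement optop, read the top location, then fill if
 * the fill condition holds (claims 17, 29, 22-23). */
static uint32_t pop_word(void)
{
    optop = (optop - 1 + CACHE_SIZE) % CACHE_SIZE;
    uint32_t w = cache[optop];
    used--;
    if (used < LOW_WATER && backing_top > 0)  /* fill condition */
        fill();
    return w;
}
```

In this sketch a single count of used entries stands in for the free/used-location calculations of claims 19 and 22; a hardware implementation could instead compare the optop (or frame) and cache-bottom pointers directly against the water marks.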
RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application No. 60/010,527, filed Jan. 24, 1996, entitled “Methods and Apparatuses for Implementing the JAVA Virtual Machine” (JAVA is a trademark of Sun Microsystems, Inc.) and naming Marc Tremblay, James Michael O'Connor, Robert Garner, and William N. Joy as inventors. This application is also a continuation-in-part of U.S. application Ser. No. 08/647,103, filed May 7, 1996, entitled “METHOD AND APPARATUS FOR STACK HARDWARE PARTITIONING FOR A STACK-BASED TYPE PROCESSOR” and naming James Michael O'Connor and Marc Tremblay as inventors, and of U.S. application Ser. No. 08/642,253, filed May 2, 1996, entitled “METHODS AND APPARATUSES FOR IMPLEMENTING OPERAND STACK CACHE AS A CIRCULAR BUFFER” and naming Marc Tremblay and James Michael O'Connor as inventors, both of which also claimed the benefit of U.S. Provisional Application No. 60/010,527, filed Jan. 24, 1996, entitled “Methods and Apparatuses for Implementing the JAVA Virtual Machine” and naming Marc Tremblay, James Michael O'Connor, Robert Garner, and William N. Joy as inventors.
Provisional Applications (1)

| Number | Date | Country |
| --- | --- | --- |
| 60010527 | Jan 1996 | US |
Continuations (3)

| Relationship | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 08787617 | Jan 1997 | US |
| Child | 10346886 | Jan 2003 | US |
| Parent | 08647103 | May 1996 | US |
| Child | 10346886 | Jan 2003 | US |
| Parent | 08642253 | May 1996 | US |
| Child | 10346886 | Jan 2003 | US |