Claims
- 1. A resource sharing system comprising:
a first processor and a second processor, the first processor managing a resource which is to be made available to the second processor; a communications protocol comprising a first interprocessor communications protocol running on the first processor, and a second interprocessor communications protocol running on the second processor which is a peer to the first interprocessor communications protocol; a physical layer interconnection between the first processor and the second processor; a first application layer entity on the first processor and a corresponding second application layer entity on the second processor, the first application layer entity and the second application layer entity together being adapted to arbitrate access to the resource between the first processor and the second processor using the first interprocessor communications protocol, the physical layer interconnection and the second interprocessor communications protocol to provide a communication channel between the first application layer entity and the second application layer entity.
- 2. The resource sharing system according to claim 1 wherein arbitrating access to the resource between the first processor and the second processor comprises arbitrating access to the resource between one or more applications running on the first processor and one or more applications running on the second processor.
- 3. The resource sharing system according to claim 1 wherein the first application layer entity is a resource manager and the second application layer entity is a peer resource manager.
- 4. The system according to claim 1 further comprising an application layer state machine running on at least one of the first and second processors adapted to define a state of the resource.
- 5. The system according to claim 1 further comprising an interprocessor resource arbitration messaging protocol.
- 6. The system according to claim 1 further comprising:
for each of a plurality of resources to be shared, a respective first application layer entity on the first processor and a respective corresponding second application layer entity on the second processor, the respective first application layer entity and the respective second application layer entity together being adapted to arbitrate access to the resource between the first processor and the second processor, using the first interprocessor communications protocol, the physical layer interconnection and the second interprocessor communications protocol to provide a communication channel between the respective first application layer entity and the respective second application layer entity.
- 7. The resource sharing system according to claim 6 wherein arbitrating access to each resource between the first processor and the second processor comprises arbitrating access to the resource between one or more applications running on the first processor and one or more applications running on the second processor.
- 8. The system according to claim 6 wherein one of the two interprocessor communications protocols is designed for efficiency and orthogonality between application layer entities running on the processor running the one of the two interprocessor communications protocols, and the other of the two interprocessor communications protocols is designed to leave undisturbed real-time profiles of existing real-time functions of the processor running the other of the two interprocessor communications protocols.
- 9. The system according to claim 8 wherein the first processor is a host processor, and the second processor is a coprocessor adding further functionality to the host processor.
- 10. The system according to claim 9 wherein the host processor has a message passing mechanism outside of the first interprocessor communications protocol to communicate between the first interprocessor communications protocol and the first application layer entity.
- 11. The system according to claim 6 further comprising for each resource to be shared a respective resource specific interprocessor resource arbitration messaging protocol.
- 12. The system according to claim 11 further comprising for each resource a respective application layer state machine running on at least one of the first and second processors adapted to define a state of the resource.
- 13. The system according to claim 6 wherein:
the first interprocessor communications protocol and the second interprocessor communications protocol are adapted to provide a respective resource-specific communications channel in respect of each resource, each resource-specific communications channel providing an interconnection between the application layer entities arbitrating use of the resource.
- 14. The system according to claim 6 wherein:
the first interprocessor communications protocol and the second interprocessor communications protocol are adapted to provide a respective resource-specific communications channel in respect of each resource; wherein at least one resource-specific communications channel provides an interconnection between the application layer entities arbitrating use of the resource; wherein at least one resource-specific communications channel maps directly to a processing algorithm called by the communications protocol.
- 15. The system according to claim 13 wherein for each resource-specific communications channel, the first interprocessor communications protocol and the second interprocessor communications protocol each have a respective receive queue and a respective transmit queue.
- 16. The system according to claim 6 wherein the first and second interprocessor communications protocols are adapted to exchange messages using a plurality of priorities.
- 17. The system according to claim 15 wherein the first and second interprocessor communications protocols are adapted to exchange data using a plurality of priorities by providing a respective transmit channel queue and a respective receive channel queue for each priority, and by serving higher priority channel queues before lower priority queues.
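Claims 16 and 17 recite exchanging data over per-priority transmit and receive channel queues, serving higher-priority queues before lower ones. A minimal sketch of such a service discipline follows; the queue depth, the three-level priority count, and all names are assumptions for illustration, not taken from the specification:

```c
#include <assert.h>
#include <stddef.h>

#define NUM_PRIORITIES 3   /* hypothetical: one queue per priority, 0 = highest */
#define QUEUE_DEPTH    8   /* hypothetical fixed depth */

/* One FIFO of opaque channel data elements per priority level. */
typedef struct {
    const void *elems[QUEUE_DEPTH];
    size_t head, tail, count;
} channel_queue_t;

static channel_queue_t tx_queues[NUM_PRIORITIES];

int channel_enqueue(channel_queue_t *q, const void *elem)
{
    if (q->count == QUEUE_DEPTH)
        return -1;                        /* queue full: congestion */
    q->elems[q->tail] = elem;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->count++;
    return 0;
}

/* Serve the highest-priority non-empty queue first, as in claim 17. */
const void *channel_dequeue_highest(channel_queue_t queues[], int *prio_out)
{
    for (int p = 0; p < NUM_PRIORITIES; p++) {
        channel_queue_t *q = &queues[p];
        if (q->count > 0) {
            const void *elem = q->elems[q->head];
            q->head = (q->head + 1) % QUEUE_DEPTH;
            q->count--;
            if (prio_out)
                *prio_out = p;
            return elem;
        }
    }
    return NULL;                          /* all queues empty */
}
```

A strict-priority loop like this starves lower priorities under sustained high-priority load, which is why the claims pair it with per-queue flow control.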
- 18. The system according to claim 12 wherein at least one of the application layer entities is adapted to advise at least one respective third application layer entity of changes in the state of their respective resources.
- 19. The system according to claim 18 wherein each at least one respective third application layer entity is an application which has registered with one of the application layer entities to be advised of changes in the state of one or more particular resources.
- 20. The system according to claim 12 wherein each state machine maintains a state of the resource and identifies how incoming and outgoing messages of the associated resource specific messaging protocol affect the state of the state machine.
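Claims 4, 12, 20 and 27 recite an application layer state machine that maintains the state of a resource and defines how incoming and outgoing arbitration messages affect that state. The claims do not fix a particular state set, so the states and messages below are hypothetical; the sketch only illustrates the message-driven transition idea:

```c
#include <assert.h>

/* Hypothetical resource states and arbitration messages (illustrative only;
 * the claims leave the concrete state set open). */
typedef enum { RES_IDLE, RES_REQUESTED, RES_GRANTED } res_state_t;
typedef enum { MSG_REQUEST, MSG_GRANT, MSG_DENY, MSG_RELEASE } res_msg_t;

/* Apply one message of the resource-specific messaging protocol to the
 * state machine; unrecognized messages leave the state unchanged. */
res_state_t res_apply(res_state_t s, res_msg_t m)
{
    switch (s) {
    case RES_IDLE:      return (m == MSG_REQUEST) ? RES_REQUESTED : s;
    case RES_REQUESTED: return (m == MSG_GRANT) ? RES_GRANTED
                             : (m == MSG_DENY)  ? RES_IDLE : s;
    case RES_GRANTED:   return (m == MSG_RELEASE) ? RES_IDLE : s;
    }
    return s;
}
```

Per claim 27, an instance of such a machine would run on both processors for each shared resource, each side updating its copy as messages are sent and received.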
- 21. The system according to claim 9 wherein the second interprocessor communications protocol comprises a channel thread domain which provides at least two different priorities over the physical layer interconnection.
- 22. The system according to claim 21 wherein the channel thread domain runs as part of a physical layer ISR (interrupt service routine).
- 23. The system according to claim 21 wherein the channel thread domain provides at least two different priorities and a control priority.
- 24. The system according to claim 9 wherein for each resource, the respective second application layer entity comprises an incoming message listener, an outgoing message producer and a state controller.
- 25. The system according to claim 24 wherein the state controller and outgoing message producer are on one thread specific to each resource, and the incoming message listener is a separate thread that is adapted to serve a plurality of resources.
- 26. The system according to claim 8 wherein for each resource, the second application layer entity is entirely event driven and controlled by an incoming message listener.
- 27. The system according to claim 12 wherein a state machine is maintained on both processors for each resource.
- 28. The system according to claim 12 wherein the second interprocessor communications protocol further comprises a system observable having a system state machine and state controller.
- 29. The system according to claim 28 wherein messages in respect of all resources are routed through the system observable, thereby allowing conglomerate resource requests.
- 30. A system according to claim 8 wherein each second application layer entity has a common API (application programming interface).
- 31. A system according to claim 30 wherein the common API comprises, for a given application layer entity, one or more interfaces in the following group:
an interface for an application to register with the application layer entity to receive event notifications generated by this application layer entity; an interface for an application to de-register from the application layer entity to no longer receive event notifications generated by this application layer entity; an interface for an application to temporarily suspend the notifications from the application layer entity; an interface for an application to end the suspension of the notifications from that application layer entity; an interface to send data to the corresponding application layer entity; and an interface to invoke a callback function from the application layer entity to another application.
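The common API of claim 31 can be pictured as a small notification interface. The C names, the single-listener simplification, and the suspend flag below are all illustrative assumptions, not taken from the specification:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical C rendering of the claim 31 interfaces, simplified to a
 * single registered listener per application layer entity. */
typedef void (*rm_callback_t)(int event, void *ctx);

typedef struct {
    rm_callback_t cb;   /* registered listener, NULL if none */
    void *ctx;
    int suspended;      /* nonzero while notifications are suspended */
} resource_manager_t;

int rm_register(resource_manager_t *rm, rm_callback_t cb, void *ctx)
{ rm->cb = cb; rm->ctx = ctx; rm->suspended = 0; return 0; }

int rm_deregister(resource_manager_t *rm)
{ rm->cb = NULL; return 0; }

void rm_suspend(resource_manager_t *rm) { rm->suspended = 1; }
void rm_resume(resource_manager_t *rm)  { rm->suspended = 0; }

/* Deliver an event notification unless suspended or unregistered. */
int rm_notify(resource_manager_t *rm, int event)
{
    if (rm->cb == NULL || rm->suspended)
        return -1;
    rm->cb(event, rm->ctx);
    return 0;
}

/* Example listener used to exercise the interface. */
static int last_event = -1;
static void on_event(int event, void *ctx) { (void)ctx; last_event = event; }
```

The send and cross-application callback interfaces of claim 31 are omitted here; they would sit alongside these calls in the same common API.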
- 32. The system according to claim 8 further comprising:
for each resource a respective receive session queue and a respective transmit session queue in at least one of the first interprocessor communications protocol and the second interprocessor communications protocol.
- 33. The system according to claim 32 further comprising:
for each of a plurality of different priorities, a respective receive channel queue and a respective transmit channel queue in at least one of the first interprocessor communications protocol and the second interprocessor communications protocol.
- 34. The system according to claim 33 further comprising on at least one of the two processors, a physical layer service routine adapted to service the transmit channel queues by dequeueing channel data elements from the transmit channel queues starting with a highest priority transmit channel queue and transmitting the channel data elements thus dequeued over the physical layer interconnection, and to service the receive channel queues by dequeueing channel data elements from the physical layer interconnection and enqueueing them on a receive channel queue having a priority matching that of the dequeued channel data element.
- 35. The system according to claim 33 wherein on one of the two processors, the transmit channel queues and receive channel queues are serviced on a scheduled basis, the system further comprising on the one of the two processors, a transmit buffer between the transmit channel queues and the physical layer interconnection and a receive buffer between the physical layer interconnection and the receive channel queues, wherein the output of the transmit channel queues is copied to the transmit buffer which is then periodically serviced by copying to the physical layer interconnection, and wherein received data from the physical layer interconnection is emptied into the receive buffer which is then serviced when the channel controller is scheduled.
- 36. The system according to claim 34 wherein each transmit session queue is bound to one of the transmit channel queues, each receive session queue is bound to one of the receive channel queues and each session queue is given a priority matching the channel queue to which the session queue is bound, the system further comprising:
a session thread domain adapted to dequeue from the transmit session queues working from highest priority session queue to lowest priority session queue and to enqueue on the transmit channel queue to which the transmit session queue is bound, and to dequeue from the receive channel queues working from the highest priority channel queue to the lowest priority channel queue and to enqueue on an appropriate receive session queue, the appropriate receive session queue being determined by matching an identifier in that which is to be enqueued to a corresponding session queue identifier.
- 37. The system according to claim 36 wherein data/messages are transmitted between corresponding application layer entities managing a given resource in frames;
wherein the session thread domain converts each frame into one or more packets; wherein the channel thread domain converts each packet into one or more blocks for transmission.
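Claim 37's layering (frames split into packets by the session thread domain, packets split into blocks by the channel thread domain) implies a simple segmentation arithmetic. The payload sizes below are invented for illustration:

```c
#include <assert.h>

/* Hypothetical payload capacities; the claims do not specify sizes. */
#define PACKET_PAYLOAD 64u   /* bytes of frame data carried per packet */
#define BLOCK_PAYLOAD  16u   /* bytes of packet data carried per block */

static unsigned ceil_div(unsigned n, unsigned d) { return (n + d - 1) / d; }

/* Packets the session thread domain needs for one frame of frame_len bytes. */
unsigned packets_per_frame(unsigned frame_len)
{
    return frame_len ? ceil_div(frame_len, PACKET_PAYLOAD) : 1;
}

/* Blocks the channel thread domain puts on the wire for one frame: each
 * packet is segmented independently, so a partial last packet still rounds
 * up to whole blocks. */
unsigned blocks_per_frame(unsigned frame_len)
{
    if (frame_len == 0)
        return 1;
    unsigned full = frame_len / PACKET_PAYLOAD;
    unsigned rem  = frame_len % PACKET_PAYLOAD;
    unsigned blocks = full * ceil_div(PACKET_PAYLOAD, BLOCK_PAYLOAD);
    if (rem)
        blocks += ceil_div(rem, BLOCK_PAYLOAD);
    return blocks;
}
```

With these assumed sizes, a 100-byte frame becomes 2 packets (64 + 36 bytes) and 7 blocks (4 for the full packet, 3 for the partial one).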
- 38. The system according to claim 37 wherein blocks received by the channel controller are stored in a data structure comprising one or more blocks, and a reference to the data structure is queued for the session layer thread domain to process.
- 39. The system according to claim 38 further comprising, for each of a plurality of {queue, peer queue} pairs implemented by the first and second interprocessor communications protocols, a respective flow control protocol.
- 40. The system according to claim 39 further comprising:
for each of a plurality of {transmit session queue, peer receive session queue} pairs implemented by the first and second interprocessor communications protocols, a respective flow control protocol, wherein the session thread is adapted to handle congestion in a session queue; for each of a plurality of {transmit channel queue, peer receive channel queue} pairs implemented by the first and second interprocessor communications protocols, a respective flow control protocol, wherein the channel controller handles congestion on a channel queue.
- 41. The system according to claim 40 wherein the session controller handles congestion in a receive session queue with flow control messaging exchanged through an in-band control channel.
- 42. The system according to claim 40 wherein the physical layer ISR handles congestion in a receive channel queue with flow control messaging exchanged through an out-of-band channel.
- 43. The system according to claim 40 wherein congestion in a transmit session queue is handled by the corresponding application entity.
- 44. The system according to claim 40 wherein congestion in a transmit channel queue is handled by the session thread by holding any channel data element directed to the congested queues and letting traffic queue up in the session queues.
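Claim 44 describes handling a congested transmit channel queue by holding channel data elements and letting traffic accumulate in the session queues rather than dropping it, i.e. backpressure between the two queue layers. A toy sketch with hypothetical queue depths:

```c
#include <assert.h>

/* Hypothetical depths; the claims leave queue sizing open. */
#define CHAN_DEPTH 2u
#define SESS_DEPTH 8u

typedef struct { unsigned count, depth; } fifo_t;

static int fifo_push(fifo_t *q)
{
    if (q->count == q->depth)
        return -1;            /* full */
    q->count++;
    return 0;
}

/* Session thread step: move one element from the session queue toward the
 * channel queue.  On congestion (claim 44) the element is held in the
 * session queue instead of being discarded.
 * Returns 1 if forwarded, 0 if nothing pending, -1 if held back. */
int session_forward(fifo_t *sess, fifo_t *chan)
{
    if (sess->count == 0)
        return 0;
    if (fifo_push(chan) != 0)
        return -1;            /* channel congested: leave element queued */
    sess->count--;
    return 1;
}
```

The element stays queued at the session layer until the channel queue drains, at which point the same forwarding step succeeds; per claims 40 and 43, a session queue that fills up in turn pushes the congestion back to the application entity.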
- 45. The system according to claim 8 wherein the interprocessor communications protocol designed to mitigate the effects on the real-time profile further comprises an additional buffer between the physical layer interconnection and a scheduled combined channel controller/session manager function, the buffer being adapted to perform buffering during periods between schedulings of the combined channel controller/session manager.
- 46. The system according to claim 45 wherein the first interprocessor communications protocol interfaces with application layer entities using a message-passing mechanism provided by the first processor, external to the first interprocessor communications protocol, each application layer entity being a resource manager.
- 47. The system according to claim 45 wherein the first interprocessor communications protocol is implemented with a single thread acting as a combined channel controller and session manager.
- 48. The system according to claim 45 wherein the first interprocessor communications protocol is implemented with a single system task acting as a combined channel controller and session manager.
- 49. The system according to claim 1 wherein the physical layer interconnection is a serial link.
- 50. The system according to claim 1 wherein the physical layer interconnection is an HPI (host processor interface).
- 51. The system according to claim 1 wherein the physical layer interconnection is a shared memory arrangement.
- 52. The system according to claim 1 wherein the physical layer interconnection comprises an in-band messaging channel and an out-of-band messaging channel.
- 53. The system according to claim 52 wherein the out-of-band messaging channel comprises at least one hardware mailbox.
- 54. The system according to claim 53 wherein the at least one hardware mailbox comprises at least one mailbox for each direction of communication.
- 55. The system according to claim 52 wherein the in-band messaging channel comprises a hardware FIFO.
- 56. The system according to claim 52 wherein the in-band messaging channel comprises a pair of unidirectional hardware FIFOs.
- 57. The system according to claim 52 wherein the in-band messaging channel comprises a shared memory location.
- 58. The system according to claim 56 wherein the out-of-band messaging channel comprises a hardware mailbox, the hardware mailbox causing an interrupt on the appropriate processor.
- 59. The system according to claim 52 wherein an out-of-band message to a particular processor causes an interrupt on the processor to receive the out-of-band message and causes activation of an interrupt service routine which is adapted to parse the message.
- 60. An interprocessor interface for interfacing between a first processor core and a second processor core, the interprocessor interface comprising:
at least one data FIFO queue having an input adapted to receive data from the second processor core and an output adapted to send data to the first processor core; at least one data FIFO queue having an input adapted to receive data from the first processor core and an output adapted to send data to the second processor core; a first out-of-band message transfer channel for sending a message from the first processor core to the second processor core; a second out-of-band message transfer channel for sending a message from the second processor core to the first processor core.
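The interface of claim 60 (one data FIFO per direction plus one out-of-band message transfer channel per direction) might be exposed to software as a memory-mapped register block, consistent with the memory-mapped functionality of claims 64 and 71. The layout, offsets, and status bits below are illustrative assumptions only, not taken from the specification:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical register layout for the claim 60 interface, as seen from one
 * processor core.  All offsets and bit assignments are invented. */
typedef struct {
    volatile uint32_t tx_fifo;      /* offset 0:  write data toward the peer core   */
    volatile uint32_t rx_fifo;      /* offset 4:  read data arriving from the peer  */
    volatile uint32_t fifo_status;  /* offset 8:  bit 0 = rx non-empty, bit 1 = tx full */
    volatile uint32_t mbox_out;     /* offset 12: out-of-band message to the peer   */
    volatile uint32_t mbox_in;      /* offset 16: out-of-band message from the peer */
} ipc_regs_t;

/* Post an out-of-band message; in hardware this write would also raise an
 * interrupt on the peer core (claim 62), unlike in-band FIFO traffic. */
static void ipc_send_oob(ipc_regs_t *regs, uint32_t msg)
{
    regs->mbox_out = msg;
}
```

Each core would see its own mapping of this block (claim 64), with `tx` and `rx` roles mirrored, so the same driver code can run on either side.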
- 61. A system on a chip comprising an interprocessor interface according to claim 60 in combination with the second processor core.
- 62. An interprocessor interface according to claim 60 further comprising:
a first interrupt channel adapted to allow the first processor core to interrupt the second processor core; and a second interrupt channel adapted to allow the second processor core to interrupt the first processor core.
- 63. An interprocessor interface according to claim 60 further comprising at least one register adapted to store an interrupt vector.
- 64. An interprocessor interface according to claim 60 having functionality accessible by the first processor core memory mapped to a first memory space understood by the first processor core, and having functionality accessible by the second processor core memory mapped to a second memory space understood by the second processor core.
- 65. An interprocessor interface according to claim 60 comprising a first access port comprising:
a data port, an address port and a plurality of control ports.
- 66. An interprocessor interface according to claim 65 wherein the control ports comprise one or more of a group comprising chip select, write, read, interrupt, and DMA (direct memory access) interrupts.
- 67. An interprocessor interface according to claim 65 further comprising chip select decode circuitry adapted to allow a chip select normally reserved for another chip to be used for the interprocessor interface over a range of addresses memory mapped to the interprocessor interface, the range of addresses comprising at least a sub-set of addresses previously mapped to said another chip.
- 68. An interprocessor interface according to claim 65 comprising a second access port comprising:
a data port, an address port, and a control port.
- 69. A system on a chip comprising the interprocessor interface of claim 68 in combination with the second processor, wherein the second access port is internal to the system on a chip.
- 70. An interprocessor interface according to claim 60 further comprising at least one general purpose input/output pin.
- 71. An interprocessor interface according to claim 60 further comprising:
a first plurality of memory mapped registers accessible to the first processor core, and a second plurality of memory mapped registers accessible to the second processor core.
- 72. An interprocessor interface according to claim 60 wherein the second processor core has a sleep state in which the second processor core has a reduced power consumption, and in which the interprocessor interface remains active.
- 73. An interprocessor interface according to claim 72 further comprising a register indicating the sleep state of the second processor core.
- 74. A system on a chip according to claim 61 wherein the second processor core has a sleep mode in which the second processor core has a reduced power consumption, and in which the interprocessor interface remains active.
- 75. A system on a chip according to claim 74 further comprising a register indicating the sleep state of the second processor core.
- 76. The system according to claim 1 wherein the physical layer interconnection between the first processor and the second processor comprises an interprocessor interface, the interprocessor interface comprising:
at least one data FIFO queue having an input adapted to receive data from the second processor core and an output adapted to send data to the first processor core; at least one data FIFO queue having an input adapted to receive data from the first processor core and an output adapted to send data to the second processor core; a first out-of-band message transfer channel for sending a message from the first processor core to the second processor core; a second out-of-band message transfer channel for sending a message from the second processor core to the first processor core.
- 77. The system according to claim 76 wherein the interprocessor interface further comprises:
a first interrupt channel adapted to allow the first processor core to interrupt the second processor core; and a second interrupt channel adapted to allow the second processor core to interrupt the first processor core.
- 78. The system according to claim 77 wherein the interprocessor interface further comprises at least one register adapted to store an interrupt vector.
- 79. The system according to claim 76 wherein the interprocessor interface has functionality accessible by the first processor core memory mapped to a first memory space understood by the first processor core, and has functionality accessible by the second processor core memory mapped to a second memory space understood by the second processor core.
- 80. A resource sharing system comprising:
first processing means and second processing means, the first processing means managing a resource means which is to be made available to the second processing means; a first interprocessor communications protocol means running on the first processing means, and a second interprocessor communications protocol means running on the second processing means which is a peer to the first interprocessor communications protocol means; a physical layer interconnection means between the first processing means and the second processing means; a first application layer means on the first processing means and a corresponding second application layer means on the second processing means, the first application layer means and the second application layer means together being adapted to arbitrate access to the resource means between the first processing means and the second processing means using the first interprocessor communications protocol means, the physical layer interconnection means and the second interprocessor communications protocol means to provide a communication channel between the first application layer means and the second application layer means.
- 81. The system according to claim 80 further comprising an application layer state machine means running on at least one of the first processing means and second processing means adapted to define a state of the resource means.
- 82. The system according to claim 80 further comprising:
for each of a plurality of resource means to be shared, a respective first application layer means on the first processing means and a respective corresponding second application layer means on the second processing means, the respective first application layer means and the respective second application layer means together being adapted to arbitrate access to the resource means between the first processing means and the second processing means, using the first interprocessor communications protocol means, the physical layer interconnection means and the second interprocessor communications protocol means to provide a communication channel between the respective first application layer means and the respective second application layer means.
- 83. The system according to claim 82 wherein one of the two interprocessor communications protocol means is a scheduled process, and the other of the two interprocessor communications protocols is designed for efficiency and orthogonality between application layer entities running on the processing means.
- 84. The system according to claim 83 further comprising:
for each resource means to be shared a respective resource specific interprocessor resource arbitration messaging protocol means; for each resource a respective application layer state machine means running on at least one of the first and second processing means adapted to define a state of the resource means.
- 85. The system according to claim 84 wherein the first and second interprocessor communications protocol means are adapted to exchange data using a plurality of priorities by providing a respective transmit channel queue means and a respective receive channel queue means for each priority, and by serving higher priority channel queue means before lower priority queue means.
- 86. The system according to claim 85 wherein each state machine means maintains a state of the resource means and identifies how incoming and outgoing messages of the associated resource specific messaging protocol means affect the state of the state machine means.
RELATED APPLICATIONS
[0001] This application claims the benefit of Provisional Application 60/240,360 filed Oct. 13, 2000, Provisional Application 60/242,536 filed Oct. 23, 2000, Provisional Application 60/243,655 filed Oct. 26, 2000, Provisional Application 60/246,627 filed Nov. 7, 2000, Provisional Application 60/252,733 filed Nov. 22, 2000, Provisional Application 60/253,792 filed Nov. 29, 2000, Provisional Application 60/257,767 filed Dec. 22, 2000, Provisional Application 60/268,038 filed Feb. 23, 2001, Provisional Application 60/271,911 filed Feb. 27, 2001, Provisional Application 60/280,203 filed Mar. 30, 2001, and Provisional Application 60/288,321, filed May 3, 2001.
Provisional Applications (11)

| Number   | Date     | Country |
|----------|----------|---------|
| 60240360 | Oct 2000 | US |
| 60242536 | Oct 2000 | US |
| 60243655 | Oct 2000 | US |
| 60246627 | Nov 2000 | US |
| 60252733 | Nov 2000 | US |
| 60253792 | Nov 2000 | US |
| 60257767 | Dec 2000 | US |
| 60268038 | Feb 2001 | US |
| 60271911 | Feb 2001 | US |
| 60280203 | Mar 2001 | US |
| 60288321 | May 2001 | US |