The present invention relates generally to managing the allocation of resources in a network, and in particular embodiments, to techniques and mechanisms for real-time context-based isolation and virtualization.
A typical semiconductor device, such as a transceiver device, includes various interconnected hardware components to provide various functional processes, such as, transmission, reception, despreading, decoding, Fast Fourier Transforms (FFTs), signal measurements, Multiple Input Multiple Output (MIMO) operations, direct memory access (DMA) operations, memory-related operations, and the like. Multiple hardware components (sometimes referred to as intellectual property (IP) blocks) may be involved in the execution of each process. For example, each process may include one or more threads, which are executed by one or more IP blocks in the device. Fault isolation and virtualization of different processes are desirable for efficient management of system resources.
Timesharing systems may be implemented on some devices. While such timesharing systems may provide good isolation and virtualization among different processes, timesharing systems generally cannot guarantee real-time constraints and typically only work on homogeneous systems (e.g., systems with one type of processor). In contrast, many embedded systems are heterogeneous systems (e.g., systems having more than one type of processor) that can provide real-time guarantees. However, isolation and virtualization of processes on embedded systems is limited.
Technical advantages are generally achieved by embodiments of this disclosure, which describe systems and methods for real-time context-based isolation and virtualization.
In accordance with an embodiment, a method includes receiving, by an intellectual property (IP) block within a computing system, a transaction request and determining, by the IP block, a context corresponding to the transaction request. The method further includes determining, by the IP block, a view of the computing system defined by the context and processing, by the IP block, the transaction request in accordance with the view of the computing system defined by the context.
In accordance with another embodiment, an intellectual property (IP) block includes processing circuitry and a computer readable storage medium storing programming for execution by the processing circuitry. The programming includes instructions to receive a transaction request, receive a context identifier (ID) of a context corresponding to the transaction request, determine a system view defined by the context, and process the transaction request in accordance with the system view defined by the context.
In accordance with yet another embodiment, a method includes receiving, by a controller in a computing system, a service request including a thread and determining, by the controller, a context for the thread. The method further includes configuring, by the controller, a view of the computing system defined by the context on a respective context table at each of a plurality of intellectual property (IP) blocks in the computing system. After configuring, by the controller, the view of the computing system defined by the context, a service corresponding to the service request is initiated.
In accordance with yet another embodiment, a controller includes a processor and a computer readable storage medium storing programming for execution by the processor. The programming includes instructions to receive a service request including a thread, determine a context for the thread, configure a system view defined by the context on a respective context table at each of a plurality of intellectual property (IP) blocks, and after configuring the system view defined by the context, initiate a service corresponding to the service request.
For a more complete understanding of the present disclosure, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Corresponding numerals and symbols in the different figures generally refer to corresponding parts unless otherwise indicated. The figures are drawn to clearly illustrate the relevant aspects of the embodiments and are not necessarily drawn to scale.
The making and using of embodiments of this disclosure are discussed in detail below. It should be appreciated, however, that the concepts disclosed herein can be embodied in a wide variety of specific contexts, and that the specific embodiments discussed herein are merely illustrative and do not serve to limit the scope of the claims. Further, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of this disclosure as defined by the appended claims.
Various embodiments are described within a specific context, namely isolation and virtualization in a heterogeneous, real-time environment of a transceiver device. However, various embodiment devices may be used in any semiconductor device having different hardware components where isolation and virtualization of processes is desired.
Various embodiments implement contexts within a heterogeneous, real-time environment in order to provide isolation and virtualization of different processes run by a system (e.g., a semiconductor device). A process may include one or more threads, and each thread may run within a pre-defined context. One or more threads may run within a same context. The system may configure these contexts on individual hardware components (referred to as IP blocks), and each context defines what system resources, capabilities, priority, security, and the like are available to threads operating within the context. For example, different IP blocks offer resources and capabilities to threads in accordance with the context of the threads, which an IP block can determine using a context identifier (ID) transmitted with every transaction within the system. Thus, resource management and access permissions may be enforced by individual IP blocks within the system, which provides an efficient, real-time mechanism for virtualization and isolation of embedded processing resources.
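The mechanism described above can be pictured with a minimal sketch. This is an illustrative model only, not an implementation from the disclosure; all class and field names (e.g., IPBlock, context_table) are invented. Each IP block holds a context table mapping a context ID to the view of the system available to that context, and every transaction carries the context ID of its initiator.

```python
class IPBlock:
    def __init__(self, name, context_table):
        self.name = name
        # context_table: context ID -> set of resources visible in that context
        self.context_table = context_table

    def process(self, transaction):
        # Every transaction carries the context ID of its initiator.
        view = self.context_table.get(transaction["context_id"])
        if view is None:
            raise PermissionError("context not configured on this IP block")
        if transaction["resource"] not in view:
            raise PermissionError("resource outside this context's view")
        return f"{self.name}: {transaction['op']} on {transaction['resource']}"
```

A thread in a given context can then reach only the resources that its context exposes, with no further involvement from a central controller while the transaction is processed.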
As further illustrated by
In system 100, hypervisor 102 configures contexts for threads running within system 100 and IP blocks 104. Context configuration may be performed at any time, such as at link time, boot time, runtime, combinations thereof, and the like. In an embodiment, hypervisor 102 receives a service request from a user. The service request may include one or more threads. Hypervisor 102 determines a context of each thread in the service request, and hypervisor 102 also determines whether the relevant contexts (e.g., the contexts of the threads) are configured at IP blocks 104. In some embodiments, contexts for different threads are determined by a system engineer, who is configuring and/or programming the system. For example, the engineer may make a map of possible system operations and divide the operations into different contexts. Once the mapping is complete, hypervisor 102 configures contexts at individual IP blocks according to the mapping.
Each IP block 104 includes a mechanism to support contexts locally. For example, hypervisor 102 may update applicable tables at IP blocks 104 with the resources and capabilities available to threads running within the context (referred to as a thread's view of the system). For example, a thread only sees available system resources/capabilities as defined by the thread's context, and other system resources/capabilities outside of the thread's context are not visible to the thread. Each IP block 104 may define its various views of the system (e.g., context-specific views of the system) and present an applicable view of the system based on a context of a requester (e.g., a context of a thread requesting an operation).
In an embodiment, contexts configured in an IP block's context table remain indefinitely. In another embodiment, contexts in a context table may be periodically cleared or updated depending on system configuration. For example, memory IP blocks' context tables may track access permissions to specific memory locations for each context. These access permissions may change as an application runs, and the memory IP blocks may release outdated contexts and/or request context updates at periodic intervals. In another example, proxy IP blocks (e.g., DMA HACs) may pass a context to a controlling IP block, and the proxy IP blocks may not keep any information on contexts between threads. For example, a proxy IP block may clear its context table after a thread is completed. Other context table clearing and/or updating schemes may be implemented in other embodiments. Thus, the management of context tables within each IP block may vary between different IP blocks (e.g., between different types of IP blocks) depending on system configuration.
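The two lifecycle policies above can be sketched as follows. This is a hypothetical illustration with invented names: a proxy block that retains no context state between threads, alongside a memory block whose per-context permissions can be refreshed while an application runs.

```python
class ProxyIPBlock:
    """Proxy (e.g., a DMA engine) that keeps no context state between threads."""
    def __init__(self):
        self.context_table = {}

    def forward(self, context_id, request, downstream):
        self.context_table[context_id] = request   # transient bookkeeping only
        result = downstream(context_id, request)   # pass context to controlling block
        self.context_table.clear()                 # cleared once the thread completes
        return result


class MemoryIPBlock:
    """Memory block whose per-context access permissions may change at runtime."""
    def __init__(self, permissions):
        self.permissions = permissions             # context ID -> permitted addresses

    def update_context(self, context_id, addresses):
        self.permissions[context_id] = addresses   # periodic refresh by the controller

    def access(self, context_id, address):
        if address not in self.permissions.get(context_id, set()):
            raise PermissionError("address not permitted in this context")
        return f"read {hex(address)}"
```

Each block thus manages its own table according to its role, without a shared table-management policy imposed across the system.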
After hypervisor 102 determines the relevant context(s) are configured at relevant IP blocks 104, hypervisor 102 initiates the service request by passing the threads and the context of each thread to processor(s) 108. Operating in a supervisor mode, processor 108 then stores the context of the next thread to execute (“n” in
Subsequently, processor 108 processes the thread by transmitting transaction requests to applicable IP blocks 104, and a context ID (“n” in
In various embodiments, a controller (e.g., a hypervisor) configures tables at each IP block with the relevant contexts operating within the system. For example,
In some embodiments, one or more IP blocks within the system may not include any context tables (e.g., a table size of zero). In such embodiments, the IP blocks may assume all contexts are permitted and may process requests accordingly. However, the IP blocks may still accept a context ID with each received transaction request and pass along the context ID with each transmission related to the transaction request. For example, although such IP blocks may not change their behavior with different contexts, the IP blocks may still pass along contexts received at an input to an output.
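Such a pass-through block might look like the following sketch (names are hypothetical): it ignores the context for its own behavior but still propagates the context ID from input to output.

```python
class PassThroughIPBlock:
    """IP block with a zero-size context table: every context is permitted,
    but the context ID is still forwarded with each output."""
    def process(self, context_id, request, downstream):
        result = self.transform(request)        # behavior is context-independent
        return downstream(context_id, result)   # context ID passed input-to-output

    def transform(self, request):
        return request.upper()                  # placeholder for the block's real function
```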
By configuring context tables at each IP block within the system, individual IP blocks may enforce contexts locally without additional input from a controller (e.g., a hypervisor or an operating system) while the IP block is processing a transaction. In an embodiment, an IP block (e.g., an FFT HAC) receives a transaction request in a supervisor mode (e.g., a local supervisor receives a transaction request). The IP block validates the request based on the context (e.g., as determined by a context ID received with the transaction request). Subsequently, the IP block sets its “user context” to the appropriate context for the request (Context A or Context B) and the IP block performs the requested transaction (e.g., an FFT) in the appropriate context in a user mode. When running in a user mode, the IP block may be prevented from changing its context. This provides isolation and prevents user applications from impersonating other contexts. When the transaction finishes, the IP block transitions back to supervisor mode to determine what thread to run next and repeats the cycle.
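One way to picture this supervisor/user cycle is the following sketch. It is illustrative only: the FFT itself is stubbed out, the mode handling is simplified, and all names are invented.

```python
class FFTBlock:
    """Illustrative model of an IP block alternating supervisor and user modes."""
    SUPERVISOR, USER = "supervisor", "user"

    def __init__(self, allowed_contexts):
        self.mode = self.SUPERVISOR
        self.allowed = allowed_contexts
        self.user_context = None

    def handle(self, context_id, samples):
        # Requests are received and validated in supervisor mode.
        if self.mode != self.SUPERVISOR:
            raise RuntimeError("busy")
        if context_id not in self.allowed:
            raise PermissionError("invalid context for this request")
        self.user_context = context_id       # set the user context for the request
        self.mode = self.USER                # drop to user mode to do the work
        try:
            return self._transform(samples)  # context cannot change while in user mode
        finally:
            self.mode = self.SUPERVISOR      # back to supervisor mode for the next thread
            self.user_context = None

    def set_context(self, context_id):
        if self.mode == self.USER:
            # Prevents user applications from impersonating other contexts.
            raise PermissionError("cannot change context in user mode")
        self.user_context = context_id

    def _transform(self, samples):
        return sorted(samples)               # stand-in computation for the actual FFT
```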
For example,
In an embodiment, DMA HAC 216 acts as a proxy and passes the transaction request and the context ID to a memory device (e.g., SL2 memory 218). SL2 memory 218 determines the resources and capabilities available to threads in Context A and processes the request accordingly. For example, based on context table 210 (see
In another embodiment, DMA HAC 216 may process the request in accordance with the system capabilities and resources defined by Context A. For example, SL2 memory capabilities and resources of Context A may be known to DMA HAC 216 (e.g., through a table at DMA HAC 216), and DMA HAC 216 may throttle access to SL2 memory 218 for any threads running in a context without the appropriate permissions. In such embodiments, DMA HAC 216 only transmits transaction requests to SL2 memory 218 for threads running in contexts providing the appropriate capabilities/resources. Thus, access to individual IP blocks 104 can be controlled at any point in the system by configuring context tables at individual IP blocks 104. Generic IP blocks (e.g., DMA HAC 216) may simply pass a thread and corresponding context, or the generic IP blocks may control access to subsequent IP blocks.
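The two variants, a pure pass-through proxy versus a locally enforcing proxy, might be sketched as follows. This is a hypothetical illustration; all names are invented and a stub stands in for the SL2 memory.

```python
class SL2MemoryStub:
    """Stand-in for the SL2 memory; echoes what it was asked to do."""
    def access(self, context_id, op, address):
        return (context_id, op, address)


class DMAProxy:
    def __init__(self, memory, local_table=None):
        self.memory = memory
        self.local_table = local_table   # None => pure pass-through proxy

    def request(self, context_id, op, address):
        if self.local_table is not None:
            # Locally enforcing variant: throttle requests whose context lacks
            # the needed permission, so they never reach the memory at all.
            if op not in self.local_table.get(context_id, set()):
                raise PermissionError("context lacks permission; request throttled")
        # In both variants the memory still sees the originating context ID.
        return self.memory.access(context_id, op, address)
```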
In some embodiments, a messaging system 220 may be used to change a context of data as illustrated by
In various embodiments, changing contexts may be useful to pass data between different functions in the system. For example, measurements in a device may be performed in a first context while data transfers (e.g., transmission/reception) may operate in a second context. In such embodiments, messaging system 220 may be used to transmit measurement data (e.g., channel quality) taken in the first context to data transfer operations running in the second context. In other embodiments, contexts of data may be changed for other reasons.
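A static messaging channel of this kind might be sketched as follows (a hypothetical illustration with invented names): data sent from the source context is delivered carrying the destination context, so measurement results taken in one context become visible to data transfer operations in another.

```python
class MessageChannel:
    """Static channel configured between two contexts: data written in the
    source context is delivered carrying the destination context."""
    def __init__(self, src_context, dst_context):
        self.src, self.dst = src_context, dst_context
        self.queue = []

    def send(self, context_id, data):
        if context_id != self.src:
            raise PermissionError("only the source context may send on this channel")
        self.queue.append(data)

    def receive(self):
        # The receiver sees the data in its own (destination) context.
        return (self.dst, self.queue.pop(0))
```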
As described above, various embodiments implement contexts within a heterogeneous, real-time environment in order to provide isolation and virtualization of different processes run by a computing system (e.g., system 100). Contexts define a specific view of the system to threads running within the context. For example, a context may define the available resources, capabilities, priority, protection, combinations thereof, and the like available to threads running within the context. Various IP blocks in a system process threads in the thread's context. When an IP block receives a transaction request, an indicator (e.g., a context ID) is transmitted to indicate a context of an initiator of the request. The IP block may inherit the context of the initiator, and using the system view defined by the context, the IP block determines the functionality, priority, resources, protection, and/or the like available to process the requested transaction. Information may be passed between contexts through static messaging channels. Thus, resource management and access permissions may be enforced by individual IP blocks within the system, which provides an efficient, real-time mechanism for virtualization and isolation of embedded processing resources.
It has further been observed that by implementing the embodiments described above, isolation can be implemented in a system to reduce software effort as illustrated by graph 500 of
In some embodiments, the processing system 600 is included in a network device that is accessing, or otherwise part of, a telecommunications network. In one example, the processing system 600 is in a network-side device in a wireless or wireline telecommunications network, such as a base station, a relay station, a scheduler, a controller, a gateway, a router, an applications server, or any other device in the telecommunications network. In other embodiments, the processing system 600 is in a user-side device accessing a wireless or wireline telecommunications network, such as a mobile station, a user equipment (UE), a personal computer (PC), a tablet, a wearable communications device (e.g., a smartwatch, etc.), or any other device adapted to access a telecommunications network.
In some embodiments, one or more of the interfaces 610, 612, 614 connects the processing system 600 to a transceiver adapted to transmit and receive signaling over the telecommunications network.
The transceiver 700 may transmit and receive signaling over any type of communications medium. In some embodiments, the transceiver 700 transmits and receives signaling over a wireless medium. For example, the transceiver 700 may be a wireless transceiver adapted to communicate in accordance with a wireless telecommunications protocol, such as a cellular protocol (e.g., long-term evolution (LTE), etc.), a wireless local area network (WLAN) protocol (e.g., Wi-Fi, etc.), or any other type of wireless protocol (e.g., Bluetooth, near field communication (NFC), etc.). In such embodiments, the network-side interface 702 comprises one or more antenna/radiating elements. For example, the network-side interface 702 may include a single antenna, multiple separate antennas, or a multi-antenna array configured for multi-layer communication, e.g., single input multiple output (SIMO), multiple input single output (MISO), multiple input multiple output (MIMO), etc. In other embodiments, the transceiver 700 transmits and receives signaling over a wireline medium, e.g., twisted-pair cable, coaxial cable, optical fiber, etc. Specific processing systems and/or transceivers may utilize all of the components shown, or only a subset of the components, and levels of integration may vary from device to device.
Although the description has been described in detail, it should be understood that various changes, substitutions and alterations can be made without departing from the spirit and scope of this disclosure as defined by the appended claims. Moreover, the scope of the disclosure is not intended to be limited to the particular embodiments described herein, as one of ordinary skill in the art will readily appreciate from this disclosure that processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, may perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.