One or more embodiments generally relate to graphics processing and, in particular, a hybrid mode interpolator for graphics processing.
Graphics rendering on graphical processing units (GPUs) requires a large amount of computation for pixel varying attribute interpolation, which consumes significant energy and silicon area.
One or more embodiments generally relate to graphics processing using a hybrid mode interpolator for resource reduction. In one embodiment, a method for processing pixel information includes pushing pixel varying attributes to a register file of a shader processing element. In one embodiment, at least a portion of the pixel varying attributes are pulled based on a control flow in the shader processing element. In one embodiment, said at least a portion of the pixel varying attributes are interpolated.
In one embodiment, a method for processing pixel information includes pushing pixel varying attributes to a texture unit. In one embodiment, at least a portion of the pixel varying attributes are pulled based on a control flow in the shader processing element. In one embodiment, said at least a portion of the pixel varying attributes is interpolated.
In one embodiment, a GPU for an electronic device comprises one or more processing elements coupled to a memory device. In one embodiment, the one or more processing elements: push pixel varying attributes to a register file for a shader processing element, provide functionality to the shader processing element for pulling at least a portion of the pixel varying attributes based on a control flow in the shader processing element, and perform interpolation using an interpolation unit for said at least a portion of the pixel varying attributes.
In one embodiment, a GPU for an electronic device comprises one or more processing elements coupled to a memory device. In one embodiment, the one or more processing elements: push pixel varying attributes to a texture unit, provide functionality to a shader processing element for pulling at least a portion of the pixel varying attributes based on a control flow in the shader processing element, and perform interpolation using an interpolation unit for said at least a portion of the pixel varying attributes.
These and other aspects and advantages of one or more embodiments will become apparent from the following detailed description, which, when taken in conjunction with the drawings, illustrate by way of example the principles of the one or more embodiments.
For a fuller understanding of the nature and advantages of the embodiments, as well as a preferred mode of use, reference should be made to the following detailed description read in conjunction with the accompanying drawings, in which:
The following description is made for the purpose of illustrating the general principles of one or more embodiments and is not meant to limit the inventive concepts claimed herein. Further, particular features described herein can be used in combination with other described features in each of the various possible combinations and permutations. Unless otherwise specifically defined herein, all terms are to be given their broadest possible interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and/or as defined in dictionaries, treatises, etc.
One or more embodiments generally relate to graphics processing using a hybrid mode interpolator for resource reduction. In one embodiment, a fixed function interpolator is implemented that interacts with the shader core to reduce resource consumption, such as power and physical hardware area in most common performance cases while still supporting the complete specification of advanced graphics APIs.
In one embodiment, several power and area optimizations are provided, such as adaptively calculating a plane equation base parameter in a setup unit, interpolating at 4×4, 8×8 or larger size blocks (instead of 2×2 quads) in the interpolator unit to save computations, sharing a large block interpolator with reciprocal quadratic interpolation logic used for perspective division (for saving physical hardware area), and intelligent scheduling for maximizing the efficiency of the interpolator.
In one or more embodiments, a hybrid push and pull mode is used to support the complete OpenGL 4.3 and DirectX 11 functionality and maximize power saving for most common performance cases. In one embodiment, the push mode enables the interpolator (IPA) to interpolate the pixel varying attributes based on the Rasterizer output pixel location and valid mask information without Shader Core intervention.
In one or more embodiments, the push mode provides benefits, such as: removing interpolation instructions, division or reciprocal and multiplication instructions required by perspective correction, and preamble texture instructions from the Shader program executed in the Shader Processing Elements (PE); saving Shader PE energy and area on instruction fetch, decode, scheduling and execution, as Shader execution is more expensive in terms of energy and area; simplifying Shader PE scheduling, instruction issuance and pipeline control; allowing direct forwarding to the Texture unit (TEX) for preamble texture processing; reducing the Shader PE hardware cost for texture latency compensation and reducing energy by eliminating extra data movement between the IPA, PE and TEX; supporting Shader bypass to power down entire Shader PE(s); and releasing plane equation data earlier to save physical storage area.
In one embodiment, a method provides for processing pixel information. In one embodiment, pixel varying attributes are pushed to a register file of a shader processing element. In one embodiment, at least a portion of the pixel varying attributes are pulled based on a control flow of the shader processing element. In one embodiment, interpolation is performed for the at least a portion of pixel varying attributes.
Any suitable circuitry, device, system or combination of these (e.g., a wireless communications infrastructure including communications towers and telecommunications servers) operative to create a communications network may be used to create communications network 110. Communications network 110 may be capable of providing communications using any suitable communications protocol. In some embodiments, communications network 110 may support, for example, traditional telephone lines, cable television, Wi-Fi (e.g., an IEEE 802.11 protocol), Bluetooth®, high frequency systems (e.g., 900 MHz, 2.4 GHz, and 5.6 GHz communication systems), infrared, other relatively localized wireless communication protocol, or any combination thereof. In some embodiments, the communications network 110 may support protocols used by wireless and cellular phones and personal email devices (e.g., a Blackberry®). Such protocols may include, for example, GSM, GSM plus EDGE, CDMA, quadband, and other cellular protocols. In another example, a long range communications protocol can include Wi-Fi and protocols for placing or receiving calls using VOIP, LAN, WAN, or other TCP-IP based communication protocols. The transmitting device 12 and receiving device 11, when located within communications network 110, may communicate over a bidirectional communication path such as path 13, or over two unidirectional communication paths. Both the transmitting device 12 and receiving device 11 may be capable of initiating a communications operation and receiving an initiated communications operation.
The transmitting device 12 and receiving device 11 may include any suitable device for sending and receiving communications operations. For example, the transmitting device 12 and receiving device 11 may include mobile telephone devices, television systems, cameras, camcorders, a device with audio video capabilities, tablets, wearable devices, and any other device capable of communicating wirelessly (with or without the aid of a wireless-enabling accessory system) or via wired pathways (e.g., using traditional telephone wires). The communications operations may include any suitable form of communications, including for example, voice communications (e.g., telephone calls), data communications (e.g., e-mails, text messages, media messages), video communication, or combinations of these (e.g., video conferences).
In one embodiment, all of the applications employed by the audio output 123, the display 121, input mechanism 124, communications circuitry 125, and the microphone 122 may be interconnected and managed by control circuitry 126. In one example, a handheld music player capable of transmitting music to other tuning devices may be incorporated into the electronics device 120.
In one embodiment, the audio output 123 may include any suitable audio component for providing audio to the user of electronics device 120. For example, audio output 123 may include one or more speakers (e.g., mono or stereo speakers) built into the electronics device 120. In some embodiments, the audio output 123 may include an audio component that is remotely coupled to the electronics device 120. For example, the audio output 123 may include a headset, headphones, or earbuds that may be coupled to communications device with a wire (e.g., coupled to electronics device 120 with a jack) or wirelessly (e.g., Bluetooth® headphones or a Bluetooth® headset).
In one embodiment, the display 121 may include any suitable screen or projection system for providing a display visible to the user. For example, display 121 may include a screen (e.g., an LCD screen) that is incorporated in the electronics device 120. As another example, display 121 may include a movable display or a projecting system for providing a display of content on a surface remote from electronics device 120 (e.g., a video projector). Display 121 may be operative to display content (e.g., information regarding communications operations or information regarding available media selections) under the direction of control circuitry 126.
In one embodiment, input mechanism 124 may be any suitable mechanism or user interface for providing user inputs or instructions to electronics device 120. Input mechanism 124 may take a variety of forms, such as a button, keypad, dial, a click wheel, or a touch screen. The input mechanism 124 may include a multi-touch screen.
In one embodiment, communications circuitry 125 may be any suitable communications circuitry operative to connect to a communications network (e.g., communications network 110,
In some embodiments, communications circuitry 125 may be operative to create a communications network using any suitable communications protocol. For example, communications circuitry 125 may create a short-range communications network using a short-range communications protocol to connect to other communications devices. For example, communications circuitry 125 may be operative to create a local communications network using the Bluetooth® protocol to couple the electronics device 120 with a Bluetooth® headset.
In one embodiment, control circuitry 126 may be operative to control the operations and performance of the electronics device 120. Control circuitry 126 may include, for example, a processor, a bus (e.g., for sending instructions to the other components of the electronics device 120), memory, storage, or any other suitable component for controlling the operations of the electronics device 120. In some embodiments, a processor may drive the display and process inputs received from the user interface. The memory and storage may include, for example, cache, Flash memory, ROM, and/or RAM/DRAM. In some embodiments, memory may be specifically dedicated to storing firmware (e.g., for device applications such as an operating system, user interface functions, and processor functions). In some embodiments, memory may be operative to store information related to other devices with which the electronics device 120 performs communications operations (e.g., saving contact information related to communications operations or storing information related to different media types and media items selected by the user).
In one embodiment, the control circuitry 126 may be operative to perform the operations of one or more applications implemented on the electronics device 120. Any suitable number or type of applications may be implemented. Although the following discussion will enumerate different applications, it will be understood that some or all of the applications may be combined into one or more applications. For example, the electronics device 120 may include an automatic speech recognition (ASR) application, a dialog application, a map application, a media application (e.g., QuickTime, MobileMusic.app, or MobileVideo.app), social networking applications (e.g., Facebook®, Twitter®, etc.), an Internet browsing application, etc. In some embodiments, the electronics device 120 may include one or multiple applications operative to perform communications operations. For example, the electronics device 120 may include a messaging application, a mail application, a voicemail application, an instant messaging application (e.g., for chatting), a videoconferencing application, a fax application, or any other suitable application for performing any suitable communications operation.
In some embodiments, the electronics device 120 may include a microphone 122. For example, electronics device 120 may include microphone 122 to allow the user to transmit audio (e.g., voice audio) for speech control and navigation of applications 1-N 127, during a communications operation or as a means of establishing a communications operation or as an alternative to using a physical user interface. The microphone 122 may be incorporated in the electronics device 120, or may be remotely coupled to the electronics device 120. For example, the microphone 122 may be incorporated in wired headphones, the microphone 122 may be incorporated in a wireless headset, the microphone 122 may be incorporated in a remote control device, etc.
In one embodiment, the camera module 128 comprises one or more camera devices that include functionality for capturing still and video images, editing functionality, communication interoperability for sending, sharing, etc., photos/videos, etc.
In one embodiment, the GPU module 129 comprises processes and/or programs for processing images and portions of images for rendering on the display 121 (e.g., 2D or 3D images). In one or more embodiments, the GPU module may comprise GPU hardware and memory (e.g., IPA 460,
In one embodiment, the electronics device 120 may include any other component suitable for performing a communications operation. For example, the electronics device 120 may include a power supply, ports, or interfaces for coupling to a host device, a secondary input mechanism (e.g., an ON/OFF switch), or any other suitable component.
One or more embodiments adaptively perform the As calculation based on the location of the triangle seed point. In one embodiment, when the triangle seed point is inside the tile 330 being rendered, the attribute of the seed vertex (V0) is used as the plane equation base parameter in pixel interpolation, and the calculation of As is saved. In one embodiment, when the seed point is outside the current tile 330, As is calculated at 320 in the triangle 310 setup.
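By way of illustration, the adaptive base selection described above may be sketched in Python; the function and parameter names here are hypothetical, and the setup-time computation of As is abstracted as a callback rather than modeled from the source:

```python
def choose_base(seed_x, seed_y, tile_x0, tile_y0, tile_size, a_v0, compute_as):
    """Pick the plane equation base parameter A_s.

    If the triangle seed point lies inside the tile being rendered, the
    attribute of the seed vertex (V0) is reused directly and the A_s
    computation is skipped; otherwise A_s is computed in triangle setup
    (modeled here as the compute_as callback).
    """
    inside = (tile_x0 <= seed_x < tile_x0 + tile_size and
              tile_y0 <= seed_y < tile_y0 + tile_size)
    return a_v0 if inside else compute_as()
```

In the common case where the seed vertex falls inside the current tile, the callback is never invoked, which corresponds to the saved setup computation.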
In one embodiment, the IPA 0460 sends push mode interpolation results to the vector Register File in a Processing Element (PE) (e.g., 452, 453, 454 or 455) of the PE Quad 451 through a Load Store Unit (LSU) (e.g., 456 or 458). The LSU notifies the WSQ 457 after writing the last attribute data for a warp to the vector Register File, so that the WSQ 457 updates the status of the warp and makes it ready for shader execution. In one embodiment, if the Pull IPA mode is enabled, the Pixel Shader will make pull IPA requests by executing the pull IPA instructions. The IPA 0460 performs pull mode interpolation. The IPA 0460 sends pull mode interpolation results to the vector Register File in the PE through the LSU.
In one embodiment, the IPA 0460 passes primitive done signal to the PSC 472 when it finishes a primitive. When the PSC 472 receives the primitive done signals for a primitive from both IPA 0460 and IPA 1461, it will return the final primitive done signal to the Setup Unit (SU) 490. The SU 490 also includes the plane equation table (PET) 491 that supplies attribute plane equations and triangle seed positions to the shared block interpolator and reciprocal unit (SBR) (e.g., 680, 690,
In one embodiment, the IPA GState specifies whether the Push mode and Pull mode interpolation are enabled. In one embodiment, when the Pull IPA is off: the IPA 0460 (or IPA 1461) interpolates all attributes and pushes the results into the Register File in a PE (e.g., PE 0452, PE 1454, PE 2453 or PE 3455) based on the Push to Register File attribute mask defined in the GState. In one example, the Push to Register File attribute mask contains 128 bits, and each bit represents a valid flag that specifies whether the associated scalar attribute component needs to be pushed to the Register File. The plane equations are released as soon as all interpolation associated with the primitive is completed. The plane equations are generated by the SU 490 and stored in the PET 491.
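The per-component behavior of the Push to Register File attribute mask may be modeled as follows; this is a minimal Python sketch of the valid-flag semantics, not the hardware implementation, and the names are illustrative:

```python
def push_attributes(mask, attributes):
    """Select the scalar attribute components to push to the Register File.

    `mask` models the 128-bit Push to Register File attribute mask from
    the GState: bit i set means scalar attribute component i must be
    interpolated and pushed; clear bits are skipped entirely.
    """
    pushed = {}
    for slot, value in attributes.items():
        if (mask >> slot) & 1:
            pushed[slot] = value
    return pushed
```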
In one embodiment, when the Pull IPA is on: the IPA 0460 (or IPA 1461) interpolates a portion of the attributes and performs a push based on the Push to Register File attribute mask. In one example, the pull interpolation instruction executed by the Pixel Shader includes a 16-bit mask to specify which of 16 consecutive attribute components to interpolate and send to the Register File. The pull IPA instruction supports DX11/OpenGL 4.x style programmable pixel offset interpolation. The interpolation mode per PS input element (V#) is defined in the IPA GState based on the shader declaration. The pull IPA instruction may override the interpolation location defined in the IPA GState. In one embodiment, the last request of the pull IPA instruction must specify the “end” flag so that the IPA can release the data structures related to the pixel shader warps and return the primitive done signal to the SU 490 to release the plane equations.
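The expansion of a pull IPA request into attribute component slots may be sketched as below; the function signature and the base-slot parameter are hypothetical conveniences for illustration:

```python
def expand_pull_request(base_slot, mask16, end=False):
    """Expand a pull IPA request into attribute component slots.

    `mask16` is the 16-bit mask selecting which of the 16 consecutive
    attribute components starting at `base_slot` to interpolate and send
    to the Register File. The last request of the shader carries the
    `end` flag so the IPA can release warp data structures and return
    the primitive done signal.
    """
    slots = [base_slot + i for i in range(16) if (mask16 >> i) & 1]
    return slots, end
```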
In one embodiment, the IPA 0460 handles the varying attribute interpolation at every pixel or sub-sample as well as a merger of the pixel screen (e.g., screen 320,
In one embodiment, as soon as the IPA 0460 receives the XY screen coordinates of a 2×2 or 3×3 pixel quad and there is space available in the output buffer 476, the IPA 0460 will start processing the pixel block.
In one embodiment, when a primitive covers more than one 2×2 or 3×3 quad in the 8×8 pixel block, the IPA Control (IPA CTL) performs optimizations to reuse the existing 8×8 interpolation result without recalculating the result as well as avoiding any unnecessary plane equation reads, which reduces resource consumption (e.g., processing power, physical hardware area, etc.).
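The reuse optimization described above behaves like memoization keyed on the 8×8 block and the primitive; the sketch below models that behavior in Python (class and method names are hypothetical, and the real block interpolation is abstracted as a callback):

```python
class BlockInterpCache:
    """Memoize 8x8 block interpolation results.

    Results are keyed by (block origin, primitive ID), so later quads of
    the same primitive in the same 8x8 block reuse the existing result
    instead of recomputing it and re-reading the plane equation.
    """
    def __init__(self, interpolate_block):
        self._interp = interpolate_block  # performs the real block interpolation
        self._cache = {}
        self.misses = 0                   # counts actual recomputations

    def get(self, block_xy, prim_id):
        key = (block_xy, prim_id)
        if key not in self._cache:
            self.misses += 1
            self._cache[key] = self._interp(block_xy, prim_id)
        return self._cache[key]
```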
In one embodiment, the PAM 465 receives the pixel quad information for the PSC 472 and packs the quad positions, pixel and sample masks, primitive IDs and 3×3 quad flags into warp data structures. The PAM 465 further requests the attribute plane equations and triangle seed positions from the PET 491 in the SU 490 and passes the information to the shared block interpolator/reciprocal unit (SBR) 466 to perform the push mode interpolation.
In one embodiment, the PRB 463 stores the Pull IPA requests from the PE (e.g., PE 0452, PE 1454, PE 2453 or PE 3455,
In one embodiment, the interpolation is split into two steps: 8×8 block interpolation and pixel interpolation. For pixels within the same 8×8 block, the block results can be re-used, allowing the block interpolator (e.g., block interpolator 0466, block interpolator 1468,
block interpolation: Vb=Ps+(Xb−Xs)*Px+(Yb−Ys)*Py
bits: 42=24+16*24+16*24
reciprocal: f(1/x)=c0−c1*(x−a)+c2*(x−a)^2
bits: 26−17*17+11*14.
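The two formulas above may be exercised numerically as follows. The reciprocal coefficients used in the test are simply the Taylor coefficients of 1/x about the table point a (c0=1/a, c1=1/a^2, c2=1/a^3); that choice is an assumption for illustration, not the hardware's actual table values:

```python
def block_interpolate(ps, px, py, xs, ys, xb, yb):
    """Evaluate the plane equation at the 8x8 block origin:
    Vb = Ps + (Xb - Xs) * Px + (Yb - Ys) * Py."""
    return ps + (xb - xs) * px + (yb - ys) * py

def reciprocal_quadratic(x, a, c0, c1, c2):
    """Piecewise quadratic approximation of 1/x around table point a:
    f(1/x) ~= c0 - c1*(x - a) + c2*(x - a)^2."""
    d = x - a
    return c0 - c1 * d + c2 * d * d
```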
In one embodiment, the SBR 466 receives the attribute plane equations, triangle seed positions and the X, Y coordinates of the 8×8 block that the input pixel quads reside in and performs the block interpolation. The result of the block interpolation is forwarded to the Quad Pixel Interpolator (e.g., quad pixel interpolator 0467, quad pixel interpolator 1469) and used as the base for the quad pixel interpolation. The SBR 466 also performs the reciprocal calculation of the W value used in the perspective correction.
In one embodiment, the quad pixel interpolator performs the quad pixel interpolation based on the attribute plane equations, the 8×8 block interpolation results and the X, Y offsets of the pixels within the 8×8 block. In one example, the W buffer (WBF) 473 stores the interpolated W values from quad pixel interpolator as well as the W reciprocals from the SBR 466, and it sends the interpolated W values to the SBR 466 for reciprocal calculation and W reciprocals to the PCM 475 for the final multiplication.
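The perspective-correction path (interpolated W turned into a reciprocal by the SBR, then multiplied onto each interpolated attribute by the PCM) may be sketched per quad as follows; the exact fixed-point formats and buffering through the WBF 473 are omitted:

```python
def perspective_correct_quad(attrs_over_w, interp_w):
    """Sketch of the perspective-correction path.

    `attrs_over_w` holds, per pixel, the attribute values interpolated as
    attribute/W; `interp_w` holds the interpolated W per pixel. The SBR
    reciprocal step produces 1/W, and the final multiplication recovers
    the perspective-correct attribute values.
    """
    w_recip = [1.0 / w for w in interp_w]          # SBR reciprocal step
    return [[a * r for a in pixel]                 # PCM multiplication
            for pixel, r in zip(attrs_over_w, w_recip)]
```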
In one embodiment, the Scheduler (SCH) 464 schedules between push mode and pull mode interpolation requests on a per warp basis, and also sequences the requests of the SBR 466 and the quad pixel interpolator for block, pixel interpolation and W reciprocal calculation.
The PCM 475 performs the multiplication of W reciprocals with every interpolated attribute value at the selected interpolation location. The Output Buffer (OBF) 476 collects the final outputs of the interpolation results after perspective correction. The OBF 476 compensates the latency of the interpolation pipeline and helps to smooth out the output traffic to the interconnect buses.
In one embodiment, the block interpolator 0466 or block interpolator 1468 is shared with the reciprocal quadratic interpolation logic of S5 that is used for perspective division.
In one embodiment, the PSC interface 620 may provide one 2×2 or 3×3 quad per cycle, and each quad buffer 660 entry holds four quads of positions, primitive IDs and sample masks; in order to keep the pipeline going, the input packer 630 should keep at least four quads of information.
In one embodiment, the quad buffer 660 can hold 32 warps each including 16 quads. When the pull IPA is enabled, the quad information needs to be kept until the last pull request in the pixel shader (PS) finishes, which may be close to the entire pixel shader lifetime. In one example, the data in the quad buffer 660 for each quad includes: a 3×3 flag (whether the quad is a 3×3 quad); primID: 7 bits; position X, Y: 7 bits ×2; mask: 32 bits (up to 8×MSAA); for a total of 54 bits/quad*16 quads/warp*32 warps=27,648 bits.
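The storage arithmetic above can be checked with a small helper. The 3×3 flag is assumed here to account for the 1 bit remaining after primID, position and mask within the 54-bit per-quad total:

```python
def quad_buffer_bits(flag_bits=1, prim_id_bits=7, pos_bits=7, mask_bits=32,
                     quads_per_warp=16, warps=32):
    """Reproduce the quad buffer sizing: bits per quad (3x3 flag + primID
    + X/Y position + sample mask) and the total across 16 quads/warp and
    32 warps."""
    per_quad = flag_bits + prim_id_bits + 2 * pos_bits + mask_bits
    return per_quad, per_quad * quads_per_warp * warps
```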
The coalescer unit 650 groups the 2×2 or 3×3 quads based on their 8×8 670 locations and primIDs; the quads within the same 8×8 that have the same primID are merged into one single attribute request to the PET 491 (
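The grouping performed by the coalescer unit may be modeled as follows; quad positions are assumed to be pixel coordinates, and the 8×8 block origin is derived by truncation (an illustrative sketch, not the hardware datapath):

```python
def coalesce_quads(quads):
    """Group quads by 8x8 block origin and primitive ID.

    Quads falling within the same 8x8 block that share the same primID
    collapse into a single attribute (plane equation) request; the dict
    key plays the role of that merged request.
    """
    requests = {}
    for x, y, prim_id in quads:
        key = ((x // 8) * 8, (y // 8) * 8, prim_id)  # 8x8 block origin
        requests.setdefault(key, []).append((x, y))
    return requests
```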
In one embodiment, the coalescer unit 650 maintains the quad indexes for these 8×8 670 data structures and uses these to generate the destination addresses in the output buffer 476 (
Each entry in the attribute buffer contains 96 bits for the three 32-bit floating point numbers (Ps, Px, Py). Based on the GState attribute table and the list of primitives, the order of retrieval from the PET 491 is defined. If there is a perspective mode, then the W attributes are read first. Otherwise, the process starts with (attribute 0, component 0).
Since the attributes in the PET 491 are packed by the SU 490, the attribute fetch unit 640 needs to translate the logic Attribute Slot IDs specified in the GState table or the pull IPA requests to the packed attribute offsets in the PET 491. The IPA 0460 (or IPA 1461) may generate the correct address mapping table when the GStates are loaded to the local state buffer.
In one embodiment, the second section of this PAM 610 uses the position data from the input buffers to compute the block and pixel offsets. In one example, the seed position is a pair of 15-bit numbers in unsigned 7.8 format. The 7-bit integer supports a tile of 128×128 pixels, and the 8-bit fraction supports 64×64 sub-pixels. If the actual seed position is located in the tile, the position and pixels are presented as-is from the SU 490. Otherwise, pixels are pre-interpolated to the first rasterized pixel or sample within the tile. These two conditions are transparent to the IPA 0460 (or IPA 1461). The quad position is a pair of unsigned 7-bit integers within a 128×128 tile.
In one embodiment, the interpolation is performed in two steps, and two sets of position offsets are required. In one example, for the first offset, the block offset: attributes are interpolated to the origin of an 8×8 block, so all pixels within the 8×8 block may use that result as the starting point, reducing the number of computations. The block offset from the seed is a signed 8.8 number. For the second offset, the pixel offset: this offset uses unsigned 4.4 format. The lower 3 bits of the integer define a location within the 8×8 block, and the MSB provides an extra guard bit to support out-of-8×8-range interpolation for a 3×3 quad. The 4-bit fraction supports 16×16 sub-pixels, which are used by various interpolation modes. The pixel offsets are computed in stage 1 to minimize data toggling. If the pixel mask is 0 (from stage 0), then that pixel's section is disabled.
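The two offsets and their fixed-point formats may be sketched as below; the quantization helper and the function names are hypothetical, and values are returned as raw fixed-point integers for clarity:

```python
def to_fixed(value, frac_bits):
    """Quantize a value to a fixed-point integer with `frac_bits` of fraction."""
    return round(value * (1 << frac_bits))

def split_offsets(px, py, seed_x, seed_y):
    """Split a pixel position into the two offsets used by the two-step
    interpolation: a signed 8.8 block offset from the seed to the 8x8
    block origin, and an unsigned 4.4 pixel offset within the block."""
    bx, by = (int(px) // 8) * 8, (int(py) // 8) * 8   # 8x8 block origin
    block_off = (to_fixed(bx - seed_x, 8), to_fixed(by - seed_y, 8))  # signed 8.8
    pixel_off = (to_fixed(px - bx, 4), to_fixed(py - by, 4))          # unsigned 4.4
    return block_off, pixel_off
```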
In one embodiment, there are three interpolation modes: (1) constant: use pixels from the PET 491 (no position dependency); (2) linear interpolation; (3) linear interpolation with perspective division. There are four interpolation locations:
1. Center: quad position=(Xq, Yq)
P0=(Xq+0.5, Yq+0.5)
P1=(Xq+1.5, Yq+0.5)
P2=(Xq+0.5, Yq+1.5)
P3=(Xq+1.5, Yq+1.5).
2. Centroid: based on the MSAA mode, if all sampling points are covered by the primitive, the pixel position is at the center of the 16×16 sub-pixels (same as the linear mode). Otherwise, the first covered sampling point is the pixel position.
3. Sample: in the three MSAA modes (shown as 16×16 sub-pixels), the number of quads is multiplied by the sampling frequency. For example, if there are two quads (P3 P2 P1 P0) and (P7 P6 P5 P4) at the input, and two sampling points S0 and S1 in the 2×MSAA mode, then four quads are generated and contribute 16 pixels toward the 128-pixel maximum per warp.
S0 of original quad (P3 P2 P1 P0)->new quad (P03 P02 P01 P00)
S1 of original quad (P3 P2 P1 P0)->new quad (P07 P06 P05 P04)
S0 of original quad (P7 P6 P5 P4)->new quad (P11 P10 P09 P08)
S1 of original quad (P7 P6 P5 P4)->new quad (P15 P14 P13 P12).
4. Snapped, supported in the pull IPA request only: the sub-pixel sample locations are provided by the shader instructions in pull IPA request.
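The center locations and the sample-mode quad expansion above may be sketched as follows (illustrative names; sample positions themselves are omitted, only the replication per sampling point is modeled):

```python
def center_locations(xq, yq):
    """Center mode: the four pixel centers of a 2x2 quad at (Xq, Yq),
    matching P0..P3 above."""
    return [(xq + 0.5, yq + 0.5), (xq + 1.5, yq + 0.5),
            (xq + 0.5, yq + 1.5), (xq + 1.5, yq + 1.5)]

def expand_samples(quads, num_samples):
    """Sample mode: replicate each input quad once per MSAA sampling
    point, so the quad count is multiplied by the sampling frequency."""
    return [(quad, s) for quad in quads for s in range(num_samples)]
```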
Table 1 shows the summary of the data types used by the PAM 410.
In one embodiment, the IPA PRB 463 may hold 16 outstanding IPA requests. The pull requests are processed in the order that they are received. The Warp Sequencer manages the outstanding requests by incrementing the count when a pull IPA request is issued and by decrementing it when a pull request is returned to the PE register file (RF) so that the WSQ 457 (
In one embodiment, hardware interfaces may comprise various entries for inputs and outputs for: the pixel shader to the IPA 0460 (or IPA 1461,
In one embodiment, various entries for registers, masks, and modes may include the IPA 0460 (or IPA 1461,
In one embodiment, a GState table may contain the interpolation mode, the interpolation location and the precision for each varying attribute component. In one embodiment, the attribute components are interpolated in the push IPA phase based on the Push2RFMask mask defined in the GState, and Z and W are placed at the 1st and 2nd attribute component slots. The results of the interpolated attribute components are packed based on the Push2RFMask mask and are sent to the Register File at the starting address specified in StartRFAddr. In one or more embodiments, the entries in the GState table may be amended (e.g., expanded, reduced, adapted, etc.) as required.
In one embodiment, block 1020 provides pulling at least a portion of the pixel varying attributes based on a control flow in the shader processing element. In one embodiment, block 1030 provides interpolating said at least a portion of the pixel varying attributes (e.g., using the block interpolator 466/468 of the IPA 460).
In one embodiment, process 1000 may include adaptively determining a plane equation base parameter using a vertex of a primitive as an origin, and sharing a large block interpolator (e.g., IPA 0460,
In one embodiment, the process 1000 may include reusing a result of the interpolation without recalculating and fetching plane equations if a primitive covers a particular area in a tile block. In one embodiment, adaptive determination of the plane equation base parameter is performed upon the primitive determined to reside inside the tile block (e.g., a tile 330,
In one embodiment, process 1000 may include a texture unit performing preamble texture processing based on a mask, and the mask comprises a push or preload attribute mask. In one embodiment, process 1000 may include pushing the pixel varying attributes and pulling said at least a portion of the pixel varying attributes to reduce processing, resulting in reduced shader PE energy consumption based on reduced instruction decode, execution, and latency compensation for interpolation and texture sampling.
In one embodiment, in process 1000 the selected blocks may include an area size larger than a pixel quad. In one embodiment, the particular area in the tile block may comprise more than one pixel quad in the tile block. In one embodiment, in process 1000, a fixed function interpolation unit (e.g., IPA 0460,
The communication interface 1117 allows software and data to be transferred between the computer system and external devices through the Internet 1150, mobile electronic device 1151, a server 1152, a network 1153, etc. The system 1100 further includes a communications infrastructure 1118 (e.g., a communications bus, cross bar, or network) to which the aforementioned devices/modules 1111 through 1117 are connected.
The information transferred via communications interface 1117 may be in the form of signals such as electronic, electromagnetic, optical, or other signals capable of being received by communications interface 1117, via a communication link that carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, a radio frequency (RF) link, and/or other communication channels.
In one implementation of one or more embodiments in a mobile wireless device (e.g., a mobile phone, tablet, wearable device, etc.), the system 1100 further includes an image capture device 1120, such as a camera 128 (
In one embodiment, the system 1100 includes graphics processing module 1130 that may implement processing similar as described regarding the triangle scheme 300 (
As is known to those skilled in the art, the aforementioned example architectures described above can be implemented in many ways, such as program instructions for execution by a processor, as software modules, micro-code, as a computer program product on computer readable media, as analog/logic circuits, as application specific integrated circuits, as firmware, as consumer electronic devices, AV devices, wireless/wired transmitters, wireless/wired receivers, networks, multi-media devices, etc. Further, embodiments of said architecture can take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment containing both hardware and software elements.
One or more embodiments have been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to one or more embodiments. Each block of such illustrations/diagrams, or combinations thereof, can be implemented by computer program instructions. The computer program instructions when provided to a processor produce a machine, such that the instructions, which execute via the processor create means for implementing the functions/operations specified in the flowchart and/or block diagram. Each block in the flowchart/block diagrams may represent a hardware and/or software module or logic, implementing one or more embodiments. In alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures, concurrently, etc.
The terms “computer program medium,” “computer usable medium,” “computer readable medium”, and “computer program product,” are used to generally refer to media such as main memory, secondary memory, removable storage drive, a hard disk installed in hard disk drive. These computer program products are means for providing software to the computer system. The computer readable medium allows the computer system to read data, instructions, messages or message packets, and other computer readable information from the computer readable medium. The computer readable medium, for example, may include non-volatile memory, such as a floppy disk, ROM, flash memory, disk drive memory, a CD-ROM, and other permanent storage. It is useful, for example, for transporting information, such as data and computer instructions, between computer systems. Computer program instructions may be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
Computer program instructions representing the block diagram and/or flowcharts herein may be loaded onto a computer, programmable data processing apparatus, or processing devices to cause a series of operations performed thereon to produce a computer implemented process. Computer programs (i.e., computer control logic) are stored in main memory and/or secondary memory. Computer programs may also be received via a communications interface. Such computer programs, when executed, enable the computer system to perform the features of the embodiments as discussed herein. In particular, the computer programs, when executed, enable the processor and/or multi-core processor to perform the features of the computer system. Such computer programs represent controllers of the computer system. A computer program product comprises a tangible storage medium readable by a computer system and storing instructions for execution by the computer system for performing a method of one or more embodiments.
Though the embodiments have been described with reference to certain versions thereof, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein.
This application claims the priority benefit of U.S. Provisional Patent Application Ser. No. 61/991,349, filed May 9, 2014, which is incorporated herein by reference in its entirety.