The technology discussed below relates in some aspects to techniques for streaming screen content from a source device to a sink device.
With modern electronic devices, a user may wish to present content, such as video, audio, and/or graphics content, from one electronic device on another electronic device, and in many instances to convey that content wirelessly. Generally speaking, in such a wireless display system, a first wireless device (a “source device”) may provide content via a wireless link to a second wireless device (a “sink device”), where the content can be played back or displayed. The content may be played back at both a local display of the source device and at a display of the sink device.
By utilizing wireless capabilities to form a wireless connection between the two devices, a source device can take advantage of better display and/or audio capabilities of a sink device (e.g., a digital television, projector, audio/video receiver, high-resolution display, etc.) to display content that is initially stored in, or streamed to, the source device. As the demand for such technologies continues to increase, research and development continue to advance and enhance the user experience.
The following summarizes some aspects of the present disclosure to provide a basic understanding of the discussed technology. This summary is not an extensive overview of all contemplated features of the disclosure, and is intended neither to identify key or critical elements of all aspects of the disclosure nor to delineate the scope of any or all aspects of the disclosure. Its sole purpose is to present some concepts of one or more aspects of the disclosure in summary form as a prelude to the more detailed description that is presented later.
Various examples and implementations of the present disclosure facilitate transmission of graphics data from a source device to a sink device over a universal serial bus (USB) communication channel. According to at least one aspect of this disclosure, source devices may include a universal serial bus (USB) communications interface, a graphics processing unit (GPU), and a processing circuit coupled to the USB communications interface and the GPU. The processing circuit may include logic to capture GPU-executable video data at an input of the GPU, where the GPU-executable video data includes a set of graphics commands. The processing circuit may further include logic to transmit a graphics domain data frame on a data plane via the USB communications interface, where the graphics domain data frame includes the GPU-executable video data. The processing circuit may also include logic to transmit at least one command message on a management plane via the USB communications interface.
Further aspects provide methods operational on source devices and/or source devices including means to perform such methods. One or more examples of such methods may include capturing video data at an input of a graphics processing unit (GPU), where the video data includes a set of graphics commands executable by a GPU. A graphics domain data frame may be transmitted on a data plane via a universal serial bus (USB) communications channel, where the graphics domain data frame includes the captured video data. At least one command message may also be transmitted on a management plane via the USB communications channel.
Additional aspects provide sink devices including a universal serial bus (USB) communications interface, data streaming logic, a graphics processing unit (GPU) and a display device. The data streaming logic may be configured to receive a graphics domain data frame on a data plane via the USB communications interface, where the graphics domain data frame includes video data including a set of graphics commands executable by a graphics processing unit. The data streaming logic may be further configured to receive at least one command message on a management plane via the USB communications interface. The GPU may be configured to render the video data included in the received graphics domain data frame, and the display device may be configured to display the rendered video data.
Still further aspects provide methods operational on sink devices and/or sink devices including means to perform such methods. One or more examples of such methods may include receiving a graphics domain data frame on a data plane via a universal serial bus (USB) communications channel, where the graphics domain data frame includes video data with a set of graphics commands executable by a graphics processing unit. At least one command message may be received on a management plane via the USB communications channel. The video data included in the received graphics domain data frame may be rendered, and the rendered video data may be displayed.
Other aspects, features, and embodiments associated with the present disclosure will become apparent to those of ordinary skill in the art upon reviewing the following description in conjunction with the accompanying figures.
The description set forth below in connection with the appended drawings is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts and features described herein may be practiced. The following description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well known circuits, structures, techniques and components are shown in block diagram form to avoid obscuring the described concepts and features.
The various concepts presented throughout this disclosure may be implemented across a broad variety of wireless communication systems, network architectures, and communication standards. Referring now to
The source device 102 may be an electronic device adapted to transmit screen content data 108 to a sink device 104 over a communication channel 106. Examples of a source device 102 include, but are not limited to, devices such as smartphones or other mobile handsets, tablet computers, laptop computers, e-readers, digital video recorders (DVRs), desktop computers, wearable computing devices (e.g., smart watches, smart glasses, and the like), and/or other communication/computing devices that communicate, at least partially, through wireless and/or non-wireless communications.
The sink device 104 may be an electronic device adapted to receive the screen content data 108 conveyed over the communication channel 106 from the source device 102. Examples of a sink device 104 may include, but are not limited to, devices such as smartphones or other mobile handsets, tablet computers, laptop computers, e-readers, digital video recorders (DVRs), desktop computers, wearable computing devices (e.g., smart watches, smart glasses, and the like), televisions, monitors, and/or other communication/computing devices with a visual display and with wireless and/or non-wireless communication capabilities.
The communication channel 106 is a channel capable of propagating communicative signals between the source device 102 and the sink device 104. In some examples, the communication channel 106 may be a Universal Serial Bus (USB) communication channel. For instance, the USB-compliant communication channel 106 may be a wired communication channel 106 implementing wired USB (e.g., USB 2.0, USB 3.0, etc.). In other instances, the USB-compliant communication channel 106 may be a wireless communication channel 106 implementing wireless USB (WUSB) (as promoted by the Wireless USB Promoter Group). The USB-compliant communication channel 106 may be a media agnostic USB (MAUSB) implementation in at least some examples. As used herein, the term USB or USB interface may accordingly include wired USB, wireless USB, and media agnostic USB.
As depicted by
Graphics domain transmission methods can be beneficial in several aspects. For example, if the sink device 104 employs a display with a greater resolution than the source device 102, the sink device 104 can employ the graphics commands (e.g., OpenGL/ES commands or vendor-specific commands) and texture elements to render the frame at a higher resolution with similar quality. Another example includes the ability to send a texture element that may be used in many frames, enabling the source device 102 to send the texture element a single time to be employed by the sink device 104 to render several different frames.
The processing circuitry 202 includes circuitry arranged to obtain, process and/or send data, control data access and storage, issue commands, and control other desired operations. The processing circuitry 202 may include circuitry adapted to implement desired programming provided by appropriate media, and/or circuitry adapted to perform one or more functions described in this disclosure. For example, the processing circuitry 202 may be implemented as one or more processors, one or more controllers, and/or other structure configured to execute executable programming and/or execute specific functions. Examples of the processing circuitry 202 may include a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic component, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may include a microprocessor, as well as any conventional processor, controller, microcontroller, or state machine. The processing circuitry 202 may also be implemented as a combination of computing components, such as a combination of a DSP and a microprocessor, a number of microprocessors, one or more microprocessors in conjunction with a DSP core, an ASIC and a microprocessor, or any other number of varying configurations. These examples of the processing circuitry 202 are for illustration and other suitable configurations within the scope of the present disclosure are also contemplated.
The processing circuitry 202 can include circuitry adapted for processing data, including the execution of programming, which may be stored on the storage medium 206. As used herein, the term “programming” shall be construed broadly to include without limitation instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
In some instances, the processing circuitry 202 may include a graphics processing unit (GPU) 208 and/or a data streaming circuit or module 210. The GPU 208 generally includes circuitry and/or programming (e.g., programming stored on the storage medium 206) adapted for processing video data and rendering frames of video data based on one or more graphics commands (e.g., OpenGL/ES commands, vendor-specific commands) and texture elements for display by a user interface.
The data streaming circuit/module 210 may include circuitry and/or programming (e.g., programming stored on the storage medium 206) adapted to stream video data in the form of graphics commands (e.g., OpenGL/ES commands, vendor-specific commands) and texture elements to a sink device over a USB communications interface. In some examples, the data streaming circuit/module 210 may send command messages in a management plane and data messages in a data plane, as described in more detail below. In some examples, the data streaming circuit/module 210 may capture the video data (e.g., graphics commands and/or texture elements) to be sent as data message at an input of a GPU, such as the GPU 208.
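By way of illustration and not limitation, the following sketch shows one way such capture at the input of a GPU might be implemented by wrapping OpenGL ES entry points. The capture_token() helper, the token identifiers, and the forwarding function pointers are assumptions introduced for this example; only the OpenGL ES signatures are standard.

```c
/* Minimal sketch: capturing graphics commands at the GPU input by wrapping
 * OpenGL ES entry points.  capture_token(), the token identifiers, and the
 * forwarding function pointers are assumptions for this example. */
#include <GLES2/gl2.h>

/* Hypothetical: appends one captured command (token id + raw arguments) to a
 * buffer that the data streaming circuit/module 210 later packs into graphics
 * domain data frames. */
extern void capture_token(unsigned token_id, const void *args, unsigned len);

enum { TOKEN_DRAW_ARRAYS = 1, TOKEN_BIND_TEXTURE = 2 };  /* hypothetical token ids */

/* Pointers to the real GPU driver entry points (e.g., resolved via dlsym). */
static void (*real_glDrawArrays)(GLenum, GLint, GLsizei);
static void (*real_glBindTexture)(GLenum, GLuint);

void glDrawArrays(GLenum mode, GLint first, GLsizei count)
{
    GLint args[3] = { (GLint)mode, first, count };
    capture_token(TOKEN_DRAW_ARRAYS, args, sizeof(args));  /* record the command */
    real_glDrawArrays(mode, first, count);                  /* forward to the local GPU */
}

void glBindTexture(GLenum target, GLuint texture)
{
    GLuint args[2] = { target, texture };
    capture_token(TOKEN_BIND_TEXTURE, args, sizeof(args));
    real_glBindTexture(target, texture);
}
```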
As used herein, reference to circuitry and/or programming associated with the source device 200 may be generally referred to as logic (e.g., logic gates and/or data structure logic).
The communications interface 204 is configured to facilitate wireless and/or wired communications of the source device 200. For example, the communications interface 204 may include circuitry and/or programming adapted to facilitate the communication of information bi-directionally with respect to one or more sink devices. In at least one example, the communications interface 204 may be coupled to one or more antennas (not shown), and may include wireless transceiver circuitry, including at least one receiver 212 (e.g., one or more receiver chains) and/or at least one transmitter 214 (e.g., one or more transmitter chains). The communications interface 204 may be configured as a USB interface according to at least one example. Such a USB interface is capable of facilitating USB-compliant communication of information bi-directionally with respect to one or more sink devices.
The storage medium 206 may represent one or more processor-readable devices for storing programming, such as processor executable code or instructions (e.g., software, firmware), electronic data, databases, or other digital information. The storage medium 206 may also be used for storing data that is manipulated by the processing circuitry 202 when executing programming. The storage medium 206 may be any available media that can be accessed by a general purpose or special purpose processor, including portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing and/or carrying programming. By way of example and not limitation, the storage medium 206 may include a processor-readable storage medium such as a magnetic storage device (e.g., hard disk, floppy disk, magnetic strip), an optical storage medium (e.g., compact disk (CD), digital versatile disk (DVD)), a smart card, a flash memory device (e.g., card, stick, key drive), random access memory (RAM), read only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), a register, a removable disk, and/or other mediums for storing programming, as well as any combination thereof.
The storage medium 206 may be coupled to the processing circuitry 202 such that at least some of the processing circuitry 202 can read information from, and write information to, the storage medium 206. That is, the storage medium 206 can be coupled to the processing circuitry 202 so that the storage medium 206 is at least accessible by the processing circuitry 202, including examples where the storage medium 206 is integral to the processing circuitry 202 and/or examples where the storage medium 206 is separate from the processing circuitry 202 (e.g., resident in the source device 200, external to the source device 200, distributed across multiple entities).
The storage medium 206 may include programming stored thereon. Such programming, when executed by the processing circuitry 202, can cause the processing circuitry 202 to perform one or more of the various functions and/or process steps described herein. In at least some examples, the storage medium 206 may include data streaming operations 216. The data streaming operations 216 are adapted to cause the processing circuitry 202 to stream video data in the form of graphics commands (e.g., OpenGL/ES commands, vendor-specific commands) and texture elements to a sink device. In some examples, the data streaming operations 216 may include management plane operations and/or data plane operations.
The storage medium 206 may also include application modules 218 which may each represent an application provided by an entity that manufactures the source device 200, programming operating on the source device 200, and/or an application developed by a third-party for use with the source device 200. Examples of application modules 218 may include applications for gaming, shopping, travel routing, maps, audio and/or video presentation, word processing, spreadsheets, voice and/or calls, weather, etc. One or more application modules 218 may include texture elements associated therewith. For example, where a gaming application of the application modules 218 entails the slicing of falling fruit (e.g., watermelons, avocados, pineapples, etc.), there may be texture elements associated with the gaming application that may include a graphical representation of each of the types of fruit, as well as backgrounds. Such texture elements may be stored in a plurality of formats, such as RGBα 8888, RGBα 4444, RGBα 5551, RGB 565, Yα 88, and α8.
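For purposes of illustration, the texture element formats listed above might be enumerated as follows; the numeric identifiers and bits-per-texel notes are assumptions for this sketch.

```c
/* Illustrative enumeration of the texture element formats named above. */
enum texture_format {
    TEX_RGBA_8888 = 0,  /* 8 bits per channel, 32 bits per texel */
    TEX_RGBA_4444 = 1,  /* 4 bits per channel, 16 bits per texel */
    TEX_RGBA_5551 = 2,  /* 5/5/5 color plus 1-bit alpha, 16 bits per texel */
    TEX_RGB_565   = 3,  /* 5/6/5 color, no alpha, 16 bits per texel */
    TEX_YA_88     = 4,  /* luminance plus alpha, 16 bits per texel */
    TEX_A_8       = 5   /* alpha only, 8 bits per texel */
};
```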
According to one or more aspects of the present disclosure, the processing circuitry 202 is adapted to perform (independently or in conjunction with the storage medium 206) any or all of the processes, functions, steps and/or routines for any or all of the source devices described herein (e.g., source device 102, source device 200). As used herein, the term “adapted” in relation to the processing circuitry 202 may refer to the processing circuitry 202 being one or more of configured, employed, implemented, and/or programmed (in conjunction with the storage medium 206) to perform a particular process, function, step and/or routine according to various features described herein.
Turning now to
The processing circuit 302 is arranged to obtain, process and/or send data, control data access and storage, issue commands, and control other desired operations. The processing circuit 302 may include circuitry adapted to implement desired programming provided by appropriate media and/or circuitry adapted to perform one or more functions described in this disclosure. The processing circuit 302 may be implemented and/or configured according to any of the examples of the processing circuitry 202 described above.
In some instances, the processing circuit 302 may include a graphics processing unit (GPU) 310 and/or a data streaming circuit or module 312. The GPU 310 generally includes circuitry and/or programming (e.g., programming stored on the storage medium 306) adapted for processing received video data and rendering frames of video data based on one or more texture elements and graphics commands for display by a user interface.
The data streaming circuit/module 312 may include circuitry and/or programming (e.g., programming stored on the storage medium 306) adapted to receive streamed video data from a source device. In some examples, the data streaming circuit/module 312 may receive video data over a USB communication channel. The data streaming circuit/module 312 may further provide the video data to the GPU 310 to be rendered for presentation at display 308.
As used herein, reference to circuitry and/or programming associated with the sink device 300 may be generally referred to as logic (e.g., logic gates and/or data structure logic).
The communications interface 304 is configured to facilitate wireless and/or wired communications of the sink device 300. For example, the communications interface 304 may include circuitry and/or programming adapted to facilitate the communication of information bi-directionally with respect to one or more source devices. In at least one example, the communications interface 304 may be coupled to one or more antennas (not shown), and may include wireless transceiver circuitry, including at least one receiver 314 (e.g., one or more receiver chains) and/or at least one transmitter 316 (e.g., one or more transmitter chains). The communications interface 304 may be configured as a USB interface according to at least one example. Such a USB interface is capable of facilitating USB-compliant communication of information bi-directionally with respect to one or more source devices.
The storage medium 306 may represent one or more processor-readable devices for storing programming, such as processor executable code or instructions (e.g., software, firmware), electronic data, databases, or other digital information. The storage medium 306 may be configured and/or implemented in a manner similar to the storage medium 206 described above.
The storage medium 306 may be coupled to the processing circuit 302 such that the processing circuit 302 can read information from, and write information to, the storage medium 306. That is, the storage medium 306 can be coupled to the processing circuit 302 so that the storage medium 306 is at least accessible by the processing circuit 302, including examples where the storage medium 306 is integral to the processing circuit 302 and/or examples where the storage medium 306 is separate from the processing circuit 302 (e.g., resident in the sink device 300, external to the sink device 300, distributed across multiple entities).
Like the storage medium 206, the storage medium 306 includes programming stored thereon. The programming stored by the storage medium 306, when executed by the processing circuit 302, causes the processing circuit 302 to perform one or more of the various functions and/or process steps described herein. For example, the storage medium 306 may include data streaming operations 318 adapted to cause the processing circuit 302 to receive video data from a source device via USB, and to facilitate the rendering of the video data. Thus, according to one or more aspects of the present disclosure, the processing circuit 302 is adapted to perform (independently or in conjunction with the storage medium 306) any or all of the processes, functions, steps and/or routines for any or all of the sink devices described herein (e.g., sink device 104, sink device 300). As used herein, the term “adapted” in relation to the processing circuit 302 may refer to the processing circuit 302 being one or more of configured, employed, implemented, and/or programmed (in conjunction with the storage medium 306) to perform a particular process, function, step and/or routine according to various features described herein.
In operation, the source device 200 can transmit video data over a USB interface to the sink device 300, where the video data can be displayed by the sink device 300.
The data streaming logic 406 may generate a plurality of frames adapted for transmission of the captured video data over a USB communication channel 408. The transmitter 214 can output the video data to the sink device 300 over the USB communication channel 408. The transmitter 214 may be configured to output the video data as a wireless transmission and/or as a wired transmission, according to various implementations.
At the sink device 300, the video data sent over the USB communication channel 408 is received at the receiver 314. The receiver 314 can be configured to receive the video data as wireless transmissions and/or wired transmissions. The data streaming logic 410 (e.g., the data streaming circuit/module 312 and/or the data streaming operations 318) may process the received frames of the video data (e.g., graphics commands and texture elements), and can provide the video data to the GPU 310. The GPU 310 renders the graphics commands and texture elements into displayable frames for presentation at the display 308 of the sink device 300.
According to an aspect of the present disclosure, the graphics domain transmissions over a USB communication channel may include data transmissions in a data plane and command message transmissions in a management plane.
The management plane can be configured to convey USB descriptors, also referred to herein as management commands (e.g., GET, SET, and NOTIF), over the USB communication channel 506 via a bulk endpoint (1 IN and 1 OUT). The management plane may also employ an optional interrupt endpoint (1 IN) or an optional isochronous endpoint (1 IN).
According to an aspect of the present disclosure, the management commands transmitted on the management plane can be employed to enable a communication session including graphics domain transmissions. As noted, the management commands can include GET, SET, and NOTIF commands. A GET command may be employed by the source device 200 to retrieve properties from the sink device 300. A SET command may be employed by the source device 200 to set a value of one or more properties at the sink device 300. A NOTIF command is employed by the sink device 300 to notify the source device 200 of one or more items, such as a change in a property value through external means.
A management sequence typically includes two phases. A first phase includes the source device 200 sending a command message to the sink device 300 over the management plane. The command message includes sufficient information for the sink device 300 to determine which property of the graphics domain is being referenced. After the sink device 300 decodes the received command message, the second phase includes execution of the command by the sink device 300, and return of an appropriate response message indicating either success or error.
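By way of illustration and not limitation, the two-phase sequence might be sketched as follows from the source side. The mgmt_send() and mgmt_recv() helpers, which move command messages over the management-plane endpoint, and the property identifier are assumptions for this example.

```c
/* Minimal sketch of the two-phase management sequence. */
#include <stdint.h>
#include <stdbool.h>

enum mgmt_cmd_type { CMD_GET, CMD_SET, CMD_NOTIF };

/* Hypothetical transport helpers over the management plane. */
extern bool mgmt_send(enum mgmt_cmd_type type, uint16_t property_id,
                      const void *value, uint16_t len);
extern bool mgmt_recv(uint16_t *property_id, void *value, uint16_t *len,
                      bool *is_error);

/* Source side: retrieve one property from the sink (phase 1: send the command;
 * phase 2: the sink executes it and returns success or an error). */
bool source_get_property(uint16_t property_id, void *value, uint16_t *len)
{
    uint16_t returned_id = 0;
    bool is_error = false;

    if (!mgmt_send(CMD_GET, property_id, NULL, 0))        /* phase 1 */
        return false;
    if (!mgmt_recv(&returned_id, value, len, &is_error))  /* phase 2 */
        return false;
    return !is_error && returned_id == property_id;
}
```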
After the graphics domain stream is initiated, a change may occur at the sink device 300. In response to such a change, the sink device 300 may send a NOTIF command message 610 to the source device 200. The NOTIF command message 610 may include reason codes adapted to notify the source device 200 of a change in one or more property values by some external means.
The command messages sent on the management plane may be formatted with a header section and a payload section.
The header section 700 may further include a reserved field 704 and a vendor field 706. The vendor field 706 can be configured to indicate whether the payload is in a default format or in a vendor-specific format. According to at least one example, the default format may be an Augmented Backus-Naur Form (ABNF).
A type field 708 is included to indicate the type of command message that is included. For example, the type field 708 may be configured to indicate whether the command message is a GET command, SET command, or NOTIF command.
The header section 700 further includes an ID field 710. The ID field 710 is configured to identify the graphics domain management entity and its version. The ID field 710 can be significant if the management plane endpoint is being shared with other USB traffic. The header section 700 can include a length field 712 indicating the length of the payload section.
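An illustrative layout for such a command message header is sketched below. The disclosure names the fields (reserved 704, vendor 706, type 708, ID 710, length 712); the field widths and ordering shown here are assumptions.

```c
/* Illustrative layout of a management-plane command message header. */
#include <stdint.h>

#pragma pack(push, 1)
struct mgmt_cmd_header {
    uint8_t  reserved;   /* reserved field (704) */
    uint8_t  vendor;     /* 0 = default (ABNF) payload, nonzero = vendor-specific (706) */
    uint8_t  type;       /* command type: GET, SET, or NOTIF (708) */
    uint16_t entity_id;  /* graphics domain management entity and its version (710) */
    uint16_t length;     /* length of the payload section that follows (712) */
};
#pragma pack(pop)
```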
Referring back to
Still referring to
The data plane may be configured to convey graphics domain data messages over the USB communication channel 506 via dedicated endpoints. For example, the data plane may employ a bulk endpoint (1 IN and 1 OUT) or an isochronous endpoint (1 IN and 1 OUT).
The graphics domain transmissions can be sent in graphics domain data frames including a header section and a payload section.
The header section 800 may further include a reserved field 808 and a timestamp field 810. The timestamp field 810 can be configured to indicate the presentation time for the graphics domain data frame to ensure time synchronization. For example, the timestamp field 810 may indicate the offset in milliseconds from the beginning of the graphics domain data stream at which the present frame is to be rendered. That is, the timestamp field 810 may indicate the time T at which the data frame is to be rendered with respect to the start of the stream (T=0). In at least one implementation, the timestamp field 810 can range from 0 to (2³²−1) milliseconds (an unsigned 32-bit number). The source device 200 and the sink device 300 may be synchronized either through use of an isochronous endpoint for the data plane or through use of other mechanisms (e.g., IEEE 802.1AS) and a bulk endpoint.
The header section 800 further includes a frame sequence number field 812 and a token sequence number field 814. The frame sequence number field 812 is adapted to indicate the sequence number of the graphics domain data frame. In at least one example, the frame sequence number field 812 can start at 0, and can increment by 1 for each new graphics domain data frame.
The token sequence number field 814 is adapted to indicate the token number in the graphics domain data frame. A single graphics domain data frame may include a single token, or may include multiple tokens within a single frame. In at least one example, the token sequence number field 814 can start at 1, and can increment by the number of tokens included in the graphics data frame.
In some instances, two or more graphics domain data frames may have the same value for the frame sequence number field 812 if they carry fragments of the same payload. In such instances, the value of the token sequence number field 814 of the graphics domain data frame carrying the first fragment of the payload indicates the number of tokens present in the graphics data frame, while the token sequence number field 814 of the graphics data frames carrying the remaining fragments of the payload can be set to 0. The header section 800 can include a length field 816 indicating the length of the payload section.
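An illustrative layout for the graphics domain data frame header is sketched below. Only the fields named in the text (reserved 808, timestamp 810, frame sequence number 812, token sequence number 814, length 816) are included, and the field widths and ordering are assumptions.

```c
/* Illustrative layout of a graphics domain data frame header on the data plane. */
#include <stdint.h>

#pragma pack(push, 1)
struct gfx_frame_header {
    uint16_t reserved;      /* reserved field (808) */
    uint32_t timestamp_ms;  /* presentation offset from stream start, 0..2^32-1 ms (810) */
    uint32_t frame_seq;     /* starts at 0, increments by 1 per new data frame (812) */
    uint32_t token_seq;     /* first fragment carries the token count; later
                               fragments of the same frame carry 0 (814) */
    uint32_t length;        /* length of the payload section (816) */
};
#pragma pack(pop)
```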
The argument list field 904 of the payload section 900 can include a list of arguments associated with the token identifier field 902. A pointer to a memory location in the argument list can be de-referenced and substituted with a length field indicating the length of the content being pointed by the pointer, followed by the actual content being pointed by the pointer. The content may be texture information, array information, shader information, etc.
By way of an example of the payload fields described above, the source device 200 may send a frame with a value in the token identifier field 902 specifying a particular function. For example, the function may relate to a texture, vertices, a shader, etc. Accordingly, the sink device 300 knows that the token is associated with a texture, vertices, a shader, etc., and also knows how many arguments are associated with the specified function and what the argument types will be. Because the source device 200 and the sink device 300 know the function type, the number of arguments, and the argument types, the values transmitted from the source device 200 to the sink device 300 simply need to be parsed.
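By way of illustration and not limitation, packing one token and its argument list might be sketched as follows, where a pointer argument is replaced by a length field followed by the pointed-to content. The buffer layout, the helper names, and the "load texture" token are assumptions for this example, and the buffer is assumed to be large enough.

```c
/* Minimal sketch of packing one token and its argument list into a payload. */
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Append a scalar argument to the payload buffer. */
static size_t pack_u32(uint8_t *buf, size_t off, uint32_t value)
{
    memcpy(buf + off, &value, sizeof(value));
    return off + sizeof(value);
}

/* Append a pointer argument: the substituted length field, then the content. */
static size_t pack_blob(uint8_t *buf, size_t off, const void *data, uint32_t len)
{
    off = pack_u32(buf, off, len);
    memcpy(buf + off, data, len);
    return off + len;
}

/* Hypothetical "load texture" token whose arguments are a scalar texture id
 * and a pointer to texture content. */
size_t pack_load_texture(uint8_t *buf, uint32_t token_id, uint32_t texture_id,
                         const void *pixels, uint32_t pixels_len)
{
    size_t off = 0;
    off = pack_u32(buf, off, token_id);            /* token identifier field (902) */
    off = pack_u32(buf, off, texture_id);          /* scalar argument */
    off = pack_blob(buf, off, pixels, pixels_len); /* de-referenced pointer argument */
    return off;                                    /* resulting payload length */
}
```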
Referring again to
Turning to
At 1004, the source device 200 may transmit a graphics domain data frame on a data plane. For example, the source device 200 may include logic (e.g., data streaming circuit/module 210 and/or data streaming operations 216) to transmit a graphics domain data frame with the captured video data on the data plane via the communications interface 204. The transmission can be sent over a USB communication channel. As noted above, the data plane can employ a bulk endpoint and/or an isochronous endpoint according to USB communications. The graphics domain data frame may be configured as described above with reference to
At 1006, the source device 200 may transmit a command message on a management plane. For example, the source device 200 may include logic (e.g., data streaming circuit/module 210 and/or data streaming operations 216) to transmit a command message on the management plane via the communications interface 204. The transmission can be sent over a USB communication channel. As noted above, the management plane can employ a bulk endpoint, an interrupt endpoint, and/or an isochronous endpoint according to USB communications. The payload of the command message may include a GET command message or a SET command message, as described above.
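By way of illustration and not limitation, the source-side transmit steps above might be sketched as follows, assuming a hypothetical usb_bulk_write() transport helper and endpoint addresses: a graphics domain data frame goes out on the data plane, and a command message goes out on the management plane.

```c
/* Minimal sketch of the source-side transmit steps. */
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

extern bool usb_bulk_write(uint8_t endpoint, const void *buf, size_t len); /* hypothetical */

enum { EP_DATA_OUT = 0x01, EP_MGMT_OUT = 0x02 };  /* assumed endpoint addresses */

/* Step 1004: transmit one graphics domain data frame (header plus payload) on the data plane. */
bool transmit_gfx_frame(const void *frame, size_t frame_len)
{
    return usb_bulk_write(EP_DATA_OUT, frame, frame_len);
}

/* Step 1006: transmit one command message (e.g., GET or SET) on the management plane. */
bool transmit_mgmt_command(const void *msg, size_t msg_len)
{
    return usb_bulk_write(EP_MGMT_OUT, msg, msg_len);
}
```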
At 1102, the sink device 300 may receive a graphics domain data frame on a data plane. For example, the sink device 300 may include data streaming logic (e.g., data streaming circuit/module 312 and/or data streaming operations 318) to receive the graphics domain data frame over a USB communication channel via the communications interface 304. As noted above, the data plane can employ a bulk endpoint and/or an isochronous endpoint according to USB communications. The graphics domain data frame may be configured as described above with reference to
At 1104, the sink device 300 may also receive at least one command message on a management plane. For example, the sink device 300 may include data streaming logic (e.g., data streaming circuit/module 312 and/or data streaming operations 318) to receive a command message via the communications interface 304. The command message can also be received over the USB communication channel on the management plane. As noted above, the management plane can employ a bulk endpoint, an interrupt endpoint, and/or an isochronous endpoint according to USB communications. The payload of the command message may include a GET command message or a SET command message, as described above.
At 1106, the sink device 300 can render the received video data. For example, the sink device 300 may render the video data included in the received graphics domain data frame at the GPU 310. That is, the GPU 310 may render the video data based on the included graphics commands and texture elements.
At 1108, the sink device 300 can display the rendered video data. For example, the display 308 may visually present the video data rendered by the GPU 310.
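By way of illustration and not limitation, the sink-side receive, render, and display steps might be sketched as a simple loop. The transport helper, endpoint address, buffer size, and the GPU/display helpers are assumptions for this example.

```c
/* Minimal sketch of the sink-side receive/render/display loop. */
#include <stdint.h>
#include <stddef.h>

extern size_t usb_bulk_read(uint8_t endpoint, void *buf, size_t maxlen);  /* hypothetical */
extern void   gpu_execute_tokens(const uint8_t *frame, size_t len);       /* hypothetical: GPU 310 replays the carried commands/textures */
extern void   display_present(void);                                      /* hypothetical: display 308 shows the rendered frame */

enum { EP_DATA_IN = 0x81 };  /* assumed data-plane IN endpoint */

void sink_stream_loop(void)
{
    static uint8_t frame[64 * 1024];

    for (;;) {
        size_t len = usb_bulk_read(EP_DATA_IN, frame, sizeof(frame)); /* receive a data frame */
        if (len == 0)
            break;                      /* stream ended or transport error */
        gpu_execute_tokens(frame, len); /* render the graphics commands and texture elements */
        display_present();              /* display the rendered video data */
    }
}
```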
While the above aspects, arrangements, and embodiments are discussed with specific details and particularity, one or more of the components, steps, features and/or functions illustrated in
While features of the present disclosure may have been discussed relative to certain embodiments and figures, all embodiments of the present disclosure can include one or more of the advantageous features discussed herein. In other words, while one or more embodiments may have been discussed as having certain advantageous features, one or more of such features may also be used in accordance with any of the various embodiments discussed herein. In similar fashion, while exemplary embodiments may have been discussed herein as device, system, or method embodiments, it should be understood that such exemplary embodiments can be implemented in various devices, systems, and methods.
Also, it is noted that at least some implementations have been described as a process that is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function. The various methods described herein may be partially or fully implemented by programming (e.g., instructions and/or data) that may be stored in a processor-readable storage medium, and executed by one or more processors, machines and/or devices.
Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware, software, firmware, middleware, microcode, or any combination thereof. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system.
The various features associated with the examples described herein and shown in the accompanying drawings can be implemented in different examples and implementations without departing from the scope of the present disclosure. Therefore, although certain specific constructions and arrangements have been described and shown in the accompanying drawings, such embodiments are merely illustrative and not restrictive of the scope of the disclosure, since various other additions and modifications to, and deletions from, the described embodiments will be apparent to one of ordinary skill in the art. Thus, the scope of the disclosure is determined only by the literal language, and legal equivalents, of the claims which follow.
The present application for Patent claims priority to Provisional Application No. 62/195,691 entitled “Media Agnostic Graphics Offload” filed Jul. 22, 2015, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.