Video and/or other media may be sent between computing devices over a network. In some examples, videos may be encoded by a server, sent to a client computing device, decoded and played back while subsequent portions of the video are still being transmitted to the client computing device by the server. Such video transmission and playback is often referred to as “streaming”. Network conditions can change during streaming due to changes and/or increases in network traffic. For example, network conditions may sometimes deteriorate which may lead to delays in streaming of video and/or other media files.
Provided herein are technical solutions for sending video and other types of data that may reduce problems associated with changing network conditions.
In the following description, reference is made to the accompanying drawings which illustrate several embodiments of the present invention. It is understood that other embodiments may be utilized and mechanical, compositional, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present disclosure. The following detailed description is not to be taken in a limiting sense, and the scope of the embodiments of the present invention is defined only by the claims of the issued patent.
The transmission and presentation of information using streaming delivery technology is rapidly increasing. Various forms of streaming technology and, in particular, hypertext transfer protocol (HTTP) streaming, may employ adaptive bitrate streaming, in which a video stream is encoded using multiple renditions that may differ with respect to various transmission attributes (e.g., bitrates, resolutions, profiles, frame rates, etc.). In adaptive bitrate streaming, video streams are encoded into small segments (typically 2-10 seconds), and each segment starts with an instantaneous decoder refresh frame (IDR-frame). An IDR-frame is a special intra-coded picture frame (I-frame) that causes all reference pictures in the DPB (decoded picture buffer) to be flushed, so that no subsequent video frames can reference any picture prior to the IDR-frame. This means that each segment is self-decodable (i.e., doesn't depend on reference pictures in previous segments).
One challenge related to adaptive bitrate streaming is the desire to reduce end-to-end latency, jitter, and other undesirable effects caused by network conditions while maintaining a sufficiently high video quality. In adaptive bitrate streaming, larger segment durations may tend to increase latency. Thus, one simple technique for reducing latency involves the reduction of segment duration. However, the reduction of segment duration may result in more frequent transmission of I-frames, which have large data sizes and are computational resource intensive and inefficient to encode. Transmission of the I-frames can cause spikes in network traffic due to the larger data size of such frames relative to inter-coded frames.
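The trade-off above can be made concrete with a small sketch. The frame sizes below are hypothetical round numbers chosen only for illustration (they are not taken from the disclosure); the sketch estimates what fraction of a segment's bits is consumed by its single leading I-frame as segment duration shrinks.

```python
# Illustrative sketch (assumed sizes, not from the disclosure): estimate the
# share of a segment's bits consumed by its leading I-frame for a 30 fps
# stream, as segment duration is reduced to lower latency.
I_FRAME_BITS = 400_000   # assumed size of one intra-coded frame
P_FRAME_BITS = 40_000    # assumed size of one inter-coded frame
FPS = 30

def i_frame_share(segment_seconds: float) -> float:
    """Fraction of segment bits spent on the single leading I-frame."""
    frames = int(segment_seconds * FPS)
    total = I_FRAME_BITS + (frames - 1) * P_FRAME_BITS
    return I_FRAME_BITS / total

for dur in (10, 4, 2, 1):
    print(f"{dur:>2}s segments: {i_frame_share(dur):.1%} of bits are I-frame bits")
```

Under these assumed numbers, halving the segment duration roughly doubles the share of bandwidth spent on intra-coded data, which is the traffic-spike behavior the techniques below aim to smooth out.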
Techniques for improved encoding and decoding of reference frames used in video streaming are described herein. In digital video technology, a video may be represented by a number of video frames that may be displayed in sequence during playback. A video frame is comprised of rows and columns of pixels. The resolution of a particular video frame is described by the width of the frame, in terms of a first number of pixels, by the height of the frame, in terms of a second number of pixels. Video frames may be compressed using different picture types or frame types, such as Intra-coded picture frames, predicted picture frames, and/or bi-predictive frames. The term “frame” can refer to an entire image captured during a time interval (e.g., all rows and columns of pixels comprising the particular image). The term “picture” can refer to either a frame or a field. A “field” is a partial image of a frame, which can be represented by either the odd-numbered or even-numbered scanning lines of the frame. Reference frames are frames of a compressed video that are used to define future frames and come in various types. A compressed video may comprise one or more frames that do not include all of the pixel data within the frames themselves, but rather reference pixel values of other frames (e.g., reference frames). Intra-coded picture frames (“I-frames”) include detailed pixel data in order to be self-decodable and to provide reference pixel values for other inter-coded picture frames. As a result, I-frames do not require other video frames in order to be decoded, but provide the lowest amount of data compression. Predicted picture frames (“P-frames”) contain only the changes in the pixel values from the previous frame, and therefore P-frames use data from previous frames to decompress the P-frame. As a result, P-frames are more compressible than I-frames. Bi-predictive picture frames (“B-frames”) can be decoded using both previous and forward frames for data reference. 
As set forth above, frequent transmission of I-frames can cause network congestion and/or jitter because of their increased size (e.g., the number of bits of data comprising the I-frame) relative to the P-frames and B-frames. In accordance with embodiments of the present invention, frames used as reference frames in video streaming may be divided into multiple parts comprising a lower quality reference frame and a number of reference frame enhancement layers. For example, full quality I-frame data may be divided into multiple parts for transmission, the multiple parts comprising lower (e.g., reduced) quality I-frame data and a plurality of I-frame enhancement layer data. In various other examples, lower quality I-frames and enhancement layer data used to enhance the reference quality of the lower quality I-frames may be generated from “raw” image data, such as image data captured by an image sensor of a camera. The size of the lower quality I-frame data may be referred to as “lower” herein because the number of bits comprising the lower quality I-frame may be less than the number of bits required to store an enhanced quality I-frame resulting from the combination of the lower quality I-frame with one or more I-frame enhancement layer data. In some examples, the size of the lower quality I-frame may be less than the size of the full quality I-frame from which the lower quality I-frame was generated. In some further examples, the size of the lower quality I-frame, in terms of a number of bits, may be similar to, or less than, data sizes of other inter-coded video frames, such as the P-frames and/or B-frames of the particular video stream being encoded. Accordingly, sending lower quality I-frames may not result in the spikes in network traffic characteristic of full-quality I-frames because the sizes of those lower quality I-frames more closely compares to the sizes of the P-frames and/or B-frames. 
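The split described above can be sketched as follows. This is a minimal illustration, not the patented codec: the frame is modeled as a flat list of pixel values, the lower quality base is produced by coarse quantization, and the residual is divided evenly across N enhancement layers. The function names and the even residual split are assumptions made for the sketch.

```python
# Hypothetical sketch of dividing a full-quality reference frame into a
# lower quality base (I0) plus N enhancement layers (L1..LN). Quantization
# and the even residual split are illustrative assumptions.
def split_reference(pixels, num_layers=3, quant=16):
    base = [(p // quant) * quant for p in pixels]      # lower quality I0
    residual = [p - b for p, b in zip(pixels, base)]
    layers = []
    for k in range(num_layers):                        # L1..LN
        # Give layer k its share of each residual; the shares sum exactly
        # to the residual, so all layers together restore full quality.
        layers.append([r * (k + 1) // num_layers - r * k // num_layers
                       for r in residual])
    return base, layers

def combine(base, layers):
    out = list(base)
    for layer in layers:                               # incremental enhancement
        out = [o + d for o, d in zip(out, layer)]
    return out

frame = [17, 200, 35, 128, 250, 3]
i0, enh = split_reference(frame)
assert combine(i0, enh) == frame                       # full quality restored
```

Note that each individual layer is small relative to the base, which is what allows the layers to ride along with inter-coded frames without spiking the frame size.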
Each of the I-frame enhancement layers may be combined with and sent together with one of the subsequent inter-coded frames such as P-frames and/or B-frames in order to normalize frame data size among the frames of the particular adaptive bitrate video stream being encoded and sent to one or more recipient computing devices. For example, I-frame enhancement layer data may be sent together with subsequent P-frame or B-frame data by including the I-frame enhancement layer data in a payload of a transmission packet along with P-frame and/or B-frame data. Although the examples described herein generally refer to improved encoding and decoding techniques for I-frames, it will be understood that these techniques may be applied to any reference frame. For example, a P-frame may be divided into a lower quality P-frame and one or more P-frame enhancement layers, for transmission. Additionally, techniques such as those described herein, may be applied to other types of reference data that may be sent over a network using transmission packets that are relatively small in terms of a number of bits. Upon receipt, a recipient device may incrementally improve the quality of reference data by assembling a larger, more detailed file from the data from the plurality of transmission packets while reducing the amount of bandwidth required for transmission. Subsequently received files may benefit from the incrementally improved reference data.
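The "hybrid block" packing described above could be sketched as below. The two-byte length-prefixed layout is an assumption made for illustration; the disclosure does not define a wire format, only that enhancement layer data is carried in the payload of a transmission packet alongside inter-coded frame data.

```python
import struct

# Illustrative packing of a hybrid block: P-frame payload plus one I-frame
# enhancement layer in a single transmission payload. The length-prefixed
# layout is an assumption for this sketch, not a format from the disclosure.
def pack_hybrid_block(p_frame: bytes, enhancement_layer: bytes) -> bytes:
    header = struct.pack(">HH", len(p_frame), len(enhancement_layer))
    return header + p_frame + enhancement_layer

def unpack_hybrid_block(block: bytes):
    """Recipient side: separate P-frame data from enhancement layer data."""
    p_len, e_len = struct.unpack_from(">HH", block)
    body = block[4:]
    return body[:p_len], body[p_len:p_len + e_len]

block = pack_hybrid_block(b"p-frame-bits", b"layer-1-bits")
p, e = unpack_hybrid_block(block)
assert (p, e) == (b"p-frame-bits", b"layer-1-bits")
```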
Upon receipt of the video stream encoded in accordance with the techniques described above, a recipient computing device may reconstruct the full-quality reference frame (e.g., a full quality I-frame) by combining the lower quality reference frame data (e.g., lower quality I-frame data) and the plurality of enhancement layer data (e.g., I-frame enhancement layer data) which have been received and stored in memory. The recipient computing device may incrementally improve the reference quality of the lower quality reference frame (e.g., an I-frame) by incorporating image data of each enhancement layer (e.g., I-frame enhancement layer data) with the image data of the previously-enhanced reference frame until the image data of the full-quality reference frame (e.g., a full-quality I-frame) is reassembled in memory. Each subsequent inter-coded frame may benefit from incremental increases in reference quality due to the enhancement of the lower quality reference frame with received reference frame enhancement layers.
In the example of
In some examples, a transmitted video stream may be encoded using a number of different renditions, which may each differ from one another with respect to one or more image quality-related attributes, such as bitrates, resolutions, profiles, frame rates, and others. Accordingly, in various examples, encoder 111 may encode video stream 142 in multiple, different renditions.
Encoder 111 may be effective to encode data into one or more frames, such as I-frames, P-frames, and B-frames described herein. Encoder 111 may be effective to identify an I-frame 120 or other reference frame. In an example, I-frame 120 may be a first I-frame of a segment of the video stream 142. As previously noted, an I-frame is typically much larger in size in terms of an amount of memory needed to store an I-frame relative to inter-coded frames such as P-frames or B-frames. Encoder 111 may be effective to convert I-frame 120 into a lower quality I-frame I0 and a plurality of I-frame enhancement layers 140 (including I-frame enhancement layers L1, L2, . . . , LN). In some examples, lower quality I-frame I0 may be of a lower bitrate relative to I-frame 120, but may be the same resolution. In some other examples, encoder 111 may generate lower quality I-frame I0 from data that has not been previously encoded and/or compressed into a video format including intra-coded and/or inter-coded reference frames.
For example, as depicted in
Encoder 111 may divide the data comprising I-frame 120 to generate a lower quality I-frame I0 having a lower bitrate relative to I-frame 120 and one or more I-frame enhancement layers 140 (represented in
In another example, I-frame enhancement layer L1 may include image data providing additional details related to chroma, chrominance, luma, luminance, or other parameters associated with pixels in I-frame 120. As will be described in further detail below, decoder 131 of recipient 130 may combine I-frame enhancement layer L1 with lower quality I-frame I0. After combination of I-frame enhancement layer L1 with lower quality I-frame I0, the I-frame resulting from the combination will be enhanced relative to lower quality I-frame I0 since the enhanced I-frame includes new image data not included in lower quality I-frame I0. Accordingly, the enhanced I-frame resulting from the combination of I-frame enhancement layer L1 and lower quality I-frame I0 may provide a better reference for decoding subsequently-received P-frames and/or B-frames.
In various examples, decoder 131 of recipient 130 may combine I-frame enhancement layers L1, L2, . . . , LN with lower quality I-frame I0 upon receipt of each of the I-frame enhancement layers L1, L2, and LN. The enhanced I-frames resulting from the combination of lower quality I-frame I0 and one or more I-frame enhancement layers 140 may be stored in a buffer 132 of recipient 130. Buffer 132 may be a memory configured to be in communication with decoder 131 and one or more processing units of recipient 130. Additionally, in some examples, upon the creation of a new enhanced I-frame (e.g., I-frames I0′ and/or I0″) based on receipt of an additional I-frame enhancement layer 140, the previous I-frame corresponding to the same time in video stream 142 may be overwritten in, or otherwise removed from, buffer 132.
For example, recipient 130 may initially receive lower quality I-frame I0 at a first time t0. Thereafter, at a second time t1, recipient 130 may receive a first I-frame enhancement layer L1 corresponding to lower quality I-frame I0. I-frame enhancement layer L1 may correspond to lower quality I-frame I0 because I-frame enhancement layer L1 and lower quality I-frame I0 were both created from the same full-quality, larger-sized I-frame 120. Decoder 131 may combine image data of I-frame enhancement layer L1 with lower quality I-frame I0 to produce a first enhanced quality I-frame I0′. First enhanced quality I-frame I0′ may be stored in buffer 132 for use as a reference by subsequently-received inter-coded frames. Thereafter, at a second time t2 recipient 130 may receive a second I-frame enhancement layer L2 corresponding to lower quality I-frame I0. Decoder 131 may combine image data of I-frame enhancement layer L2 with first enhanced quality I-frame I0′ to produce a second enhanced quality I-frame I0″. Upon generation of second enhanced quality I-frame I0″, decoder 131 may overwrite first enhanced quality I-frame I0′ in buffer 132 with second enhanced quality I-frame I0″. Second enhanced quality I-frame I0″ may include all of the image data included in first enhanced quality I-frame I0′ plus additional image data included in second I-frame enhancement layer L2. Once recipient 130 has received all of the I-frame enhancement layers 140 (L1, L2, and L3, in the current example) decoder 131 may be effective to reproduce full quality I-frame 120 by combining lower quality I-frame I0 with each of the subsequently received I-frame enhancement layers 140. Sending I-frame 120 as a smaller-sized lower quality I-frame I0 and a series of separately sent I-frame enhancement layers 140 can avoid problems associated with sending very large I-frames followed by a series of smaller-sized inter-coded frames. 
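The decoder-side flow just described can be summarized in a short sketch (class and method names are assumptions made for illustration): the buffered reference is replaced each time an enhancement layer arrives, so later inter-coded frames always decode against the best reference received so far.

```python
# Minimal sketch (assumed names) of incremental reference enhancement at
# the recipient: I0 is stored at t0, then overwritten by I0', I0'', ... as
# enhancement layers arrive at t1, t2, ...
class ReferenceBuffer:
    def __init__(self, lower_quality_iframe):
        self.reference = list(lower_quality_iframe)   # I0 at time t0

    def apply_enhancement(self, layer):
        # Combine the layer's deltas with the current reference, then
        # overwrite the previous version in the buffer (I0 -> I0' -> I0'').
        self.reference = [r + d for r, d in zip(self.reference, layer)]
        return self.reference

buf = ReferenceBuffer([10, 20, 30])
buf.apply_enhancement([1, 2, 3])      # t1: produces I0'
buf.apply_enhancement([4, 0, 1])      # t2: produces I0''
assert buf.reference == [15, 22, 34]
```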
Such differences in frame size can cause unfavorable network conditions such as latency and jitter, and can cause buffer overflow on the recipient device. Accordingly, converting a full quality I-frame, such as I-frame 120 into a smaller, lower quality I-frame I0 and a series of enhancement layers 140 can reduce the variance in frame size for frames sent over network 102. In various examples, encoder 111 may select the size of lower quality I-frame I0 when generating lower quality I-frame I0 from full quality I-frame 120. Encoder 111 may consider various factors when determining a size of lower quality I-frame I0 and/or when determining how many I-frame enhancement layers to generate for a particular full-quality I-frame 120. Such factors may include available bandwidth on a communication channel between transmitter 100 and recipient 130, average jitter and/or latency on the communication channel between transmitter 100 and recipient 130, the average size of inter-coded frames of video stream 142, and/or characteristics of recipient 130, such as a size of buffer 132 and/or a speed or type of decoding being used by decoder 131.
In various examples, encoder 111 may select particular inter-coded frames for combination with I-frame enhancement layers 140 so that the resulting hybrid blocks are less than or equal to a target frame size. Additionally, the lower quality I-frame I0 may be generated to be less than or equal to the target frame size. Accordingly, frame size may be normalized in the video stream 142. For example, a size of lower quality I-frames I0 may be selected that is within a tolerance band (e.g., +/−0.5%, 1%, 2%, 5%, 15%, 17%, 25%, 26.3%, etc.) of a target frame size. Similarly, particular inter-coded frames may be selected for combination with particular I-frame enhancement layers 140 so that the resulting hybrid blocks are within a tolerance band of the target frame size.
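The pairing just described can be sketched as a simple greedy scheduler. The target size, tolerance value, and greedy strategy below are illustrative assumptions; the disclosure only requires that resulting hybrid blocks fall within a tolerance band of the target frame size.

```python
# Hypothetical scheduling sketch: attach each enhancement layer to an
# inter-coded frame only if the combined hybrid block stays within a
# tolerance band around the target frame size. Thresholds are assumed.
TARGET = 50_000          # target frame size, in bits (illustrative)
TOLERANCE = 0.05         # +/- 5% tolerance band (illustrative)

def within_band(size_bits: int) -> bool:
    return abs(size_bits - TARGET) <= TARGET * TOLERANCE

def schedule_layers(inter_frame_sizes, layer_sizes):
    """Greedily pair each layer with the first frame that keeps the
    hybrid block inside the tolerance band; unpaired layers wait."""
    pairs, pending = [], list(enumerate(layer_sizes))
    for f_idx, f_size in enumerate(inter_frame_sizes):
        for j, (l_idx, l_size) in enumerate(pending):
            if within_band(f_size + l_size):
                pairs.append((f_idx, l_idx))
                pending.pop(j)
                break
    return pairs

print(schedule_layers([30_000, 42_000, 35_000], [18_000, 8_000, 15_000]))
```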
In the example depicted in
As illustrated in
As indicated by the arrow depicted within decoder 131 in
To continue the example, recipient 130 may receive hybrid block 208b at a time t2. Decoder 131 may separate I-frame enhancement layer data L2 (including image data of I-frame enhancement layer L2) from P-frame data 202b. P-frame data 202b may be stored in buffer 132. Decoder 131 may combine I-frame enhancement layer L2 with first enhanced quality I-frame 302 to produce a second enhanced quality I-frame 304. Adding additional enhancement layer data, such as pixel value updates and/or differences, to first lower quality I-frame 302 to produce second enhanced quality I-frame 304 may provide a better reference frame for subsequently-received inter-coded frames. Decoder 131 may store second enhanced quality I-frame 304 in buffer 132. Second enhanced quality I-frame 304 may be used as a reference to decode subsequently-received inter-coded frames while second enhanced quality I-frame 304 is stored in buffer 132.
Decoder 131 may continue to receive and separate hybrid blocks until a final hybrid block 208n is received for a particular full-quality I-frame 120 at a time tN. Decoder 131 may separate I-frame enhancement layer data LN from P-frame data 202n. P-frame data 202n may be stored in buffer 132. Decoder 131 may combine I-frame enhancement layer LN with the currently-stored enhanced quality I-frame (e.g., second enhanced quality I-frame 304 or the most recently stored enhanced quality I-frame) to reassemble full-quality I-frame 120. Decoder 131 may store full quality I-frame 120 in buffer 132. Full-quality I-frame 120 may be used as a reference to decode subsequently-received inter-coded frames until an instantaneous decode refresh frame (IDR frame) is used to clear buffer 132.
The process of
As described previously, each of enhancement layers L1, L2, and L3 may be used to improve lower quality I-frame I0. For example, enhancement layers L1, L2, and L3 may include image data combinable with lower quality I-frame I0 by an enhanced I-frame decoder of a recipient computing device to generate an enhanced quality I-frame I0′. In the example, an enhanced I-frame decoder may be effective to combine enhancement layers L1, L2, and L3 with lower quality I-frame I0 to reassemble the initial full-quality I-frame, such as I-frame 120 depicted in
The process may continue from operation 410 to operation 420 at which the first lower quality I-frame may be sent from the transmitter to the recipient device. For example, with reference to
The process may continue from operation 420 to operation 430 at which an enhancement layer of the plurality of enhancement layers is sent. For example, with reference to
The process may continue from operation 430 to operation 440 at which a determination is made whether or not additional enhancement layers corresponding to the lower quality I-frame are to be sent. For example, with reference to
If a determination is made that no further enhancement layers remain to be sent, the process may continue from operation 440 to operation 450 at which the next frame in the video stream may be sent. For example, with reference to
The process of
The process may continue from operation 510 to operation 520 at which an enhancement layer is received for the stored I-frame. For example, with reference to
The process may continue from operation 520 to operation 530 at which the received enhancement layer may be combined with an I-frame to generate an enhanced I-frame. For example, with reference to
The process may continue from operation 530 to operation 540 at which a determination is made whether additional I-frame enhancement layers have been received. If so, the process may return to operation 530 and the additional I-frame enhancement layers may be combined by decoder 131 with the corresponding I-frame stored in buffer 132. For example, if I-frame enhancement layer L2 is received, decoder 131 may determine that I-frame enhancement layer L2 corresponds to enhanced quality I-frame I0′. Accordingly, decoder 131 may combine data from I-frame enhancement layer L2 (e.g., image data) with enhanced quality I-frame I0′ to generate another enhanced quality I-frame I0″. Enhanced quality I-frame I0″ may be a better reference for subsequently-received inter-coded frames relative to enhanced quality I-frame I0′, as enhanced quality I-frame I0″ may comprise more detailed image data relative to enhanced quality I-frame I0′. The enhanced quality I-frame I0″ may be stored in a memory such as buffer 132, and in some cases, may overwrite enhanced quality I-frame I0′. As more I-frame enhancement layers are received, the reference quality may be progressively improved by combining the I-frame enhancement layers with the currently stored I-frame. In some examples, after receipt of all enhancement layers for a particular I-frame, decoder 131 may be effective to reassemble the full-quality I-frame (e.g., I-frame 120 depicted in
The process may continue from operation 540 to operation 550 at which the next frame in the video stream may be decoded. For example, the next frame in video stream 142 received by recipient 130 may be a P-frame. In such a case, the P-frame may use the enhanced I-frame stored in buffer 132 as a reference frame. Indeed, in many cases, the full-quality I-frame (such as I-frame 120 depicted in
Among other benefits, a system in accordance with the present disclosure may allow progressive coding of high quality I-frames (and/or other reference frames) while optimizing transmission characteristics of the bitstream. Sending lower quality reference frames may reduce jitter, latency and network traffic spikes during transmission. Additionally, reference frame enhancement layers may be used to progressively “regenerate” or “reassemble” the original, high-quality reference frame. Reference frame enhancement layers (e.g., I-frame enhancement layers) may be sent together with inter-coded frames, such as P-frames and/or B-frames. In some cases, particular inter-coded frames may be selected for combination with the reference frame enhancement layers such that the combined hybrid blocks are unlikely to cause network congestion or other performance issues, based upon currently available bandwidth. Inter-coded frames received subsequently to reference frame enhancement layers may benefit from the enhanced reference frame resulting from the combination of the lower quality reference frame and the reference frame enhancement layers. Such subsequently-received inter-coded frames may be decoded using the enhanced reference frame. In various examples, video segments that include a relatively static background without a large amount of motion being depicted from frame-to-frame may be especially beneficial to encode using the techniques described herein. For such video segments, a lower quality I-frame, or other reference frame, may be acceptable for decoding subsequent inter-coded frames until an enhanced quality I-frame, or other reference frame, can be built up via the subsequently received enhancement layers. Examples of such “static background” video segments may include video conferences and/or other video-chat. 
Video segments that have large amounts of motion being depicted from frame-to-frame may not be ideal for the techniques described herein, as such “high motion” video segments may benefit more from higher quality I-frames and/or from using additional bandwidth to enhance P-frames. Examples of such high motion video may include a sports video, a video depicting splashing water, a video depicting a car chase, or other videos with a rapidly changing background.
An example system for sending and providing data will now be described in detail. In particular,
These services may be configurable with set or custom applications and may be configurable in size, execution, cost, latency, type, duration, accessibility and in any other dimension. These web services may be configured as available infrastructure for one or more clients and can include one or more applications configured as a platform or as software for one or more clients. These web services may be made available via one or more communications protocols. These communications protocols may include, for example, hypertext transfer protocol (HTTP) or non-HTTP protocols. These communications protocols may also include, for example, more reliable transport layer protocols, such as transmission control protocol (TCP), and less reliable transport layer protocols, such as user datagram protocol (UDP). Data storage resources may include file storage devices, block storage devices and the like.
Each type or configuration of computing resource may be available in different sizes, such as large resources—consisting of many processors, large amounts of memory and/or large storage capacity—and small resources—consisting of fewer processors, smaller amounts of memory and/or smaller storage capacity. Customers may choose to allocate a number of small processing resources as web servers and/or one large processing resource as a database server, for example.
Data center 85 may include servers 76a and 76b (which may be referred to herein singularly as server 76 or in the plural as servers 76) that provide computing resources. These resources may be available as bare metal resources or as virtual machine instances 78a-d (which may be referred to herein singularly as virtual machine instance 78 or in the plural as virtual machine instances 78). Virtual machine instances 78c and 78d are rendition switching virtual machine (“RSVM”) instances. The RSVM virtual machine instances 78c and 78d may be configured to perform all, or any portion, of the techniques for improved rendition switching and/or any other of the disclosed techniques in accordance with the present disclosure and described in detail above. As should be appreciated, while the particular example illustrated in
The availability of virtualization technologies for computing hardware has afforded benefits for providing large scale computing resources for customers and allowing computing resources to be efficiently and securely shared between multiple customers. For example, virtualization technologies may allow a physical computing device to be shared among multiple users by providing each user with one or more virtual machine instances hosted by the physical computing device. A virtual machine instance may be a software emulation of a particular physical computing system that acts as a distinct logical computing system. Such a virtual machine instance provides isolation among multiple operating systems sharing a given physical computing resource. Furthermore, some virtualization technologies may provide virtual resources that span one or more physical resources, such as a single virtual machine instance with multiple virtual processors that span multiple distinct physical computing systems.
Referring to
Network 102 may provide access to computers 72. User computers 72 may be computers utilized by users 70 or other customers of data center 85. For instance, user computer 72a or 72b may be a server, a desktop or laptop personal computer, a tablet computer, a wireless telephone, a personal digital assistant (PDA), an e-book reader, a game console, a set-top box or any other computing device capable of accessing data center 85. User computer 72a or 72b may connect directly to the Internet (e.g., via a cable modem or a Digital Subscriber Line (DSL)). Although only two user computers 72a and 72b are depicted, it should be appreciated that there may be multiple user computers.
User computers 72 may also be utilized to configure aspects of the computing resources provided by data center 85. In this regard, data center 85 might provide a gateway or web interface through which aspects of its operation may be configured through the use of a web browser application program executing on user computer 72. Alternately, a stand-alone application program executing on user computer 72 might access an application programming interface (API) exposed by data center 85 for performing the configuration operations. Other mechanisms for configuring the operation of various web services available at data center 85 might also be utilized.
Servers 76 shown in
It should be appreciated that although the embodiments disclosed above discuss the context of virtual machine instances, other types of implementations can be utilized with the concepts and technologies disclosed herein. For example, the embodiments disclosed herein might also be utilized with computing systems that do not utilize virtual machine instances.
In the example data center 85 shown in
In the example data center 85 shown in
It should be appreciated that the network topology illustrated in
It should also be appreciated that data center 85 described in
In at least some embodiments, a server that implements a portion or all of one or more of the technologies described herein may include a computer system that includes or is configured to access one or more computer-accessible media.
In various embodiments, computing device 15 may be a uniprocessor system including one processor 10 or a multiprocessor system including several processors 10 (e.g., two, four, eight or another suitable number). Processors 10 may be any suitable processors capable of executing instructions. For example, in various embodiments, processors 10 may be embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC or MIPS ISAs or any other suitable ISA. In multiprocessor systems, each of processors 10 may commonly, but not necessarily, implement the same ISA. In an example where transmitter 100 (depicted in
In an example where recipient 130 (depicted in
In one embodiment, I/O interface 30 may be configured to coordinate I/O traffic between processor 10, system memory 20 and any peripherals in the device, including network interface 40 or other peripheral interfaces. In some embodiments, I/O interface 30 may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory 20) into a format suitable for use by another component (e.g., processor 10). In some embodiments, I/O interface 30 may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. In some embodiments, the function of I/O interface 30 may be split into two or more separate components, such as a north bridge and a south bridge, for example. Also, in some embodiments some or all of the functionality of I/O interface 30, such as an interface to system memory 20, may be incorporated directly into processor 10.
Network interface 40 may be configured to allow data to be exchanged between computing device 15 and other device or devices 60 attached to a network or networks 102, such as other computer systems or devices, for example. In various embodiments, network interface 40 may support communication via any suitable wired or wireless general data networks, such as types of Ethernet networks, for example. Additionally, network interface 40 may support communication via telecommunications/telephony networks, such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs (storage area networks) or via any other suitable type of network and/or protocol.
In some embodiments, system memory 20 may be one embodiment of a computer-accessible medium configured to store program instructions and data as described above for implementing embodiments of the corresponding methods and apparatus. However, in other embodiments, program instructions and/or data may be received, sent or stored upon different types of computer-accessible media. Generally speaking, a computer-accessible medium may include non-transitory storage media or memory media, such as magnetic or optical media—e.g., disk or DVD/CD coupled to computing device 15 via I/O interface 30. A non-transitory computer-accessible storage medium may also include any volatile or non-volatile media, such as RAM (e.g., SDRAM, DDR SDRAM, RDRAM, SRAM, etc.), ROM (read only memory) etc., that may be included in some embodiments of computing device 15 as system memory 20 or another type of memory. Further, a computer-accessible medium may include transmission media or signals such as electrical, electromagnetic or digital signals conveyed via a communication medium, such as a network and/or a wireless link, such as those that may be implemented via network interface 40.
A network set up by an entity, such as a company or a public sector organization, to provide one or more web services (such as various types of cloud-based computing or storage) accessible via the Internet and/or other networks to a distributed set of clients may be termed a provider network. Such a provider network may include numerous data centers hosting various resource pools, such as collections of physical and/or virtualized computer servers, storage devices, networking equipment and the like, needed to implement and distribute the infrastructure and web services offered by the provider network. The resources may in some embodiments be offered to clients in various units related to the web service, such as an amount of storage capacity for storage, processing capability for processing, as instances, as sets of related services and the like. A virtual computing instance may, for example, comprise one or more servers with a specified computational capacity (which may be specified by indicating the type and number of CPUs, the main memory size and so on) and a specified software stack (e.g., a particular version of an operating system, which may in turn run on top of a hypervisor).
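The notion of a virtual computing instance with a specified computational capacity and software stack can be sketched as a simple data structure. This is an illustrative sketch only; the type names, fields, and image identifiers below are hypothetical and not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InstanceSpec:
    """Illustrative description of a virtual computing instance."""
    vcpu_count: int   # type and number of CPUs, collapsed to a count here
    memory_gib: int   # main memory size
    os_image: str     # operating system / software stack identifier

def fits(spec: InstanceSpec, min_vcpus: int, min_memory_gib: int) -> bool:
    """Check whether an instance spec meets a client's minimum requirements."""
    return spec.vcpu_count >= min_vcpus and spec.memory_gib >= min_memory_gib

# Two hypothetical instance configurations offered by a provider network.
small = InstanceSpec(vcpu_count=2, memory_gib=4, os_image="linux-5.10")
large = InstanceSpec(vcpu_count=8, memory_gib=32, os_image="linux-5.10")
```

For example, `fits(large, 4, 16)` evaluates to `True` while `fits(small, 4, 16)` evaluates to `False`, mirroring how a client request for capacity might be matched against available instance units.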
A number of different types of computing devices may be used singly or in combination to implement the resources of the provider network in different embodiments, for example computer servers, storage devices, network devices and the like. In some embodiments a client or user may be provided direct access to a resource instance, e.g., by giving a user an administrator login and password. In other embodiments the provider network operator may allow clients to specify execution requirements for specified client applications and schedule execution of the applications on behalf of the client on execution platforms (such as application server instances, Java™ virtual machines (JVMs), general-purpose or special-purpose operating systems, platforms that support various interpreted or compiled programming languages such as Ruby, Perl, Python, C, C++ and the like or high-performance computing platforms) suitable for the applications, without, for example, requiring the client to access an instance or an execution platform directly. A given execution platform may utilize one or more resource instances in some implementations; in other implementations, multiple execution platforms may be mapped to a single resource instance.
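The many-to-one relationship between execution platforms and resource instances described above can be sketched as an inverted mapping. The assignment data here is hypothetical:

```python
from collections import defaultdict

def platforms_by_instance(assignment: dict[str, str]) -> dict[str, list[str]]:
    """Invert an execution-platform -> resource-instance assignment so each
    instance lists the platforms (e.g., JVMs, app servers) it hosts."""
    hosted: dict[str, list[str]] = defaultdict(list)
    for platform, instance in assignment.items():
        hosted[instance].append(platform)
    return dict(hosted)

# Hypothetical assignment: multiple execution platforms mapped to one instance.
mapping = platforms_by_instance({
    "jvm-1": "instance-a",
    "jvm-2": "instance-a",
    "app-server-1": "instance-b",
})
```

Here `mapping` groups `jvm-1` and `jvm-2` under `instance-a`, illustrating multiple execution platforms sharing a single resource instance, while `instance-b` hosts one platform.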
In many environments, operators of provider networks that implement different types of virtualized computing, storage and/or other network-accessible functionality may allow customers to reserve or purchase access to resources in various resource acquisition modes. The computing resource provider may provide facilities for customers to select and launch the desired computing resources, deploy application components to the computing resources and maintain an application executing in the environment. In addition, the computing resource provider may provide further facilities for the customer to quickly and easily scale up or scale down the numbers and types of resources allocated to the application, either manually or through automatic scaling, as demand for or capacity requirements of the application change. The computing resources provided by the computing resource provider may be made available in discrete units, which may be referred to as instances. An instance may represent a physical server hardware platform, a virtual machine instance executing on a server or some combination of the two. Various types and configurations of instances may be made available, including different sizes of resources executing different operating systems (OS) and/or hypervisors, and with various installed software applications, runtimes and the like. Instances may further be available in specific availability zones, representing a logical region, a fault tolerant region, a data center or other geographic location of the underlying computing hardware, for example. Instances may be copied within an availability zone or across availability zones to improve the redundancy of the instance, and instances may be migrated within a particular availability zone or across availability zones. As one example, the latency for client communications with a particular server in an availability zone may be less than the latency for client communications with a different server. As such, an instance may be migrated from the higher latency server to the lower latency server to improve the overall client experience.
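The latency-driven migration decision described above can be sketched as selecting the lowest-latency candidate server. The function name and latency values are hypothetical illustrations, not part of the disclosure:

```python
def pick_migration_target(latencies_ms: dict[str, float]) -> str:
    """Return the candidate server with the lowest measured client latency;
    an instance on a higher-latency server may be migrated here."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical round-trip latencies from a client to candidate servers.
latencies = {"server-a": 42.0, "server-b": 17.5, "server-c": 63.2}
target = pick_migration_target(latencies)
```

With these example measurements, `target` is `"server-b"`, so an instance currently on `server-a` could be migrated there to improve the client experience.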
In some embodiments the provider network may be organized into a plurality of geographical regions, and each region may include one or more availability zones. An availability zone (which may also be referred to as an availability container) in turn may comprise one or more distinct locations or data centers, configured in such a way that the resources in a given availability zone may be isolated or insulated from failures in other availability zones. That is, a failure in one availability zone may not be expected to result in a failure in any other availability zone. Thus, the availability profile of a resource instance is intended to be independent of the availability profile of a resource instance in a different availability zone. Clients may be able to protect their applications from failures at a single location by launching multiple application instances in respective availability zones. At the same time, in some implementations inexpensive and low latency network connectivity may be provided between resource instances that reside within the same geographical region (and network transmissions between resources of the same availability zone may be even faster).
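Launching application instances across respective availability zones, as described above, can be sketched as a round-robin placement so that a failure in any single zone does not take down all instances. The zone names are hypothetical:

```python
import itertools

def place_across_zones(zones: list[str], count: int) -> list[str]:
    """Spread application instances round-robin across availability zones,
    so a failure in any single zone leaves instances running in the others."""
    zone_cycle = itertools.cycle(zones)
    return [next(zone_cycle) for _ in range(count)]

# Five instances spread over three hypothetical availability zones.
placement = place_across_zones(["zone-1", "zone-2", "zone-3"], 5)
```

Here the first three instances each land in a distinct zone, so any single-zone failure leaves at least three instances running elsewhere.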
As set forth above, content may be provided by a content provider to one or more clients. The term content, as used herein, refers to any presentable information, and the term content item, as used herein, refers to any collection of any such presentable information. A content provider may, for example, provide one or more content providing services for providing content to clients. The content providing services may reside on one or more servers. The content providing services may be scalable to meet the demands of one or more customers and may increase or decrease in capability based on the number and type of incoming client requests. Portions of content providing services may also be migrated to be placed in positions of lower latency with requesting clients. For example, the content provider may determine an “edge” of a system or network associated with content providing services that is physically and/or logically closest to a particular client. The content provider may then, for example, “spin-up,” migrate resources or otherwise employ components associated with the determined edge for interacting with the particular client. Such an edge determination process may, in some cases, provide an efficient technique for identifying and employing components that are well suited to interact with a particular client, and may, in some embodiments, reduce the latency for communications between a content provider and one or more clients.
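The edge determination process described above can be sketched as choosing the lowest-latency edge and checking whether content-serving components must first be "spun up" there. The function name, edge names, and latency figures are hypothetical:

```python
def plan_edge(edge_latency_ms: dict[str, float],
              deployed: set[str]) -> tuple[str, bool]:
    """Pick the logically closest (lowest-latency) edge for a client and
    report whether a content-serving component must first be spun up there."""
    edge = min(edge_latency_ms, key=edge_latency_ms.get)
    return edge, edge not in deployed

# Hypothetical edges: "edge-east" already has a component deployed.
edge, needs_spin_up = plan_edge({"edge-east": 31.0, "edge-west": 12.5},
                                {"edge-east"})
```

With these example values, `edge` is `"edge-west"` and `needs_spin_up` is `True`: the closest edge is selected, but resources must be spun up or migrated there before serving the client.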
Certain methods or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments.
It will also be appreciated that various items are illustrated as being stored in memory or on storage while being used, and that these items or portions thereof may be transferred between memory and other storage devices for purposes of memory management and data integrity. Alternatively, in other embodiments some or all of the software modules and/or systems may execute in memory on another device and communicate with the illustrated computing systems via inter-computer communication. Furthermore, in some embodiments, some or all of the systems and/or modules may be implemented or provided in other ways, such as at least partially in firmware and/or hardware, including, but not limited to, one or more application-specific integrated circuits (ASICs), standard integrated circuits, controllers (e.g., by executing appropriate instructions, and including microcontrollers and/or embedded controllers), field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), etc. Some or all of the modules, systems and data structures may also be stored (e.g., as software instructions or structured data) on a computer-readable medium, such as a hard disk, a memory, a network or a portable media article to be read by an appropriate drive or via an appropriate connection. The systems, modules and data structures may also be sent as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission media, including wireless-based and wired/cable-based media, and may take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). Such computer program products may also take other forms in other embodiments. Accordingly, the present invention may be practiced with other computer system configurations.
Although the flowcharts and methods described herein may describe a specific order of execution, it is understood that the order of execution may differ from that which is described. For example, the order of execution of two or more blocks or steps may be scrambled relative to the order described. Also, two or more blocks or steps may be executed concurrently or with partial concurrence. Further, in some embodiments, one or more of the blocks or steps may be skipped or omitted. It is understood that all such variations are within the scope of the present disclosure.
It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and subcombinations are intended to fall within the scope of this disclosure.
In addition, conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps.
Although this disclosure has been described in terms of certain example embodiments and applications, other embodiments and applications that are apparent to those of ordinary skill in the art, including embodiments and applications that do not provide all of the benefits described herein, are also within the scope of this disclosure. The scope of the inventions is defined only by the claims, which are intended to be construed without reference to any definitions that may be explicitly or implicitly included in any incorporated-by-reference materials.
Number | Name | Date | Kind |
---|---|---|---|
9131110 | Yassur et al. | Sep 2015 | B2 |
9854270 | Ramasubramonian et al. | Dec 2017 | B2 |
20040001547 | Mukherjee | Jan 2004 | A1 |
20050012647 | Kadono et al. | Jan 2005 | A1 |
20060188025 | Hannuksela | Aug 2006 | A1 |
20090003440 | Karczewicz | Jan 2009 | A1 |
20100166058 | Perlman | Jul 2010 | A1 |
20120185570 | Bouazizi et al. | Jul 2012 | A1 |
20140082054 | Denoual et al. | Mar 2014 | A1 |
20140254669 | Rapaka | Sep 2014 | A1 |
20150049806 | Choi | Feb 2015 | A1 |
20150085927 | Sjöberg et al. | Mar 2015 | A1 |
20150139325 | Chuang | May 2015 | A1 |
20150334420 | DeVleeschauwer et al. | Nov 2015 | A1 |
20160150236 | Maeda | May 2016 | A1 |
20160330453 | Zhang et al. | Nov 2016 | A1 |
20170359596 | Kim et al. | Dec 2017 | A1 |
Entry |
---|
Author unknown; Adaptive Bitrate Streaming; Wikipedia; Retrieved Nov. 8, 2016 from https://en.wikipedia.org/wiki/Adaptive_bitrate_streaming. |