1. Field of the Invention
The present invention relates to a method and system for managing and controlling streaming in an on-demand server. More particularly, the present invention provides a centralized tool for asset management, content propagation, configuration and status monitoring, and real-time stream control so as to simultaneously service a large number of streams of audio, video, and/or other data formats being played out to customers on a network.
2. Description of the Art
In order for an on-demand service operator (ODSO) to provide content to its customers by way of a video on demand (VOD), television on demand (TOD), subscription video on demand (SVOD), or other on-demand service, the on-demand service operator has to integrate an on-demand server into the operational framework of the network or ODSO distribution system. The main areas of functionality required to deliver the service in the ODSO network or distribution system are: on-demand server provisioning, content ingest management, session setup/stream management, and on-demand service assurance. Heretofore, various proprietary Business Management Systems (BMS) required inordinate labor and equipment to establish and implement an on-demand service. A need developed for an open BMS standard configured to a memory-based on-demand server application.
An open BMS according to the ISA standard from Time Warner Communications' Pegasus program is an integrated BMS platform providing the service-level framework for services like MOD and PPV. Other standards include J2EE of Telenet, RTSP by nCube, a combination of RTSP plus LSCP of UPC, and RTSP plus XML by NGOD. However, problems arose in implementing the ISA standard because proprietary BMS were configured to proprietary disk-based on-demand servers such as systems from Seachange, Kasenna, Concurrent, Arroyo and the like. Additional costs, from programming to maintenance, are associated with such disk-based on-demand systems. As a result, a need developed for a cost-effective, efficient open on-demand server having an open BMS configured to the ISA standard or other open standard. The need is satisfied by the memory-based on-demand server of Broadbus Technologies, Inc., manufactured under the name B-1, as it provides an open BMS platform configured to be integrated and to provide video on demand (VOD), subscription video on demand (SVOD), television on demand (TOD), and audio on demand (AOD) services by streaming such content to customers requesting the content. Moreover, a need developed for management and control of the massive streaming density of a memory-based on-demand server and other functions, as is satisfied by the present invention. The present invention's management and control of the massive streaming density advantageously uses a graphical user interface to provide point-and-click access to key functions including: Server Configuration; Asset Management; Content Ingest; Content Propagation; Real-Time Streaming Status and Monitoring; Alarm Management and Problem Resolution; Meaningful Service Statistics; and Customized Reporting.
The invention described in the instant application overcomes these limitations and more by providing a method and system that permits dynamic management of content in the memory-based on-demand server to maximize the streaming to the user interface. The invention can be implemented by one or more software modules to effectuate a client-server architecture. Some embodiments of the invention also facilitate better designs for devices designed to operate in a manner to enable dynamic bandwidth balancing across multiple, memory-based VOD servers so as to maximize concurrency.
Such advantages are achieved using the flexibility of an open protocol platform for the BMS while taking advantage of the dynamic ability of a memory-based on-demand server. In particular, devices connected in an on-demand system can advantageously be used for reporting, messaging, and other streaming control functions without utilizing bandwidth when carrying out such control functions. In addition, the management and control of streaming is accomplished in a way that conserves bandwidth resources and allows devices to change their bandwidth requirements dynamically without significantly affecting streaming to users.
It should be understood, as would be apparent to one of ordinary skill in the art, that many embodiments of the invention are possible, which do not necessarily require changes to the operation of a memory-based on-demand server and system, and are not limited to the management and control of streaming.
Additional features and advantages of the invention will be made apparent from the following detailed description of some of the many possible illustrative embodiments, which proceeds with reference to the accompanying figures. A method and system is disclosed for dynamically managing content in a memory-based on-demand server to maximize the streaming to the user interface, maximizing concurrent streams, generating reports, and dynamically balancing on-demand server platforms to balance the content in memory across multiple, memory-based on-demand servers so as to maximize concurrency, memory usage, modeling, reporting and television on demand.
It is an object and advantage of the present invention to manage content in the memory-based on-demand server to maximize the streaming to the user interface. The method and system may include ingesting, managing and streaming a content object file. The method and system is configured to schedule ingest, utilize RAM memory efficiently, and balance the content object file across and/or between one or more on-demand servers; and is configured to enable direct communications with the BMS, other third-party components, and on-demand server components including server, blade and port status. The method and system is configured to maintain concurrency of streams of the content object file, and to dynamically control and optimize memory utilization while minimizing access to disk storage by monitoring RAM usage so as to remove content from resident RAM memory and page the same adaptively, and to eliminate creating separate files for trick playback of content streamed to a customer.
It is an object and advantage of the present invention to balance the content in VOD memory across multiple, memory-based on-demand servers so as to maximize concurrency, modeling of concurrency for television on demand, memory usage, and reporting data for streaming content over time.
It is an object and advantage of the present invention to utilize an open interface (gSOAP, HTTP, RTSP) that can be managed by third parties in a memory-based on-demand server.
It is an object and advantage of the present invention to ingest and load balance content made available for television-on-demand across multiple, memory-based on-demand servers and multiple ports.
It is an object and advantage of the present invention to provide an adaptive open architecture for integrating the on-demand server with differing BMS protocols without disrupting its core.
It is an object and advantage of the present invention to control streams, devices and other server components on a system, blade and port basis to allow for the decommissioning of disabled components prior to scheduling ingests into the memory-based on-demand server.
In the preferred embodiment of the present invention, a system for controlling streaming of content from the memory-based on-demand server consists of a processor and memory coupled to the processor, where the processor ingests content over a protocol bus; the processor repeatedly (a) establishes a content session for obtaining content by interfacing with a business management system (BMS); (b) ingests content as an object by interfacing with an Asset Management System (AMS) and/or a catcher; and/or (c) ingests movies-on-demand (MOD) applications (APPS); and the processor controls the streaming of the content stored in memory of the memory-based on-demand server to a customer on demand.
These and other advantages of the present invention are best understood with reference to the drawings, in which:
The system and method for managing and controlling the streaming from an on-demand server of the present invention is configured to utilize, for example, a solid state, memory-based, on-demand server manufactured by Broadbus Technologies, Inc. under the tradenames B-1 and SBB-1, integrated into the operational framework of an on-demand service operator (ODSO) or other network distribution system, with or without the support of the Business Management System (BMS) platform, to provide video on demand (VOD), subscription video on demand (SVOD), television on demand (TOD), and audio on demand (AOD) services by streaming such content to customers requesting the content. The streaming controller of the present invention operates on a server for controlling the memory-based, on-demand server, which can be one or more on-demand servers that store content files on an external RAID 5 storage array, consisting of hardware components and integrated software capable of ingesting and streaming content. The streaming controller has many functional units that may be implemented in hardwired circuitry, by programming a general purpose processor, or by any combination of hardware and software, as is illustrated in the drawings and detailed description, which show that each of the functional units can correspond to a sequence of instructions stored in memory.
Referring to the description herein, some terms should be defined for the benefit and clear understanding of the present invention. The BMS is a network-based application that manages interactive digital product offerings within the Interactive Services Architecture (ISA) headend. The ISA is a Time Warner Pegasus specification that defines a common interface and framework for adding digital video services. As used herein, an asset is a content file with supporting metadata that can be ingested into the on-demand server and streamed to a destination STB. Content files typically are digital files containing audio and video encodings as defined by the Moving Pictures Experts Group (MPEG). The design of the stream control process application is fully compatible with the ISA specification and is independent of the BMS integration. Certain functions and requirements are needed to provide network resource allocation and a management and control system (MCS) such as, for example, as deployed in an ISA-conformant cable network. When integrated with an ISA-based cable network, the BMCS is configured to work with other third-party devices or entities within the network to identify, reserve, and release the resources needed to deliver content to downstream Set Top Boxes (STBs), DSL and/or cable modems. An STB is also known as a Digital Home Communications Terminal (DHCT), which is a computing device capable of receiving and decoding content streams and that typically runs the client-side movie on demand (MOD) application.
Referring to
Throughout this detailed description, content 22 can generally refer to data, audio, visual or audio-visual works such as, for example, a movie file in an MPEG format that is played out to a customer's TV set. It will be appreciated, of course, that the content 22 and examples chosen by the applicant to illustrate the present invention, namely, pulling and playing a movie file to a customer's TV set, were chosen for ease of illustrating the principles of the method and system of managing the resources of a RAM-based on-demand server according to an exemplary embodiment of the present invention. Content 22 also can be obtained and streamed out to a customer in other applications and formats. For example, the content 22 can be another form of data and in another format, specifically, an audio MP3 file. Moreover, data comes in numerous other formats that are streamed to a customer on demand, where it is still advantageous to manage server resources when the server is providing streaming to many, multiple end users in a way to display and seamlessly play the requested content 22. As a result, managing information dynamically using a volatile memory-based on-demand server across a world-wide network has other applications such as, for example, managing instant messaging, e-mail, streaming video conferencing, audio communications and the like.
Referring to
Referring to
The streaming controller 72 has several internal processes functioning to configure the system 112 (both the on-demand server 74 and the resource allocator 120), an ORB 114 connection for CORBA, processing of error messages and system alarms 116, a network management system (NMS) 118 (typically operating the SNMP standard), and the resource allocator 120, which configures both adaptive system interface (ASI) and GigE ports. The stream controller's 72 major functions are (1) to configure 112 the on-demand server 74 and provide stream data to the resource allocator 120; (2) to ingest content 22; (3) to stream the content from the on-demand server 74; and (4) to monitor system alarms, error messages and SNMP messages.
The streaming controller 72 is configured to dynamically control streaming functions such as, for example, loading content 22 entirely into memory 64, loading portions or segments of content 22 into memory 66, managing near-term-storage (NTS) bandwidth 68 limits, and managing playback functions 70 such as trick play modes and analyzing the speed of playing out the trick play in the stream to customers on demand. For example, in the on-demand server of the present invention, content 22 demanded by an end user can be either pulled entirely into memory or pulled into memory in segments from disk storage 26. Information about streams must be maintained for several reasons, including recovery information from Resource Allocation status so as to recover, for example, after a failure of the Stream Controller 72. Another reason to maintain information about the streams is more historical: to maintain information about subscriber behavior like the number of trick commands and their type, pause duration, etc.
The Stream Controller 72 records information about active and suspended streams, whereby active streams are streamed from the on-demand server 28 and suspended streams are streams for which the session was destroyed but the media file was not streamed to the end by the Stream Controller 72; for example, when the user resumes watching a paused program content 22, the Stream Controller 72 can provide index information to resume the playback. The Stream Controller 72 operates to receive packages containing assets and MPEG content files sent from a media provider or other asset owner to the cable or satellite network. The cable or satellite network uses a catcher 122 to receive the packages and assets, which the catcher forwards to the asset manager application. The catcher 122 is an application that manages the transfer of packages from the pitcher and then transmits the assets to the on-demand server system. The asset manager application records the association between the assets and their related content files and signals the FTP transfer of the content 22 from the catcher to the on-demand server 28. The Stream Controller 72 maintains the association between the contents 22 and the on-demand servers 28 that store them on its NTS 26. The main functionality of the Stream Controller 72 is to:
Referring to
In the implementation with the cable network operator, the stream controller 72 is configured to control one or more of the on-demand servers 130 in order to provide the functionality required to ingest content files, set up VOD sessions at subscriber request and record status information about VOD streams. An advantage of the present invention is to integrate and utilize the BMS framework and/or system existing at the cable operator site. As illustrated in
In a hardware context, concurrency is the number of streams requesting a piece of content. Resident content means content entirely contained in the memory of the on-demand server 28. Segmented content is a segment, page or tile of content contained in the memory of the on-demand server 28, whereby only a window around the current stream location is kept in memory. A segment, for purposes of this patent application, is an 8 megabyte piece of content 22. Load is meant to indicate making a content 22 resident in the memory of the on-demand server 28. A credit is meant to be a portion or chunk of near-term-storage (NTS) bandwidth (BW), which in the present application is set to a throughput of four (4) megabits per second (Mbps), and the stream's rate determines the number of credits required by the stream. An overlap occurs each time one or more streams use the same segment of content 22 at the same time. A memory limit is the total memory capacity or amount of memory that can be allocated to streaming. A bandwidth limit is a limit on the total bandwidth that can be allocated, whereby setting the bandwidth limit too high may cause trick modes to stall due to unavailable bandwidth (BW). A segmented or paging trick play speed limit means the maximum speed at which a stream of segmented content 22 is allowed to be played out in trick play mode, which has an impact on bandwidth requirements.
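The quantities defined above can be illustrated with a short sketch. The function names, the set-based stream model, and the ceiling-based rounding of partial credits are illustrative assumptions, not part of the specification:

```python
import math

SEGMENT_BYTES = 8 * 1024 * 1024   # a segment (tile) is an 8 MB piece of content
CREDIT_MBPS = 4.0                 # a credit is a 4 Mbps chunk of NTS bandwidth

def credits_required(stream_rate_mbps: float) -> int:
    """Credits a stream consumes, rounding any partial credit up."""
    return math.ceil(stream_rate_mbps / CREDIT_MBPS)

def can_admit(active_credits: int, new_rate_mbps: float,
              bw_limit_credits: int) -> bool:
    """Admit a new stream only if the NTS bandwidth limit is respected;
    an over-generous limit risks trick modes stalling."""
    return active_credits + credits_required(new_rate_mbps) <= bw_limit_credits

def concurrency(streams: list[str], title: str) -> int:
    """Concurrency: the number of streams requesting a piece of content."""
    return sum(1 for s in streams if s == title)
```

For example, a 3.75 Mbps MPEG-2 stream fits in one credit, while an 8 Mbps stream consumes two.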
The system and method for managing and controlling streaming can be configured as software utilizing a web-based graphical user interface to advantageously provide point-and-click access to all of the key functional areas of managing networked memory-based on-demand servers including:
The streaming controller includes a resource allocator that is a subsystem responsible for determining the best path for each stream and negotiating for the required Quaternary/Quadrature Amplitude Modulation (QAM) resources. The resource allocator, using information provided during the provisioning process, formulates a map of all possible stream routes, factors in current stream activity, and calculates the best stream path based on on-demand server and QAM resource utilization.
The streaming controller includes a Session and Resource Manager (SRM), which manages and controls network resources, and a Session Gateway, which translates between the DSM-CC world of the SRM and the CORBA-based ISA world. The SRM cannot answer the question "which resources should be used;" it merely answers whether a resource can be used. To set up a session, the BMCS must identify to the SRM the specific resources required, hence the need for a component that answers the "which" question. Also, not all of the required resources are visible to the SRM (for example, Harmonic NSG boxes), and so they must be managed by something within the VOD Server/BMCS.
The on-demand Manager of the present invention enables the cable service operator to configure and control one or more VOD servers in order to provide the functionality required to ingest content files, set up VOD sessions at subscriber request and record status information on VOD streams. The VOD Manager must be able to set up stream-based sessions in cooperation with existing cable network resource management systems. The resource allocator part of the VOD Manager allows the VOD Manager to do this intelligently, and to manage devices installed with the on-demand server that are not visible to the existing resource managers.
As is illustrated in
Referring to
Referring to
Other aspects of the stream commander software in conjunction with the memory based on demand server are described herein.
1. On-Demand Server Integration
a. On-Demand Server Configuration (Blades, Ports)
The system components required for managing and controlling the streaming from an on-demand server for the streaming controller of the present invention include a client-server architecture coupled to a BMS. The client requirements include a web browser such as, for example, Internet Explorer 6.0 or later (Windows), Netscape Navigator 7.0 or later (Windows), or Mozilla 1.0.1 or later (Linux), with Java and JavaScript enabled on such browser. The server requirements include a computer with 512 MB of RAM, a Pentium [6] 2.4 GHz processor, Red Hat Linux AS 3.0, and an Oracle 10g database. The stream control process requires the presence of the following ISA components to ensure integration within the headend: Catcher/Asset Management System (AMS), BMS, MOD Client/Server Applications (ShowRunner/XOD), and SA DNCS/SRM. The streaming controller software application hooks into the ISA bus by registering with the CORBA naming service running on the Business Management System (BMS). This enables direct communications with the BMS and other third-party ISA components. The streaming controller communicates with the on-demand server over the 10/100 Ethernet management port. The streaming controller client provides an intuitive graphical user interface (GUI) that enables central management of one or more on-demand servers.
The streaming controller system can be deployed initially for the configuration, monitoring and management of one or more on-demand servers, whereby the streaming controller process and/or application software features the ability both to set and to view configuration parameters on servers and to ensure that servers are online and accessible. The streaming controller process and/or application software also displays hierarchical object trees that illustrate the one or more on-demand servers and device topologies, which can be represented as selectable objects within collapsible explorer trees for point-and-click navigation across systems and components. Before one or more on-demand servers can be controlled and managed, they must be added to the streaming controller process and/or application software, which advantageously utilizes a browser-based GUI enabling the management of on-demand servers from any supported network-accessible Web browser. The present invention can utilize a wizard to assist in easily adding one or more on-demand servers by specifying the name and IP address of each server. In operation, the streaming controller process utilizes such defined name string to label each on-demand server within GUI displays and uses such defined IP address to locate each on-demand server. During an add operation, the streaming controller adds each on-demand server to its local database and attempts to synchronize with the server to obtain the latest topology and configuration. The management IP address of each on-demand server can also be set through the system Command Line Interface (CLI) using the shelf ip address command, and the current management IP address can be verified using the show shelf command. If the IP address is not reachable, the on-demand server added to the server database will appear, but the streaming controller will be unable to synchronize with it to obtain the latest configuration.
A failed sync operation returns the following error: "Failed to synchronize with on-demand server; use Sync button on server configuration." If this message appears, ensure that the on-demand server is online and perform a manual sync operation against the newly added server. Before adding new on-demand servers, the storage cluster must be added to the associated on-demand servers.
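The add-and-synchronize behavior described above might be modeled as in the following sketch; the dictionary-based database and the reachability flag are hypothetical stand-ins for the streaming controller's actual internals:

```python
def add_server(db: dict, name: str, ip: str, reachable: bool) -> str:
    """Add an on-demand server to the local database, then attempt to
    synchronize with it to obtain the latest topology and configuration."""
    # The server is recorded in the database regardless of reachability.
    db[name] = {"ip": ip, "synced": False, "topology": None}
    if not reachable:
        # Added but not synchronized: a manual sync is required later.
        return "Failed to synchronize with on-demand server; use Sync button"
    # In the real system, the topology (blades, ports) is fetched here.
    db[name]["topology"] = {"blades": [], "ports": []}
    db[name]["synced"] = True
    return "ok"
```

Either way the server appears in the database; only the sync status differs, matching the failed-sync behavior described above.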
The process for managing and controlling the streaming from the on-demand server is configured with a graphical user interface (GUI) in a Web browser running on a workstation that has network access to the streaming controller application server and provides a real-time listing of alarms and events across all on-demand systems. Access to the streaming controller software is provided by a welcome page and password protection. The GUI generally has a menu bar and an object tree and provides a visual indication of current alarm conditions as follows:
The streaming controller process and/or application software can have a Control Room page, a stream control Menu Bar page, and an Object Tree page. The Control Room page provides a real-time listing of alarms and events across all systems. The Object Tree page provides a hierarchical representation of on-demand server systems and allows for easy navigation across system components. The stream control Menu Bar page provides access to the stream control primary functional areas, thereby remaining accessible throughout all displays that launch in the stream control main window; the menu bar page includes: Server Management, Services, Control Room, Resources, Administration, Reports, Help, and Logout.
b. Discovering the On-Demand Server Configuration
The process for managing and controlling the on-demand server includes a dynamic discovery of each on-demand server's topology (blades and ports) and current configuration values, which can be presented on a Server Topology Discovery page. Moreover, a region is a logical grouping of one or more headends, enabling configuration of the name and description of the selected region. In the stream control process object hierarchy, a headend is a logical grouping of one or more on-demand servers. The headend object provides access to the following screens:
A storage cluster is an administrative grouping of one or more on-demand servers that share the same external storage. The on-demand server storage cluster object provides access to screens enabling the provision of the cluster and information about storage volumes contained in the cluster. Tabs at the storage cluster level of the object hierarchy include:
A server object represents an on-demand server and provides access to the following screens:
A blade object represents a single on-demand server blade and provides access to the following screens:
A port object represents a single port, or network interface, on the on-demand server and provides access to the following screens:
As shown in
c. Content Synchronization
As shown in
d. Stream Synchronization
As shown in
In operation, the procedure to add and synchronize the on-demand server includes: (1) entering the name, management IP address, and network mask for the on-demand server to be added; (2) in the stream control process, selecting the Storage Cluster to which this headend belongs; and (3) performing the stream sync operation. When performing a Stream Sync operation on 1024 streams, the user specifies a Stream ID from which to start the stream synchronization process; the stream control process synchronizes 1024 Stream IDs at a time, starting with the Stream ID specified as the starting point for the Stream Sync. For example, if the operator specified 10000 as the Stream ID from which to start synchronizing, the stream control process retrieves a list of streams that have Stream ID values from 10000 to 11023, and deletes any stream that exists on the on-demand server but for which the stream control process has no record.
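One way to picture the windowed synchronization just described is the sketch below, in which server-side streams unknown to the stream control process are deleted within a 1024-ID window; the set-based model of stream tables is an illustrative assumption:

```python
WINDOW = 1024  # Stream IDs synchronized per Stream Sync operation

def stream_sync(server_streams: set[int], known_streams: set[int],
                start_id: int) -> set[int]:
    """Return the server's stream set after one sync window: any stream
    in [start_id, start_id + WINDOW) that exists on the on-demand server
    but is unknown to the stream control process is deleted."""
    window = range(start_id, start_id + WINDOW)
    unknown = {sid for sid in window
               if sid in server_streams and sid not in known_streams}
    return server_streams - unknown
```

Streams outside the window are untouched, which is why the operator may need to run the operation repeatedly with successive starting Stream IDs.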
The streaming controller process and/or application software can perform a stream sync operation against each on-demand server to remove unknown or illegal (not known by the stream control process) streams, whereby the software can instruct the user to perform a stream sync operation by following these steps:
e. Events/Alarms from On-Demand Server
The Control Room page of the streaming controller process and/or application software provides a complete real-time listing of alarm conditions and events across all on-demand server systems in several categories, including SNMP Event Notifications, Multi-Level Alarm Views, Alarm Severity Levels, and Alarm Details. The alarm categories provide real-time alarm monitoring at the server, blade, and port component levels and supportive drill-down menus to detailed alarm information. For example, the streaming controller process and/or application enables dynamic notification of on-demand server events using Simple Network Management Protocol (SNMP) traps, and captures such information and populates the stream database. The streaming controller process and/or application can report alarms in categories of Alarm Severity Levels such as, for example, Critical, Major, Minor, and Informational. Moreover, the software can visually display Alarm Severity Levels in a traffic light icon in the upper-right corner of the streaming controller process display, providing a visual notification of the highest-level alarm condition reported by any server in the stream control process management domain. Alarm counts next to each severity light indicate the total number of active (uncleared) alarms as follows:
As shown in
Initially, the stream control process ships with a default storage cluster already added to the stream database, labeled as StorageCluster1 in the object tree. Any other on-demand server added to the stream control process thereafter can belong to the default storage cluster or be assigned to a new cluster by entering a name and description for the new storage cluster; refreshing the Object Tree will then show the new cluster. Each additional on-demand server added to the stream control process can create multiple storage clusters, whereby multiple storage clusters reflect how the on-demand servers and storage arrays are physically grouped. Moreover, all on-demand servers in a storage cluster share the same external storage or otherwise have a shared content library, whereby on-demand servers within a storage cluster can stream content contained on any storage array in the same cluster, enabling multiple on-demand servers to share the same content libraries and, advantageously, eliminating the need to propagate content across multiple server systems. The cluster makes new content available to all on-demand servers using a single content ingest, as shown in
As shown in
Cluster configuration involves specifying one on-demand server as master and one or more on-demand servers as slaves. The stream control process facilitates provisioning of storage clusters by allowing you to specify the on-demand server to function as master, then automatically defaulting the remaining servers in the cluster to slaves. If only a single on-demand server exists in a storage cluster, the on-demand server functions as its own master and is considered a cluster of 1, in which case no additional configuration is required. If you have added more than one on-demand server to the same storage cluster, you must perform the configuration described in this section. You can delete storage clusters that you no longer need. When you delete a storage cluster, all on-demand servers within the cluster are removed as well. To remove a storage cluster from the stream control process, follow these steps:
a. Configure an On-Demand Server Cluster
Storage Cluster—An administrative grouping of on-demand servers that share the same external storage. Enables multiple on-demand servers to use the same external storage volumes.
b. Configure Multiple Standalone VOD Servers
3. BMS Integration
a. Ingest Content
4. Additional SC Functionality
a. Manual Ingest on VOD Server
The stream control process enables you to perform manual asset ingests. During the manual ingest operation, the stream control process performs a pull content ingest operation. Pull content ingest refers to the ability of the master on-demand server in the storage cluster to connect to a remote system (such as a catcher or Asset Management System) and initiate transfer of the content into the storage cluster. As part of the manual pull ingest operation, you must specify the FTP URL of the content file. The stream control process downloads this URL to the master on-demand server. The on-demand server then uses this URL to initiate FTP transfer of the content file as shown in FIG. Manual content ingest overrides normal ISA channels and is provided for testing and integration purposes only: 1. The on-demand server initiates an FTP connection to the remote server (catcher); and
2. Content is transferred to the VOD server using FTP.
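The two-step pull-ingest flow above might be sketched as follows; the cluster role dictionary and the returned transfer record are illustrative assumptions, not the product API:

```python
from urllib.parse import urlparse

def manual_pull_ingest(cluster: dict, ftp_url: str) -> dict:
    """Validate the content file's FTP URL, route it to the cluster
    master, and record the resulting transfer request."""
    parsed = urlparse(ftp_url)
    if parsed.scheme != "ftp":
        raise ValueError("manual pull ingest requires an FTP URL")
    # The URL is handed to the master on-demand server in the cluster.
    master = next(s for s, role in cluster.items() if role == "master")
    # The master would now open an FTP connection to the catcher and
    # pull the content file into the storage cluster.
    return {"server": master, "host": parsed.hostname, "path": parsed.path}
```

Because the master performs the transfer, the content lands once in the shared storage cluster and becomes available to every server in it.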
b. Manual Trick Commands on On-Demand Server via SOAP
The streaming controller process and/or application software provides for streaming from the on-demand servers directly from Dynamic Random Access Memory (DRAM). Before the on-demand server can stream content, it must retrieve the content from near term storage (NTS) and load it into memory. As is explained more fully in copending U.S. patent application Ser. No. ______, the on-demand server of the present invention can support two content management modes that dictate how content is handled within the system during stream operations: content paging and memory resident content. Content paging is the process by which the on-demand server loads into memory only the portion of content it requires to stream at a given moment. Content paging helps to conserve on-demand server DRAM when supporting high numbers of unique content streams. When paging content, the on-demand server logically segments the content into 8 MB portions, referred to as tiles, and retrieves from NTS only the tiles that it needs to stream at a given moment. As tiles are streamed, they are removed from memory and replaced with new tiles as required for seamless continuity of the stream. Memory resident content involves loading the entire content file into memory and keeping it there until streaming is complete. Memory resident content helps to maximize performance when the same content must support high numbers of streams. The on-demand server dynamically marks as memory resident only the content that is servicing the most streams. When content is marked as memory resident, the on-demand server loads the entire content file into memory and keeps it there; this prevents the on-demand server from having to continuously retrieve portions of the content from external storage.
The ability to stream both paged and memory resident content enables intelligent utilization of on-demand server DRAM to ensure the highest performance for the most popular content, whereby the on-demand servers use a dynamic paging algorithm to determine which content to make memory resident and which content to page. Dynamic content paging refers to the on-demand server's ability to swap content between paging and memory resident modes as it determines which content to page and which content to mark as memory resident. The on-demand server makes this determination based on use count. This dynamic paging algorithm prevents you from having to manually designate which content to page and which content to make memory resident. Use count is defined as the number of streams currently playing the content. For example, if two streams are playing the same content, that content has a use count, or concurrency, of two. Only content with the highest use counts is marked as memory resident. As additional streams are created and use counts fluctuate, content is automatically swapped between memory resident and paging modes so that only content possessing the highest use counts is memory resident. This ensures the greatest performance for content servicing the highest number of streams. Memory resident content remains in memory until replaced by content with a higher use count, at which time the displaced content reverts back to paging mode.
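The use-count policy above can be sketched in a few lines. The fixed number of memory-resident slots is an illustrative assumption; the specification describes only that the content with the highest use counts is made memory resident and displaced content reverts to paging.

```python
class DynamicPager:
    """Sketch of the use-count policy described above: content servicing
    the most streams is marked memory resident, up to a fixed number of
    slots (the slot limit is an assumption made for illustration)."""

    def __init__(self, resident_slots=2):
        self.resident_slots = resident_slots
        self.use_counts = {}  # content name -> number of active streams

    def stream_created(self, content):
        self.use_counts[content] = self.use_counts.get(content, 0) + 1

    def stream_destroyed(self, content):
        self.use_counts[content] -= 1

    def memory_resident(self):
        # Only the content with the highest use counts is memory resident;
        # all other content is paged.
        ranked = sorted(self.use_counts, key=self.use_counts.get, reverse=True)
        return set(ranked[:self.resident_slots])
```

As streams are created and destroyed, a fresh call to `memory_resident()` reflects the swap between modes: content whose use count falls below another title's is displaced and reverts to paging, exactly as the paragraph describes.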
5. Stream Management
a. Active Streams
As shown in
b. Stream History
The streaming controller process and/or application software provides the ability to view the complete stream history for all on-demand servers. As streams are destroyed, they are removed from the active stream table and placed in the Stream History report. Moreover, as described herein, the user can generate reports for valid active streams, that is, streams initiated by the stream control process as a result of normal ISA operations.
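The active-table/history bookkeeping described above can be sketched as follows; the record fields and method names are assumptions for illustration, not the actual Stream Commander interface.

```python
class StreamTables:
    """Sketch of the stream bookkeeping described above: active streams
    live in the active stream table, and destroyed streams move to the
    Stream History report."""

    def __init__(self):
        self.active = {}    # stream id -> stream record
        self.history = []   # records of destroyed streams, in order

    def create(self, stream_id, record):
        self.active[stream_id] = record

    def destroy(self, stream_id):
        # Remove from the active stream table and append to Stream History.
        self.history.append(self.active.pop(stream_id))
```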
6. Content Management
The streaming controller process and/or application software provides the ability to view a complete listing of assets (Asset List) that have been ingested into all on-demand server systems; storage management features (Storage Management) that enable you to view and manage content contained in near-term storage (NTS) volumes; and a content synchronization function (Content Sync) that synchronizes content between the stream control process and the on-demand server to ensure that near term storage (NTS) contains only content that the stream control process can use.
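The Content Sync idea above reduces to a set difference: anything in NTS that the stream control process does not know about is a candidate for removal. The sketch below is a simplification under that assumption; the real operation runs between the stream control process and the on-demand server rather than over in-memory lists.

```python
def content_sync(nts_contents, known_assets):
    """Sketch of Content Sync as described above: NTS should contain
    only content the stream control process can use, so anything in
    NTS that is not a known asset is returned for deletion."""
    return sorted(set(nts_contents) - set(known_assets))
```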
a. Assets
The streaming controller process and/or application software displays detailed information about the selected asset and enables you to perform the following actions:
The streaming controller process and/or application software provides a Configuration tab at the blade level to access key blade parameters, which is important in the stream control process for the on-demand server's topology, including blades and ports. The streaming controller process is configured to verify that each port is online and configured to perform its intended function: ingest, stream, ingest and stream, or storage. The streaming controller verification process discovers the correct on-demand server topology and ensures that the blades and ports are online and configured to perform their intended functions, as follows:
b. Ports
The streaming controller process and/or application software provides a Configuration tab at the port level so as to determine and configure port connectivity. Ports are the physical interfaces that provide network connectivity. Blades are available in both 4-port and 8-port configurations and come equipped with Gigabit Ethernet ports, Fibre Channel ports, or both. While Fibre Channel ports are used only for connection to external storage, Gigabit Ethernet ports can be configured to perform one of the following functions:
c. Ingest
Content ingest is the process of ingesting physical content files (MPEG-2, etc.) into the on-demand server system (SBB-1™ or B-1™) for streaming and storage. The system uses File Transfer Protocol (FTP) to transfer content from a remote server (such as a catcher) to one or more ingest ports on the on-demand server. The system supports both FTP “pull” and FTP “push” content ingest. In the FTP “pull” content ingest process, the on-demand server essentially functions as an FTP client and initiates transfer of the content from the remote server. In the FTP “push” content ingest process, the on-demand server functions as an FTP server and allows a remote client to initiate FTP transfer of the content to the on-demand server. Pull content ingest refers to the ability of the on-demand server to pull content from a remote server (catcher) and is typically used within video on demand applications where content already exists and is available ahead of time.
In a pull ingest operation, stream control process provides the on-demand server with the FTP URL describing the location of the content. After receiving this URL, the on-demand server initiates transfer of the content from the remote location.
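The pull ingest hand-off above can be sketched as follows. The client factory is injected so the sketch can run without a live catcher; in practice the on-demand server's FTP client role could be played by something like Python's `ftplib.FTP`. The URL, host, and callback shape here are illustrative assumptions.

```python
from urllib.parse import urlparse


def pull_ingest(ftp_url, ftp_client_factory):
    """Sketch of a pull ingest operation as described above: the stream
    control process supplies an FTP URL, and the on-demand server acts
    as the FTP client that retrieves the content file from the remote
    location (catcher)."""
    url = urlparse(ftp_url)
    assert url.scheme == "ftp", "pull ingest expects an FTP URL"
    client = ftp_client_factory(url.hostname)             # connect to the catcher
    chunks = []
    client.retrbinary("RETR " + url.path, chunks.append)  # pull the content file
    return b"".join(chunks)
```

The key point the sketch captures is the direction of initiation: the server receives only a URL, then opens the connection itself, which is what distinguishes pull ingest from push ingest.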
d. Volume
Viewing Ingest Activity for a Headend. Ingest information at the headend level provides the number of active ingests for each on-demand server within the selected headend. To view ingest activity for a selected headend, follow these steps:
The streaming controller process and/or application software stores stream information in an Oracle SQL database (Stream database). The database resides on the stream control process application server and contains on-demand server configuration information, statistical data, alarms, stream history and suspended stream information. To help conserve database resources, stream control process runs an automated purging process that removes obsolete data at pre-defined intervals:
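The automated purge described above can be sketched as a retention-window filter. The retention period and row shape are assumptions for illustration; the actual purge runs against the Oracle database at pre-defined intervals.

```python
import datetime


def purge_obsolete(rows, retention_days, now=None):
    """Sketch of the purging pass described above: rows older than a
    retention window are dropped from the stream database; the rows
    kept are returned."""
    now = now or datetime.datetime.now()
    cutoff = now - datetime.timedelta(days=retention_days)
    # Keep only rows created within the retention window.
    return [row for row in rows if row["created"] >= cutoff]
```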
The streaming controller process and/or application software traps event notifications from the one or more on-demand servers and generates event notifications to alert you to configuration changes, state transitions, and error conditions as they occur on the video server. The system can send these notifications to remote hosts in the form of Simple Network Management Protocol (SNMP) traps. Each on-demand server sends event notifications to the stream control process over the Simple Object Access Protocol (SOAP) interface. The stream control process then saves a copy of the event notification to its local database and passes a copy to the SNMP agent running on the stream control process application server. Stream Commander uses the copy contained in its database to populate Graphical User Interface (GUI) alarm displays; the SNMP agent translates its copy of the notification into an SNMP trap and then forwards the trap to all hosts defined in the trap forwarding table.
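The notification flow above, one copy to the local database for the GUI and one copy fanned out as traps to every host in the forwarding table, can be sketched as follows. Storage and trap sending are modeled as in-memory lists purely for illustration.

```python
class NotificationRelay:
    """Sketch of the event-notification flow described above: each
    notification arriving over SOAP is saved to the local database and
    handed to the SNMP agent, which forwards a trap to every host in
    the trap forwarding table."""

    def __init__(self, trap_forwarding_table):
        self.trap_forwarding_table = trap_forwarding_table  # list of IP hosts
        self.database = []    # copy used to populate GUI alarm displays
        self.sent_traps = []  # (host, notification) pairs the agent forwarded

    def receive(self, notification):
        self.database.append(notification)        # copy for the GUI database
        for host in self.trap_forwarding_table:   # SNMP agent fan-out
            self.sent_traps.append((host, notification))
```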
a. JBoss
The streaming controller process and/or application software forwards the trap event notifications from the one or more on-demand servers to IP host destinations using the trap forwarding table, which defines the IP host destinations to which Stream Commander forwards SNMP traps. You can view and edit the trap forwarding table as described in the following procedures:
The catcher receives content in the form of content packages. Each content package is stored in a directory and contains both an XML file describing the content and the content file itself. Manual content ingest using Stream Commander requires that you specify the FTP URL to the content file. If you do not specify a title in the Title field of the manual ingest window, you must also specify the XML file associated with the content file, in which case the XML file must be named XML.ADI.
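The manual-ingest rule above can be sketched as a simple package check; the function name and list-of-filenames representation of the package directory are assumptions for illustration.

```python
def check_manual_ingest(package_files, title=None):
    """Sketch of the rule described above: a content package directory
    holds the content file plus an XML file describing it. If no title
    is supplied in the Title field, the package must include an XML
    file named XML.ADI. Returns True when the rule is satisfied."""
    if title:
        return True              # a title was typed into the Title field
    return "XML.ADI" in package_files
```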
Although exemplary embodiments of the present invention have been shown and described with reference to particular embodiments and applications thereof, it will be apparent to those having ordinary skill in the art that a number of changes, modifications, or alterations to the invention as described herein may be made, none of which depart from the spirit or scope of the present invention. All such changes, modifications, and alterations should therefore be seen as being within the scope of the present invention.
This application claims the benefit of U.S. Provisional Application No. 60/576,095, filed Jun. 1, 2004, U.S. Provisional Application No. 60/576,269, filed Jun. 1, 2004, U.S. Provisional Application No. 60/576,338, filed Jun. 1, 2004, and U.S. Provisional Application No. 60/576,402, filed Jun. 1, 2004.
Number | Date | Country
---|---|---
60576095 | Jun 2004 | US
60576269 | Jun 2004 | US
60576338 | Jun 2004 | US
60576402 | Jun 2004 | US