The invention will now be described, by way of non-limiting example, with reference to the figures of the annexed drawings.
In brief, the system and method proposed enable a MultiMedia Framework to access the functions of remote components in a transparent manner, i.e., as if they were present in the local device.
There is hence proposed a system that extends the concept of MultiMedia Framework, or multimedia infrastructure, typically used for connecting audio/video components within a device, to a home network, in particular to applications of a streaming type, but also to real-time applications such as video-telephony or Voice-over-IP. According to the invention, it is envisaged that individual devices announce the multimedia components they support through a service-discovery protocol, in a preferred embodiment the UPnP protocol. In this way, a multimedia application can be set up by a controller, in particular through an API, by connecting components that are physically located in different apparatuses.
MultiMedia Frameworks are, as has been said, usually employed within a single device for connecting components, such as input/output ports, coders, decoders and mixers, for the purpose of implementing complex applications. The infrastructures represented by MultiMedia Frameworks enable management of different kinds of media, including audio, video and images.
The multimedia components manipulate data buffers that contain multimedia data transmitted at high rates, consequently requiring optimized data paths. In addition, metadata describing the content of the data buffers can be extracted from or introduced into said components.
MultiMedia Frameworks enable the components to be instantiated, configured and connected together for implementing a specific application. In the present description, by "instantiation" is meant the operation of creation of objects with corresponding allocation of memory and initialization of fields, as is known from object-oriented programming, for example in Java.
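The notion of instantiation recalled above can be sketched as follows; this is a minimal illustration in Java, in which the `Component` class and its fields are hypothetical and not part of any MultiMedia Framework API.

```java
// Minimal sketch of "instantiation" as used above: creating a component
// object, allocating its memory (here a data buffer) and initializing its
// fields. All names (Component, DEFAULT_BUFFER_SIZE) are illustrative only.
public class Component {
    static final int DEFAULT_BUFFER_SIZE = 4096;

    final String name;    // e.g. "OMX.ST.Video.Decoder"
    final byte[] buffer;  // memory allocated at creation time
    boolean configured;   // field initialized at creation time

    Component(String name) {
        this.name = name;
        this.buffer = new byte[DEFAULT_BUFFER_SIZE]; // allocation
        this.configured = false;                     // initialization
    }

    public static void main(String[] args) {
        Component decoder = new Component("OMX.ST.Video.Decoder");
        System.out.println(decoder.name + " buffer=" + decoder.buffer.length);
    }
}
```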
As has been said, the system proposed herein extends said traditional and known concept of MultiMedia Framework, in so far as it enables a single device to create a multimedia application through the connection of components that are physically located in multiple devices connected to it through one or more communication networks. All this is obtained without having to change or modify the MultiMedia Framework and the applications that use the services thereof.
The system of distributed processing of multimedia contents proposed envisages:
Designated by the reference number 11 is a block representing a first device connected on a communication network 20. Said first device 11 sends a multimedia content I1 on the communication network 20 to a second device 12, which processes the multimedia content I1, obtaining the multimedia content I2, which is transmitted on said communication network to a third device 13, which carries out a further processing step, i.e., reproduction of the multimedia content I2.
The first device 11 can be a mobile phone which requests display of a recorded video film, the multimedia content I1, on a television set. The video film is recorded on the mobile phone 11 using a video-compression standard such as MPEG4 or H.264. However, the television set, of a digital type, preferably accepts video films in MPEG2 format, corresponding in
Given the high computational complexity of the transcoding function, neither the mobile phone 11 nor the television set 13 would be able to carry out said function. However, using a home PC as second device 12 it is possible to execute said transcoding operation, provided that a component suited for executing said operation is present within said PC.
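The three-device chain described above (source, transcoder, renderer) can be sketched as follows; the `Content` record and the format strings are hypothetical stand-ins, used only to illustrate why the transcoding step on the PC makes the content acceptable to the television set.

```java
// Sketch of the processing chain described above: device 11 sends I1
// (e.g. in H.264), device 12 transcodes it into I2 (e.g. in MPEG2),
// device 13 renders the result. Class and format names are illustrative.
public class Pipeline {
    // Trivial stand-in for a multimedia content item and its format.
    record Content(String format, String title) {}

    // Device 12: transcodes the content into the target format.
    static Content transcode(Content in, String targetFormat) {
        return new Content(targetFormat, in.title());
    }

    // Device 13: accepts only content in the format it supports.
    static boolean canRender(Content c, String supportedFormat) {
        return c.format().equals(supportedFormat);
    }

    public static void main(String[] args) {
        Content i1 = new Content("H.264", "holiday-video");          // device 11
        Content i2 = transcode(i1, "MPEG2");                         // device 12
        System.out.println("renderable: " + canRender(i2, "MPEG2")); // device 13
    }
}
```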
According to the invention, the distributed-processing system 10 envisages that the first device 11, i.e., the mobile phone, implements procedures that can discover the transcoding component in the PC corresponding to the second device 12 and create a graph equivalent substantially to the block diagram of
The mobile phone 11 consequently sees a list of multimedia components published as available on the communication network 20. However, said mobile phone 11 also requires verification of whether the communication between the input ports and output ports of the components is possible. The mobile phone 11 is hence able to create a complete graph, which is illustrated via the block diagram of
In the context of the UPnP protocol, which is described in an on-line publication titled "Understanding UPnP" at http://www.upnp.org/resources/whitepapers.asp, and in the publication M. Jeronimo, J. Weast, "UPnP design by example: a software developers' guide to Universal Plug 'n Play", Intel Press, ISBN 0971786119, the service-discovery operations usually envisage that the devices export entire applications through their publication on the communication network. In the system according to the invention it is instead envisaged to render the individual components of the applications available, with a consequently higher level of detail, so that it is possible to build complex distributed applications, combining components coming from different devices, under the control of any Control Point device, for example a Control Point device in the context of the UPnP protocol.
In practice, the list of the multimedia components published as available on the communication network 20 can be gathered by any Control Point and presented in a MultiMedia Framework as if the components were all local.
An application, which is run, for example, on the first device 11, can consequently use the MultiMedia Framework at this degree of detail, i.e., on the basis of the list of components available at the Control Point 14, which can be accessed to create the desired graph of components.
Elements necessary to enable connection between ports of remote components can be instantiated automatically by the MultiMedia Framework, without the application that uses them being informed.
Within the system for distributed processing of multimedia contents according to the invention, a first important aspect requires taking into account the fact that the components are associated with input ports and/or output ports, through which the data buffers are exchanged. The connection of components in the graph created by the MultiMedia Framework corresponds in effect to a connection between ports of said components, which are ideally homogeneous, i.e., they must deal with buffers that transport the same type of data.
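The port-homogeneity rule stated above can be sketched as follows; this is a hypothetical illustration in Java (the `Port`, `Node` and `GraphBuilder` names are not part of any standard), showing that two ports may be connected only if they deal with buffers transporting the same type of data.

```java
import java.util.*;

// Sketch of the port-compatibility check described above: the output port
// of one component can be connected to the input port of the next only if
// both carry the same data type. All class and type names are illustrative.
public class GraphBuilder {
    record Port(String dataType) {}                      // e.g. "video/H264"
    record Node(String name, Port input, Port output) {}

    // A connection is possible only between homogeneous ports.
    static boolean canConnect(Node from, Node to) {
        return from.output() != null && to.input() != null
            && from.output().dataType().equals(to.input().dataType());
    }

    // Builds a chain of components, verifying each link before accepting it.
    static List<Node> buildChain(Node... nodes) {
        for (int i = 0; i + 1 < nodes.length; i++) {
            if (!canConnect(nodes[i], nodes[i + 1]))
                throw new IllegalStateException("incompatible ports at " + i);
        }
        return List.of(nodes);
    }

    public static void main(String[] args) {
        Node phone      = new Node("phone.reader", null, new Port("video/H264"));
        Node transcoder = new Node("pc.transcoder", new Port("video/H264"),
                                                    new Port("video/MPEG2"));
        Node tv         = new Node("tv.renderer", new Port("video/MPEG2"), null);
        System.out.println(buildChain(phone, transcoder, tv).size() + " nodes linked");
    }
}
```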
A second important aspect of the distributed-processing system regards control of the components. The components are ideally configured by an application that makes use thereof at the moment of initialization; however, it is possible to modify the parameters at runtime.
This is usually obtained within the MultiMedia Framework by means of complex data structures that bear the parameters important for performing the configurations of the components according to the multimedia domain (audio, video, images, graphic, etc.). With reference to the system described in
In order to define a standard mode of presenting the configuration of the component, it is envisaged in the system and method proposed to resort to the service-discovery function of the UPnP standard, which represents the component configuration by means of XML documents. Since the UPnP service-discovery function provides only a standard mode of representing the information, but does not define the parameters necessary for configuring the components, the system according to the invention envisages using preferably as reference for said definition the Khronos OpenMAX Integration Layer API (hereinafter IL API) in so far as said interface enables extensive application in a large number of platforms.
For an illustration of said standard, see, e.g., “OpenMAX—the standard for media library portability”, available at http://www.khronos.org/openmax/.
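A component configuration represented as an XML document, as envisaged above, could take a form like the following; this fragment is purely hypothetical, with element and parameter names loosely modelled on OpenMAX IL video-port settings rather than taken from either standard.

```xml
<?xml version="1.0"?>
<!-- Hypothetical component configuration rendered as an XML document in
     the UPnP style; element and parameter names are illustrative only. -->
<componentConfiguration>
  <componentName>OMX.ST.Video.Transcoder</componentName>
  <port direction="input">
    <domain>video</domain>
    <compressionFormat>H264</compressionFormat>
  </port>
  <port direction="output">
    <domain>video</domain>
    <compressionFormat>MPEG2</compressionFormat>
    <frameWidth>720</frameWidth>
    <frameHeight>576</frameHeight>
  </port>
</componentConfiguration>
```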
It is then to be noted that, in the context of devices connected to a communication network that can process multimedia contents, at least two categories of components are ideally distinguished: components whose location has a local meaning; and components whose location does not have a local meaning.
For example, for an audio/video-player component, the position within the communication network, or within the environment in which said communication network is deployed, is of importance. Consider, for example, a television set in a lounge or in the kitchen. Conversely, a transcoder component can be positioned anywhere within the home communication network, provided that it performs the functions required of it.
As regards the components whose physical location within the network is important from the applicational standpoint, it is consequently possible to adopt, in the context of the system proposed, a convention that identifies their position, for example using the name of the component. In this way, a component of a "renderer" type, for example a video display such as the television set 13 located in the lounge, could be referred to as "OMX.ST.Video.Renderer.Lounge", using a syntax of the type defined for the Khronos OpenMAX IL API. The creation of the name of the component, which includes also its location, is the responsibility of the IL core of the API 113. It is possible to associate the function of the component with its position, thanks to the service-discovery protocol, on the hypothesis that the remote devices render this information available. This could require a user-assisted configuration procedure in the installation stage. It is clear that, once this information on the location of the component is made available to the MultiMedia Framework through the standard IL API 113, the applications can draw immediate benefit therefrom.
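The naming convention discussed above can be illustrated as follows; the helper methods are hypothetical, and only the dot-separated name syntax follows the OpenMAX IL style, with the last field assumed to carry the location.

```java
// Sketch of the naming convention described above: the last dot-separated
// field of the component name carries its location. The helper class is
// hypothetical; only the name syntax follows the OpenMAX IL style.
public class ComponentName {
    // Extracts the location suffix, e.g. "Lounge" from
    // "OMX.ST.Video.Renderer.Lounge".
    static String locationOf(String name) {
        return name.substring(name.lastIndexOf('.') + 1);
    }

    // Extracts the functional part, e.g. "OMX.ST.Video.Renderer".
    static String functionOf(String name) {
        return name.substring(0, name.lastIndexOf('.'));
    }

    public static void main(String[] args) {
        String n = "OMX.ST.Video.Renderer.Lounge";
        System.out.println(functionOf(n) + " located in " + locationOf(n));
    }
}
```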
The distributed MultiMedia Framework proposed can be applied to many different systems, from PCs to mobile phones or to set-top boxes.
For example, in a mobile phone, traditional MultiMedia Frameworks, such as Symbian MMF/MDF, are evolving towards the support of the OpenMAX standard, in which local multimedia components can be controlled through the aforementioned IL API. As illustrated in the diagram of
The API 113 according to the Khronos OpenMAX Integration Layer standard enables control and connection of multimedia components, especially for platforms that offer hardware acceleration for multimedia contents.
Designated by LC in
Within said API 113 it is hence envisaged to define data structures for component configuration for each type of component in the various Audio, Video and Imaging domains. It is moreover envisaged to set conventions for the passage of the data buffers and to define the connections between the ports of the components, an operation in itself known and defined as "data tunnelling" in the OpenMAX standard. For example, a data tunnel can be set up so that the output data buffer of a component is passed directly to the input port of the next component in the processing chain.

The operation of data tunnelling in the system and method according to the invention must also take into account the connections with remote components, which is not envisaged by the API 113 in its basic configuration as OpenMAX IL API. In this case, the data buffers are ideally passed over the communication network 20 that connects the devices, using suitable communication protocols, such as HTTP/TCP/IP or RTP/UDP/IP. A high-bandwidth path (HBP) 202 between the components is then made available through the media receiver 12c and the media transmitter 11b, located respectively in the PC 12 and in the mobile phone 11, which can be obtained via client-server software operating in the network nodes. In this connection, designated by 115 in the PC 12 is the server level that exchanges data with the application level 111 in the mobile phone 11.

When a data tunnel is created between two components not resident on the same device, an HBP 202 is created; i.e., all the resources necessary for transport of the data buffers between the ports of the components are instantiated, possibly taking into account the QoS requirements of the data to be transported. The management of the resources necessary for proper operation of the HBP 202 is performed by the IL core, i.e., the software library that implements the IL API.
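The decision the IL core must take when a tunnel is created, as described above, can be sketched as follows; all class names are hypothetical, and the example only distinguishes the direct local hand-over of a buffer from the case in which a high-bandwidth path over the network must be instantiated.

```java
// Sketch of the data-tunnel set-up described above: when both ports live on
// the same device the buffer can be handed over directly; otherwise a
// high-bandwidth path over the network (e.g. RTP/UDP/IP or HTTP/TCP/IP)
// must be instantiated. All names are illustrative.
public class DataTunnel {
    record PortRef(String deviceId, int portIndex) {}

    enum Transport { DIRECT, NETWORK }

    // The IL core decides how the tunnel between two ports is realized.
    static Transport selectTransport(PortRef out, PortRef in) {
        return out.deviceId().equals(in.deviceId())
            ? Transport.DIRECT    // same device: pass the buffer directly
            : Transport.NETWORK;  // remote ports: set up an HBP
    }

    public static void main(String[] args) {
        PortRef phoneOut = new PortRef("mobile-11", 1);
        PortRef pcIn     = new PortRef("pc-12", 0);
        System.out.println("tunnel: " + selectTransport(phoneOut, pcIn));
    }
}
```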
Consequently, when remote multimedia components C are discovered in the communication network 20, for example through the UPnP service-discovery protocol, which in
In this way, the MultiMedia Framework 112, which does not require modifications, can make use both of the local components LC and of the remote components C. In particular, as has been said, the blocks designated by RC in
Whenever a function call belonging to the standard of the API 113 is invoked on a proxy RC of the remote component, it is ideally sent to the corresponding remote component C for execution. In the case of a configuration command, this is, for example, translated into a SOAP (Simple Object Access Protocol) message 203, whilst the data-passage functions use the HBP 202 made available through the media transmitter 11b and the media receiver 12c. For this purpose
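The proxy behaviour described above can be sketched as follows; the class name and the envelope layout are hypothetical (SOAP-style rather than conforming to any specific UPnP service), illustrating how a configuration call made locally on the proxy RC is serialized for the remote component C.

```java
// Sketch of the proxy RC described above: a configuration call made locally
// on the proxy is serialized into a SOAP-like envelope and forwarded to the
// remote component. The envelope layout is illustrative only.
public class RemoteComponentProxy {
    final String remoteName;

    RemoteComponentProxy(String remoteName) {
        this.remoteName = remoteName;
    }

    // Translates a configuration call into a SOAP-style message body.
    String toSoapMessage(String action, String paramName, String paramValue) {
        return "<s:Envelope><s:Body>"
             + "<u:" + action + ">"
             + "<target>" + remoteName + "</target>"
             + "<" + paramName + ">" + paramValue + "</" + paramName + ">"
             + "</u:" + action + ">"
             + "</s:Body></s:Envelope>";
    }

    public static void main(String[] args) {
        RemoteComponentProxy rc = new RemoteComponentProxy("OMX.PC.Video.Transcoder");
        System.out.println(rc.toSoapMessage("SetParameter", "bitrate", "2000000"));
    }
}
```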
Consequently, without prejudice to the principle of the invention, the details of construction and the embodiments may vary, even significantly, with respect to what is described and illustrated herein purely by way of non-limiting example, without thereby departing from the scope of the invention, as defined in the ensuing claims.
For instance, it is possible to use as procedure designed to discover components on the network, instead of the UPnP standard, other protocols defined by IETF, such as the Service Location Protocol (SLP).
Number | Date | Country | Kind
---|---|---|---
TO2006A000500 | Jul 2006 | IT | national