The field of the disclosure is that of creating and rendering multimedia scenes on any type of terminal, and in particular on terminals whose internal operating system (OS) does not offer all the interactivity capabilities available on conventional microcomputers.
More precisely, the disclosure relates to improving interactivity on such terminals, such as mobile telephones, electronic organisers (PDAs), etc.
A multimedia scene, within the meaning of this document, consists of objects each having various characteristics (size, colour, animation, content, etc.), according to known techniques which have in particular been the subject of standards such as SVG (Scalable Vector Graphics, a language for describing vector graphics) or VRML (Virtual Reality Modelling Language).
Such scenes can be programmed by a developer, so as to enable interactivity with the user of a terminal on which they are played. A specific user command can result in a specific action (selection or movement of an object, starting a video, etc.). These actions or operations can in particular correspond to <<sensors>>, according to VRML or MPEG terminology.
On microcomputers, besides the keyboard, the user has a mouse or similar means at their disposal, which makes it possible to move a pointer on the screen and to click in order to select an object or start an operation. This interface element is very ergonomic and thus frequently used.
However, although some mobile telephones integrate a similar function, in the form of a stylus or other control device (such as a paddle or <<joystick>>), this technique is far from common on small-sized and/or low-cost devices.
In this case, the terminal has neither the interface nor, a fortiori, the software means for controlling such an interface. In other words, the operating system cannot interpret commands designed for a pointer that it does not possess.
Accordingly, a developer of multimedia scenes wishing to propose a scene capable of being played on any type of terminal has only two solutions, neither of which is satisfactory.
According to a first solution, the scene is developed without using the man-machine interface associated with operating a pointer. The result of this is increased complexity of use and programming, and dissatisfaction on the part of the users of terminals having such an interface.
According to a second solution, two versions of the scene are developed, with and without pointer control. In this case, the production time is of course increased, and the two versions do not react in exactly the same way. Furthermore, it is necessary to provide for a specific control management based on the specific capabilities of the terminal, in order to choose which version to use.
Furthermore, the users of terminals without pointer control have only a degraded version of the scene, which is unlikely to satisfy them, and some functions cannot be used.
In particular, an exemplary objective of the disclosure is to mitigate these various disadvantages.
More precisely, an exemplary objective of the disclosure is to provide a technique for constructing and rendering multimedia scenes which makes it possible to circumvent the absence of a pointer-type interface control in the operating system of a terminal.
An aspect of the disclosure relates to a method of constructing multimedia scenes intended to be rendered on at least one terminal, such a scene comprising at least one multimedia object to which properties can be assigned, enabling the behaviour thereof to be controlled in said scene.
According to an exemplary embodiment of the invention, at least one of said scenes includes at least one object, referred to as a pointer object, to which a pointer property is assigned such that it reacts to actions carried out by a user of a terminal, including at least one action for moving said pointer object in said scene and at least one selection action.
Thus, according to an embodiment of the invention, control of the pointer is not ensured conventionally, by the operating system of the terminal, but by the multimedia scene itself. In a simple and effective way, it is thereby possible to have the use of a pointer, and the associated actions, even on a terminal which does not integrate this function into its operating system.
In other words, control of the pointer is transferred into the scene itself, which makes it possible not only to have it available on a terminal which did not originally offer it, but also to develop a single optimised scene for all terminals.
This approach also remains particularly simple: it consists substantially in creating a new type of object, or more precisely a new object property, for multimedia scenes.
According to a first advantageous approach of an embodiment of the invention, said pointer property can be assigned to any type of object of said multimedia scene having a visual component.
This makes it possible to have not only conventional pointers (arrows, for example), but more generally any type of pointer, including graphic objects, videos, etc., without any particular complexity.
According to a second approach of an embodiment of the invention, said pointer property can only be assigned to an object of said multimedia scene of a type belonging to a predetermined selection of object types.
At least one of said actions for moving and/or for selecting is preferably associated with pressing on a keyboard key of said terminal.
Of course, other modes of transmitting actions can be considered, based on the means equipping the terminal (including its own pointer control means, if it has any).
Said scene preferably includes at least one object, referred to as a sensitive object, intended to react with said pointer object, when they are at least partially superimposed.
In order to facilitate detection of this superimposing, it is advantageously provided for said pointer object to include a specific aiming point, referred to as the focal point.
According to one particular embodiment of the invention, said focal point is the origin of a system of local coordinates of said pointer object.
An embodiment of the invention preferably provides for at least one step for superimposing said focal point and a point of one of said sensitive objects.
Said superimposing step is advantageously used for detecting an entry of said pointer onto one of said sensitive objects and/or an exit of said pointer with respect to one of said sensitive objects.
Thus, an entry or an exit can result in transmission of an event corresponding to said sensitive object.
In particular, a selection action carried out during superimposing advantageously results in the transmission of a validation event to the sensitive object concerned.
According to one particular aspect of an embodiment of the invention, it is possible to provide for said movements to be carried out in blocks of N pixels, N being an integer less than the smallest dimension of one of said sensitive objects present in the scene.
Said operations preferably include events corresponding to predetermined action semantics.
In particular, this can involve higher-level actions, such as drag-and-drop, or <<sensors>>, according to VRML terminology.
An embodiment of the invention also relates to signals carrying at least one multimedia scene produced according to the above-described method, and intended to be rendered on at least one terminal.
An embodiment of the invention also relates to computer programs including program instructions for constructing such multimedia scenes.
According to another aspect, an embodiment of the invention also relates to computer programs including program instructions for running these multimedia scenes.
A program such as this can be installed on a terminal, e.g., in the form of a component to be downloaded (a <<plug-in>>), which completes software, already present on the terminal, making it possible to play multimedia scenes. Of course, it can also be an integral part of such software.
An embodiment of the invention also relates to multimedia terminals making it possible to render such multimedia scenes, and to the corresponding method of rendering multimedia scenes.
According to yet another aspect, an embodiment of the invention relates to servers containing at least one such multimedia scene, and to data media (disks, storage devices . . . ) carrying such scenes.
Finally, an embodiment of the invention relates to a pointer object of such a multimedia scene. According to an embodiment of the invention, an object such as this is assigned a pointer property such that it reacts to actions carried out by a user of a terminal, including at least one action for moving said pointer object in the scene and at least one selection action.
As a clearly identifiable essential constituent, an object such as this is an intermediate component of a multimedia scene according to an embodiment of the invention, which in and of itself has a novel and inventive technical effect.
Other characteristics and advantages will become more apparent upon reading the following description of a preferred embodiment of the invention, given as a single, non-limiting and illustrative example, and from the appended drawings.
The example of
Of course, in its memory, the terminal includes software for rendering multimedia scenes, e.g., in the SVG format, integrating the control of the cursor property according to an embodiment of the invention.
In the example shown in
A polygonal object 13 with seven sides represents an arrow the tip of which is turned upward and to the left. This <<pointer>> object can be moved over the entire screen.
An embodiment of the invention is based on the creation of this pointer object, sensitive objects, and the corresponding control.
Thus, the author of the scene created this arrow object 13 with a specific attribute, for example:
isVirtualPointer=<<true>>.
This attribute gives the arrow object 13 a virtual pointer behaviour: it behaves like the hardware pointer available on the operating systems that support one.
The arrow object 13 has a certain size, and in order for the selection operations to be accurate, one point of the arrow object (in this case the tip of the arrow) is chosen as the focal point 131, i.e., the point situated beneath the tip of the arrow at the top left of the object. This point is the origin of the system of local coordinates of the arrow object, i.e., the point of coordinates (0, 0).
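By way of purely illustrative example, such a pointer object and its focal point can be modelled as follows (a non-limiting TypeScript sketch; the names SceneObject and isSensitive, as well as the numerical values, are hypothetical, and only isVirtualPointer corresponds to the attribute described above):

    // Minimal, hypothetical model of a scene object carrying the pointer property.
    interface SceneObject {
      id: string;
      x: number;                   // position of the local origin in scene coordinates
      y: number;
      width: number;
      height: number;
      isVirtualPointer?: boolean;  // the pointer property described above
      isSensitive?: boolean;       // reacts to the pointer when superimposed
    }

    // The arrow object 13: its focal point 131 (the tip of the arrow) is the
    // origin (0, 0) of its local coordinate system, i.e. the point (x, y) in the scene.
    const arrow: SceneObject = {
      id: "arrow13",
      x: 0,
      y: 0,
      width: 12,
      height: 18,
      isVirtualPointer: true,
    };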
In order to control the movement of this virtual pointer, the author of the scene has created four actions associated with four keys of the keypad. The key <<2>> triggers an action which moves the arrow object 13 five pixels (for example) upward. In the same way, the keys <<6>>, <<8>> and <<4>> trigger an action which moves the arrow object 13 five pixels towards the right, bottom and left, respectively.
The choice of an increment size of 5 pixels presumes that the sensitive objects are of a size greater than 5 pixels, so that the movement of the virtual pointer does not skip over one of the sensitive objects. In other words, movements are preferably carried out in blocks of N pixels, N being an integer lower than the smallest dimension of the sensitive objects present in the scene.
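A possible, non-limiting sketch of this movement control, reusing the object model introduced above, could be:

    // Hypothetical movement control: keys 2, 4, 6 and 8 move the virtual pointer
    // by N pixels; N is kept smaller than the smallest dimension of the sensitive
    // objects so that none of them can be skipped over.
    const N = 5;

    function movePointer(pointer: SceneObject, key: "2" | "4" | "6" | "8"): void {
      switch (key) {
        case "2": pointer.y -= N; break; // up
        case "8": pointer.y += N; break; // down
        case "4": pointer.x -= N; break; // left
        case "6": pointer.x += N; break; // right
      }
    }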
In order to control the sensitivity of the sensitive objects to the virtual pointer, the multimedia reader verifies, for each movement of the arrow object, whether the focal point of the virtual pointer meets one of the following conditions: it has entered onto a sensitive object on which it was not previously situated, or it has exited from a sensitive object on which it was previously situated.
In the example shown in
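The corresponding verification can be sketched as follows (again purely illustrative, assuming the object model introduced above; sendEvent is a hypothetical dispatch function, and the event names follow those used later in this description):

    // Hypothetical dispatch of an event to a scene object.
    function sendEvent(target: SceneObject, name: string): void {
      console.log(`event ${name} sent to ${target.id}`); // placeholder
    }

    // Returns the sensitive object situated under the focal point, if any.
    function objectUnderFocalPoint(
      pointer: SceneObject,
      sensitives: SceneObject[],
    ): SceneObject | undefined {
      return sensitives.find(
        (o) =>
          pointer.x >= o.x && pointer.x < o.x + o.width &&
          pointer.y >= o.y && pointer.y < o.y + o.height,
      );
    }

    // After each movement, compare with the previous state to detect an entry
    // onto, or an exit from, a sensitive object.
    let previous: SceneObject | undefined;

    function afterEachMove(pointer: SceneObject, sensitives: SceneObject[]): void {
      const current = objectUnderFocalPoint(pointer, sensitives);
      if (current !== previous) {
        if (previous) sendEvent(previous, "pointer_exit");
        if (current) sendEvent(current, "pointer_entry");
        previous = current;
      }
    }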
An embodiment of the invention also makes it possible to emulate a selection operation, or <<click>>. In the example shown, one key of the keypad is by default associated by the reader with validation, e.g., the key <<5>>.
When this key is pressed, the reader verifies whether the focal point of the virtual pointer is situated on one of the sensitive objects. If this is the case, the reader sends a validation event to the object pointed at. For example, the menu for the restaurant R1 is displayed only if this validation event has been received. Other operations (e.g., a telephone call) are of course possible, and are linked solely to programming by the author.
If this is not the case, the reader sends the validation event to the default validation manager, if the author has defined one.
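By way of illustration, and assuming the helpers sketched above, this emulation of a <<click>> could take the following form:

    // Hypothetical click emulation: the validation event is sent to the sensitive
    // object under the focal point, or to the default validation manager if none.
    function onValidationKey(
      pointer: SceneObject,
      sensitives: SceneObject[],
      defaultManager?: SceneObject,
    ): void {
      const target = objectUnderFocalPoint(pointer, sensitives);
      if (target) {
        sendEvent(target, "validation");
      } else if (defaultManager) {
        sendEvent(defaultManager, "validation");
      }
    }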
Several different validation events can of course be defined, and be associated with key combinations, with various keys, with multiple presses (<<double click>>) and/or with the execution of one or more previous operations.
In a simplified manner,
The author first defines 21 a multimedia scene, and in particular a set of objects each having their own properties. Within this framework, he assigns 22 the pointer property isVirtualPointer=<<true>> to one or more objects, and then associates a movement control 23 with each pointer object, e.g., in the form of a movement of N pixels for each press of predetermined keys.
Next, the author identifies 24 one or more sensitive objects, and then associates 25 with them actions to be carried out, depending on whether the pointer enters upon, remains on and/or exits from the sensitive object. These actions can be simple, complex and multiple.
In particular, this can involve events corresponding to higher-level action semantics, such as <<drag-and-drop>> or VRML <<sensors>>. For example, passing the pointer over a sensitive object can result in it being set into motion (e.g., rotation of a world map), enable it to be moved (either linearly, in the form of a <<drag-and-drop>> movement, or in any manner (rotation, depthwise movement, etc.)), or start a specific operation (opening another scene or a menu, starting or stopping a video, etc.).
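For example, a <<drag-and-drop>> behaviour could be sketched on top of the same mechanisms as follows (hypothetical names; a first validation press picks the object up, a second one drops it):

    // Hypothetical <<drag-and-drop>>: while an object is picked up, it follows
    // the focal point of the virtual pointer.
    let dragged: SceneObject | undefined;

    function onValidationForDrag(pointer: SceneObject, sensitives: SceneObject[]): void {
      if (!dragged) {
        dragged = objectUnderFocalPoint(pointer, sensitives); // pick up
      } else {
        dragged = undefined;                                  // drop
      }
    }

    function followPointer(pointer: SceneObject): void {
      if (dragged) {               // called after each movement of the pointer
        dragged.x = pointer.x;
        dragged.y = pointer.y;
      }
    }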
The author also programmes 26 the emulation of one or more <<clicks>>, associated where applicable with various objects, and with a default command, when the pointer is not superimposed over a sensitive object.
The author can also programme control of the edges of the image 27, making it possible to move this image when the pointer comes up against an edge of the screen. In the example of
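A possible sketch of this edge control, assuming a hypothetical screen size and the movement step N used above, is the following:

    // Hypothetical edge control: when the focal point reaches an edge of the
    // screen, the scene is scrolled instead of the pointer leaving the screen.
    const SCREEN = { width: 176, height: 208 }; // illustrative screen size in pixels

    function scrollIfAtEdge(
      pointer: SceneObject,
      scene: { offsetX: number; offsetY: number },
    ): void {
      if (pointer.x <= 0)             { scene.offsetX += N; pointer.x = 0; }
      if (pointer.x >= SCREEN.width)  { scene.offsetX -= N; pointer.x = SCREEN.width; }
      if (pointer.y <= 0)             { scene.offsetY += N; pointer.y = 0; }
      if (pointer.y >= SCREEN.height) { scene.offsetY -= N; pointer.y = SCREEN.height; }
    }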
In the same way,
The terminal thus receives the scene 31, and the objects which compose it, programmed according to the method of
It also detects the superimposing 34 of the pointer (more precisely its focal point) and a sensitive object, and produces the operations associated with an entry upon or an exit from a sensitive object.
Finally, it ensures the emulation of a <<click>> 35, or, where applicable, several types of <<clicks>>, and starts the associated operations, based on the position of the pointer.
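Tying the previous sketches together, a hypothetical key handler on the terminal side might therefore look as follows:

    // Hypothetical key handler: movement keys move the virtual pointer and trigger
    // entry/exit detection, while key 5 emulates the click.
    function onKeyPress(
      key: string,
      pointer: SceneObject,
      sensitives: SceneObject[],
      defaultManager?: SceneObject,
    ): void {
      if (key === "2" || key === "4" || key === "6" || key === "8") {
        movePointer(pointer, key);
        afterEachMove(pointer, sensitives);
      } else if (key === "5") {
        onValidationKey(pointer, sensitives, defaultManager);
      }
    }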
Numerous alternative implementations can of course be considered.
In particular, the multimedia scene can be anything, provided that it comprises a certain number of objects sensitive to the pointer, like buttons, a form, an image with regions of interest, a game board with bricks or flying saucers, etc.
By way of example,
The focal point of the virtual pointer can be moved anywhere in relation to the visual form of the pointer, e.g., by creating this visual form in a transformation object (like a <g> in SVG).
The choice of the focal point as origin of the system of local coordinates of the pointer object is a simple choice, but any other choice is possible, including a case-by-case choice by explicitly indicating the position of the focal point in the object declared as the virtual pointer, e.g., by an attribute focalPointPosition=<<10 10>>.
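A minimal sketch of such an explicit focal point, under the same illustrative model, might be:

    // Hypothetical support for focalPointPosition: the attribute shifts the aiming
    // point relative to the object's local origin; hit tests then use this point.
    interface PointerWithFocal extends SceneObject {
      focalPointPosition?: { dx: number; dy: number };
    }

    function focalPoint(p: PointerWithFocal): { x: number; y: number } {
      const offset = p.focalPointPosition ?? { dx: 0, dy: 0 }; // default: local origin
      return { x: p.x + offset.dx, y: p.y + offset.dy };
    }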
Of course, the name and the value of isVirtualPointer=<<true>> can be replaced by any unambiguous combination conferring identical semantics upon a graphic object, or validating such semantics if they are defined by default on all objects.
The actions ensuring movement of the cursor are not necessarily keystrokes, but can be any user action via an available means: keypad, special keys, voice recognition, joystick, jog dial/scroll wheel, etc.
The movements of the virtual pointer can be steady or not, isotropic or not, or vary over time or not.
The sensitive objects can be static or moving (as in a game).
The pointer_entry, pointer_exit and validation events can be implemented entirely or partially, and other more complex events can be defined in the same way: distinction between pressing and releasing, <<drag-and-drop>> behaviour, etc.
An aspect of the disclosure provides a technique for implementing multimedia scenes, which penalises neither users equipped with a terminal having a pointer control, nor users equipped with a terminal not having one.
An aspect of the disclosure provides such a technique, which does not require a developer to develop several versions of the same scene, nor to implement complex development.
An aspect of the disclosure provides such a technique, which can be implemented on the majority of terminals, with or without an integrated pointer control, without any hardware modification, on both new terminals as well as already distributed terminals.
An aspect of the disclosure provides such a technique, which is not costly, whether in terms of processing time or in terms of memory capacity.
Although the present disclosure has been described with reference to one or more examples, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the disclosure and/or the appended claims.
Number: 0503048; Date: Mar. 2005; Country: FR; Kind: national.
This Application is a Section 371 National Stage Application of International Application No. PCT/EP2006/061061, filed Mar. 27, 2006 and published as WO 2006/103209 A1 on Oct. 5, 2006, not in English.
Filing Document: PCT/EP2006/061061; Filing Date: Mar. 27, 2006; Country: WO; Kind: 00; 371(c) Date: Feb. 7, 2008.