This application claims the benefit under 35 U.S.C. §119(a) of Korean Patent Application Nos. 10-2013-0010058, filed on Jan. 29, 2013, and 10-2013-0095652, filed on Aug. 12, 2013, in the Korean Intellectual Property Office, the entire disclosures of which are incorporated herein by reference for all purposes.
1. Field
The following description relates to an augmented reality (AR) technology.
2. Description of Related Art
Augmented broadcasting, as an augmented reality (AR) service, is one example of an enhanced broadcasting service that provides viewers with a vivid sense of reality by smoothly blending augmented content into the broadcast content, unlike traditional digital TV broadcasting services in which content is transmitted to viewers from broadcasting service providers in a unidirectional manner. For augmented broadcasting, a receiver terminal, such as a digital TV or a mobile device, may need to set a particular area in a scene of a broadcast program as an augmentation region and obtain augmented content for the augmented broadcast program. The broadcasting service provider transmits a broadcast program and the relevant augmentation region information to the receiver terminal. The receiver terminal uses the transmitted augmentation region information to obtain the augmented content associated with an augmentation channel selected by the viewer, and outputs the obtained augmented content on the augmentation region.
An AR service is generally an overlay of augmented content on an image captured by a camera equipped in a receiver terminal. For example, when a user runs an AR application and activates the camera in a mobile device in order to find the location of a destination, the mobile device identifies the user's current location and orientation based on data obtained from a global positioning system (GPS) sensor, a compass sensor, a gyro sensor, or the like, and displays the direction of the destination on the image captured by the camera. However, such a general AR service simply overlays augmented content on a screen and allows only limited interaction between the augmented content and the user.
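By way of illustration only, the direction overlay described above may be computed roughly as in the following sketch; the function names, the great-circle bearing formula, and the use of a compass heading are assumptions made for this example and are not part of any particular AR service.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    # Initial bearing from the user's current location to the destination,
    # in degrees clockwise from true north.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def arrow_angle_deg(bearing, compass_heading):
    # Angle at which to draw the destination arrow relative to the direction
    # the camera is facing, normalized to the range [-180, 180).
    return (bearing - compass_heading + 180.0) % 360.0 - 180.0
```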
The following description relates to an OutputActuator node which is capable of providing a realistic augmented reality (AR) service through an interaction between an augmented object and a user, a Moving Picture Experts Group (MPEG) terminal with the OutputActuator node, and a method for controlling an actuator using the OutputActuator node.
In one general aspect, there is provided an OutputActuator node including: an enabled field indicating whether the OutputActuator node is activated or not; a url field designating an actuator to which a command is delivered for control of the actuator; and an eventName field containing a command list for operating the designated actuator.
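A minimal sketch of the OutputActuator node's fields is given below; the Python representation and the concrete field types are assumptions made for illustration and are not part of the node definition itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OutputActuatorNode:
    enabled: bool = True                                # whether the node is activated
    url: str = ""                                       # designates the actuator to which a command is delivered
    eventName: List[str] = field(default_factory=list)  # command list for operating the designated actuator
```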
In another general aspect, there is provided a Moving Picture Experts Group (MPEG) terminal including: an OutputActuator node configured to deliver data obtained from a scene to a target actuator by designating the target actuator, generating a command list for operating the target actuator, and transmitting the command to the target actuator.
In yet another general aspect, there is provided a method of controlling an actuator using an OutputActuator node, the method including: designating a target actuator to which a command for control is delivered; generating a command list for operating the target actuator; and storing information about the target actuator and the generated command list in a scene descriptor.
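The steps of this method may be sketched as follows, reusing the OutputActuatorNode sketch above; the scene-descriptor interface (add_node) is a hypothetical API assumed only for this example.

```python
def configure_output_actuator(scene_descriptor, actuator_url, commands):
    # Designate the target actuator, generate its command list, and store
    # both in the scene descriptor.
    node = OutputActuatorNode(enabled=True, url=actuator_url, eventName=list(commands))
    scene_descriptor.add_node(node)  # assumed scene-descriptor interface
    return node
```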
Other features and aspects may be apparent from the following detailed description, the drawings, and the claims.
Throughout the drawings and the detailed description, unless otherwise described, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. The relative size and depiction of these elements may be exaggerated for clarity, illustration, and convenience.
Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. The present disclosure may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these exemplary embodiments are provided so that the present disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
An augmented reality (AR) service is typically an overlay of augmented content on a screen, and interaction between a user and the augmented content can change only the objects on the screen. If sensory information could be utilized in the interaction between the user and the augmented content, and a change in the augmented content could control an actual actuator, it would be possible to provide more realistic AR services to the user. In exemplary embodiments of the present invention, a node capable of controlling an actuator is added to a scene descriptor of Moving Picture Experts Group-4 (MPEG-4), for example, the binary format for scenes (BIFS), thereby enabling control of an actual actuator, so that the actuator can be included in a scene configuration in association with the existing scene descriptor.
In MPEG-4 BIFS, the configuration of a screen to be displayed to a viewer is referred to as a scene, a concept that was not present in MPEG-1 and MPEG-2, which deal only with standardized video encoding schemes. In contrast, the MPEG-4 scheme can encode individual objects, handling not only compression-encoded video but also specific objects generated by designating parameters. In addition, in the MPEG-4 scheme, a scene that would constitute a single MPEG-1 or MPEG-2 video can be replaced by a scene into which a plurality of objects are combined, and an MPEG-4 system therefore requires an element that describes the scene in order to specify the display methods and properties of those objects. A scene refers to one displayed image containing various media objects, such as still images, text, moving pictures, audio, and the like. A scene descriptor is thus required to indicate the spatial positions of and temporal relationships among these objects. MPEG-4 standardizes this scene descriptor as BIFS.
The basic elements of BIFS are nodes. A group of nodes makes scene description feasible, and each node represents an object in the scene spatially and temporally. A node is assigned properties and environment variables through its constituent elements, each of which is referred to as a “field.” In addition, a field provides a handle for processing an event, such as a mouse click, in association with sensor and route nodes.
In exemplary embodiments of the present invention, there is provided a method for allowing a user to have a realistic experience with an augmented object and to interact with the augmented object, rather than simply overlaying the augmented object on a screen, in an effort to provide a more realistic AR service. For example, when a viewer pets a puppy presented as an augmented object, the viewer should be able to feel the puppy with his or her hand, and the augmented puppy may react by wagging its tail. To this end, according to the exemplary embodiments of the present invention, there are provided a sensor for detecting a user's location, a method for defining a reaction of an augmented object in accordance with the detected location, a method for creating a control command to control a haptic actuator that provides the realistic feeling of petting the puppy, and a method for delivering the control command to the haptic actuator, as illustrated in the sketch below.
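The following sketch shows, at a high level, how such an interaction might be wired together; the region test, the animation call, the command string, and the send callable are hypothetical stand-ins and not part of any standardized interface.

```python
def on_hand_detected(hand_position, puppy_region, haptic_node, scene, send):
    # If the detected hand position falls inside the augmented puppy's region,
    # make the puppy react in the scene and deliver a tactile command to the
    # haptic actuator through the OutputActuator node.
    if puppy_region.contains(hand_position):
        scene.play_animation("wag_tail")                  # reaction of the augmented object
        haptic_node.eventName = ["tactileIntensity=0.6"]  # update the node's command list
        send(haptic_node.url, haptic_node.eventName[0])   # deliver the command to the actuator
```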
Hereinafter, a method and apparatus for controlling an actuator in scene configuration information by adding an OutputActuator node, which controls an actuator, to MPEG-4 BIFS will be described with reference to the accompanying drawings. Further, a method and apparatus for providing an AR service that enables five-sense interaction with an augmented object, by associating an OutputActuator node with an InputSensor node and controlling the actuator based on sensory information obtained by the sensor, will also be described.
Referring to
Referring to
Each field constituting the OutputActuator node may be described by Table 2 as shown below.
Referring to
In one example, message standards may be defined as below, in an effort to deliver a command to an actuator.
Referring to
Referring to
Referring to
In response to receiving a plurality of events, the OutputActuator nodes 400-1, 400-2, and 400-3 generate DDFs from the received events and transmit the generated DDFs to the respective MPEG-V actuators 410-1, 410-2, and 410-3. In this case, each of the OutputActuator nodes 400-1, 400-2, and 400-3 may transmit a command in the form of a message to the corresponding MPEG-V actuator 410-1, 410-2, or 410-3, the message including a nodeID field representing an identifier of the OutputActuator node and a command field for transmission of the command to the target MPEG-V actuator. A compositor 42 combines and arranges media objects on a screen 420 according to the scene descriptor.
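A minimal sketch of such a message is shown below; the nodeID and command fields follow the description above, while the JSON encoding and the function name are assumptions made only for illustration.

```python
import json

def build_actuator_message(node_id, command):
    # Message carrying the identifier of the OutputActuator node (nodeID) and
    # the command to be delivered to the target MPEG-V actuator (command).
    return json.dumps({"nodeID": node_id, "command": command})
```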
In the example shown in
Referring to
In the example shown in
Referring to
A definition of command standards for each actuator type that ensure compatibility between BIFS and the actuator may be provided as shown in Tables 4 through 6 below.
Referring to Table 4, a light actuator receives input events from the OutputActuator node, one indicating a light intensity and another indicating a light color.
Referring to Table 5, a vibration actuator receives an input event that indicates vibration intensity from the OutputActuator node.
Referring to Table 6, a tactile actuator receives an input event that indicates a tactile intensity from the OutputActuator node. Any addition to the above command standards is possible according to a control parameter of an actuator, with reference to Tables 4 through 6.
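The per-actuator command parameters described above may be organized as in the following sketch; the parameter names and the dictionary-based lookup are illustrative assumptions rather than the standardized command formats of Tables 4 through 6.

```python
# Command parameters per actuator type, following the descriptions above.
ACTUATOR_COMMANDS = {
    "light":     {"lightIntensity", "lightColor"},
    "vibration": {"vibrationIntensity"},
    "tactile":   {"tactileIntensity"},
}

def make_command(actuator_type, **params):
    # Keep only the parameters defined for the given actuator type, so that
    # additional actuator types and parameters can be registered later.
    allowed = ACTUATOR_COMMANDS[actuator_type]
    return {name: value for name, value in params.items() if name in allowed}
```

For example, make_command("light", lightIntensity=0.8, lightColor="#FF0000") would keep both parameters, whereas a parameter not defined for the light actuator would simply be dropped.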
Referring to
In one example, the OutputActuator node may receive a command from an InputSensor node so as to control the actuator. The command is generated by the InputSensor node, which receives sensing information in DDF form from a sensor and transforms the sensing information into the command. In response to receiving the command from the InputSensor node, the OutputActuator node updates the command list for operating the target actuator and delivers the updated command to the actuator.
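This path from the InputSensor node to the actuator may be sketched as follows; the send callable again stands in for the actual delivery mechanism and is an assumption of this example.

```python
def on_input_sensor_command(output_node, command, send):
    # Update the node's command list with the command received from the
    # InputSensor node, then deliver the updated command to the target actuator.
    if not output_node.enabled:
        return
    output_node.eventName = [command]
    send(output_node.url, command)
```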
Referring to
A number of examples have been described above. Nevertheless, it should be understood that various modifications may be made. For example, suitable results may be achieved if the described techniques are performed in a different order and/or if components in a described system, architecture, device, or circuit are combined in a different manner and/or replaced or supplemented by other components or their equivalents. Accordingly, other implementations are within the scope of the following claims.