This disclosure is directed to a set of advanced video coding technologies. More specifically, the present disclosure is directed to the encoding and decoding of haptic experiences for multimedia presentation.
Haptic experiences have become a part of multimedia presentations. In applications where a multimedia presentation includes an aspect of haptic experience, haptic signals may be delivered to a device or wearable, and the user may feel the haptic sensations during use of the application, in coordination with the visual and/or audio media experience.
Recognizing the growing popularity of haptic experiences in multimedia presentations, the moving picture experts group (MPEG) has started working on a compression standard (both for MPEG-DASH and MPEG-I) for haptics, as well as on the carriage of the compressed haptics signaling in the ISO base media file format (ISOBMFF).
In many applications, while visual and audio tracks are continuous, the haptic effects are sparse, e.g., for short durations of media presentation, the haptic media effects need to be rendered along with audiovisual samples, but at other times, the haptic track is ‘quiet.’ The present ISOBMFF carriage of haptic signals does not address the quiet periods of a haptic track. Therefore, solutions addressing this problem are required.
According to embodiments, a method for decoding sparse haptic data may be provided. The method may include receiving a haptic track comprising more than one type of moving picture experts group (MPEG) immersive haptics stream (MIHS) unit; obtaining, from the haptic track, a first type of MIHS unit that comprises haptic information; obtaining, from the haptic track, a second type of MIHS unit, the second type of MIHS unit being an empty unit comprising only duration information; and determining a quiet period of the haptic track based on the duration information in the second type of MIHS unit.
According to embodiments, an apparatus for decoding sparse haptic data may be provided. The apparatus may include at least one memory configured to store program code; and at least one processor configured to read the program code and operate as instructed by the program code. The program code may include first receiving code configured to cause the at least one processor to receive a haptic track comprising more than one type of moving picture experts group (MPEG) immersive haptics stream (MIHS) unit; first obtaining code configured to cause the at least one processor to obtain, from the haptic track, a first type of MIHS unit that comprises haptic information; second obtaining code configured to cause the at least one processor to obtain, from the haptic track, a second type of MIHS unit, the second type of MIHS unit being an empty unit comprising only duration information; and first determining code configured to cause the at least one processor to determine a quiet period of the haptic track based on the duration information in the second type of MIHS unit.
According to embodiments, a non-transitory computer-readable medium storing computer instructions may be provided. The instructions may include one or more instructions that, when executed by one or more processors of a device for decoding sparse haptic data, cause the one or more processors to receive a haptic track comprising more than one type of moving picture experts group (MPEG) immersive haptics stream (MIHS) unit; obtain, from the haptic track, a first type of MIHS unit that comprises haptic information; obtain, from the haptic track, a second type of MIHS unit, the second type of MIHS unit being an empty unit comprising only duration information; and determine a quiet period of the haptic track based on the duration information in the second type of MIHS unit.
Further features, the nature, and various advantages of the disclosed subject matter will be more apparent from the following detailed description and the accompanying drawings in which:
According to an aspect of the present disclosure, methods, systems, and non-transitory storage media for encoding, decoding, and carriage of sparse haptic data are provided.
With reference to
In
As illustrated in
The video source 201 can create, for example, a stream 202 that includes a 3D mesh and metadata associated with the 3D mesh. The video source 201 may include, for example, 3D sensors (e.g., depth sensors) or 3D imaging technology (e.g., digital camera(s)), and a computing device that is configured to generate the 3D mesh using the data received from the 3D sensors or the 3D imaging technology. The sample stream 202, which may have a high data volume when compared to encoded video bitstreams, can be processed by the encoder 203 coupled to the video source 201. The encoder 203 can include hardware, software, or a combination thereof to enable or implement aspects of the disclosed subject matter as described in more detail below. The encoder 203 may also generate an encoded video bitstream 204. The encoded video bitstream 204, which may have a lower data volume when compared to the uncompressed stream 202, can be stored on a streaming server 205 for future use. One or more streaming clients 206 can access the streaming server 205 to retrieve video bitstreams 209 that may be copies of the encoded video bitstream 204.
The streaming clients 206 can include a video decoder 210 and a display 212. The video decoder 210 can, for example, decode video bitstream 209, which is an incoming copy of the encoded video bitstream 204, and create an outgoing video sample stream 211 that can be rendered on the display 212 or another rendering device (not depicted). In some streaming systems, the video bitstreams 204, 209 can be encoded according to certain video coding/compression standards.
With reference to
As shown in
According to an embodiment, the haptic encoder 300 may process the two types of input files differently. For descriptive content, the haptic encoder 300 may analyze the input semantically to transcode (if necessary) the data into the proposed coded representation.
According to an embodiment, the .ohm metadata input file may include a description of the haptic system and setup. In particular, it may include the name of each associated haptic file (either descriptive or PCM) along with a description of the signals. It also provides a mapping between each channel of the signals and the targeted body parts on the user's body. For the .ohm metadata input file, the haptic encoder 300 performs metadata extraction by retrieving each associated haptic file from its URI and encoding it based on its type, and by extracting the metadata from the .ohm file and mapping it to the metadata information of the data model.
According to an embodiment, descriptive haptics files (e.g., .ivs, .ahap, and .hjif) may be encoded through a simple process. The haptic encoder 300 first identifies the input format. If the input format is a .hjif file, then no transcoding is necessary; the file can be further edited, compressed into the binary format, and eventually packetized into an MIHS stream. If .ahap or .ivs input files are used, transcoding is necessary. The haptic encoder 300 first analyzes the input file information semantically and transcodes it into a selected data model. After transcoding, the data can be exported as a .hjif file, a .hmpg binary file, or an MIHS stream.
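As a non-limiting illustration, the format-dispatch step described above may be sketched in Python as follows; the transcoder functions are hypothetical placeholders for the semantic analysis and transcoding steps and are not part of any reference encoder:

```python
import json
from pathlib import Path


def transcode_ahap(path: str) -> dict:
    """Hypothetical placeholder: semantically analyze an .ahap file
    and transcode it into the selected data model."""
    raise NotImplementedError


def transcode_ivs(path: str) -> dict:
    """Hypothetical placeholder: semantically analyze an .ivs file
    and transcode it into the selected data model."""
    raise NotImplementedError


def encode_descriptive(path: str) -> dict:
    """Dispatch a descriptive haptic input file by its format."""
    suffix = Path(path).suffix.lower()
    if suffix == ".hjif":
        # .hjif is the JSON interchange format: no transcoding needed.
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    if suffix == ".ahap":
        return transcode_ahap(path)
    if suffix == ".ivs":
        return transcode_ivs(path)
    raise ValueError(f"unsupported descriptive haptic format: {suffix}")
```

The returned data model could then be exported as a .hjif file, compressed into the .hmpg binary format, or packetized into an MIHS stream, as described above.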
According to an embodiment, the haptic encoder 300 may perform signal analysis to interpret the signal structure of the .wav files and convert it into the proposed encoded representation. For waveform PCM content, the signal analysis process may be split into two sub-processes by the haptic encoder 300. After performing a frequency band decomposition on the signal, at a first sub-process, low frequencies may be encoded using a keyframe extraction process. The low frequency band(s) may then be reconstructed, and the error between this signal and the original low frequency signal may be computed. This residual signal may then be added to the original high frequency band(s) before encoding using wavelet transforms, the encoding using wavelet transforms being the second sub-process. According to an embodiment, when several low frequency bands are used, the residual errors from all the low frequency bands are added to the high frequency band before encoding. In embodiments where several high frequency bands are used, the residual errors from the low frequency band(s) are added to the first high frequency band before encoding.
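For illustration only, the following Python sketch approximates the two sub-processes described above using NumPy and SciPy; the crossover frequency, the fixed-interval keyframe picker, and the linear reconstruction are assumptions standing in for the codec's actual analysis:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt


def extract_keyframes(band, fs, step_ms=10.0):
    # Crude stand-in for keyframe extraction: sample the low band at a
    # fixed interval. The real encoder analyzes the content in the time
    # domain to select perceptually relevant keyframes.
    step = max(1, int(fs * step_ms / 1000.0))
    return [(i / fs, float(band[i])) for i in range(0, len(band), step)]


def analyze_pcm(signal, fs, crossover_hz=72.5):
    # 1. Frequency band decomposition into one low and one high band
    #    (the crossover frequency here is an illustrative choice).
    sos = butter(4, crossover_hz, "lowpass", fs=fs, output="sos")
    low = sosfiltfilt(sos, signal)
    high = signal - low

    # 2. First sub-process: keyframe extraction on the low band.
    keyframes = extract_keyframes(low, fs)

    # 3. Reconstruct the low band from its keyframes and compute the
    #    residual error against the original low band.
    times = np.arange(len(signal)) / fs
    reconstructed = np.interp(times,
                              [t for t, _ in keyframes],
                              [a for _, a in keyframes])
    residual = low - reconstructed

    # 4. The residual is added to the high band before the second
    #    sub-process (wavelet-based encoding).
    return keyframes, high + residual
```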
According to an embodiment, keyframe extraction includes taking the lower frequency band from the frequency band decomposition and analyzing its content in the time domain. According to an embodiment, wavelet processing may include taking the high frequency band from the frequency band decomposition and the low frequency residual, and splitting the combined signal into blocks of equal size. These signal blocks of equal size are then analyzed using a psychohaptic model. Lossy compression may be applied by wavelet transforming each block and quantizing it, aided by the psychohaptic model. In the end, each block is saved into a separate effect in a single band, which is done in the formatting. The binary compression may apply lossless compression using appropriate coding techniques, e.g., the set partitioning in hierarchical trees (SPIHT) algorithm and arithmetic coding (AC).
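A minimal sketch of the block-based wavelet stage follows, assuming the PyWavelets package; the wavelet family, block size, and uniform quantization step are illustrative assumptions, and the psychohaptic model and the lossless SPIHT/arithmetic-coding stage are omitted:

```python
import numpy as np
import pywt  # PyWavelets


def wavelet_encode_blocks(band, block_size=512, qstep=0.01):
    # Split the high band (plus the low-band residual) into blocks of
    # equal size, wavelet-transform each block, and quantize the
    # coefficients. A psychohaptic model would normally drive the
    # quantization; a fixed step stands in for it here.
    encoded = []
    for start in range(0, len(band) - block_size + 1, block_size):
        block = band[start:start + block_size]
        coeffs = pywt.wavedec(block, "db4", level=4)
        # Uniform quantization; each quantized block would then be
        # stored as a separate effect in a single band and handed to
        # the lossless stage (e.g., SPIHT + arithmetic coding).
        encoded.append([np.round(c / qstep).astype(np.int32)
                        for c in coeffs])
    return encoded
```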
As shown in
As shown in
As shown in
According to embodiments, the haptic experience defines the root of the hierarchical data model. It provides information on the date of the file and the version of the format; it describes the haptic experience; it lists the different avatars (i.e., body representations) used throughout the experience; and it defines all the haptic perceptions.
According to embodiments, haptic signals may be encoded on multiple channels. In some embodiments, a haptic channel may define a signal to be rendered at a specific body location with a dedicated actuator/device. Metadata stored at the channel level may include information such as the gain associated with the channel, the mixing weight, the desired body location of the haptic feedback and, optionally, the reference device and/or a direction. Additional information such as the desired sampling frequency or sample count may also be provided. Finally, the haptic data of a channel is contained in a set of haptic bands defined by their frequency range. A haptic band describes the haptic signal of a channel in a given frequency range. Bands are defined by a type and a sequential list of haptic effects, each containing a set of keyframes. For every type of haptic band, haptic effects may be defined with at least a position (temporal or spatial) and a type. Depending on the type of band and the type of effect, additional properties may be specified, including the phase, the base signal, a composition and a number of consecutive haptic keyframes describing the effect.
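As a non-limiting illustration, the hierarchical data model described above may be sketched with Python dataclasses; the field names approximate the description given here and are not the normative syntax:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Keyframe:
    position: float                      # temporal or spatial position
    amplitude: Optional[float] = None
    frequency: Optional[float] = None


@dataclass
class Effect:
    position: float                      # temporal or spatial position
    effect_type: str
    phase: Optional[float] = None
    base_signal: Optional[str] = None    # e.g., a sine base signal
    keyframes: List[Keyframe] = field(default_factory=list)


@dataclass
class Band:
    band_type: str
    lower_frequency: float               # frequency range of the band, Hz
    upper_frequency: float
    effects: List[Effect] = field(default_factory=list)


@dataclass
class Channel:
    gain: float = 1.0
    mixing_weight: float = 1.0
    body_location: str = ""              # targeted body part
    reference_device: Optional[str] = None
    direction: Optional[tuple] = None
    sampling_frequency: Optional[float] = None
    bands: List[Band] = field(default_factory=list)


@dataclass
class Perception:
    description: str
    channels: List[Channel] = field(default_factory=list)


@dataclass
class Experience:                        # root of the data model
    date: str
    version: str
    description: str
    avatars: List[str] = field(default_factory=list)
    perceptions: List[Perception] = field(default_factory=list)
```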
According to an embodiment, the haptic data hierarchy is defined in the present disclosure.
Haptic Channels
According to an embodiment, a self-contained stream format to transport MPEG-I haptic data may use a packetized approach and may include two levels of packetization: the MPEG-I haptic stream (MIHS) unit, which covers a duration of time and includes zero or more MIHS packets, and the MIHS packet, which includes metadata or haptic effect data. In embodiments, the MIHS unit may be referred to as a network abstraction layer unit associated with the haptic data. In embodiments, the MIHS unit may be referred to as an MIHS sample associated with the haptic data.
According to an embodiment, each MIHS unit covers a non-overlapping duration of haptic presentation time, i.e., it starts at the end of the previous MIHS unit and covers the duration of time defined by its duration field. The MIHS unit is followed by the next MIHS unit, unless it is the last MIHS unit of the haptic experience. All MIHS packets of an MIHS unit have the starting time and duration of the containing MIHS unit. MIHS units may be transmitted in a haptic track.
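For illustration, a minimal Python sketch of the two-level packetization and its timing rule follows; the field layout is an assumption and not the normative MIHS syntax:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class MIHSPacket:
    payload_type: str                    # metadata or haptic effect data
    payload: bytes = b""


@dataclass
class MIHSUnit:
    duration: int                        # in ticks of the haptic timescale
    packets: List[MIHSPacket] = field(default_factory=list)


def unit_start_times(units: List[MIHSUnit], t0: int = 0) -> List[int]:
    # Units cover non-overlapping durations: each unit starts at the
    # end of the previous one, so start times follow directly from
    # the duration fields.
    starts, t = [], t0
    for unit in units:
        starts.append(t)
        t += unit.duration
    return starts
```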
The binary distribution format is often encapsulated in the ISOBMFF file format for distribution. The binary haptic distribution format needs to be divided among the ISOBMFF samples, sometimes in the form of MIHS units. Each ISOBMFF sample and/or MIHS unit may cover information for a duration of time, and the samples are nonoverlapping in time. The present encapsulation proposal considers that every ISOBMFF sample contains binary haptic distribution data. However, as stated above, while visual and audio tracks are continuous, the haptic effects are sparse, e.g., for short durations of media presentation, the haptic media effects need to be rendered along with audiovisual samples, but at other times, the haptic track is ‘quiet.’ Therefore, there is a need for additional methods to signal and/or process the “quiet” parts of the haptic track more efficiently.
According to an aspect of the present disclosure, one or more empty haptic samples (also known as MIHS units) are added to the ISOBMFF haptics so that the durations of time when there is no haptic signal to be represented in ISOBMFF haptics tracks are signaled and decoded more efficiently.
Each ISOBMFF haptic track consists of one or more haptic samples (e.g., MIHS units). Each sample defines the haptic signal for a duration of time. According to an embodiment of the disclosure, new haptic samples (also known as empty MIHS units) may be added that are distinct from existing haptic samples and that indicate that the haptic track, for that duration, is empty, i.e., there are no haptic effects to render.
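Continuing the sketch above, an empty MIHS unit can be modeled as a unit whose packet list is empty, so that quiet intervals fall out of the duration fields alone; this is an illustrative reading, not the normative sample syntax:

```python
def quiet_periods(units, t0=0):
    # Scan the track for empty units (duration only, no packets) and
    # return (start, end) intervals during which no haptic effects
    # need to be rendered or retrieved.
    periods, t = [], t0
    for unit in units:
        if not unit.packets:
            periods.append((t, t + unit.duration))
        t += unit.duration
    return periods
```

A player may use these intervals to skip retrieval requests and to jump directly to the next meaningful unit.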
Such indication of “quietness” of the haptic track improves efficiency by preventing unnecessary retrieval of haptic packets during the “quiet” durations. Embodiments of the present disclosure also help with random access of the haptic track because navigation to the next meaningful packet is easier.
As shown in
One of the advantages of the proposed disclosure is that knowledge of quiet periods in the haptic track may enable fragmentation and re-fragmentation of the haptic tracks. Furthermore, knowing the quiet periods in the haptic track makes retrieval and delivery of the haptic signals more efficient, e.g., no requests for the delivery of haptic data need to be made during the quiet durations.
In one example, the start of the experience may be defined as a common anchor point for all effects in the stream. For instance, the first effect can have position 0 and all other effects' positions can be defined relative to the first effect's position. In the case of ISOBMFF carriage of haptic channels, however, an effect's position should be relative to the start time of the sample carrying the effect. Thus, when a haptic channel is carried in ISOBMFF, its effects' positions need to be adjusted. Likewise, after parsing the ISOBMFF, each effect's position needs to be readjusted by adding the sample's start time before sending it to the haptic decoder.
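As a simple illustration of the adjustments described above, assuming effect positions and sample start times are expressed in the same time unit:

```python
def to_sample_relative(effect_position, sample_start_time):
    # On ISOBMFF encapsulation: rebase an experience-anchored effect
    # position to the start time of the sample carrying the effect.
    return effect_position - sample_start_time


def to_experience_relative(effect_position, sample_start_time):
    # After parsing the ISOBMFF sample: readjust by adding the sample's
    # start time before sending the effect to the haptic decoder.
    return effect_position + sample_start_time
```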
In another or the same example, two types of ISOBMFF tracks may be provided: first, tracks that use the track start time as the anchor for an effect's position, and second, tracks in which the sample start time is the anchor.
In another or the same example, a sample structure in the haptics elementary stream may be defined. In this case, a haptic channel may consist of one or more samples/frames and the timing of each effect in each sample/frame is relative to that sample.
According to an embodiment of the present disclosure, a method or process is provided for encoding, decoding, or carriage of compressed haptic signals in ISOBMFF tracks, wherein a track may consist of two types of haptic samples: one with haptic binary information and one with empty haptic information. The empty haptic information may only include the duration of the sample; therefore, representing the quiet periods of the haptic track in this way allows for more compact tracks and allows manipulation of the track at the file format level, since the quiet periods of the track are marked. In embodiments, a parser and file format packager may use this information for file format manipulation without the need to parse the haptics elementary streams.
At operation 605, a haptic track may be received comprising more than one type of moving picture experts group (MPEG) immersive haptics stream (MIHS) unit. At operation 610, a first type of MIHS unit that comprises haptic information may be obtained from the haptic track, with the haptic information in the first type of MIHS unit being in a binary format.
At operation 615, a second type of MIHS unit is obtained from the haptic track, with the second type of MIHS unit being an empty unit comprising only duration information. In embodiments, the first type of MIHS unit and the second type of MIHS unit are signaled in high-level syntax of the stream.
In some embodiments, the duration information in the second type of MIHS unit indicates a length of time during which there are no haptic effects and the second type of MIHS unit does not comprise the haptic information. Therefore, the second type of MIHS unit is a representation of quiet periods of the haptic track. In embodiments, the quiet period of the haptic track is used for fragmentation of the haptic track.
In embodiments, based on determining the quiet period, no delivery of MIHS units is requested during the quiet period.
At operation 620, a quiet period of the haptic track may be determined based on the duration information in the second type of MIHS unit.
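For illustration only, operations 605 through 620 may be sketched as follows, reusing the MIHSUnit model from the earlier sketch; decode_haptic_payload is a hypothetical stand-in for the MIHS decoder:

```python
def decode_haptic_payload(packets):
    """Hypothetical stand-in for decoding the binary haptic
    information carried in the packets of a first-type MIHS unit."""
    return [packet.payload for packet in packets]


def decode_sparse_haptic_track(units):
    timeline, t = [], 0
    for unit in units:                   # operation 605: received track
        if unit.packets:                 # operation 610: first type,
            effects = decode_haptic_payload(unit.packets)  # binary data
            timeline.append(("effects", t, t + unit.duration, effects))
        else:                            # operation 615: second type
            # Operation 620: the empty unit's duration information
            # determines a quiet period of the haptic track.
            timeline.append(("quiet", t, t + unit.duration, None))
        t += unit.duration
    return timeline
```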
A person of skill in the art understands that the techniques described herein may be implemented on both the encoder side and the decoder side. The techniques, described above, can be implemented as computer software using computer-readable instructions and physically stored in one or more computer-readable media. For example,
The computer software can be coded using any suitable machine code or computer language that may be subject to assembly, compilation, linking, or like mechanisms to create code including instructions that can be executed directly, or through interpretation, micro-code execution, and the like, by computer central processing units (CPUs), Graphics Processing Units (GPUs), and the like.
The instructions can be executed on various types of computers or components thereof, including, for example, personal computers, tablet computers, servers, smartphones, gaming devices, internet of things devices, and the like.
The components shown in
Computer system 700 may include certain human interface input devices. Such a human interface input device may be responsive to input by one or more human users through, for example, tactile input (such as: keystrokes, swipes, data glove movements), audio input (such as: voice, clapping), visual input (such as: gestures), olfactory input (not depicted). The human interface devices can also be used to capture certain media not necessarily directly related to conscious input by a human, such as audio (such as: speech, music, ambient sound), images (such as: scanned images, photographic images obtained from a still image camera), video (such as: two-dimensional video, three-dimensional video including stereoscopic video).
Input human interface devices may include one or more of (only one of each depicted): keyboard 701, mouse 702, trackpad 703, touch screen 710, data-glove, joystick 705, microphone 706, scanner 707, camera 708.
Computer system 700 may also include certain human interface output devices. Such human interface output devices may stimulate the senses of one or more human users through, for example, tactile output, sound, light, and smell/taste. Such human interface output devices may include tactile output devices (for example tactile feedback by the touch-screen 710, data glove, or joystick 705, but there can also be tactile feedback devices that do not serve as input devices). For example, such devices may be audio output devices (such as: speakers 709, headphones (not depicted)), visual output devices (such as screens 710 to include CRT screens, LCD screens, plasma screens, OLED screens, each with or without touch-screen input capability, each with or without tactile feedback capability—some of which may be capable of outputting two-dimensional visual output or more than three-dimensional output through means such as stereographic output; virtual-reality glasses (not depicted), holographic displays and smoke tanks (not depicted)), and printers (not depicted).
Computer system 700 can also include human accessible storage devices and their associated media such as optical media including CD/DVD ROM/RW 720 with CD/DVD or the like media 721, thumb-drive 722, removable hard drive or solid state drive 723, legacy magnetic media such as tape and floppy disc (not depicted), specialized ROM/ASIC/PLD based devices such as security dongles (not depicted), and the like.
Those skilled in the art should also understand that the term “computer readable media” as used in connection with the presently disclosed subject matter does not encompass transmission media, carrier waves, or other transitory signals.
Computer system 700 can also include an interface to one or more communication networks. Networks can, for example, be wireless, wireline, or optical. Networks can further be local, wide-area, metropolitan, vehicular and industrial, real-time, delay-tolerant, and so on. Examples of networks include local area networks such as Ethernet, wireless LANs, cellular networks to include GSM, 3G, 4G, 5G, LTE and the like, TV wireline or wireless wide area digital networks to include cable TV, satellite TV, and terrestrial broadcast TV, vehicular and industrial to include CANBus, and so forth. Certain networks commonly require external network interface adapters that attach to certain general purpose data ports or peripheral buses 749 (such as, for example, USB ports of the computer system 700); others are commonly integrated into the core of the computer system 700 by attachment to a system bus as described below (for example, an Ethernet interface into a PC computer system or a cellular network interface into a smartphone computer system). Using any of these networks, computer system 700 can communicate with other entities. Such communication can be uni-directional, receive only (for example, broadcast TV), uni-directional send-only (for example CANbus to certain CANbus devices), or bi-directional, for example to other computer systems using local or wide area digital networks. Such communication can include communication to a cloud computing environment 755. Certain protocols and protocol stacks can be used on each of those networks and network interfaces as described above.
Aforementioned human interface devices, human-accessible storage devices, and network interfaces 754 can be attached to a core 740 of the computer system 700.
The core 740 can include one or more Central Processing Units (CPU) 741, Graphics Processing Units (GPU) 742, specialized programmable processing units in the form of Field Programmable Gate Arrays (FPGA) 743, hardware accelerators for certain tasks 744, and so forth. These devices, along with Read-only memory (ROM) 745, Random-access memory (RAM) 746, internal mass storage such as internal non-user accessible hard drives, SSDs, and the like 747, may be connected through a system bus 748. In some computer systems, the system bus 748 can be accessible in the form of one or more physical plugs to enable extensions by additional CPUs, GPU, and the like. The peripheral devices can be attached either directly to the core's system bus 748, or through a peripheral bus 749. Architectures for a peripheral bus include PCI, USB, and the like. A graphics adapter 750 may be included in the core 740.
CPUs 741, GPUs 742, FPGAs 743, and accelerators 744 can execute certain instructions that, in combination, can make up the aforementioned computer code. That computer code can be stored in ROM 745 or RAM 746. Transitional data can also be stored in RAM 746, whereas permanent data can be stored, for example, in the internal mass storage 747. Fast storage and retrieval from any of the memory devices can be enabled through the use of cache memory that can be closely associated with one or more CPU 741, GPU 742, mass storage 747, ROM 745, RAM 746, and the like.
The computer readable media can have computer code thereon for performing various computer-implemented operations. The media and computer code can be those specially designed and constructed for the purposes of the present disclosure, or they can be of the kind well known and available to those having skill in the computer software arts.
As an example and not by way of limitation, a computer system having the architecture of computer system 700, and specifically the core 740, can provide functionality as a result of processor(s) (including CPUs, GPUs, FPGA, accelerators, and the like) executing software embodied in one or more tangible, computer-readable media. Such computer-readable media can be media associated with user-accessible mass storage as introduced above, as well as certain storage of the core 740 that is of a non-transitory nature, such as core-internal mass storage 747 or ROM 745. The software implementing various embodiments of the present disclosure can be stored in such devices and executed by core 740. A computer-readable medium can include one or more memory devices or chips, according to particular needs. The software can cause the core 740 and specifically the processors therein (including CPU, GPU, FPGA, and the like) to execute particular processes or particular parts of particular processes described herein, including defining data structures stored in RAM 746 and modifying such data structures according to the processes defined by the software. In addition or as an alternative, the computer system can provide functionality as a result of logic hardwired or otherwise embodied in a circuit (for example: accelerator 744), which can operate in place of or together with software to execute particular processes or particular parts of particular processes described herein. Reference to software can encompass logic, and vice versa, where appropriate. Reference to computer-readable media can encompass a circuit (such as an integrated circuit (IC)) storing software for execution, a circuit embodying logic for execution, or both, where appropriate. The present disclosure encompasses any suitable combination of hardware and software.
While this disclosure has described several non-limiting embodiments, there are alterations, permutations, and various substitute equivalents, which fall within the scope of the disclosure. It will thus be appreciated that those skilled in the art will be able to devise numerous systems and methods which, although not explicitly shown or described herein, embody the principles of the disclosure and are thus within the spirit and scope thereof.
This application claims priority from U.S. Provisional Application No. 63/416,790, filed on Oct. 17, 2022, the disclosure of which is incorporated herein by reference in its entirety.