The invention relates to a system for controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content, said audiovisual content comprising an audio portion and a video portion, said audio portion comprising multiple audio channels.
The invention further relates to a method of controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content, said audiovisual content comprising an audio portion and a video portion, said audio portion comprising multiple audio channels.
The invention also relates to a computer program product enabling a computer system to perform such a method.
The experience of content, visual or auditory, can benefit immensely from a dynamic lighting system. An entertainment lighting system such as Hue Sync can dramatically alter a user's viewing experience by rendering light colors that are extracted from a scene in real time or scripted offline. In addition, the accompanying audio of the content may be taken into account; e.g., the intensity of the audio may be used to modulate the rendered light effects.
For example, US 2010/265414 A1 discloses that a scene accompanied by high intensity audio may be rendered with higher intensity light effects than the same scene accompanied by low intensity audio. US 2010/265414 A1 further discloses that video-based ambient lighting characteristics intended for presentation on a left side of a display may be combined with audio-based ambient lighting data relating to a left channel, while video-based ambient lighting characteristics intended for presentation on a right side of the display may be combined with audio-based ambient lighting data relating to a right channel.
It is a drawback of US 2010/265414 A1 that the disclosed system does not take advantage of newer audio formats to create enhanced ambient lighting effects which are based both on a video portion and an audio portion of the audiovisual content.
It is a first object of the invention to provide a system, which is able to create enhanced ambient lighting effects which are based both on a video portion and an audio portion of the audiovisual content.
It is a second object of the invention to provide a method, which can be used to create enhanced ambient lighting effects which are based both on a video portion and an audio portion of the audiovisual content.
In a first aspect of the invention, a system for controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content, said audiovisual content comprising an audio portion and a video portion, said audio portion comprising multiple audio channels, comprises at least one input interface, at least one transmitter, and at least one processor configured to obtain said audiovisual content via said at least one input interface, determine a first characteristic of a first audio channel of said multiple audio channels or of an audio object comprised in said audio portion, said first characteristic being indicative of an audio source position, and associate, based on said first characteristic, said first audio channel or said audio object with a first lighting device of said plurality of lighting devices, wherein said associating is based on said audio source position relative to a position of said first lighting device.
The at least one processor is further configured to determine a second characteristic of a second audio channel of said multiple audio channels, associate, based on said second characteristic, said second audio channel with said first lighting device and with a second lighting device of said plurality of lighting devices, said first audio channel not being associated with said second lighting device, determine whether second audio content in said second audio channel meets one or more predetermined criteria, determine at least a chromaticity based on said video portion of said audiovisual content, determine a first light effect based on said determined chromaticity, wherein if said one or more predetermined criteria are not met, the light intensity of said first light effect is based on first audio content in said first audio channel or in said audio object, and if said one or more predetermined criteria are met, the light intensity of said first light effect is based on said second audio content in said second audio channel, and control, via said at least one transmitter, said first lighting device to render said first light effect.
More and more people are enjoying surround sound configurations at home and most TV programs and movies have surround sound audio nowadays. By taking the semantic properties of the audio channels and optionally audio objects into account when determining the light effects, it becomes possible to determine light effects which reflect what is happening in the audio channels and optionally audio objects and thereby realize a more immersive entertainment experience.
This is achieved by not simply using the intensity of audio content to modulate the light effects but by using the semantics of the audio channels and optionally audio objects to modulate the light effects. For example, the low frequency effects (subwoofer) channel may influence all connected lights such that bass-heavy effects like a loud explosion are not only heard but also seen throughout the entire entertainment area. This can be contrasted to audio effects on left/right audio channels (also referred to as side channels) which may be used to modulate light effects on only the left/right positioned lighting devices.
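By way of illustration only, the following Python sketch shows one possible semantic channel-to-device mapping of this kind; the device names, positions, and channel names are hypothetical and not mandated by the invention:

```python
# Sketch of semantic channel-to-device mapping (hypothetical names/positions).
# The LFE channel maps to every device; side channels map only to devices
# on the corresponding side of the room.

LIGHTING_DEVICES = {
    "left_front": (-1.0, 1.0),   # (x, y) position in the room; x < 0 = left
    "right_front": (1.0, 1.0),
    "left_rear": (-1.0, -1.0),
    "right_rear": (1.0, -1.0),
}

def associate_channel(channel_name: str) -> list[str]:
    """Return the lighting devices associated with an audio channel."""
    if channel_name == "lfe":
        # Bass-heavy effects are not localizable: map to all devices.
        return list(LIGHTING_DEVICES)
    side = -1.0 if "left" in channel_name else 1.0
    # Side channels map only to devices on the matching side of the room.
    return [name for name, (x, _) in LIGHTING_DEVICES.items()
            if (x < 0) == (side < 0)]

print(associate_channel("lfe"))         # all four devices
print(associate_channel("front_left"))  # left-side devices only
```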
With the above-described system, a first audio channel or audio object may be mapped to a first lighting device based on an audio source position associated with the first audio channel relative to a position of the first lighting device and a second audio channel may be mapped to multiple lighting devices including the first lighting device. A sound effect reproduced on the first audio channel or audio object can be located more precisely by the user than a sound effect reproduced on the second audio channel, e.g., because the second audio channel is a low frequency effects (subwoofer) channel.
By mapping the first audio channel or audio object to fewer lighting devices, e.g., a single lighting device, than the second audio channel, the light effects determined based on first audio content on the first audio channel or audio object reflect that the corresponding sound effect has a more specific location while the light effects determined based on second audio content on the second audio channel reflect that the corresponding sound effect has a less specific location.
By determining the chromaticity (and optionally the entire color) of the light effects based on at least the video portion of the audiovisual content and determining the light intensity of the light effects based on at least the audio portion of the audiovisual content, wherein the light intensity is based on the second audio content in the second audio channel at certain moments, e.g., in case of loud events, the best light experience may be obtained. Optionally, the lightness of the color is also determined based on the audio portion of the audiovisual content.
Said second characteristic might not be indicative of an audio source position. For example, said second characteristic may indicate whether said second audio channel is a low frequency effect channel. Alternatively, said first characteristic may be determined of said audio object and said second characteristic may be indicative of a desired speaker position for said second audio channel, for example.
Said at least one processor may be configured to determine the light intensity of said first light effect further based on said first audio content in said first audio channel or in said audio object if said one or more predetermined criteria are met. By always determining the light intensity based on the first audio content in the first audio channel, a less intense light experience may be obtained, which is preferred by certain users. For example, the highest light intensity may only be achieved if there is a loud event in both the first audio channel and the second audio channel. By always determining the light intensity based on the first audio content in the audio object, the audio objects are emphasized in the light effects.
Said at least one processor may be configured to determine the light intensity of said first light effect further based on said video portion of said audiovisual content. This may be used to ensure that the light intensity not only matches the audio portion but also the video portion. The user may be able to configure whether the light intensity of the light effects should be determined based on the video portion of the audiovisual content.
Said at least one processor may be configured to determine whether said second audio content in said second audio channel meets said one or more predetermined criteria by determining whether an audio intensity of said second audio content exceeds a threshold. Thus, the light intensity may be determined based on the second audio content in the second audio channel if there is a loud event in the second audio channel, e.g., a loud explosion.
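A minimal sketch of such a criterion, assuming a window of PCM samples of the second audio content is available as a list of floats (the function name and threshold value are illustrative assumptions):

```python
import math

def loud_event(samples: list[float], threshold: float = 0.5) -> bool:
    """Return True if the RMS intensity of the audio window exceeds the threshold."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms > threshold

# E.g., a window of LFE samples during an explosion:
print(loud_event([0.9, -0.8, 0.95, -0.85]))    # True
print(loud_event([0.05, -0.04, 0.03, -0.02]))  # False
```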
Said at least one processor may be configured to select a spatial region in a current frame of said video portion in dependence on whether said one or more predetermined criteria are met and determine at least said chromaticity from only said selected spatial region. Although the chromaticity (or entire color) of the light effects is preferably determined based on the video portion of the audiovisual content, the audio portion may still have some influence on the chromaticity (or entire color). For example, the color of the light effect for a lighting device positioned on the left may be extracted from a center region of a video frame if a loud event is detected in the low frequency effects channel and from a left region of the video frame otherwise.
Said first characteristic may be determined of said first audio channel and said at least one processor may be configured to determine whether an audio intensity of said first audio content exceeds a threshold, select a spatial region in a current frame of said video portion in dependence on whether said audio intensity of said first audio content exceeds said threshold, and determine at least said chromaticity from only said selected spatial region. From which spatial region of the video portion the chromaticity (or entire color) is extracted may not only depend on the second audio content in the second audio channel but also on the first audio content in the first audio channel. For example, when a loud event is detected in the second audio channel, the light intensity of the light effects for all lighting devices may be determined based on the second audio content in the second audio channel and the color of the light effect for a lighting device positioned on the left may be extracted from a center region of a video frame if the loud event is also detected in the first audio channel and from a left region of the video frame if not.
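The following sketch illustrates one possible form of this region selection, assuming per-channel loudness decisions have already been made (all names are illustrative):

```python
def select_region(device_side: str, first_loud: bool, second_loud: bool) -> str:
    """Select the frame region to extract color from for a given lighting device.

    A loud event on the second (e.g., LFE) channel pulls the extraction region
    toward the center only if the first (side) channel is also loud.
    """
    if second_loud and first_loud:
        return "center"
    return device_side  # "left" or "right"

print(select_region("left", first_loud=True, second_loud=True))   # center
print(select_region("left", first_loud=False, second_loud=True))  # left
```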
Said at least one processor may be configured to determine one or more speaker signals for a loudspeaker based on said audio portion of said audiovisual content. Said at least one processor may be configured to determine the light intensity of said first light effect based on said one or more speaker signals. Instead of determining the light intensity of the first light effect directly based on the audio portion of the audiovisual content, the light intensity may be determined based on the one or more speaker signals. This may be beneficial if the user's audio system is not able to recreate the audio source positions specified in the content close enough or if the user's audio system enhances the audio effects specified in the audiovisual content.
As an example of the latter, audio upmixing algorithms exist that create pseudo channels for traditional content that does not comprise those channels (e.g., Dolby Surround content, which does not contain height channels). An example of such an upmixing algorithm is DTS Virtual:X. Other audio analysis steps, e.g., determining the first and second characteristics and/or determining whether the second audio content in the second audio channel meets the one or more predetermined criteria, may also be performed based on the one or more speaker signals.
Alternatively, said at least one processor may be configured to determine the light intensity of said first light effect further based on information on available speakers and/or information on used three-dimensional audio virtualization. This may be beneficial if the user's audio system is not able to recreate the audio source positions specified in the content close enough.
In a second aspect of the invention, a method of controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content, said audiovisual content comprising an audio portion and a video portion, said audio portion comprising multiple audio channels, comprises obtaining said audiovisual content, determining a first characteristic of a first audio channel of said multiple audio channels or of an audio object comprised in said audio portion, said first characteristic being indicative of an audio source position, and associating, based on said first characteristic, said first audio channel or said audio object with a first lighting device of said plurality of lighting devices, wherein said associating is based on said audio source position relative to a position of said first lighting device.
Said method further comprises determining a second characteristic of a second audio channel of said multiple audio channels, associating, based on said second characteristic, said second audio channel with said first lighting device and with a second lighting device of said plurality of lighting devices, said first audio channel not being associated with said second lighting device, determining whether second audio content in said second audio channel meets one or more predetermined criteria, determining at least a chromaticity based on said video portion of said audiovisual content, determining a first light effect based on said determined chromaticity, wherein if said one or more predetermined criteria are not met, the light intensity of said first light effect is based on first audio content in said first audio channel, and if said one or more predetermined criteria are met, the light intensity of said first light effect is based on said second audio content in said second audio channel, and controlling said first lighting device to render said first light effect. Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.
Moreover, a computer program for carrying out the methods described herein, as well as a non-transitory computer-readable storage medium storing the computer program, are provided. A computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.
A non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations for controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content, said audiovisual content comprising an audio portion and a video portion, said audio portion comprising multiple audio channels.
The executable operations comprise obtaining said audiovisual content, determining a first characteristic of a first audio channel of said multiple audio channels or of an audio object comprised in said audio portion, said first characteristic being indicative of an audio source position, and associating, based on said first characteristic, said first audio channel or said audio object with a first lighting device of said plurality of lighting devices, wherein said associating is based on said audio source position relative to a position of said first lighting device.
The executable operations further comprise determining a second characteristic of a second audio channel of said multiple audio channels, associating, based on said second characteristic, said second audio channel with said first lighting device and with a second lighting device of said plurality of lighting devices, said first audio channel not being associated with said second lighting device, determining whether second audio content in said second audio channel meets one or more predetermined criteria, determining at least a chromaticity based on said video portion of said audiovisual content, determining a first light effect based on said determined chromaticity, wherein if said one or more predetermined criteria are not met, the light intensity of said first light effect is based on first audio content in said first audio channel, and if said one or more predetermined criteria are met, the light intensity of said first light effect is based on said second audio content in said second audio channel, and controlling said first lighting device to render said first light effect.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a device, a method or a computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit”, “module” or “system.” Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java™, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
These and other aspects of the invention are apparent from and will be further elucidated, by way of example, with reference to the drawings, in which:
Corresponding elements in the drawings are denoted by the same reference numeral.
In the example of
Alternatively or additionally, the HDMI module 1 may be able to communicate directly with the bridge 19, e.g., using Zigbee technology, and/or may be able to communicate with the bridge 19 via the Internet/cloud. Alternatively or additionally, the HDMI module 1 may be able to control lighting devices 11-15 without a bridge, e.g., directly via Wi-Fi, Bluetooth or Zigbee, or via the Internet/cloud.
The wireless LAN access point 21 is connected to the Internet 25. A media server 27 is also connected to the Internet 25. Media server 27 may be a server of a video-on-demand service such as Netflix, Amazon Prime Video, Hulu, HBO Max, Paramount+, Peacock, Disney+ or Apple TV+, for example. The HDMI module 1 is connected to a display device 23 and local media receivers 31 and 32 via HDMI. The local media receivers 31 and 32 may comprise one or more streaming or content generation devices, e.g., an Apple TV, Microsoft Xbox and/or Sony PlayStation, and/or one or more cable or satellite TV receivers. The display device 23 is connected to an audio system 35, e.g., via HDMI ARC. The audio system 35 is connected to speakers 36.
In an alternative embodiment, the system for controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content is a display device. In this alternative embodiment, HDMI module logic may be built into the display device. Media receivers 31 and 32 may then also be comprised in the display device, e.g., a smart TV.
The HDMI module 1 comprises a receiver 3, a transmitter 4, a processor 5, and memory 7. The processor 5 is configured to obtain the audiovisual content via receiver 3 from media receiver 31 or 32, determine a first characteristic of a first audio channel of the multiple audio channels or of an audio object comprised in the audio portion, and associate, based on the first characteristic, the first audio channel or the audio object with a first lighting device of the lighting devices 11-15. The first characteristic is indicative of an audio source position and the associating is based on the audio source position relative to a position of the first lighting device.
The processor 5 is further configured to determine a second characteristic of a second audio channel of the multiple audio channels and associate, based on the second characteristic, the second audio channel with the first lighting device and with a second lighting device of the lighting devices 11-15. The first audio channel is not associated with the second lighting device.
The processor 5 is further configured to determine whether second audio content in the second audio channel meets one or more predetermined criteria, determine at least a chromaticity based on the video portion of the audiovisual content, determine a first light effect based on the determined chromaticity, and control, via the transmitter 4, the first lighting device to render the first light effect.
If the one or more predetermined criteria are not met, the light intensity of the first light effect is based on first audio content in the first audio channel or in the audio object, and if the one or more predetermined criteria are met, the light intensity of the first light effect is based on the second audio content in the second audio channel.
In the embodiment of the HDMI module 1 shown in
The receiver 3 and the transmitter 4 may use one or more wired or wireless communication technologies such as Zigbee to communicate with the bridge 19 and HDMI to communicate with the display device 23 and with local media receivers 31 and 32, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in
In the embodiment of
The mobile device 51 comprises a receiver 53, a transmitter 54, a processor 55, a memory 57, and a display 59. The video portion is preferably displayed on the display device 23 but could also be displayed on display 59 of the mobile device 51. In the former case, the audio portion may be rendered on the display device 23 or on an audio system (not shown in
The processor 55 is configured to obtain the audiovisual content via receiver 53, determine a first characteristic of a first audio channel of the multiple audio channels or of an audio object comprised in the audio portion, and associate, based on the first characteristic, the first audio channel or the audio object with a first lighting device of the lighting devices 11-15. The first characteristic is indicative of an audio source position and the associating is based on the audio source position relative to a position of the first lighting device.
The processor 55 is further configured to determine a second characteristic of a second audio channel of the multiple audio channels and associate, based on the second characteristic, the second audio channel with the first lighting device and with a second lighting device of the lighting devices 11-15. The first audio channel is not associated with the second lighting device.
The processor 55 is further configured to determine whether second audio content in the second audio channel meets one or more predetermined criteria, determine at least a chromaticity based on the video portion of the audiovisual content, determine a first light effect based on the determined chromaticity, and control, via the transmitter 54, the first lighting device to render the first light effect.
If the one or more predetermined criteria are not met, the light intensity of the first light effect is based on first audio content in the first audio channel or in the audio object, and if the one or more predetermined criteria are met, the light intensity of the first light effect is based on the second audio content in the second audio channel.
In the embodiment of the mobile device 51 shown in
The receiver 53 and the transmitter 54 may use one or more wireless communication technologies such as Wi-Fi (IEEE 802.11) to communicate with the wireless LAN access point 21, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in
In the embodiment of
A first embodiment of the method of controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content is shown in
A step 101 comprises obtaining audiovisual content. A step 103 and a step 107 are performed after step 101. Step 103 comprises determining a first characteristic of a first audio channel of the multiple audio channels or of an audio object comprised in the audio portion. The first characteristic is indicative of an audio source position. In many audio formats, most of the audio channels are associated with a desired speaker position in the room, e.g., front left, front right, center. Some audio formats, like Dolby Atmos and DTS:X, support the use of audio objects. An audio object is normally associated with a position of the audio object in a virtual 3D space.
A step 105 comprises obtaining the position of the first lighting device, e.g., an x/y/z position. This may be done manually, but may also be automated, e.g., via RF-sensing. Step 105 further comprises associating, based on the first characteristic determined in step 103, the first audio channel or the audio object with a first lighting device of the plurality of lighting devices, wherein the associating is based on the audio source position relative to the position of the first lighting device.
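By way of example, the association of step 105 could be implemented by selecting the lighting device closest to the audio source position, as in this illustrative sketch (device identifiers and coordinates are assumptions):

```python
import math

def nearest_device(source_pos, device_positions):
    """Associate an audio source position with the closest lighting device.

    source_pos: (x, y, z) of the desired speaker position or audio object.
    device_positions: dict mapping device id to (x, y, z).
    """
    return min(device_positions,
               key=lambda d: math.dist(source_pos, device_positions[d]))

devices = {"left": (-2.0, 1.0, 0.5), "right": (2.0, 1.0, 0.5)}
print(nearest_device((-1.8, 0.9, 0.4), devices))  # "left"
```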
Step 107 comprises determining a second characteristic of a second audio channel of the multiple audio channels. A step 109 comprises associating, based on the second characteristic determined in step 107, the second audio channel with the first lighting device and with a second lighting device of the plurality of lighting devices. The first audio channel is not associated with the second lighting device. For example, a low frequency effects (abbreviated as LFE) channel may be associated with all lighting devices in a room or a left audio channel (at listener level or at height level) may be associated with multiple lighting devices on the left side of the room.
In the embodiment of
A step 111 is performed after steps 105 and 109 have been completed. Step 111 comprises determining whether second audio content in the second audio channel meets one or more predetermined criteria. For example, step 111 may comprise determining whether an audio intensity of the second audio content exceeds a threshold. Next, a step 113 comprises determining at least a chromaticity (and optionally the entire color) based on the video portion of the audiovisual content.
A step 115 comprises determining a first light effect based on the determined chromaticity and based on a light intensity. If the one or more predetermined criteria are not met, the light intensity of the first light effect is based on first audio content in the first audio channel. If the one or more predetermined criteria are met, the light intensity of the first light effect is based on the second audio content in the second audio channel.
Additionally, the light intensity of the first light effect may depend on the distance between a speaker (or a position of an audio object, e.g., rendered using multiple speakers) and the lighting device that renders the first light effect. In this case, if two lighting devices are located on the left, for example, but one is farther away from the left channel speaker(s), the adjustment for the lighting device farther away may be less than for the one that is closer. A step 117 comprises controlling the first lighting device to render the first light effect determined in step 115.
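A minimal sketch of such a distance-dependent adjustment, assuming positions are known as 3D coordinates (the falloff model is an illustrative choice, not prescribed by the invention):

```python
import math

def adjusted_intensity(base_intensity: float, speaker_pos, device_pos,
                       falloff: float = 0.5) -> float:
    """Scale an audio-driven intensity adjustment by speaker-to-device distance.

    Devices farther from the speaker (or audio object position) receive a
    smaller adjustment; `falloff` controls how quickly it decays.
    """
    distance = math.dist(speaker_pos, device_pos)
    return base_intensity / (1.0 + falloff * distance)

# Two left-side devices; the nearer one gets the larger adjustment.
print(adjusted_intensity(1.0, (-2, 1, 0), (-2, 1, 1)))   # ~0.67
print(adjusted_intensity(1.0, (-2, 1, 0), (-2, -3, 1)))  # ~0.33
```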
A second embodiment of the method of controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content is shown in
Step 101 comprises obtaining audiovisual content. In a step 121, a mapping from audio channel to lighting device is determined. First, a characteristic of each audio channel is determined. Certain audio channels are associated with an audio source position and in this case, the determined characteristic is indicative of this audio source position. For example, a front left channel in a Dolby Digital-encoded audio portion is associated with a desired front left speaker position. However, not all audio channels are associated with an audio source position. An example is the LFE (subwoofer) channel.
Step 121 comprises determining the positions, e.g., x/y/z positions, of all lighting devices of the plurality of lighting devices. This may be done manually, but may also be automated, e.g., via RF-sensing. The characteristic determined for the LFE audio channel (also referred to in this embodiment as the second audio channel) indicates that it is an LFE channel and is not indicative of an audio source position, because humans are not able to locate the source of low frequency sounds. The LFE audio channel is therefore associated with all lighting devices of the plurality of lighting devices in step 121.
The other audio channels (also referred to in this embodiment as the first audio channels) are associated with lighting devices based on the audio source position associated with the respective audio channel and the position of the respective lighting device. For example, the front left audio channel may be associated with a front left lighting device. The type and capability of a lighting device may influence how the mapping between audio channel and lighting device is made. Furthermore, the type and capability of a lighting device may also influence how the brightness and chromaticity are determined for this lighting device in step 123. For example, a point light source may be treated differently from a linear light source like a light strip.
Step 111 comprises determining whether second audio content in the second audio channel, i.e., the LFE audio channel, meets one or more predetermined criteria. In the embodiment of
Next, the light effects are determined for the plurality of lighting devices in step 123. A chromaticity is determined for each of the light effects based on the video portion of the audiovisual content. Moreover, a light intensity is determined for each of the light effects. The chromaticity is extracted from a certain spatial region of the video frames of the video portion. In this embodiment, this spatial region depends on the position of the lighting device. For example, a chromaticity for a light effect to be rendered by a lighting device on the left is extracted from a region on the left side of the video frames and a chromaticity for a light effect to be rendered by a lighting device on the right is extracted from a region on the right side of the video frames.
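By way of illustration, the chromaticity extraction could average the pixel colors in the selected region, as in this sketch (the frame representation is a simplifying assumption):

```python
def average_color(frame, region):
    """Average the RGB values in a rectangular region of a video frame.

    frame: 2D list of (r, g, b) tuples; region: (x0, y0, x1, y1) bounds.
    """
    x0, y0, x1, y1 = region
    pixels = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

# A 4x4 frame whose left half is red and right half is blue:
frame = [[(255, 0, 0)] * 2 + [(0, 0, 255)] * 2 for _ in range(4)]
print(average_color(frame, (0, 0, 2, 4)))  # (255, 0, 0) for a left device
print(average_color(frame, (2, 0, 4, 4)))  # (0, 0, 255) for a right device
```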
In the embodiment of
If it was determined in step 111 that the audio intensity in the second audio channel, i.e., the LFE audio channel, did not exceed the threshold, the light intensity of a light effect for a certain lighting device is determined based on the first audio content in the first audio channel associated with that lighting device. For example, the light intensity for a front left lighting device is then determined based on the audio content in the front left audio channel.
If it was determined in step 111 that the audio intensity in the second audio channel, i.e., the LFE audio channel, exceeded the threshold, the light intensity of each light effect of each lighting device is determined based on the second audio content in the second audio channel. In the embodiment of
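The per-device intensity decision of steps 111 and 123 could, for example, take the following form (a sketch with hypothetical channel names and threshold):

```python
def light_intensity(channel_loudness, device_channel, lfe_threshold=0.5):
    """Decide which audio content drives a device's light intensity.

    channel_loudness: dict of per-channel loudness in [0, 1].
    device_channel: the first audio channel associated with this device.
    """
    if channel_loudness.get("lfe", 0.0) > lfe_threshold:
        # Loud LFE event: every device follows the LFE channel.
        return channel_loudness["lfe"]
    # Otherwise each device follows its own associated first audio channel.
    return channel_loudness.get(device_channel, 0.0)

loudness = {"front_left": 0.3, "front_right": 0.2, "lfe": 0.9}
print(light_intensity(loudness, "front_left"))  # 0.9: LFE overrides
```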
Next, step 125 comprises controlling the lighting devices to render the light effects determined in step 123. Step 111 is repeated after step 125, and the method then proceeds as shown in
A third embodiment of the method of controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content is shown in
Step 101 comprises obtaining audiovisual content. In a step 141, a mapping from audio channel to lighting device is determined. Step 141 of
If the LFE audio channel is not associated with all lighting devices of the plurality of lighting devices, then one or more other audio channels are associated with multiple lighting devices. These one or more other audio channels are then treated as second audio channels. In this case, one or more second characteristics indicative of respective desired speaker positions are determined for the one or more second audio channels.
For example, a front left audio channel may be associated with two lighting devices on the left of the room. When the audio portion comprises both a front left audio channel and a surround left audio channel, the front left audio channel may be mapped to a front left lighting device and the surround left audio channel may be mapped to a rear left lighting device, or both audio channels may be mapped to both lighting devices. The same principle may be used for right audio channels and applies when the audio portion comprises rear audio channels and/or height audio channels. A left audio channel and a right audio channel are preferably not mapped to the same lighting device.
Optionally, both the LFE audio channel and the above-mentioned one or more other audio channels may be treated as second audio channels if the LFE audio channel is associated with all lighting devices of the plurality of lighting devices.
In a step 143, a mapping from audio object to lighting device is determined. For example, an audio object may represent a plane that flies from left to right and may be mapped to different lighting devices depending on its position. A first characteristic indicative of a current audio source position is determined of the audio object.
Step 111 comprises determining whether second audio content in the second audio channel meets one or more predetermined criteria. If there is more than one second audio channel, this may be done for each second audio channel. In the embodiment of
Next, the light effects are determined for the plurality of lighting devices in a step 145. A chromaticity is determined for each of the light effects based on the video portion of the audiovisual content, as described in relation to step 123 of
In the embodiment of
If the second audio channel is not (just) the LFE audio channel, then in step 145, it is determined for each respective lighting device which respective second audio channel has been associated with the respective lighting device, if any. If a lighting device was not associated with a second audio channel in step 141 and an audio object was not associated with the lighting device in step 143, then the light intensity is not adjusted. If a lighting device was not associated with a second audio channel in step 141 and an audio object was associated with the lighting device in step 143, then the light intensity is adjusted based only on the first audio content in the audio object.
If a lighting device was associated with a second audio channel in step 141 and it was determined in step 111 that the audio intensity in the second audio channel did not exceed the threshold, then the light intensity of a light effect for the lighting device is not adjusted based on the second audio content in this second audio channel. If the lighting device was associated with an audio object in step 143, then the light intensity is adjusted based on the first audio content in the audio object.
If a lighting device was associated with a second audio channel in step 141 and it was determined in step 111 that the audio intensity in the second audio channel exceeded the threshold, then the light intensity of a light effect for the lighting device is adjusted based on the second audio content in this second audio channel. In this case, if the lighting device has been associated with an audio object, then the light intensity is further adjusted based on the first audio content in the audio object in the embodiment of
Optionally, step 145 comprises determining the light intensities of the light effects further based on information on available speakers and/or information on used three-dimensional audio virtualization. For example, if a user only has front speakers and a center speaker and his audio system does not support three-dimensional audio virtualization, it may be better not to adjust the light intensity of a light effect rendered on a lighting device in the rear of a room based on audio content of a first audio channel or audio object with an audio source position in the rear of the room, as this would create a contradiction between the rendered light effects and the rendered audio.
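This gating could, for example, be expressed as follows (a sketch; the speaker naming convention and coordinate convention are assumptions):

```python
def adjust_rear_light(audio_source_pos, available_speakers,
                      has_3d_virtualization: bool) -> bool:
    """Decide whether a rear lighting device should follow rear audio content.

    If no rear speakers exist and no 3D virtualization is used, rear-positioned
    audio is not actually heard from the rear, so the light is not adjusted,
    avoiding a contradiction between rendered light and rendered audio.
    """
    _, y, _ = audio_source_pos
    source_in_rear = y < 0
    rear_audio_rendered = (any(s.startswith("rear") for s in available_speakers)
                           or has_3d_virtualization)
    return not source_in_rear or rear_audio_rendered

# Front speakers and center only, no virtualization: do not adjust.
print(adjust_rear_light((0, -1, 0),
                        {"front_left", "front_right", "center"}, False))  # False
```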
Step 125, described in relation to
Video content 81 comprises a video portion 84 and an audio portion 83. In this example, the audio portion 83 comprises six audio channels (5.1 audio channels to be precise): a surround left channel, a front left channel 86, a center channel, a front right channel, a surround right channel, and a low frequency effects channel 87. In an alternative example, the audio portion 83 may comprise more or fewer than six audio channels. The audio portion further comprises two audio objects: a first audio object 88 and a second audio object 89. In practice, an audio portion which comprises audio objects will comprise more than two audio objects.
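For illustration, such an audio portion could be represented by a structure along these lines (a sketch; the field layout is an assumption, not a defined format):

```python
from dataclasses import dataclass, field

@dataclass
class AudioObject:
    """An audio object with a position in a virtual 3D space (e.g., Dolby Atmos)."""
    samples: list
    position: tuple  # (x, y, z); may change over time

@dataclass
class AudioPortion:
    """A 5.1 audio portion plus optional audio objects (illustrative layout)."""
    channels: dict = field(default_factory=dict)  # channel name -> sample buffer
    objects: list = field(default_factory=list)

portion = AudioPortion(
    channels={"surround_left": [], "front_left": [], "center": [],
              "front_right": [], "surround_right": [], "lfe": []},
    objects=[AudioObject([], (0.0, 1.0, 0.5)), AudioObject([], (-1.0, 0.0, 0.2))],
)
print(len(portion.channels), "channels,", len(portion.objects), "objects")
```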
In a first usage example, the method of
In a second usage example, the method of
In this case, the light intensity of the light effect rendered by the lighting device nearest to the virtual source position 78, i.e., lighting device 14, may be even higher than that of the light effects rendered by the other lighting devices. When there is no loud effect on the low frequency effects channel 87, only the light intensity of the light effect rendered by the lighting device nearest to the virtual source position 78 is relatively high, and not the light intensities of the light effects rendered by the other lighting devices.
In a third usage example, the method of
When there is a loud effect on the combined left audio channel, the light intensity of the light effects rendered on lighting devices 12 and 14 is relatively high. In this case, the light intensity of the light effect rendered by the lighting device nearest to the virtual source position 78, i.e., lighting device 14, may be even higher than that of the light effect rendered by the other lighting device, i.e., lighting device 12. When there is no loud effect on the combined left audio channel, only the light intensity of the light effect rendered by the lighting device nearest to the virtual source position 78, i.e., lighting device 14, is relatively high, and not the light intensity of the light effect rendered by lighting device 12.
A fourth embodiment of the method of controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content is shown in
Step 161 comprises selecting a spatial region in a current frame of the video portion in dependence on whether the one or more predetermined criteria are met, as determined in step 111. Step 163 comprises extracting the chromaticity from (only) the spatial region selected in step 161. If an intensity is also extracted from the video portion, as described for example in relation to
As an example, when the loudness of the LFE channel does not exceed a threshold, a spatial region on the left of the video frames is selected for a front left lighting device. When the loudness of the LFE channel exceeds the threshold, a spatial region in the center of the video frames is selected for the front left lighting device.
A fifth embodiment of the method of controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content is shown in
Step 181 comprises determining whether an audio intensity of the second audio content in the second audio channel exceeds a threshold. Step 183 comprises determining whether an audio intensity of the first audio content in the first audio channel exceeds a further threshold, which may be the same as the threshold.
Step 185 comprises selecting a spatial region in a current frame of the video portion in dependence on whether the audio intensity of the first audio content exceeds the further threshold, as determined in step 183, and optionally also in dependence on whether the audio intensity of the second audio content exceeds the threshold, as determined in step 181. Step 163 comprises extracting the chromaticity from (only) the spatial region selected in step 185.
With the method of
A sixth embodiment of the method of controlling a plurality of lighting devices to render light effects accompanying a rendering of audiovisual content is shown in
Step 201 comprises determining one or more speaker signals for one or more loudspeakers based on the audio portion of the audiovisual content obtained in step 101. In a step 203, the first characteristic of the first audio channel or the audio object is determined based on the one or more speaker signals determined in step 201. In this case, the first characteristic is indicative of a speaker position associated with the first audio channel or the audio object.
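As an illustration of deriving speaker signals from the audio portion, the following sketch downmixes 5.1 channels to stereo speaker signals using ITU-R BS.775-style coefficients (the LFE channel is omitted here, as is common in such downmixes):

```python
import math

def downmix_5_1_to_stereo(ch):
    """Downmix 5.1 channel samples to stereo speaker signals.

    ch: dict mapping channel name to a list of samples of equal length.
    Center and surround channels are mixed in at -3 dB.
    """
    a = 1.0 / math.sqrt(2)  # -3 dB mix level
    left = [ch["front_left"][i] + a * ch["center"][i] + a * ch["surround_left"][i]
            for i in range(len(ch["front_left"]))]
    right = [ch["front_right"][i] + a * ch["center"][i] + a * ch["surround_right"][i]
             for i in range(len(ch["front_right"]))]
    return left, right

ch = {name: [0.1, 0.2] for name in
      ("front_left", "front_right", "center", "surround_left", "surround_right")}
left, right = downmix_5_1_to_stereo(ch)
print(left, right)
```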
In many cases, the audio source position specified by the audio portion is the same as the rendered audio source position. However, there are a few exceptions, including:
In the embodiment of
The embodiments of
As shown in
The memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 310 during execution. The processing system 300 may also be able to use memory elements of another processing system, e.g., if the processing system 300 is part of a cloud-computing platform.
Input/output (I/O) devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g., for voice and/or speech recognition), or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in
A network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.
As pictured in
Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 302 described herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.
Foreign application priority data: 21198736.7, filed Sep. 2021, EP (regional).
International filing: PCT/EP2022/076083, filed Sep. 20, 2022 (WO).