The invention relates to a method for enhancing a user's recognition of a light scene. The invention further relates to a system for enhancing a user's recognition of a light scene.
A burgeoning market is currently being built around smart devices and home networks. Systems formed from such elements typically fall under the heading of smart home systems. Smart home systems are often connected to the Internet, typically such that they may be controlled by a user when (s)he is out of home. Although referred to above as a 'home' system, such a system can be implemented in any environment, such as a work space or outdoor space, such that the system comprises, and may be used to control, devices placed within the environment. The connected devices are any devices capable of being connected to, or identified by, the system. A commonly used phrase for such a system and its devices is the Internet of Things (IoT) and IoT devices. In the IoT, many kinds of devices are connected to the Internet, allowing elements of an environment such as heating and lighting to be controlled using dedicated devices which are networked together into the 'home' system.
An important ingredient of smart home systems is a connected lighting system, which refers to a system of one or more lighting devices. These lighting devices are controlled not by (or not only by) a traditional wired, electrical on-off or dimmer circuit, but rather by using a data communications protocol via a wired or more often wireless connection, e.g. a wired or wireless network. Typically, the lighting devices, or even individual lamps within a lighting device, may each be equipped with a wireless receiver or transceiver for receiving lighting control commands from a lighting control device according to a wireless networking protocol such as Zigbee, Wi-Fi or Bluetooth.
The lighting devices, e.g. in the connected lighting system, can be used to render a light scene. The light scene comprises different light effects and can be used, for example, to enhance entertainment experiences such as audio-visual media, to set an ambience and/or to set a mood of a user.
The inventors have realized that with enough lighting devices (light points), a connected lighting system can offer limitless possibilities in changing the ambiance or setting a mood. Naturally, people often have dozens of light scenes for different areas in their home in addition to the standard light scenes already offered by the system, e.g. by Philips Hue. However, when the number of lighting devices (light points) is limited, it is difficult to create a recognizable and evocative light scene. For example, the "enchanted forest" light scene on the Philips Hue Go dynamically changes the light within the green spectrum; however, even when knowing the name of the scene, many users would not recognize it and would not associate it with a forest.
It is therefore an object of the present invention to provide a method for enhancing a user's recognition of a light scene, irrespective of the lighting infrastructure, e.g. number and/or type of the lighting devices, which is used to render the light scene.
According to a first aspect, the object is achieved by a method for enhancing a user's recognition of a light scene, the method comprising: receiving a first signal indicative of a selection of the light scene; the light scene defining one or more lighting instructions according to which one or more lighting devices are to be controlled; determining whether the light scene has either been descriptively selected or has been non-descriptively selected; wherein a determination is made that the light scene has been descriptively selected when the light scene has been selected via a user interface device which provides a descriptive name of the light scene, and wherein otherwise the determination is made that the light scene has been non-descriptively selected; controlling the one or more lighting devices to render the light scene; and on condition that the determination has been made that the light scene has been non-descriptively selected: determining a characteristic of the light scene; providing an audio fragment based on the determined characteristic; controlling an audio fragment rendering device to render the provided audio fragment.
The method comprises the step of receiving a first signal indicative of the selection of a light scene. The selection can be performed manually, e.g. by a user, or automatically, e.g. by a controller. The method further comprises the step of determining whether the light scene has been selected descriptively or non-descriptively. A descriptive selection may comprise selecting the light scene via a user interface device which provides a descriptive name of the light scene. For example, the user interface device may be a mobile phone, tablet or computer, and the descriptive name is a representation of the light scene; wherein the representation may comprise a text, a figure, or an image. The determination may be made based on a characteristic of the first signal, e.g. based on the source of the first signal, the type of the first signal, etc. The one or more lighting devices may be controlled to render the light scene.
The method further comprises, on the condition that the determination has been made that the light scene has been non-descriptively selected, the step of determining a characteristic of the light scene. In an embodiment, the determined characteristics of the light scene may comprise one or more of: an image associated with the light scene, title, color, color temperature, intensity, beam width, beam direction, illumination intensity. For example, the color of the light scene may be the average color or the dominant color of the light scene. The color of the light scene may be a color palette, for instance the colors rendered by each or some of the lighting devices. Different weightings may be used to form such a color palette. The method further comprises providing an audio fragment based on the determined characteristic, and controlling an audio fragment rendering device to render the provided audio fragment. The audio fragment may be provided such that it matches the light scene. For example, if the "enchanted forest" light scene on the Philips Hue Go is selected, and the determination has been made that the light scene has been non-descriptively selected, an audio fragment which best matches the forest scene, for instance the sound of a bird, an animal or wind blowing in a forest, may be provided. The audio fragment is then rendered, which enhances the recognition of the light scene. Therefore, a user's recognition of a light scene is enhanced, irrespective of the number and/or the type of the lighting devices used to render the light scene.
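By way of a non-limiting illustration only, the following sketch outlines one possible software arrangement of the overall flow (receive selection signal, render the scene, and render an audio fragment only for a non-descriptive selection). All identifiers (SelectionSignal, LightScene, the callback parameters) and the set of "descriptive" signal sources are assumptions introduced for this example and do not denote any existing lighting API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional, Tuple

# Hypothetical set of signal sources that count as a descriptive selection,
# e.g. a phone/tablet app that shows the descriptive scene name.
DESCRIPTIVE_SOURCES = {"app_descriptive_name"}

@dataclass
class SelectionSignal:
    scene_id: str
    source: str  # e.g. "app_descriptive_name", "voice", "wall_switch", "sensor"

@dataclass
class LightScene:
    scene_id: str
    title: str
    palette: List[Tuple[int, int, int]]  # one (R, G, B) per lighting device

def handle_selection(signal: SelectionSignal,
                     scenes: Dict[str, LightScene],
                     render_scene: Callable[[LightScene], None],
                     pick_audio: Callable[[LightScene], Optional[str]],
                     render_audio: Callable[[str], None]) -> None:
    scene = scenes[signal.scene_id]
    render_scene(scene)                                # control the lighting devices
    descriptive = signal.source in DESCRIPTIVE_SOURCES # descriptive vs. non-descriptive
    if not descriptive:
        fragment = pick_audio(scene)                   # based on a scene characteristic
        if fragment is not None:
            render_audio(fragment)                     # control the audio rendering device
```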
In an embodiment, the non-descriptively selected light scene may comprise the selection of the light scene via one of: a voice command, a switch, a non-descriptive icon on a user interface device, a rule-based selection, a time-based selection, a sensor-based selection.
The light scene may be non-descriptively selected, for instance via a voice command, e.g. using a smart speaker. The non-descriptively selected light scene may also comprise selection via a switch, e.g. a legacy wall switch, a dimmer, etc., or via a non-descriptive icon on a user interface device, such as an icon on a mobile phone, tablet or computer. The non-descriptive selection may further comprise a rule-based, a time-based and/or a sensor-based selection, for instance a light scene associated with wake-up, sleep or a presence of a user.
In an embodiment, the method may further comprise: receiving a second signal indicative of a subsequent selection of the light scene; wherein a subsequent rendering of the audio fragment is limited to a predetermined number of times. In this embodiment, the method may comprise: receiving a second signal indicative of a subsequent selection of the light scene; controlling the audio fragment rendering device to render the provided audio fragment for a predetermined number of times.
When the light scene is subsequently selected, e.g. selected at a different time or on a different day, the audio fragment may be subsequently rendered as well; wherein the number of subsequent renderings of the audio fragment may be limited to a predetermined number of times. For instance, a user may or may not prefer to subsequently render the audio fragment with the subsequently selected light scene, e.g. after (s)he starts recognizing the light scene.
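A minimal sketch of such a limit is given below, assuming a per-selection-method cap and a simple in-memory counter; the cap values and context labels are illustrative assumptions only, not prescribed by the method.

```python
from typing import Dict, Tuple

# Hypothetical caps: how many subsequent renderings are allowed per selection method.
RENDER_CAPS = {"wall_switch": 5, "voice": 2, "app_descriptive_name": 1}

class AudioRepetitionLimiter:
    def __init__(self, caps: Dict[str, int]):
        self.caps = caps
        self.counts: Dict[Tuple[str, str], int] = {}  # (scene_id, source) -> renderings so far

    def should_render(self, scene_id: str, source: str) -> bool:
        key = (scene_id, source)
        if self.counts.get(key, 0) >= self.caps.get(source, 0):
            return False                 # predetermined number of times reached
        self.counts[key] = self.counts.get(key, 0) + 1
        return True
```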
In an embodiment, the predetermined number of times of the subsequent rendering of the audio fragment may be different for a descriptively selected light scene and for a non-descriptively selected light scene.
A user may prefer to subsequently render the audio fragment more often when (s)he selects the light scene in a non-descriptive way than when (s)he selects the light scene in a descriptive way. For instance, the recognition of a light scene selected via a switch may require rendering the audio fragment a comparatively higher number of times than when the light scene is selected via a mobile phone with a descriptive name.
In an embodiment, the subsequent rendering of the audio fragment may comprise rendering an amended audio fragment or rendering a different audio fragment, wherein the amended audio fragment may comprise changing one or more of: time duration, beat, timbre, pitch, intensity, rhythm, major and minor key.
For instance, when a user does not prefer the provided audio fragment, (s)he may wish to amend the audio fragment, for example according to his/her preference, such that the recognition of the light scene is enhanced. As an alternative to amending the audio fragment, the user may prefer to render a different audio fragment, e.g. one which better matches his/her expectation and/or which improves the recognition of the light scene.
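As a non-limiting sketch, two of the listed amendments (time duration and intensity) can be applied directly to raw audio samples; the function below assumes the fragment is available as a plain list of floating-point samples, while beat, timbre, pitch or key changes would require a dedicated audio-processing library.

```python
from typing import List

def amend_fragment(samples: List[float], sample_rate: int,
                   max_seconds: float = 2.0, gain: float = 0.5) -> List[float]:
    """Shorten the fragment to max_seconds and scale its intensity by gain."""
    max_samples = int(max_seconds * sample_rate)
    trimmed = samples[:max_samples]          # amend the time duration
    return [s * gain for s in trimmed]       # amend the intensity
```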
In an embodiment, the step of providing an audio fragment based on the determined characteristic may comprise: obtaining a plurality of audio fragments, selecting the audio fragment from the plurality of audio fragments based on the determined characteristic, associating the audio fragment with the light scene.
The step of providing may comprise obtaining a plurality of audio fragments, e.g. from a music streaming or video sharing platform. The audio fragment may be selected from the plurality based, for example, on a similarity criterion with respect to the determined characteristic. The audio fragment may then be associated with the light scene.
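By way of illustration, one very simple similarity criterion is keyword overlap between the determined characteristic (here, the scene title) and descriptive tags attached to each candidate fragment; the tag metadata and file names below are assumptions for the example.

```python
from typing import Dict, Set

def select_fragment(scene_title: str, candidates: Dict[str, Set[str]]) -> str:
    """candidates maps fragment_id -> set of descriptive tags."""
    scene_words = {w.lower() for w in scene_title.split()}
    def score(tags: Set[str]) -> int:
        return len(scene_words & {t.lower() for t in tags})
    return max(candidates, key=lambda fid: score(candidates[fid]))

# Example: select_fragment("enchanted forest",
#                          {"birdsong.wav": {"forest", "bird"},
#                           "surf.wav": {"sea", "waves"}})  -> "birdsong.wav"
```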
In an embodiment, when a subset of audio fragments from the plurality of audio fragments has been selected, the step of selecting the audio fragment from the subset may be based on one or more of: a user preference, a spatial location, type and/or number of the audio fragment rendering device(s), a psychological or physiological state of a user, a previous audio fragment used by a user, the time of day, an environmental context.
In the case where a subset of audio fragments is selected, wherein the subset comprises more than one audio fragment, the selection of the audio fragment from the subset may be based on a user preference or on a psychological or physiological state of a user. The spatial location, type and/or number of the audio fragment rendering device(s) may be considered such that the audio fragment is rendered, e.g., in accordance with the user's expectation.
In an embodiment, the step of providing an audio fragment based on the determined characteristic may comprise: generating the audio fragment based on the determined characteristic, associating the audio fragment with the light scene. An alternative to selecting the audio fragment from the plurality of audio fragments may be to generate the audio fragment based on the determined characteristics. The generation may be based on a machine learning approach, such as similarity analysis.
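A purely rule-based sketch of such generation is given below (simpler than the machine learning approach mentioned above): the scene's dominant hue is mapped to the pitch of a short, soft sine tone written to a WAV file. The hue-to-pitch mapping, file name and parameters are illustrative assumptions only.

```python
import math
import struct
import wave

def generate_fragment(hue_deg: float, path: str = "fragment.wav",
                      seconds: float = 2.0, rate: int = 22050) -> str:
    """Generate a short sine-tone audio fragment whose pitch follows the scene hue."""
    freq = 220.0 + (hue_deg % 360.0) / 360.0 * 440.0  # map hue to 220-660 Hz
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)        # 16-bit samples
        wav.setframerate(rate)
        frames = bytearray()
        for n in range(int(seconds * rate)):
            sample = 0.3 * math.sin(2 * math.pi * freq * n / rate)
            frames += struct.pack("<h", int(sample * 32767))
        wav.writeframes(bytes(frames))
    return path
```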
In an embodiment, the method may further comprise: providing a second modality based on the determined characteristic; wherein the second modality comprises affecting one or more of the human sensing systems: visual, olfactory, gustatory, kinesthetic; controlling a functional device to render the provided second modality, wherein the time period in which the second modality is rendered may at least partially overlap with the time period in which the audio fragment is rendered. For example, the second modality may be rendered sequentially with the audio fragment or in parallel to the audio fragment. A human sensing system may be defined as a system which comprises a group of sensory cell types that respond to a specific physical phenomenon, and that corresponds to a particular group of regions within the brain where the signals are received and interpreted.
Additionally, or alternatively, to rendering the audio fragment, a second modality, for instance smell, touch or vibration, may be rendered for enhancing a user's recognition of the light scene. The second modality may complement the audio fragment, such that the second modality may be rendered in parallel to the audio fragment or, alternatively, may be rendered sequentially with the audio fragment. The second modality may also be rendered instead of the audio fragment.
In an embodiment, a selection of the second modality from a group of second modalities may be based on one or more of: a user preference, an activity of a user, a spatial location, number and/or type of the functional device(s), a psychological or physiological state of a user, the time of day, an environmental context.
In order to select the second modality from the group, and/or to select between the second modality and the audio fragment, the selection criteria may comprise selecting based on a user preference and/or a user activity; for example, if the user is watching a movie, (s)he may not prefer an audio fragment and may prefer a smell.
According to a second aspect, the object is achieved by a controller for enhancing a user's recognition of a light scene; the controller comprising: an input interface arranged for receiving a first signal indicative of a selection of the light scene; a processor arranged for executing the steps of the method according to the first aspect; an output interface arranged for outputting an audio fragment and/or a second modality to an audio fragment rendering device and/or a functional device.
According to a third aspect, the object is achieved by a computer program product comprising instructions configured to execute the steps of the method according to the first aspect, when executed on a controller according to the second aspect.
According to a fourth aspect, the object is achieved by a system for enhancing a user's recognition of a light scene; the system comprising: one or more lighting devices arranged to render a light scene; an audio fragment rendering device arranged to render an audio fragment, and/or a functional device arranged to render a second modality; the controller according to the second aspect arranged for executing the steps of the method according to the first aspect.
It should be understood that the computer program product and the system may have similar and/or identical embodiments and advantages as the above-mentioned methods.
The above, as well as additional objects, features and advantages of the disclosed systems, devices and methods will be better understood through the following illustrative and non-limiting detailed description of embodiments of systems, devices and methods, with reference to the appended drawings, in which:
All the figures are schematic, not necessarily to scale, and generally only show parts which are necessary in order to elucidate the invention, wherein other parts may be omitted or merely suggested.
The system 100 comprises various ways to select a light scene either in a descriptive way or a non-descriptive way. The descriptively selected light scene may comprise selecting the light scene via the user interface device 160 which provides a descriptive name of the light scene. The descriptive name is a representation of the light scene; wherein the representation may comprise a text, figure, or image. For example, a user 155 may select the light scene via a user interface device 160, which in this exemplary figure is shown to be a mobile phone 160. Such a user interface device 160 is also exemplarily shown in
Alternatively, the light scene may be non-descriptively selected. For instance, the user 155 may select the light scene via a voice command 122, e.g. a smart speaker, or by using a switch 121. In
The system 100 may comprise an audio fragment rendering device 135 to render an audio fragment. In this exemplary figure, the audio fragment rendering device is an external speaker 135. Any other audio fragment rendering device, e.g. headsets, headphones, earphones, etc., may also be used to render the audio fragment. The system 100 may comprise more than one audio fragment rendering device 135. The audio fragment rendering device 135 may be co-located with the lighting devices 111-114, e.g. in the same room, or may be located at a different but neighbouring location, e.g. the lighting devices 111-114 are located in the bedroom of a house and the audio fragment rendering device 135 is located in the living room.

Based on the determination that the light scene is non-descriptively selected, a characteristic of the light scene may be determined, wherein the determined characteristics of the light scene may comprise one or more of: an image associated with the light scene, title of the light scene, color, color temperature, intensity, beam width, beam direction, illumination intensity. The color of the light scene may comprise the average color or the dominant color of the light scene. The color of the light scene may be according to a user preference. The color may be a single color or a vector of colors, e.g. a color palette. The color palette may comprise the colors rendered by each or some of the lighting devices. The color palette may comprise a weighted combination of the colors rendered by the lighting devices. The weights may be assigned based on a user preference and/or automatically by a learning algorithm. Similarly, the intensity and the other characteristics of the light scene may be a single value or a vector of values. For example, the intensity may be an average intensity, a highest/lowest intensity or a vector of intensities. The image associated with the light scene may be the image which is used to generate the palette for the light scene, for instance an image in the Philips Hue app. A similar interpretation may be given to the other characteristics of the light scene.

An audio fragment may be provided based on the determined characteristic. In a simple example, a user 155 manually selects an audio fragment; for instance, for the "enchanted forest" light scene on the Philips Hue Go, based on the title of the light scene, the user 155 may select the audio fragment of a bird singing, wind blowing or an animal sound which resembles the title 'enchanted forest'. In a more advanced embodiment, a machine learning model may be used to analyze the determined characteristics and provide an audio fragment. The audio fragment rendering device 135 may be arranged for rendering the provided audio fragment. The provided audio fragment may be spatially and temporally rendered together with the light scene. These steps are also exemplarily shown and discussed in
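A minimal sketch of deriving a single characteristic color from the per-device colors is given below, assuming each lighting device's color is available as an (R, G, B) tuple: a weighted average for the "average color" and the most frequent entry as a simple stand-in for the "dominant color". The equal default weights are an assumption; in practice the weights may come from a user preference or a learning algorithm, as described above.

```python
from collections import Counter
from typing import List, Optional, Tuple

Color = Tuple[int, int, int]

def average_color(device_colors: List[Color],
                  weights: Optional[List[float]] = None) -> Color:
    """Weighted average of the colors rendered by the lighting devices."""
    weights = weights or [1.0] * len(device_colors)
    total = sum(weights)
    return tuple(round(sum(w * c[i] for w, c in zip(weights, device_colors)) / total)
                 for i in range(3))

def dominant_color(device_colors: List[Color]) -> Color:
    """Most frequently rendered color across the lighting devices."""
    return Counter(device_colors).most_common(1)[0][0]
```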
When the light scene is subsequently selected, for instance when the user 155 subsequently selects a go-to-sleep (good night) scene on another night, the subsequent rendering of the audio fragment may be limited to a predetermined number of times. The number of times may be determined based on one or more of: a preference of the user 155, an environmental context, the number of audio fragment rendering devices 135, the type of audio fragment rendering devices 135, a lighting infrastructure of a user 155, etc. For example, if a lighting infrastructure of a user 155 comprises a large number of lighting devices 111-114 such that the light scene is sufficiently recognizable or perceivable by the user 155, (s)he may not prefer to subsequently render the audio fragment (in this case the predetermined number of times is zero), or (s)he may prefer to subsequently render the audio fragment, e.g., 2 or 3 times. The determination of the number of times may be performed manually by a user 155 or based on historic data, e.g. by a machine learning model trained on the historic data to predict the number of times.
The predetermined number of times of the subsequent rendering of the audio fragment may be different for a descriptively selected light scene and for a non-descriptively selected light scene. For instance, if a user interface device 160 provides a descriptive name and a user 155 is aware of the name of the light scene, the subsequent rendering is limited to a smaller number compared to the case where the user 155 has selected the light scene via a legacy wall switch 121. For non-descriptively selected light scenes, the predetermined number of times may be different for each or some of the methods (ways) of non-descriptive selection. For example, if a user 155 has selected the light scene via a voice command 122, (s)he is aware of the name of the light scene, and the subsequent rendering is limited to a smaller number compared to the case where the user 155 has selected the light scene via a legacy wall switch 121. The predetermined number of times may also be the same for a descriptively selected light scene and for a non-descriptively selected light scene. For example, a selection via a voice command 122 and a selection via a mobile phone 160 with a descriptive name of the light scene may have the same predetermined number of times.
The subsequent rendering of the audio fragment may comprise rendering an amended audio fragment or rendering a different audio fragment, wherein the amended audio fragment may comprise changing one or more of: time duration, beat, timbre, pitch, intensity, rhythm, major and minor key. For example, a user 155 may not prefer the provided audio fragment and may prefer to amend it for the subsequent rendering. For example, with the 'enchanted forest' light scene, if a bird sound is provided as the audio fragment, the user 155 may prefer to amend the intensity and/or time duration of the audio fragment. The amended audio fragment may be based on one or more of: an environmental context, the number of audio fragment rendering devices 135, the type of audio fragment rendering devices 135, a lighting infrastructure of a user 155, etc. For example, if the system 100 comprises only one audio fragment rendering device 135 in an environment of the user 155, the intensity of the audio fragment may preferably be increased, or the other way around. Alternatively, a different audio fragment may be used for the subsequent rendering. Such a different audio fragment may be applied to circumvent saturation in the senses of the user 155. In an example, when the attention of the user 155 is disrupted while the light scene is rendered, e.g. by a disturbing event (resulting in the user losing his/her focus on the light scene), the audio fragment may be subsequently rendered.
Additionally or alternatively, a second modality may be provided and a functional device (not shown) may be controlled to render the provided second modality. The second modality may comprise affecting one or more of the human sensing systems: visual, olfactory, kinesthetic. The visual sensing system may comprise a picture and/or a video, for example a video rendered on a video rendering device (not shown) such as a television display or a mobile display. The olfactory sensing system may comprise evoking olfactory imagery, e.g. a smell, an aroma, etc., rendered by an olfactory rendering device, e.g. a connected smell dispenser. The kinesthetic sensing system may comprise the sense of touch, e.g. feel, warmth, vibration, etc.
A selection of the second modality from a group of second modalities may be based on one or more of: a user 155 preference, an activity of a user 155, a spatial location of the functional device, a psychological or physiological state of a user 155, the time of day, an environmental context. The second modality may be rendered sequentially with the audio fragment or in parallel to the audio fragment. In some cases, only the second modality may be rendered instead of the audio fragment. In the presence of a group of second modalities, an appropriate modality or a combination of modalities may be selected. For example, in the presence of many users chatting or music playing, the olfactory modality (smell) may be preferred over the audio fragment, while in the context of a dinner, the audio fragment may be preferred over the olfactory modality (smell). The group of second modalities may be regularly repeated, in a sequence, so as to establish, maintain, renew or even strengthen the effect. It is known and described in the prior art that when using multiple sequential cues (modalities), the likelihood of being able to effectively create the envisioned association, e.g. with the light scene, is increased.
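A minimal sketch of such a context-based choice is given below, following the examples above (ambient chatter or music favours smell, a dinner setting favours audio); the context labels and the priority table are assumptions for the example only.

```python
from typing import Dict, List, Set

# Hypothetical priority of modalities per environmental context.
MODALITY_PRIORITY: Dict[str, List[str]] = {
    "noisy":   ["olfactory", "visual", "audio"],   # e.g. people chatting, music playing
    "dinner":  ["audio", "visual", "olfactory"],
    "default": ["audio", "olfactory", "visual"],
}

def select_modality(context: str, available: Set[str]) -> str:
    """Pick the highest-priority modality that a functional device can render."""
    for modality in MODALITY_PRIORITY.get(context, MODALITY_PRIORITY["default"]):
        if modality in available:
            return modality
    raise ValueError("no supported modality available")
```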
The method 300 further comprises determining 330 whether the light scene has either been descriptively selected or has been non-descriptively selected; wherein a determination is made that the light scene has been descriptively selected when the light scene has been selected via a user interface device 160 which provides a descriptive name of the light scene, and wherein otherwise the determination is made that the light scene has been non-descriptively selected. The determination 330 may be made to distinguish between a descriptively selected light scene and a non-descriptively selected light scene. The determination 330 may be based on a characteristic of the received first signal, e.g. which source the first signal is coming from, its format, its type, etc. The format of the first signal may, for example, also indicate the source of the first signal. The steps of controlling 320 and determining 330, and the other steps, are exemplarily shown to be performed sequentially, but the steps may be performed in any order.
The method 300 further comprises, if the determination has been made that the light scene has been non-descriptively selected (a yes 334), determining 340 a characteristic of the light scene. The determined characteristics of the light scene may comprise one or more of: title, color, color temperature, intensity, beam width, beam direction, illumination intensity. The method 300 further comprises providing 350 an audio fragment based on the determined characteristic, and controlling 360 an audio fragment rendering device 135 to render the provided audio fragment. The audio fragment may be provided such that it matches the light scene.
The step of providing 350 may further comprise selecting 404 the audio fragment from the plurality of audio fragments based on the determined characteristic. There may be different methods to perform the selection 404. In a simple way, the audio fragment selection 404 may be performed manually. A user 155, for instance after creating a light scene, may be prompted to select an audio fragment from the plurality of audio fragments, wherein the plurality of audio fragments may be derived from an open-source database or from a database of the user 155. The audio-fragment database as presented to the user 155 may preferably be a selection of the plurality of audio fragments which relates (in general terms) to the light scene. The determined characteristic may be the light scene title. The light scene title may be used as a keyword for the selection 404 of the audio fragment which best suits the light scene title. For example, a light scene with the title 'Carnival' may be used to identify audio fragments representing that event. In an example, the step of providing 350 may comprise checking whether there is an audio fragment already associated with the scene and, if not (e.g. when the light scene is rendered for the first time), selecting an audio fragment.
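A minimal sketch of that last check is shown below, assuming a simple in-memory association store keyed by scene identifier; the class and parameter names are hypothetical, and a real system would persist the associations.

```python
from typing import Callable, Dict, Set

class FragmentProvider:
    def __init__(self, select_fn: Callable[[str, Dict[str, Set[str]]], str]):
        self.select_fn = select_fn           # e.g. a keyword-based selection function
        self.associations: Dict[str, str] = {}  # scene_id -> fragment_id

    def provide(self, scene_id: str, scene_title: str,
                candidates: Dict[str, Set[str]]) -> str:
        """Return the associated fragment, selecting one on the first rendering."""
        if scene_id not in self.associations:
            self.associations[scene_id] = self.select_fn(scene_title, candidates)
        return self.associations[scene_id]
```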
Machine learning approaches may be used to perform the selection 404. For example, machine learning approaches may be used to match the colors or an image that was used for color extraction with a possible nature light scene, event, or location. For example, image databases may be used to build a machine learning model that may assign labels to color combinations, for instance by:
The time duration of the audio fragment may be less than a threshold. For example, the audio fragment may be shorter than 2, 5 or 10 seconds, wherein the time duration may also be selected based on the determined characteristic of the light scene. The time duration of the audio fragment may be based on the lighting infrastructure of a user 155 and/or an environmental context. The time duration of the audio fragment may be selected such that the recognition of the light scene by the user 155 is enhanced. The intensity of the audio fragment may be less than a threshold and may be based, e.g., on the activity of a user 155, a lighting infrastructure, a spatial location of the audio fragment rendering device 135, the time of day, or a psychological or physiological state of a user 155.
The step of providing 350 may further comprise associating 406 the audio fragment with the light scene. The associated audio fragment may be spatially and temporally rendered together with the light scene.
When a subset of audio fragments has been selected 404, the step of selecting the audio fragment from the subset may be based on one or more of: a user 155 preference, a spatial location, type and/or number of the audio fragment rendering devices 135, a psychological or physiological state of a user 155, a previous audio fragment used by a user 155, the time of day, an environmental context. For example, for the 'enchanted forest' light scene, when a subset of the plurality of audio fragments has been selected, e.g. a bird sound, wind blowing, etc., the to-be-rendered audio fragment may be selected 404 based on the user 155 preference or on the spatial location of the audio fragment rendering device 135 such that the user's recognition of the light scene is enhanced.
It will be understood that the processor 615 or processing system or circuitry referred to herein may in practice be provided by a single chip or integrated circuit or plural chips or integrated circuits, optionally provided as a chipset, an application-specific integrated circuit (ASIC), field-programmable gate array (FPGA), digital signal processor (DSP), graphics processing units (GPUs), etc.
The controller 610 may be implemented in a unit, such as a wall panel, a desktop computer terminal, in the bridge, in the lighting device, or even in a portable terminal such as a laptop, tablet or smartphone. Further, the controller 610 may be implemented remotely (e.g. on a server); and the controller 610 may be implemented in a single unit or in the form of distributed functionality distributed amongst multiple separate units (e.g. a distributed server comprising multiple server units at one or more geographical sites, or a distributed control function distributed amongst the light sources 111-114 or the audio fragment rendering device 135). Furthermore, the controller 610 may be implemented in the form of software stored on a memory (comprising one or more memory devices) and arranged for execution on a processor (comprising one or more processing units), or the controller 610 may be implemented in the form of dedicated hardware circuitry, or configurable or reconfigurable circuitry such as a PGA or FPGA, or any combination of these.
The methods 300, 400 and 500 may be executed by computer program code of a computer program product when the computer program product is run on a processing unit of a computing device, such as the processor 615 of the controller 610.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims.
In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer or processing unit. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
Aspects of the invention may be implemented in a computer program product, which may be a collection of computer program instructions stored on a computer readable storage device which may be executed by a computer. The instructions of the present invention may be in any interpretable or executable code mechanism, including but not limited to scripts, interpretable programs, dynamic link libraries (DLLs) or Java classes. The instructions can be provided as complete executable programs, partial executable programs, as modifications to existing programs (e.g. updates) or extensions for existing programs (e.g. plugins). Moreover, parts of the processing of the present invention may be distributed over multiple computers or processors or even the ‘cloud’.
Storage media suitable for storing computer program instructions include all forms of nonvolatile memory, including but not limited to EPROM, EEPROM and flash memory devices, magnetic disks such as the internal and external hard disk drives, removable disks and CD-ROM disks. The computer program product may be distributed on such a storage medium, or may be offered for download through HTTP, FTP, email or through a server connected to a network such as the Internet.