This application claims the benefit of German Patent Application DE 10 2023 103 500.2, filed on Feb. 14, 2023, the contents of which are incorporated herein by reference in their entirety.
The present disclosure generally relates to a system comprising a control unit, and a sensor configured to sense a parameter of an environment surrounding objects. Furthermore, the present disclosure generally relates to a method for controlling a relative positioning of at least two objects, and a method for controlling a positioning of at least one object.
It is known that certain visual patterns can affect people's emotions and mood. Michael Hann's research underlines that geometry starts with basic points and lines, with lines representing transitions. A 1924 study highlighted that angular lines are seen as “serious” or “hard,” while curved lines are perceived as “gentle” or “playful,” with emotions influenced by line inflection. This concept is extended in Kansei Engineering, which uses form to evoke emotions. Gestalt theory suggests that slight changes in familiar shapes can create a “sense of happening,” altering emotional interpretations of patterns. Additionally, the concept of pattern languages, introduced by Christopher Alexander, uses structured patterns to convey design experiences, emphasizing the role of geometry and pattern in emotional design responses.
It is also known that colors may affect a user's mood and emotional state. For example, a user may associate the color red with anger, the color pink with love, the color black with sadness, and so on.
Thus, the present disclosure is directed to systems which affect user visual perception to change and/or reflect a mood of a user, and corresponding methods.
A purpose of the present disclosure is to provide a system which is able to position and illuminate at least one object to reflect and/or create the above-mentioned emotions, wherein the emotion is selected by the user (or determined otherwise, using, for example, artificial intelligence) and/or the mood of the user is automatically reflected by the positioning and illumination of the at least one object.
The system can be placed in different locations, for example, a shop, a private house, a museum, and so on. By showing a certain pattern, created by the position and the illumination of the at least one object, a user's mood can be affected, and, as a result, users may be motivated to undertake different activities.
According to a first aspect, we describe a system comprising: a control unit; a pattern visualization device; and a sensor couplable to the control unit, wherein the sensor is configured to sense a parameter of an environment surrounding at least two objects and to provide the sensed parameter to the control unit; wherein the control unit is couplable to the at least two objects and to the pattern visualization device, and wherein the control unit is configured to control, based on the sensed parameter, a relative positioning of at least a first one of the at least two objects with respect to at least a second one of the at least two objects, and wherein the pattern visualization device comprises means for moving at least one of the objects based on the controlling of the relative positioning by the control unit.
Controlling the relative positioning of at least a first one of the objects with respect to at least a second one of the objects may comprise changing the absolute position of the first one of the objects, changing the absolute position of the second one of the objects, or changing the absolute positions of both of the first one of the objects and the second one of the objects.
The control unit may be any suitable control unit, preferably comprising a processor (and, in some examples, a memory). In some examples, the control unit may be a physical entity. In some examples, the control unit, i.e. the processor (and, in some examples, the memory), may be located in the cloud, and accessed via cloud storage, or cloud computing. The skilled person understands that cloud storage is a model of networked online storage and cloud computing is Internet-based development and use of computer technology stored on servers rather than client computers. In some examples, the control unit may be split between a physical entity and the cloud.
The pattern visualization device may be any suitable device configured to store data and/or execute instructions relating to the physical positioning of at least one of the objects and/or the relative positioning of the objects and/or the relative positioning of the objects with respect to one another. Additionally or alternatively, the pattern visualization device may comprise a drive unit, which comprises control and power electronics configured to drive a plurality of mechanical units which in turn allows for the physical movement of the objects. The skilled person understands that instead of a drive unit, described previously, the pattern visualization device can comprise any means that allows for at least one of the objects to be moved and/or allows for the relative positioning of the objects to be altered. The moving means may comprise one or more of a drive unit, a linear actuator, a winch, a servo actuator, a rotating means or any other suitable moving means, wherein the moving means allows for at least one of the objects to be moved based on the controlling of the relative positioning by the control unit. Any of the above moving means may be comprised in or constitute a moving unit.
The sensor may be any suitable sensor configured to sense a parameter of the environment surrounding the at least two objects. More information on such sensors is given below.
The control unit is configured to control a relative positioning between at least a first object and at least a second object based on the sensed parameter. That is to say, the control unit may control a physical positioning of at least one of the objects so that they move towards/away from each other (in particular in any direction). This movement may be completed by a drive unit, a winch, a linear actuator, a servo actuator, a rotating means or any other suitable method or any combination thereof, wherein the unit that allows for the objects to be moved towards/away from each other is coupled to at least one of the objects, and the control unit.
Throughout the present disclosure, illumination of an object may, in some examples, relate to shining light onto the object (which is then reflected or scattered by the object) and/or the object itself being a light emitting element configured to emit light.
In some examples, the sensed parameter relates to one or more of: a presence and/or movement of a person within the environment surrounding at least one of the at least two objects; a pose and/or gesture of the person in the environment surrounding at least one of the at least two objects; and a sound in the environment surrounding at least one of the at least two objects. The extent of the term “surrounding” may be limited by a capability of the sensor sensing the sensed parameter, i.e. the range of the sensor and/or may be a predetermined distance such as, for example, 1 meter, 2 meters, 3 meters, 4 meters, 5 meters, 6 meters, 7 meters, 8 meters, 9 meters, 10 meters, or any other suitable distance.
That is to say, the sensed parameter may be sensed, for example, via an optical sensor, wherein the optical sensor can sense a presence and/or movement and/or pose and/or gesture of a person, a motion sensor configured to detect a motion of a person, or an audio sensor configured to detect sound, or any combination of the foregoing examples. The audio sensor may detect volume and/or may pick up keywords relating to an emotion and/or mood of a person in the environment of the at least two objects. Any of these sensors may allow for a more accurate determination of the mood and/or emotion of a person, thereby allowing the system to more accurately position the objects in accordance with the mood and/or emotion of the person. The mood of a person may be any suitable and/or recognizable mood such as, for example, happiness, sadness, anger, peacefulness, fear, embarrassment, playfulness, confidence, and so on.
Throughout the present disclosure, it is to be understood that the terms “user” and “person” are interchangeable with each other. Furthermore, when the term “user” or “person” is used, it is to be understood that the same terms may apply to a plurality of users or people. Additionally, when the term “mood” is used, it is to be understood that this is interchangeable with the term “emotion”, and vice versa.
In some examples, the sound in the environment is a vocal expression of the person in the environment surrounding at least one of the at least two objects and/or a genre of music in the environment surrounding at least one of the at least two objects. In the case of a vocal expression, keywords showing anger, happiness or sadness, and additionally or alternatively a volume of the vocal expression, may be sensed and correlated to a mood pattern. In the case of a genre of music, the genre of music sensed may be correlated to a mood pattern. In both cases, this may allow for a more accurate determination of the mood and/or emotion of a person, thereby allowing the system to more accurately position the objects in accordance with the mood and/or emotion of the person.
In some examples, the sensed parameter relates to one or more of: an ambient light level in the environment surrounding at least one of the at least two objects; and a facial expression of the person in the environment surrounding the at least one of the at least two objects.
That is to say, the sensed parameter may be sensed, for example, via an optical sensor configured to sense an ambient light level, an infrared sensor configured to sense temperature, a camera configured to capture body language and/or facial expressions, or a temperature sensor such as, for example, a semiconductor sensor, a thermocouple, an infra-red sensor, or a thermal switch. Any of these sensors may allow for a more accurate determination of the mood and/or emotion of a person, thereby allowing the system to more accurately position the objects in accordance with the mood and/or emotion of the person. In some examples, the sensed parameter may further comprise a temperature of the environment surrounding the at least one of the at least two objects using the temperature detecting means mentioned above.
In some examples, the system further comprises a pattern illumination device comprising a light source, and wherein at least one of the at least two objects is illuminatable by the light source. This may mean that the object comprises a reflective surface, and the pattern illumination device, via the light source, illuminates the at least one reflective surface of the object. This may allow light to diffuse through the environment surrounding the at least two objects, thereby creating an atmosphere and/or mood and/or reflecting a mood as is described herein. In some examples, only one of the at least two objects is illuminated and/or illuminatable by the light source. In some examples, where there is a plurality of objects, only a subset of the objects may be illuminated and/or illuminatable by the light source. The light source may be any suitable light source comprising at least one light emitting element. The light source may comprise one or more of a lamp, a light emitting diode (LED), a laser, an OLED, an electro luminescent source, or any other suitable light source.
In some examples, at least one of the at least two objects comprises a light emitting element configured to emit light. The light emitting element may be any suitable light emitting element and/or a light emitting element comprising a plurality of light emitting elements. The light emitting element may comprise one or more of a lamp, a light emitting diode (LED), a laser, an OLED, an electro luminescent source, or any other suitable light emitting element. This may allow light to be emitted into the environment surrounding the at least two objects, thereby creating an atmosphere and/or mood and/or reflecting a mood as is described herein.
In some examples, at least one of the at least two objects is moveable, upon a controlling of at least one of the at least two objects by the control unit, by a winch and/or a linear actuator and/or a servo actuator and/or a rotating means. A rotating means may allow for an object to be rotated in order to form a mood line, mood pattern, or mood plane as described herein. The above methods of controlling may be located within a mechanical unit. That is to say, the at least two objects may be moveable by a mechanical unit, wherein the mechanical unit comprises a winch and/or a linear actuator and/or a servo actuator and/or a rotating means. This may allow for at least one of the object's physical positioning to be altered, and for a pattern/mood pattern/mood line to be created and/or displayed, as described herein.
In some examples, a geometry defined by respective locations of the at least two objects represents a mood pattern, in particular a mood line, wherein the mood pattern corresponds to a mood of a person within the environment surrounding at least one of the at least two objects, and wherein the control unit is configured to control the relative positioning to generate the mood pattern based on the mood of the person. This may allow for the positioning of the objects to reflect the mood and/or emotion of the person. The mood pattern may be defined as a geometry defined by respective locations of the at least two objects, wherein the geometry is based upon the sensed parameter, and the sensed parameter relates to a mood and/or emotion of the person. The emotion may be determined from, for example, laughing or crying, which is sensed by an audio sensor and/or an optical sensor, wherein the sensor is configured to determine the emotion based on the sensed parameter and/or reading, so that the sensed behavior can be linked to a respective emotion. In some examples, the audio sensor may detect volume and/or may pick up keywords relating to an emotion and/or mood. Additionally or alternatively, the optical sensor may be able to recognize a presence and/or movement and/or pose and/or gesture of a person and/or facial expression of a person. The determined emotion may be based on the above readings of the respective sensors.
In particular, in the case of the objects being point or point-like sources, such as a bulb, the mood pattern may be established by the person making virtual connections between, for example, nearest neighbor objects. In the case of the objects being elongated objects, the person may establish the mood pattern by virtually connecting, for example, ends of nearest neighbor objects. In some examples, the objects may be touching, or nearly touching, one another, thereby establishing the mood pattern without the need for the person to make virtual connections. The control unit may control the relative positioning of the at least two objects via the methods mentioned above. This may allow for the person to couple a pattern to a mood.
The term “mood line” may, in some examples, refer to a line, or shape, made between nearest neighbor objects, via the virtual connections, if needed, mentioned above. In some examples, the line may shift, and not be a constant line or shape, in which case, it may be referred to as a mood pattern. However, the term “mood pattern” may also be used for a stationary line, or shape.
Additionally, the term “geometry” may relate to the line, or shape, made by the objects via the touching of objects and/or the virtual connections mentioned above. In some examples, both touching, or nearly touching, and virtual connections may be used in the same mood pattern.
In some examples, the system further comprises a memory configured to store a correspondence table between the mood of the person and the geometry, and wherein the control unit is configured to receive a first signal based on the correspondence table for controlling the relative positioning. The memory may be a physical entity, or located in the cloud. In some examples, the correspondence table may comprise two columns or rows, wherein the first column and/or row comprise(s) information on mood, and the second column and/or row comprise(s) information on the geometry and/or mood pattern. The memory therefore can store information on moods and mood patterns reflected, by, or related to, said mood, and control the objects based on the mood, so that said objects are positioned in the corresponding mood pattern. This may allow for the same mood pattern to be used for the same mood repeatedly, thereby allowing the user to associate a pattern with a mood.
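As a non-limiting illustration only, such a correspondence table may be sketched in Python as a simple two-column mapping from a mood to a stored geometry identifier which the control unit queries to derive the first signal; the mood and geometry names below are hypothetical and not prescribed by the system:

# Hypothetical correspondence table: first column/row is the mood, second is the geometry.
MOOD_TO_GEOMETRY = {
    "calm": "straight_line",
    "tender": "wave_like_line",
    "excited": "zig_zag",
    "concentrated": "spiral",
    "focused": "round_shape",
}

def first_signal_for_mood(mood: str) -> str:
    # Return the geometry identifier used to control the relative positioning;
    # fall back to a neutral geometry if the mood is not stored in the table.
    return MOOD_TO_GEOMETRY.get(mood, "straight_line")

print(first_signal_for_mood("excited"))  # -> "zig_zag"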
In some examples, the system further comprises a processor coupled to or integral to the control unit, wherein the processor is configured to determine the mood of the person based on the sensed parameter. This may also be in the form of a correspondence table, as mentioned above, with the first column/row comprising the mood and the second column/row comprising the sensed parameter. The processor may be able to take in the sensed parameter(s) and compare it/them to moods. In some examples, there is a separate table for each parameter, and the control unit is configured to control the objects via the most common mood, or via a proportional representation of the moods according to each sensed parameter. In some examples, the control unit may produce a score based on the sensed parameter(s) and base the mood on this score. In some examples, the sensed parameter is in a third column/row of the table mentioned above with respect to the pattern and the mood. Should this happen, the control unit may relate the sensed parameter(s) to the mood, and then the mood to the pattern, thereby allowing the control unit to control the relative positioning of the objects according to the sensed parameter(s). This may allow for the mood, as sensed by the sensor, to be associated with the mood pattern as seen by the person, and for the person to recognize their current mood, should they be unaware of it.
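A minimal sketch of such a determination, assuming hypothetical per-parameter lookup tables and a simple majority vote over the candidate moods (the most common mood mentioned above), could look as follows:

from collections import Counter

# Hypothetical per-parameter tables: each sensed reading maps to a candidate mood.
PARAMETER_TABLES = {
    "keyword": {"laughing": "happy", "crying": "sad", "shouting": "angry"},
    "gesture": {"open_arms": "happy", "slumped": "sad", "clenched_fist": "angry"},
    "light_level": {"bright": "happy", "dim": "calm"},
}

def determine_mood(sensed: dict) -> str:
    # Collect one mood vote per sensed parameter and return the most common mood.
    votes = []
    for parameter, reading in sensed.items():
        table = PARAMETER_TABLES.get(parameter, {})
        if reading in table:
            votes.append(table[reading])
    if not votes:
        return "neutral"
    return Counter(votes).most_common(1)[0][0]

print(determine_mood({"keyword": "laughing", "gesture": "open_arms", "light_level": "dim"}))  # -> "happy"

A score-based variant could instead assign numerical weights to each reading and select the mood with the highest accumulated score.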
In some examples, the geometry defining the mood pattern fulfils, at one or more discrete points in time, one or more of: a straight line according to a first mood; a wave-like line according to a second mood; a zig-zag pattern according to a third mood; a spiral according to a fourth mood; and a round shape according to a fifth mood. This may allow for the mood pattern to be associated with different moods/emotions. In particular, the straight line may be associated with passiveness, aspiration, calmness or satisfaction, the wave-like line with stability, instability, calmness or tenderness, the zig-zag pattern with brutality, dynamism, excitedness, or nervousness, the spiral pattern with concentration and the round shape with focus. The skilled person understands that the various patterns may be associated with different moods for individual people, and that the above are only examples of moods associated with patterns. This may allow for the person to associate various moods with specific mood patterns.
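Purely to illustrate how such geometries could be expressed as target coordinates for the objects, the following sketch (with arbitrary scaling and a hypothetical function name) generates one point per object for each of the named patterns:

import math

def mood_line_coordinates(pattern: str, n_objects: int = 13):
    # Return illustrative (x, y) coordinates for n_objects forming the named mood pattern.
    points = []
    for i in range(n_objects):
        t = i / max(n_objects - 1, 1)          # normalized position along the line, 0..1
        if pattern == "straight_line":
            points.append((t, 0.0))
        elif pattern == "wave_like_line":
            points.append((t, 0.2 * math.sin(2 * math.pi * 3 * t)))   # gentle three-peak wave
        elif pattern == "zig_zag":
            points.append((t, 0.2 if i % 2 == 0 else -0.2))
        elif pattern == "spiral":
            angle = 4 * math.pi * t
            points.append((t * math.cos(angle), t * math.sin(angle)))
        elif pattern == "round_shape":
            angle = 2 * math.pi * t
            points.append((math.cos(angle), math.sin(angle)))
    return points

print(mood_line_coordinates("zig_zag", 5))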
In some examples, the system further comprises a remote control device configured to receive a first manual input by a person regarding a mood of the person, wherein the remote control device is configured to transmit a second signal to the system, and wherein the control unit is configured to control the relative positioning based on the second signal. This may allow for the person to manually input their current mood, or a desired mood, and for the relative positioning of the objects to be altered based on this manually input mood. This may be useful if the user wishes for the system to display a pattern according to their current mood, in order to maintain said mood, or the person may input a desired mood, opposite to their current one, in order to try and change their mood. This latter option may be particularly helpful if the person wishes to become more calm, or more focused, for example. Thereby, the mood pattern reflects the person's current mood, or reflects a wished mood. In some examples, the remote control device may comprise the sensor couplable to the control unit described herein. The remote control device may comprise a microphone and/or a camera and/or a touchscreen and/or any other suitable feature in order for the said device to act as a sensor as described herein.
In some examples, the remote control device is configured to receive a second manual input by the person, wherein the second manual input relates to a creation of a mood of the person not stored in a memory of the system, and wherein the remote control device is configured to transmit a third signal to a receiver of the system, wherein the control unit is further configured to control, based on the third signal, the relative positioning of at least the first object with respect to at least the second object, and wherein the memory of the system is configured to store the created mood and relate said created mood to the sensed parameter. This may allow for the person to create a mood not stored by the memory, and a mood pattern to be associated with this mood. Additionally, in order for the control unit to recognize this new mood, the control unit may take account of the sensed parameter(s) at the point of creation, and store this/these parameter(s) so that when this/these parameter(s) is/are sensed again, the control unit controls the relative position of the objects based on this/these parameter(s). This may allow for the person to customize the moods recognized by the control unit. In some examples, the creation of the mood can be in addition to the first manual input mentioned above, or could be an alternate to the first manual input. That is to say, in some examples, the system may only be able to recognize the second manual input.
In some examples, the remote control device is configured to receive a third manual input by the person, wherein the third manual input relates to the relative positioning of at least the first one of the objects with respect to at least the second one of the objects not stored in the memory of the system, and wherein the remote control device is configured to transmit a fourth signal to the receiver of the system, wherein the control unit is further configured to control, based on the fourth signal, the relative positioning of at least the first one of the objects with respect to at least the second one of the objects, and wherein the memory of the system is configured to store the relative positioning and relate said relative positioning to the sensed parameter. This may allow for the person to create a mood pattern and/or relative positioning of objects not stored by the memory. Additionally, in order for the control unit to recognize this new mood pattern, the control unit may take account of the sensed parameter(s) at the point of creation, and store this/these parameter(s) so that when this/these parameter(s) is/are sensed again, the control unit controls the relative position of the objects based on this/these parameter(s). This may allow for the person to customize the mood patterns and/or relative positioning of the objects. In some examples, the creation of the mood pattern can be in addition to the first and/or second manual input mentioned above, or could be an alternate to the first and/or second manual input. That is to say, in some examples, the system may only be able to recognize the third manual input, only two of the first to third manual inputs, or all three manual inputs.
In some examples, the control unit comprises a machine learning unit comprising a machine learning algorithm, and wherein the machine learning algorithm is configured to determine, based on the sensed parameter, the mood of the person. The machine learning unit may, in some examples, be in communication with a voice recognition service such as, for example, Amazon Voice Services, Microsoft Cortana, Google Assistant or the like, and receive learning inputs relating to sound from such services. The machine learning unit may, in some examples, be configured to determine, based on the sensed parameter(s), the mood of the person. The machine learning unit may be able to do so via the sensors and/or sensed parameter(s) mentioned above. That is to say, the optical sensor may be able to recognize a presence and/or movement and/or pose and/or gesture of a person and/or facial expression of a person, and determine the mood based on the presence and/or movement and/or pose and/or gesture. The machine learning unit may additionally or alternatively be able to do the same via body language, as seen by the optical sensor, recognition of keywords showing anger, happiness or sadness via the audio sensor (happiness or sadness being recognized via, for example, laughing or crying, in order for the behavior to be linked to the respective emotion), or any other suitable sensed parameter, or any combination of the foregoing examples. In some examples, the machine learning unit is a physical entity, or in the cloud, or a hybrid between the two. The machine learning unit may be trained via a known method, such as, for example, having videos and/or images and/or sounds input into the unit relating to different moods and/or emotions (an image may relate to a facial expression, for example, a person smiling, which may correspond to happiness, a person crying, which may correspond to sadness, and so on), and then training the machine learning unit based on these videos and/or images and/or sounds. Additionally or alternatively, any suitable media may be input into/output from the machine learning unit in order to train said unit and/or the machine learning unit may be trained by any suitable method.
In some examples, the machine learning algorithm is configured to receive updates from an external source via a wired and/or wireless source, and wherein the machine learning algorithm is configured to be altered based on an approval or disapproval by the person of the determination of the mood of the person. This may allow for the algorithm to be updated during use of the system, thereby more accurately determining the mood of the person and so, displaying more accurate and relevant mood patterns. In some examples, the control unit, and the memory in particular, may have several mood patterns for each mood, and may display one of these patterns according to a determined and/or selected mood. The person may then be able to manually input an approval, or disapproval, of the mood pattern via a remote control device, and so, the control unit may control the relative positioning to the next stored pattern and/or relative positioning. The control unit and/or machine learning unit may then store this choice and use the rejected pattern less frequently, and the approved pattern more frequently. This may allow for the displayed patterns to be closer to the mood of the person.
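One possible, non-prescriptive way to realize this feedback is to keep a selection weight per stored pattern and to adjust the weight on approval or disapproval; the pattern names and weighting factors in the following sketch are hypothetical:

import random

# Hypothetical store of candidate mood patterns per mood, each with a selection weight.
pattern_weights = {
    "calm": {"straight_line": 1.0, "low_amplitude_wave": 1.0},
}

def select_pattern(mood: str) -> str:
    # Pick one of the stored patterns for the mood, preferring previously approved ones.
    candidates = pattern_weights[mood]
    names = list(candidates)
    return random.choices(names, weights=[candidates[n] for n in names], k=1)[0]

def apply_feedback(mood: str, pattern: str, approved: bool) -> None:
    # Increase the weight of an approved pattern and decrease the weight of a rejected one.
    factor = 1.2 if approved else 0.8
    pattern_weights[mood][pattern] = max(0.05, pattern_weights[mood][pattern] * factor)

chosen = select_pattern("calm")
apply_feedback("calm", chosen, approved=False)   # a rejected pattern is shown less often later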
In some examples, a memory of the system comprises a first folder relating to the environment surrounding at least one of the at least two objects, wherein the first folder comprises information on one or more of: a physical positioning of at least one of the objects; historical information on the relative positioning of at least the first object with respect to at least the second object; a purpose of the environment surrounding at least one of the at least two objects; information on a person in the environment surrounding at least one of the at least two objects; and a mood of the person in the environment surrounding at least one of the at least two objects. The physical positioning of the objects may relate to a relative positioning of the objects, a geometric configuration of the objects and an ID of the system, which is used for communication and identification purposes. This may allow for the system to be identified, and for the system to understand the present relative positioning of the objects. The historical information may relate to previous relative positionings, previous mood patterns, previous mood lines, previous moods, and the like. This may then be used to determine future actions of the system based on, for example, the feedback process mentioned above. The purpose may relate to information about the environment where the system is set up such as, for example, a store, and the purpose of the store like a bookstore, a clothing store, a jewelry store and the like, a museum, be it a museum with ancient artifacts or modern pieces, or a house, with the information relating to a kitchen, a living room, a bedroom or the like. This may allow for the relative positioning and mood patterns to be altered based on the location in which the system is set up. The information on the person may relate to cultural and religious information or age information, thereby customizing the system to the user. The mood of the person may relate to the present mood of the person, thereby allowing for the relative positioning of the objects to be customized to the person's mood. In some examples, the information on the environment relates to only one of the objects, but it is to be understood that this may apply to any subset of objects within the system.
In some examples, if the first folder comprises information on the mood of the person, the first folder comprises a subfolder comprising a playlist relating to the mood, wherein the playlist comprises (i) a pattern of at least a first and a second relative positioning of at least the first one of the objects with respect to at least the second one of the objects, wherein the first and second relative positionings are different relative positionings, and (ii) music playable from a speaker couplable to the system. This may allow for the system to customize the person's experience based on their present mood. Indeed, the relative positioning of the objects may be altered based on the mood, wherein the relative positionings are changed in a cyclical pattern. This pattern may be part of the mood pattern mentioned herein. The music also may be related to the mood. For example, if the user is in a focused mood, the music may be natural sounds such as sea waves, waterfall, or forest sounds and if the user is in a happy mood, the music may be a list of their favorite songs which have been saved to the memory via a music streaming service, or via a transferred playlist. This may allow for the person to have an immersed experience and amplify their present mood.
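As an illustration only, the first folder and its mood subfolder could be held as nested records containing the cycled relative positionings and the playable tracks; all names, coordinates and file names below are hypothetical:

# Hypothetical layout of the first folder with a subfolder (playlist) for the current mood.
environment_folder = {
    "purpose": "living_room",
    "mood": "focused",
    "subfolders": {
        "focused": {
            "positioning_pattern": [   # first and second relative positionings, cycled
                {"object_1": (0.0, 0.0), "object_2": (1.0, 0.0)},
                {"object_1": (0.0, 0.2), "object_2": (1.0, -0.2)},
            ],
            "music": ["sea_waves.mp3", "waterfall.mp3", "forest.mp3"],
        },
    },
}

def next_step(folder: dict, mood: str, step: int):
    # Return the relative positioning and the track to play for the given playlist step.
    playlist = folder["subfolders"][mood]
    positioning = playlist["positioning_pattern"][step % len(playlist["positioning_pattern"])]
    track = playlist["music"][step % len(playlist["music"])]
    return positioning, track

print(next_step(environment_folder, "focused", 1))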
In some examples, the control unit is further configured to control one or more of: a wavelength of light emittable by at least one of the at least two objects; an intensity of at least one of the at least two objects; a first pattern comprising at least a first and a second relative positioning of at least the first one of the objects with respect to at least the second one of the objects, wherein the first and second relative positionings are different relative positionings; a second pattern comprising at least a first and a second wavelength of light emittable by at least one of the at least two objects, wherein the first and second wavelengths are different wavelengths; and a third pattern comprising at least a first and a second intensity of at least one of the at least two objects, wherein the first and second intensities are different intensities. The wavelength of light may be altered based on the relative positioning of the objects and/or the mood pattern and/or the mood of the person. For example, red may be associated with anger, the color pink with love and the color black with sadness. In some examples, the intensity of the light may be altered. That is to say, if the room is dark, at least one of the at least two objects may be lowered to, for example, 20% intensity, but if the room is bright, at least one of the at least two objects may be at 100% intensity. Additionally, both the wavelength of light and the intensity of light, along with the relative positionings of the objects, may be altered in cyclical patterns. This may allow for the mood of the person to be more accurately reflected by the system. This may also allow the control unit to more accurately control at least one of the at least two objects according to the relative positioning and/or the mood of the person. In some examples, the wavelength of light emittable by the light source is only in the visible spectrum. In some examples, the controlling relates to only one of the objects, but it is to be understood that this may apply to any subset of objects within the system.
In some examples, if the control unit is further configured to control the first pattern, a memory of the system comprises a second folder, the second folder comprising information on one or more of: a name relating to the first pattern; a description of the first pattern; an association made, by a person, between an experience by the person and the first pattern; a mood associated with the first pattern; an emotion associated with the first pattern; a musical genre associated with the first pattern; and coordinates of at least a first position and a second position between which at least one of the objects is moved during the first pattern. The name of the first pattern may relate to the pattern of relative positionings such as, for example, “low amplitude 3-peak wave”. In this pattern, the relative positionings may be controlled so that it appears that the mood pattern and/or relative positionings replicate a series of rolling waves. The description of the pattern may relate to a more detailed description such as, for example, “a wavy line having 3 peaks, where the relation between amplitude and wavelength is less than a value x. The higher the value x, the shorter the wavelength and the higher the amplitude”. This may allow for a programmer and/or a person using the system to more accurately visualize what a pattern looks like. The association between an experience and the first pattern may relate to terms such as “calm sea”, “light breeze”, “lightweight clouds”, “travel” and “vacations”. This may allow for the person to add more abstract filters to the first pattern while the control unit and/or the machine learning unit is determining/selecting the mood of the person. Additionally, this may allow for the person to sync their calendar to the system, and if a vacation is approaching, the system may be more likely to display patterns with the tag word “vacation”. Moods may be, for example, “relaxed” or “romantic”. Emotions may be emotions such as “pleasure” or “tranquility”. Both the moods and the emotions may allow for the control unit to more accurately control the objects according to the person's current emotional state. Additionally, when referring to the first to third manual inputs above, the person may input emotions rather than moods. The musical genre may relate to “slow romantic” or “instrumental” dependent on the mood of the person and/or the first pattern. The positioning of the objects may relate to coordinates of each point, which then form a pattern. This may also allow the control unit to calculate how far each object needs to be moved in order to create the pattern. In some examples, if the object is an elongated object, the coordinate may relate to a center point of the object, and may also comprise a rotational angle component, so that the control unit can calculate how far the elongated object needs to be rotated in order to achieve a desired pattern.
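The second folder may, for example, be sketched as a record of this metadata; the field values below are purely illustrative, and the small helper only shows how the stored coordinates could be used to calculate how far an object must be moved:

# Hypothetical record for the second folder describing the first pattern.
first_pattern_record = {
    "name": "low amplitude 3-peak wave",
    "description": "a wavy line having 3 peaks, where the relation between amplitude and wavelength is less than a value x",
    "association": ["calm sea", "light breeze", "vacation"],
    "mood": "relaxed",
    "emotion": "tranquility",
    "musical_genre": "slow romantic",
    # Coordinates the objects move between during the pattern; elongated objects may
    # additionally carry a rotational angle component (here in degrees).
    "positions": {
        "object_1": [{"x": 0.0, "y": 0.0, "angle": 0.0}, {"x": 0.0, "y": 0.3, "angle": 15.0}],
        "object_2": [{"x": 1.0, "y": 0.0, "angle": 0.0}, {"x": 1.0, "y": -0.3, "angle": -15.0}],
    },
}

def travel_distance(record: dict, object_id: str) -> float:
    # Distance an object must be moved between its first and second stored positions.
    a, b = record["positions"][object_id][:2]
    return ((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2) ** 0.5

print(travel_distance(first_pattern_record, "object_1"))  # approximately 0.3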
According to a second aspect, we describe a method performed by a system for controlling a relative positioning of at least a first object with respect to at least a second object, the method comprising: receiving, by a receiver of the system, optical data and/or sound data relating to an environment surrounding at least one of the objects; determining, by a machine learning unit coupled to the receiver, a mood of the environment, in particular of a person within the environment, surrounding at least one of the objects based on the received optical data and/or sound data; and based on the determined mood, controlling, by a control unit of the system, the control unit being couplable to (i) the machine learning unit and (ii) at least one of the objects, the relative positioning of at least the first one of the objects with respect to at least the second one of the objects.
The above method may allow for the receipt of data, similar to the sensors and parameters mentioned above in relation to the first aspect, and the determining, via a machine learning unit similar to the machine learning unit mentioned above, of the mood of the environment based on the received data. Then, based on the determined mood, a control unit of the system is configured to control a relative positioning of at least a first one of the objects with respect to at least a second one of the objects. This may allow for the relative positioning of the objects to be altered based on the detected mood. In this aspect, a mood of the environment is determined. This may require the determining of the mood of a plurality of people. In this example, similar to the correspondence tables mentioned above, the machine learning unit may base the relative positioning of the objects on the most commonly detected mood in the environment, or on a proportional representation of the detected moods of people in the environment. Therefore, the effect is that the relative positioning is based on the mood of the environment.
According to a third aspect, we describe a method for controlling a plurality of objects, the method comprising: providing a single machine learning unit which is coupled to each of the objects, wherein the machine learning unit is configured to determine the mood of the environment; and performing the method of the second aspect for each of the objects. This may allow for more elaborate relative positionings and mood patterns as the number of objects is increased. In this case, the control unit of the system may comprise a main controller configured to control a plurality of the objects simultaneously.
According to a fourth aspect, we describe a method performed by a system for controlling a pattern comprising at least a first object and a second object, wherein the first object is coupled to a first subsystem and the second object is coupled to a second subsystem, the method comprising: moving, via a control unit coupled to the first subsystem and the first object, the first object to a first physical position; moving, via the control unit coupled to the second subsystem and the second object, the second object to a second physical position; selecting, by the control unit, a transformation algorithm, wherein the transformation algorithm results in a third physical position of the first object and a fourth physical position of the second object, the transformation algorithm being generated from the first and second physical positions of the respective objects; starting, by the control unit, the transformation; and stopping, by the control unit, the transformation when the physical position of the first object matches the third physical position and the physical position of the second object matches the fourth physical position.
This may allow for the third position and the fourth position to be a blend between the first and second positions. This may be particularly helpful in situations where there are many people in the environment, and the control unit/machine learning unit controls the relative positioning of the objects according to a proportional determination of moods of people within the environment.
Throughout the present disclosure, the term “visualize” may refer to the movement of at least one of the objects, by a control unit.
In some examples, the control unit is further configured to select a transformation time, wherein the transformation time is the time needed for the first object to be controlled between the first physical position and the third physical position, and the second object to be controlled between the second physical position and the fourth physical position. This may allow for the third and fourth physical positions to be reached simultaneously, thereby creating a more aesthetically pleasing pattern movement.
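A minimal sketch of such a transformation, assuming simple linear interpolation and a shared transformation time so that both objects reach their target positions simultaneously, is given below; the function name and the time step are assumptions for illustration:

def transform(first_pos, third_pos, second_pos, fourth_pos, transformation_time, dt=0.1):
    # Move the first object from first_pos to third_pos and the second object from
    # second_pos to fourth_pos by linear interpolation, finishing after transformation_time.
    steps = max(int(transformation_time / dt), 1)
    for step in range(1, steps + 1):
        f = step / steps                      # fraction of the transformation completed
        obj1 = tuple(a + f * (b - a) for a, b in zip(first_pos, third_pos))
        obj2 = tuple(a + f * (b - a) for a, b in zip(second_pos, fourth_pos))
        yield obj1, obj2                      # intermediate positions for the moving means

# Example: both objects arrive at their target positions after 2.0 seconds.
for p1, p2 in transform((0.0, 0.0), (0.0, 0.5), (1.0, 0.0), (1.0, -0.5), transformation_time=2.0):
    pass  # here the control unit would command the winch / linear actuator / servo actuator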
According to a fifth aspect, we describe a method for controlling a plurality of objects, the method comprising: providing a single control unit which is coupled to each of the objects, wherein the control unit is configured to control a positioning of at least a first object; and performing the method of the fourth aspect for each of the objects. This may allow for more elaborate relative positionings and mood patterns as the number of objects is increased. In this case, the control unit of the system may comprise a main controller configured to control a plurality of the objects simultaneously.
According to a particular non-limiting example of the present disclosure, patterns and/or mood patterns and/or mood lines may be created using the objects mentioned herein. In a particular example, there may be 13 objects placed in a line, with the pattern and/or mood pattern and/or mood line being formed, either by touching objects or by the virtual connections mentioned herein, being formed by the line of objects.
Any advantages and features described in relation to any of the above aspects and examples may be realized in any of the other aspects and examples described above.
It is clear to a person skilled in the art that certain features of the system set forth herein may be implemented under use of hardware (circuits), software means, or a combination thereof. The software means can be related to programmed microprocessors or a general computer, an ASIC (Application Specific Integrated Circuit) and/or DSPs (Digital Signal Processors). For example, a processing unit may be implemented at least partially as a computer, a logical circuit, an FPGA (Field Programmable Gate Array), a processor (for example, a microprocessor, microcontroller (μC) or an array processor)/a core/a CPU (Central Processing Unit), an FPU (Floating Point Unit), NPU (Numeric Processing Unit), an ALU (Arithmetic Logical Unit), a Coprocessor (further microprocessor for supporting a main processor (CPU)), a GPGPU (General Purpose Computation on Graphics Processing Unit), a multi-core processor (for parallel computing, such as simultaneously performing arithmetic operations on multiple main processor(s) and/or graphical processor(s)) or a DSP.
Even if some of the aspects described above have been described in reference to any one of the first to fifth aspects, these aspects may also apply to any one or more of the other aspects.
These and other aspects of the invention will now be further described, by way of example only, with reference to the accompanying figures, wherein like reference numerals refer to like parts, and in which:
A control unit of the system mentioned herein may be any suitable control unit, preferably comprising a processor and a memory. In some examples, the control unit may be a physical entity. In some examples, the control unit, i.e. the processor and/or the memory, may be located in the cloud, and accessed via cloud storage, or cloud computing. The skilled person understands that cloud storage is a model of networked online storage and cloud computing is Internet-based development and use of computer technology stored on servers rather than client computers. In some examples, the control unit may be split between a physical entity and the cloud (in particular the processor or the memory may be stored in the cloud). More details are given on such a control unit below.
The system also comprises a pattern visualization device, which may be any suitable device configured to store data and/or execute instructions relating to the physical positioning of at least one of the objects and/or the relative positioning of the objects. Additionally or alternatively, the pattern visualization device may comprise a drive unit, which comprises control and power electronics configured to drive a plurality of mechanical units which in turn allows for the physical movement of the objects.
The system further comprises a sensor which may be any suitable sensor configured to sense a parameter (or parameters) of the environment surrounding the at least two objects. More information on such sensors is given below.
The control unit is configured to control a relative positioning between at least a first object and at least a second object based on the sensed parameter. That is to say, the control unit may control a physical positioning of at least one of the objects so that they move towards/away from each other (in any direction). This movement may be completed by a drive unit, a winch, a linear actuator, a servo actuator, a rotating means or any other suitable method, wherein the unit that allows for the objects to be moved towards/away from each other is coupled to at least one of the objects and the control unit.
Any pattern can be represented by the individual objects, via the above-mentioned relative positioning, for example, a straight line of
In some examples of elongated objects, the elongated objects are, at least partially, flexible. This may allow for mood patterns and mood lines, as seen in
A geometry defined by respective locations of the objects represents a mood pattern, in particular a mood line, wherein the mood pattern corresponds to a mood of a person within the environment surrounding the objects, and wherein the control unit is configured to control the relative positioning to generate the mood pattern based on the mood of the person. This may allow for the positioning of the objects to reflect the mood and/or emotion of the person.
The term “mood line” may, in any one or more of the examples outlined throughout the present disclosure, refer to a line, or shape, made between nearest neighbor objects, via the virtual connections, if needed, mentioned above. In some examples, the line may shift, and not be a constant line or shape, in which case, it may be referred to as a mood pattern. However, the term “mood pattern” may also be used for a stationary line, or shape.
Additionally, the term “geometry” may relate to the line, or shape, made by the objects via the touching of objects and/or the virtual connections mentioned above. In some examples, both touching, or nearly touching, and virtual connections may be used in the same mood pattern.
The mood pattern may also be constructed from a series of elongated objects, as seen in
More complicated mood patterns may be constructed from a plurality of objects, as seen in
In the case of
Mood patterns can be approximated, i.e. they may relate to an abstract pattern, or the control unit may control the relative positioning so that such mood patterns can be realized by the system and/or the person, and they may be scaled up or down to any number of “point source” and/or elongated objects, depending on the system configuration and/or the wants and needs of the person.
It is also known that colors may affect a user's mood and emotional state. For example, red may be associated with anger, the color pink with love and the color black with sadness. Therefore, the objects may be additionally configured to emit a plurality of wavelengths of light across at least the visible spectrum. This may allow for the system to display multiple colors of light simultaneously and for the system to accurately reflect the mood of a person in the environment of the objects, in line with the sensed parameter(s), as will be described in more detail below.
The objects may emit light via at least one of the two following methods:
In some examples, the system further comprises a pattern illumination device (shown in
In some examples, at least one of the at least two objects comprises a light emitting element configured to emit light (shown in
The system 100 described herein may have two primary abilities when it comes to the relative positioning of the objects, and to the objects themselves, displaying the mood of the person 60 in the environment surrounding the objects.
The first primary ability is to reflect an action 80, i.e. reflect the person's mood, by displaying appropriate mood patterns to the person 60.
The second primary ability is to create an action 70, i.e. create a mood, via a user defined mood, by displaying appropriate mood patterns to the person 60.
As can be seen in
In some examples, the create action 70 block may relate to the person's current mood, in order to maintain said mood, or a desired mood, opposite to the person's current mood, in order to try and change their mood. This latter option may be particularly helpful if the person 60 wishes to become more calm, or more focused, for example. Thereby, the mood pattern reflects the person's current mood, or reflects a wished mood.
The system 100 comprises in this example two main parts:
The Data Processing and Control Unit 100a comprises elements which will be described in more detail below.
The other main part 100b of the system 100, described as the Pattern and Illumination Unit 100b, comprises a Pattern Visualization Device 210 and Pattern Illumination Device 260. The Pattern Illumination Device 260 may be an optional part of the system 100.
The skilled person understands that any suitable part of the system 100 may be located in the cloud, and accessed via cloud storage, or cloud computing. The skilled person understands that cloud storage is a model of networked online storage and cloud computing is Internet-based development and use of computer technology stored on servers rather than client computers. In some examples, the control unit may be split between a physical entity and the cloud. Additionally or alternatively, any suitable element of the system 100 may be located in a plurality of different physical locations and/or different elements of the system 100 may be located in a plurality of different physical locations.
The Data Processing and Control Unit 100a comprises the following features:
The Data Processing and Control Unit 100a is connected to a Network 112 via a communication unit 113, wherein the connection is by wire and/or wireless. The communication unit 113 may be a transmitter, a receiver, a transceiver, or any other suitable means. The Network 112 is preferably the Internet, but may be a private network not connected to the cloud, such as, for example, a home intranet network or a work intranet network.
The Data Processing and Control Unit 100a may also be coupled to external devices, such as a Home Assistance Device 118 such as, for example, Amazon Alexa, Google Assistant or Microsoft Cortana, and/or a Mobile Device 65 such as, for example, a mobile phone or a remote control, via the Network 112.
The Data Processing and Control Unit 100a may also be connected to remote storage 160 via the Internet, wherein the remote storage is located at a different physical location from the Data Processing and Control Unit 100a and/or is in the cloud.
The CPU Unit 111 may optionally have an AI Engine, in the form of a hardware unit such as, for example, an Edge AI chipset, or software, which uses the communication unit 113 to send information to a remote AI processing and interpretation unit. Cloud based AI services such as, for example, the OpenAI neural network service can be used as remote AI processing. The AI engine may be referred to as the machine learning unit herein. The machine learning unit may comprise a machine learning algorithm configured to determine, based on the sensed parameter(s), the mood of the person. The parameter(s) is/are described in more detail below.
The machine learning unit may, in some examples, be configured to determine, based on the sensed parameter(s), the mood of the person. The machine learning unit may be able to do so via the sensors and/or sensors parameters mentioned herein. In some examples, the machine learning unit is a physical entity, or in the cloud, or a hybrid between the two.
In some examples, the machine learning algorithm is configured to receive updates from an external source via a wired and/or wireless source, and wherein the machine learning algorithm is configured to be altered based on an approval or disapproval by the person of the determination of the mood of the person. This may allow for the algorithm to be updated during use of the system, thereby more accurately determining the mood of the person and so, displaying more accurate, and relevant, mood patterns. In some examples, the control unit, and the memory in particular, may have several mood patterns for each mood, and may display one of these patterns according to a determined and/or selected mood. The person may then be able to manually input an approval, or disapproval, of the mood pattern via a remote control device, and so, the control unit may control the relative positioning to the next stored pattern and/or relative positioning. The control unit and/or machine learning unit may then store this choice and use the rejected pattern less frequently, and the approved pattern more frequently. This may allow for the displayed patterns to be closer to the mood of the person. In some examples, the machine learning unit may be in communication with a voice recognition service such as, for example, Amazon Voice Services, Microsoft Cortana, Google Assistant or the like, and receive learning inputs relating to sound from such services.
The sensors 115, 116, 117 are configured to sense a parameter (or parameters) of the environment surrounding the system 100 and can comprise one or more of:
Any number of the above sensors 115, 116, 117 may be used in conjunction with the system. That is to say, the sensed parameter(s) may be sensed via an optical sensor, wherein the optical sensor can sense a presence and/or movement and/or pose and/or gesture of a person 60, a motion sensor configured to detect a motion of a person 60, or an audio sensor configured to detect sound, or any combination of the foregoing examples. The audio sensor may detect volume and/or may pick up keywords relating to an emotion and/or mood of a person 60 in the environment of the at least two objects. Any of these sensors may allow for a more accurate determination of the mood and/or emotion of a person 60, thereby allowing the system 100 to more accurately position the objects in accordance with the mood and/or emotion of the person 60. The machine learning unit may be able to do the same via body language, as seen by the optical sensor, recognition of keywords showing anger, happiness or sadness via the audio sensor, or any other suitable sensed parameter(s), or any combination thereof.
Additionally, a wearable device, like a smartwatch, can be considered as a sensor 115, 116, 117. The wearable device may be coupled to the system via a wired and/or wireless connection via the communication unit 113. In this case, the system 100 can receive information about a user's physical state, such as the temperature and heart rate of the user 60, and use said data in the interpretation of the user's emotional state and mood, and so control system settings such as, for example, the relative positioning of the objects in order to display mood patterns.
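By way of a rough, non-limiting sketch, such wearable readings could be mapped to an emotional state with simple thresholds; the threshold values and mood labels below are assumptions for illustration only:

def mood_from_wearable(heart_rate_bpm: float, skin_temperature_c: float) -> str:
    # Very rough illustrative mapping from wearable readings to an emotional state.
    if heart_rate_bpm > 100:
        return "excited"          # an elevated heart rate may suggest excitement or stress
    if heart_rate_bpm < 60 and skin_temperature_c < 36.5:
        return "calm"
    return "neutral"

print(mood_from_wearable(heart_rate_bpm=110, skin_temperature_c=36.8))  # -> "excited"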
The following is a particular example of the parameters the sensors 115, 116, 117 may sense. The sensors 115, 116, 117 may sense one or more of the following parameters:
Acoustic data, or sound data, may be an important source of information to determine user mood in an environment. To gather acoustic data, an acoustic sensor 115, such as a microphone, or a home assistant 118 may be used. The user 60 may select the preferred method of sensing sound data during system 100 setup. When the home assistant 118 is in use, it may detect a specific vocal keyword, that is, a “wake” word. For example, a user may say “Alexa” to trigger the home assistant 118 to begin listening.
The system 100 may be connected to the Internet, which, in turn, may be connected to one or more cloud servers which host a voice service for interpreting sound data that may comprise voice commands. Cloud processing may use a cloud-based acoustic data interpretation and voice recognition software service, i.e., a voice service, such as Amazon Voice Services, Cortana, Google Assistant, or the like.
The sensor(s) 115, e.g. microphone device(s), may identify acoustic signals, filter them and transmit them to a remote cloud server for additional processing. Acoustic signatures may be audio data of interest. For example, acoustic signatures may be all acoustic signals sensed by the microphone device, or may comprise some restrictions, such as acoustic signals which are voice commands specifically intended for processing (e.g., a specific keyword or a wake word), or acoustic signals which fall within or above one or more thresholds (such as a frequency and/or amplitude threshold).
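A simple, illustrative filter of this kind, assuming normalized audio samples and hypothetical amplitude and frequency thresholds, might look as follows:

def is_acoustic_signature(samples, sample_rate_hz, amplitude_threshold=0.1,
                          min_frequency_hz=80.0, wake_word_detected=False):
    # Decide whether a captured audio frame is of interest for further (cloud) processing.
    if wake_word_detected:
        return True                                   # an explicit wake word always qualifies
    peak = max(abs(s) for s in samples)
    if peak < amplitude_threshold:
        return False                                  # too quiet to be of interest
    # Estimate a dominant frequency from positive-going zero crossings (crude, illustrative).
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    dominant_hz = crossings * sample_rate_hz / len(samples)
    return dominant_hz >= min_frequency_hz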
In particular, some types of information, which allow for detecting the mood in an environment, can be obtained from audio analysis of the environment:
Additionally or alternatively to the above, the sensor 115 may transmit the data to the CPU unit 111. The CPU unit 111 may then transmit the data to a cloud processing service on a cloud server for acoustic processing, or the system controller may locally process the sound data, based on the determination of whether the signal is an acoustic signature, i.e. audio data of interest for additional processing. If the device 118 and/or system 100 determines that additional processing is necessary, the sound data may be sent to a server, i.e. a cloud server on the Internet, for additional processing.
Alternatively, the device 118 may transmit acoustic data to the Data Processing and Control Unit 100a, and the Data Processing and Control Unit 100a may interpret and determine whether the data should remain local or be transferred to the cloud server for cloud processing. Although a cloud server has been described, it is to be understood that any server may be used, for example, a dedicated server and/or a physical server. For example, the system 100 may handle some or all of the acoustic processing in place of, or in addition to, the cloud server.
In order to perform the analysis and classification locally within the system 100, the audio content analysis and classification software, which may run at the Data Processing and Control Unit 100a, may execute a feature extraction and audio classification algorithm, which processes an incoming audio signal. With the software, the audio signal may be processed, and characteristic features from the audio content may be extracted, which are then used for classifying the audio content of each audio signal.
The software may execute a two-step analyzing and classification process. In a first step, the audio content may be classified into one of a set of general audio classes and their sub-classes, such as, for example, people presence, music, verbal expression, control words and auxiliary sounds, by extracting and analyzing features from the audio content. The first step thus provides a coarse classification of the audio content. Then, in a second step, the software may refine the classification by further extracting and analyzing features from the audio content. For example, the software may further analyze audio content classified during the first step as belonging to the "music" class by performing a more specific analysis to determine whether the audio content is Jazz, Rock, Reggae, Folk, R&B, Classical, etc. The two-dimensional mood model proposed by Thayer, R. E. (1989), "The biopsychology of mood and arousal" (hereinafter, "Thayer"), may also be used to detect music mood. The two-dimensional model adopts the theory that mood is composed of two factors, Stress (happy/anxious) and Energy (calm/energetic), and divides music mood into four divisions: contentment, depression, exuberance and anxious/frantic. Usually, four parameters of audio features are used to detect the music mood: intensity, timbre, pitch and rhythm. These four features correspond to physical quantities such as frequency, duration, amplitude, and spectrum distribution of air vibrations. In the mood map, intensity and timbre are associated with energy, while rhythm and pitch are associated with stress in Thayer's mood model.
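The mapping onto Thayer's two factors could be sketched as follows. The way intensity/timbre are reduced to an energy score and rhythm/pitch to a stress score, as well as the 0..1 normalization and the threshold, are illustrative assumptions.

    def thayer_mood(energy_score, stress_score, threshold=0.5):
        """Map normalized energy (0..1) and stress (0..1) onto Thayer's four
        music-mood divisions."""
        high_energy = energy_score >= threshold
        high_stress = stress_score >= threshold
        if high_energy and high_stress:
            return "anxious/frantic"
        if high_energy:
            return "exuberance"
        if high_stress:
            return "depression"
        return "contentment"

    def music_mood(intensity, timbre, rhythm, pitch):
        # Energy is taken from intensity and timbre, stress from rhythm and
        # pitch, following the association described above (inputs 0..1).
        return thayer_mood(energy_score=(intensity + timbre) / 2.0,
                           stress_score=(rhythm + pitch) / 2.0)

    print(music_mood(intensity=0.2, timbre=0.3, rhythm=0.3, pitch=0.2))  # "contentment"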
Yet another way to detect the music mood is using a Bag-of-Words (BOW) approach. A BOW is a collection of words where each word is assigned tags from a dictionary. A word can have different tags. Some tags are predefined as positive or negative according to mood, such as happy words, sad words, etc., while other tags are tagged based on previous tags. In an example, the lyrics of a song may be represented as a set of the 20 most frequent words (stems) in the song, and the emotional value may then be calculated by the Data Processing and Control Unit 100a and/or the cloud server based on the positive and negative word counts. Using a speech-to-text algorithm may allow the lyrics of a song to be extracted and analyzed. The classification result is then interpreted using, for example, a lookup table, where each sub-class corresponds to a certain mood: class "music" → sub-class "classic" → mood "contentment" → relaxing mood. Then, based on the result of the classification, the moods are matched with patterns and/or colors, and are displayed via the methods described herein.
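A minimal Bag-of-Words sketch along these lines is shown below. The word lists are illustrative stand-ins for the tagged dictionary, the stemming is deliberately omitted, and the function name is hypothetical.

    from collections import Counter
    import re

    POSITIVE = {"love", "happy", "sun", "dance", "smile", "joy"}     # example tags
    NEGATIVE = {"sad", "cry", "alone", "dark", "pain", "goodbye"}    # example tags

    def lyrics_mood_score(lyrics, top_n=20):
        """Score lyrics from the counts of positive and negative words among
        the top_n most frequent words; >0 leans positive, <0 leans negative."""
        words = re.findall(r"[a-z']+", lyrics.lower())
        most_common = [w for w, _ in Counter(words).most_common(top_n)]
        positive = sum(1 for w in most_common if w in POSITIVE)
        negative = sum(1 for w in most_common if w in NEGATIVE)
        return positive - negative

    score = lyrics_mood_score("I love to dance, I love the sun, so happy today")
    mood = "happy" if score > 0 else "sad" if score < 0 else "neutral"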
In some examples, the system 100 may provide gesture-based control. The gesture-based control software of the system 100 may identify the user 60 and/or gestures performed by the user 60 using any device capable of capturing images, such as the sensor 116, and/or by receiving information from the mobile device/remote control device 65.
The gesture-based control software of the system 100 may identify a gesture being indicated by the user 60 based on the images generated by the sensor 116 and/or sensed by the mobile phone/remote control device 65. The gesture may be indicated by the user 60 holding a position for a period of time to indicate a command and/or by the user 60 performing one or more bodily movements that indicate a command.
To engage the gesture-based control software, the user may perform an engage gesture. The engage gesture may be captured by the sensor 116 and may be identified by the gesture-based control software to activate the gesture-based control software. The system 100 may indicate to the user 60 that the gesture-based control software is engaged, by, for example, displaying a selected pattern of objects 133 and/or illuminating the objects 133 with a selected color.
The gesture-based control software may, additionally or alternatively, be activated and/or deactivated by a keyword received and identified by the audio content analysis and classification software, using the sensor 115 described above and/or via another sensor 115, 116, 117. The audio commands may be paired with gestures to display and illuminate a pattern. For example, the user 60 may say "blue wave" and slowly raise and lower their hands. The audio content analysis and classification software may then recognize and select the "wave" pattern, display said pattern, and illuminate the pattern with a "blue" color. At the same time, the gesture-based control software may detect the rising and lowering hands, and classify and interpret the rising hands as a command to increase the wave amplitude, and the lowering hands as a command to decrease the amplitude of the wave pattern.
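Pairing a recognized keyword with a recognized gesture could be expressed as a small command handler, as in the sketch below. The class and method names, the recognized vocabulary and the amplitude step are hypothetical and serve only to illustrate the pairing described above.

    PATTERNS = {"wave", "spiral", "line"}
    COLORS = {"blue", "red", "green", "white"}

    class PatternController:
        def __init__(self):
            self.pattern, self.color, self.amplitude = "line", "white", 1.0

        def handle_voice(self, utterance):
            # e.g. "blue wave" -> select the wave pattern and illuminate it blue
            for word in utterance.lower().split():
                if word in PATTERNS:
                    self.pattern = word
                elif word in COLORS:
                    self.color = word

        def handle_gesture(self, gesture):
            # Raising hands increases the wave amplitude, lowering decreases it.
            if gesture == "hands_raising":
                self.amplitude = min(2.0, self.amplitude + 0.1)
            elif gesture == "hands_lowering":
                self.amplitude = max(0.1, self.amplitude - 0.1)

    ctrl = PatternController()
    ctrl.handle_voice("blue wave")
    ctrl.handle_gesture("hands_raising")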
The above gesture-based control software may be stored in the Data Processing and Control Unit 100a and/or the Pattern and Illumination Unit 100b and/or in the cloud.
Data storage 160 can be realized as a physical entity, such as, for example, an SSD or any other non-volatile memory device.
The Data Processing and Control Unit 100a can be realized using an SBC (single board computer).
The data lines 120, 120a may allow data from the CPU unit 111 to be communicated to an external element/device/unit. The data may relate to the controlling of the objects described herein. In some examples, there may be a single data line 120, 120a, used as a combined data line for both illumination and positioning. In some examples, there may be two data lines 120, 120a, wherein one of the data lines 120, 120a is used for controlling a physical positioning of at least one of the objects, and the other data line 120, 120a is used for controlling illumination of at least one of the objects. In some examples, at least one of the data lines 120, 120a may transmit and/or receive data relating to both the physical positioning and the illumination.
The Pattern and Illumination Unit 100b of system 100 has, in this example, two major parts: The Pattern Visualization Device 210 and the Pattern Illumination Device 260.
The Pattern Visualization Device 210 comprises, in this example, a drive unit 130, which comprises control and power electronics configured to drive a plurality of mechanical units 131. In some examples, there may only be one mechanical unit 131. As can be seen in
The mechanical unit 131 moves an object 133 via a link 132. Each object 133, in this example, corresponds to a dot from
Each of the mechanical units 131 may be a winch, and each of the links 132 may be a rope. Additionally, one or more of the mechanical units 131 may comprise a linear drive device, which couples the object 133 to the mechanical unit 131 and allows for the object 133 to be moved relative to the mechanical unit 131. In some examples, a mechanical unit can comprise a plurality of drive units and/or objects 133. Other forms of mechanical device may be used which are able to locate objects in an environment such as, for example, a servo actuator or a rotating means.
For example, the drive unit 130 may be coupled to the CPU unit 111 via the data lines 120, 120a. The CPU unit 111 may be configured to transmit a position of the object 133 to be positioned by the mechanical unit 131 in absolute terms, in relative terms, via coordinates, or via any other suitable method, or any combination thereof. This may allow for the objects to be controlled independently from one another.
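One way to encode such per-object positioning commands on the data line is a small message per object, as sketched below. The field names, the JSON encoding and the units are assumptions for illustration; the disclosure does not prescribe a particular wire format.

    import json

    def position_command(object_id, mode, value):
        """Build a positioning message for one object 133.
        mode: "absolute" (target height in m), "relative" (delta in m)
        or "coordinates" ((x, y) pair)."""
        assert mode in ("absolute", "relative", "coordinates")
        return json.dumps({"object": object_id, "mode": mode, "value": value})

    # Each object can be addressed independently over the data line 120, 120a.
    msg1 = position_command(object_id=3, mode="absolute", value=1.20)
    msg2 = position_command(object_id=4, mode="relative", value=-0.05)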
The Pattern Illumination Device 260 comprises, in this example, a drive unit 140, which comprises control and power electronics to control a plurality of light units 141. Each light unit 141 is configured to emit a light beam 142, which illuminates an object 133. Thus, each object 133 can be individually illuminated, and a pattern created by the objects 133 can emit, via diffusion or any other suitable means, a certain color of light. Alternatively, there may be only one light unit 141 which illuminates a whole pattern. In some examples, a light unit 141 may be configured to emit a plurality of light beams 142, wherein each light beam 142 can have the same characteristics, or different characteristics such as, for example, an intensity and/or a wavelength of light.
The drive unit 140 may be coupled to the CPU unit 111 via the data lines 120, 120a. The CPU unit 111 may be configured to transmit color and/or intensity characteristics to each light unit 141. This may allow for the light units to be controlled independently from one another.
The Pattern and Illumination Unit 100b of the system 100 may have a combined Pattern Visualization Device 210 and Pattern Illumination Device 260. In this implementation, the mechanical unit 131 sends power and data to a light source located within a respective object 133 via a respective link 132. This can be implemented by, for example, a winch with an electrically conductive rope or cable.
The placement of the Pattern and Illumination Unit 100b may also present specific ornamental patterns. For example, it can be a line, wave, matrix, spiral, or a combination of any of these patterns. The placement pattern may reflect a certain style of architecture of the environment, for example, modern, gothic, religious and so on. This data may be input by the person 60 into the CPU unit 111 during installation of the system 100 and/or may be input after installation of the system 100 and/or may be predetermined during manufacturing of the system 100. In particular, a matrix pattern may be achieved by having more than one subsystem 100b, wherein a plurality of objects 133 are arranged equidistantly.
The system 100 may create patterns, as seen in
In the examples of
In
Throughout the patterns shown in
The arrangement of the mechanical units 131, i.e. the mechanical units 131 which move the objects 133, may also be in the form of a certain pattern, such as the wave of
The mechanical units 131 may be arranged as needed by an installer or a user 60 to form any desired pattern such as, for example, linear, curved, matrix/lattice, or spiral, as shown in
As an example, the mechanical units 131 may be arranged in the form of an "infinity" shape ("∞"), as shown at
Moods 164: contains several subfolders 165, each subfolder 165 being named after a certain mood, for example:
Each mood subfolder 165 comprises a playlist with files 166 which correspond to the named mood. In this example, a playlist file 166 comprises references to a pattern, light files and sound files, located in the folders Patterns 168, Light 170 and Music 172, respectively. This may allow the relative positioning of the objects 133 and the mood pattern to be changed based on the mood detected by the AI unit described above. As a result, the system may reflect the mood of the person 60. This may allow the system 100 to customize the person's 60 experience based on their present mood. Indeed, the relative positioning of the objects may be altered based on the mood, wherein the relative positionings are changed in a cyclical pattern. This pattern may be part of the mood pattern mentioned herein. The music may also be related to the mood. For example, if the user 60 is in a focused mood, the music may be natural sounds such as sea waves, a waterfall, or forest sounds, and if the user is in a happy mood, the music may be a list of their favorite songs which have been saved to the memory/data storage 160 via a music streaming service, or via a transferred playlist. This may allow the person 60 to have an immersive experience and amplify their present mood.
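The folder layout described above could be traversed roughly as in the following sketch. The file extensions, the playlist format (one reference per line) and the function name are illustrative assumptions and not part of the stored data structure as such.

    from pathlib import Path

    def load_mood_playlist(storage_root, mood):
        """Pick one playlist file from the subfolder named after the mood and
        resolve its references into the Patterns, Light and Music folders."""
        mood_dir = Path(storage_root) / "Moods" / mood
        playlists = sorted(mood_dir.glob("*.playlist"))
        if not playlists:
            return None
        entries = playlists[0].read_text().splitlines()
        root = Path(storage_root)
        return {
            "patterns": [root / "Patterns" / e for e in entries if e.endswith(".pat")],
            "light":    [root / "Light" / e for e in entries if e.endswith(".light")],
            "music":    [root / "Music" / e for e in entries if e.endswith(".mp3")],
        }

    # e.g. load_mood_playlist("/data/storage160", "relaxed")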
The main folder 161 may also contain folders for executable files relating to the Code 175.
The AI folder 174 may contain executable files, machine learning files for image/sound processing and recognition, mathematical processing subroutines relating to the machine learning unit mentioned herein, or any combination thereof.
An external storage may be used to store a copy of the local storage 160, and/or may also be used as a common database of all possible patterns, light and sound files.
The system 100 may have a web-based application running on the mobile device 65. The user 60 can use the mobile device 65 to configure and control the system 100. When the application is activated, it shows a menu, as seen in
The user can select different menu options, such as:
By choosing “Reflect Mood” 310 and consequently “Activate AI” 311, the method shown in
The above process may automatically continuously repeat, or repeat at set intervals such as, for example 1 minute, until another option is chosen, or the system 100 is deactivated. In some examples, not all of the above steps are undertaken. For example, there may be no “Illuminate Pattern” 354 step. In some examples, the steps are undertaken in a different order. For example, step 354 may be executed before step 353. In some examples, steps take place simultaneously.
An example of a simple pattern file 376 is shown. It comprises the following data fields:
Below is an example of how to use this format:
Exemplary pattern and its data fields:
Data Field Nr. 1: Pattern Name: “low amplitude 3-peak wave”.
Data Field Nr. 2: Description of pattern: "a wavy line having 3 peaks, where the relation between amplitude and wavelength is less than a value x. The higher the value x, the shorter the wavelength and the higher the amplitude".
Data Field Nr. 3: Association: “calm sea, light breeze, lightweight clouds, travel, vacations”.
Data Field Nr. 4: Moods: “relaxed, romantic”.
Data Field Nr. 5: Emotions: "pleasure, tranquility".
Data Field Nr. 6: Music style: "slow romantic, instrumental".
Data Field Nr. 7 of the pattern file 376 contains coordinates 375 of each point forming the shape of the pattern. In some cases, the x-distances between dots are equal and may be omitted, so data field Nr. 7 may contain only y-coordinates.
In the case of using an automated rope winch as the mechanical unit 131, the y-coordinates may be directly transformed into rope lengths, or other suitable link 132 lengths, so that the object(s) 133 can be positioned according to the shape of the pattern.
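A minimal sketch of that transformation is given below. It assumes the winches are mounted at a common ceiling height and that the y-coordinates are given as object heights above the floor; both assumptions, the default value and the function name are illustrative only.

    def rope_lengths(y_coordinates, ceiling_height=3.0):
        """Convert pattern y-coordinates (object heights above the floor, in m)
        into rope lengths to be paid out by each winch (mechanical unit 131)."""
        lengths = []
        for y in y_coordinates:
            if not 0.0 <= y <= ceiling_height:
                raise ValueError(f"y-coordinate {y} outside the usable range")
            lengths.append(ceiling_height - y)   # longer rope = lower object
        return lengths

    # A low-amplitude 3-peak wave sampled at equidistant x-positions:
    wave_y = [1.5, 1.7, 1.5, 1.3, 1.5, 1.7, 1.5, 1.3, 1.5]
    print(rope_lengths(wave_y))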
Data Fields 1 to 4 may be used by the machine learning unit at steps 351, 352 of the method shown in
The system 100 of the present application may allow for the automatic creation of dynamic patterns starting from static patterns, and vice versa. The process of creating dynamic patterns may be initiated by:
A dynamic pattern 385 will be produced during this process of object 133 movements as the object(s) 133 is/are rearranged. Such a dynamic pattern 385 may be considered as an emotional transition from a first associated emotion to a second associated emotion. The speed of such a transition may be set up by the user, and may depend on the pattern and its association. In some examples, the speed of the transition may be predetermined by the Data Processing and Control Unit 100a. In some examples, the user may select a keyword, such as "fast" or "slow", with regard to the speed of transition, and the Data Processing and Control Unit 100a may then select the speed of transition based on the selected keyword and the pattern and its association. For example, a short-wave pattern with a short wavelength may be transitioned to a long-wave pattern with a long wavelength, or a high amplitude wave pattern may be transitioned to a low amplitude wave, or even to a straight line. In the example shown in
Examples of the present disclosure may be able to cause the user 60 to be subject to the so-called “change blindness phenomenon”. Change blindness is a phenomenon of visual perception that occurs when a stimulus undergoes a change without this change being noticed by its observer.
Change blindness may be defined as the failure to detect when a change is made to a visual stimulus. It occurs when the local visual transient produced by a change is obscured by a larger visual transient, such as an eye blink, saccadic eye movement, screen flicker, or a cut or pan in a motion picture; or when the local visual transient produced by a change coincides with multiple local transients at other locations, such as mud-splashes, which act as distractions, causing the change to be disregarded.
The nature of change blindness results from a disconnect between the assumption that visual perceptions are so detailed as to be virtually complete, and the actual ability of the visual system to represent and compare scenes moment-to-moment.
That may mean that small changes of the pattern and/or mood pattern and/or mood line, as described herein, may not be immediately visible to the user 60. This may occur in two scenarios:
Even if the user 60 looks at the pattern and/or mood pattern and/or mood line and/or system 100 constantly, attention is needed by the user 60 to track differences between the present and previous shapes of the pattern and/or mood pattern and/or mood line and/or system 100, especially when the changes are made at a very low transition speed. Thus, the change blindness effect will still occur, but with less impact.
On the contrary, if the user 60 is socially active, and does something between looks at the pattern and/or mood pattern and/or mood line and/or system 100, they will miss most of the changes during the transition time. At the beginning, they will see one static pattern and/or mood pattern and/or mood line, and at a second, later moment in time, they will see another pattern and/or mood pattern and/or mood line, or even a new pattern and/or mood pattern and/or mood line, and be surprised that the changes occurred without visible movement.
Both of the above scenarios are used, and experienced, in real life. Setting up a long transition time, with a low speed of transition, between patterns and/or mood patterns and/or mood lines may make this change blindness effect stronger. Below is an example table of possible speeds of transition of objects 133 between a first position and a second position and/or a transition between a first pattern and a second pattern, wherein the objects 133 transition with the example speeds shown below, and wherein the speed corresponds to an absolute distance needed for the objects 133 to transition from their first position to their second position.
The skilled person understands that the bigger the movement distance of the objects 133 and the lower the speed at which the objects 133 move, the greater the change blindness effect.
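The relationship between movement distance, speed and the resulting transition time can be made explicit as in the following sketch; the example values are illustrative only and do not reproduce the table referred to above.

    def transition_time_s(distance_m, speed_m_per_s):
        """Time an object 133 needs to cover its absolute transition distance."""
        return distance_m / speed_m_per_s

    # The slower the movement over a given distance, the longer the transition
    # and, per the discussion above, the stronger the change blindness effect.
    print(transition_time_s(distance_m=0.5, speed_m_per_s=0.001))  # 500 s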
The overall change blindness effect may be increased if a gradual color change of the objects 133 is also used. This phenomenon is known as color change blindness, which is the inability to notice differences in color between stimuli. Changes in color and/or changes in brightness of the illumination of the objects 133 during a transition may also be applied in the methods described herein.
This effect may be called "Change Blindness Mode", and can be selected by the user 60 in a manner similar to that described above in relation to
Illumination may also be involved in such a transition. Taking the example of a stormy sea, an initial pattern illumination can be a dark blue color, and the final illumination, relating to a calm sea, may be a light blue color. The illumination can also be changed dynamically from dark to light blue during the transition. Alternatively, only the first 380 and second 383 patterns may be illuminated, meaning that the transition between the two patterns will be dark. This may relate to a dimming of the objects during the transition, or a switching off of the light reflecting from the objects, via the light unit(s) 141 and/or the light emitting element within the object(s) 133, during the transition.
In another example, the first pattern 380 may be a series of seemingly randomly placed point sources, which is associated with chaos and uncertainty. The second pattern 383 may, in turn, be a straight line, which is associated with order and stability. The transition from the first pattern 380 to the second pattern 383 may lead to an emotional transition from chaos to order. In some examples, the speed of the transition may be predetermined by the Data Processing and Control Unit 100a. In some examples, the user may select a keyword, such as "fast" or "slow", with regard to the speed of transition, and the Data Processing and Control Unit 100a may then select the speed of transition based on the selected keyword and the pattern and its association. The illumination in this example may change from randomly colored dots to a light blue color.
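Such a transition between two stored patterns may amount to interpolating the y-coordinates of the objects over time at the selected speed, as in the following sketch. The linear interpolation, the step timing and the function name are assumptions for illustration; other motion profiles could equally be used.

    def transition(first_y, second_y, speed_m_per_s=0.01, step_s=0.5):
        """Yield intermediate patterns (lists of y-coordinates) while moving each
        object 133 from the first pattern to the second at the given speed."""
        assert len(first_y) == len(second_y)
        longest = max(abs(b - a) for a, b in zip(first_y, second_y))
        steps = max(1, int(longest / (speed_m_per_s * step_s)))
        for i in range(1, steps + 1):
            t = i / steps
            yield [a + (b - a) * t for a, b in zip(first_y, second_y)]

    # A high-amplitude wave relaxing towards a straight line; each frame would
    # be sent to the drive unit 130, pausing step_s seconds between frames.
    for frame in transition([1.2, 1.8, 1.2, 1.8], [1.5, 1.5, 1.5, 1.5]):
        pass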
The system 100 of the present application may allow for operation not only with 2D patterns, but also with 3D patterns, by using, for example, cascading Pattern and Illumination Units 100b, wherein each Unit 100b is placed at a different height. Thus, taking the last example, the randomized dots can be arranged in a volumetric space, and then arranged into a flat 2D plane during the transition process.
In this example, the system 100 displays two patterns: a first pattern 386, located at the top of the system 100, and a second pattern 387, located at the bottom of the system 100. The skilled person understands that the patterns 386, 387 may be the other way round. The patterns 386 and 387 are represented by different sets of objects 133. In some examples, there may be a plurality of Illumination Units 100b in the system 100, wherein the first pattern 386 may be shown by a first Illumination Unit 100b, the second pattern 387 may be shown by a second Illumination Unit 100b, and a third pattern 388 may comprise at least one object 133 from each of the first and second Illumination Units 100b. During the transformation towards the third pattern 388, the first and second patterns 386, 387 move towards each other and form the third pattern 388, which is formed by sets of objects 133 from both the first and second patterns 386, 387, and thereby, in some examples, objects 133 from both the first and second Illumination Units 100b. The pattern formed during the transition may be a dynamic pattern 389.
An algorithm of transformation for creating the third pattern 388 may, for example, calculate a mean value between the y-coordinates of the first and second patterns 386, 387, wherein the result of the mean value is the y-coordinate of the third pattern 388.
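A minimal sketch of that merging step follows; it assumes both patterns are sampled at the same equidistant x-positions, and the example heights are illustrative only.

    def merge_patterns(first_y, second_y):
        """Form the third pattern 388 by taking, per x-position, the mean of the
        y-coordinates of the first pattern 386 and the second pattern 387."""
        assert len(first_y) == len(second_y)
        return [(a + b) / 2.0 for a, b in zip(first_y, second_y)]

    top_wave = [2.4, 2.6, 2.4, 2.2, 2.4]       # first pattern, near the top
    bottom_wave = [0.6, 0.4, 0.6, 0.8, 0.6]    # second pattern, near the bottom
    print(merge_patterns(top_wave, bottom_wave))  # -> [1.5, 1.5, 1.5, 1.5, 1.5]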
In some examples, not all of the above steps are undertaken. For example, there may be no “Illuminate Intermediate Pattern” step 475, or any “Illuminate Pattern” step 475, 477 at all. In some examples, the steps are undertaken in a different order. For example, step 473 may be executed before step 472. In some examples, steps take place simultaneously.
A pattern file, which may be the same as the pattern file 376 described above, is created as a result of the above-described transformation. Attributes for the newly created pattern may be assigned by the user 60 corresponding to the data fields as described in relation to
Throughout
a, b, c and e show systems 100 generally as described herein according to some example implementations, wherein the objects 133 are in a wave-like pattern;
It will be appreciated that the present disclosure has been described with reference to exemplary embodiments that may be varied in many aspects. As such, the present invention is only limited by the claims that follow.
Number | Date | Country | Kind
---|---|---|---
10 2023 103 500.2 | Feb. 14, 2023 | DE | national