The present disclosure relates to a sound environment control system and a sound environment control method.
Japanese Patent Laying-Open No. H7-59858 (PTL 1) discloses a relaxing acoustic device. The relaxing acoustic device is configured to output three types of sinusoidal audio-frequency signals having a frequency difference of a few hertz therebetween, while simultaneously outputting sound information such as music. By letting a listener listen three-dimensionally to the three types of sinusoidal audio-frequency signals, the relaxing acoustic device can provide increased realism and an increased sense of relaxation, as compared to letting the listener listen to sound information such as music alone.
Individuals have different preferences for sound environments. A given sound environment may therefore enhance the realism and sense of relaxation perceived by one listener, while another listener may not notice such effects. Therefore, in order to provide certain effects to all listeners, individual preferences must be studied in advance and the results of the studies reflected in the control of the sound environment.
The present disclosure has been made to solve such a problem, and an object of the present disclosure is to provide a sound environment control system and a sound environment control method that can provide a sound environment in which people's work efficiency or comfort can be improved, independent of individual preferences.
A sound environment control system according to the present disclosure controls a sound environment in a room in which a person is present. The sound environment control system includes an information processing device, an output device, and a sensor. The information processing device generates a sound having a plurality of frequency components. The output device outputs the sound generated by the information processing device into the room. The sensor senses biometric information of the person. The sound includes a meaningless sound that is meaningless to the person. The information processing device determines a condition of the person, using the biometric information sensed by the sensor. The information processing device adjusts at least one of a frequency and a magnitude of at least one frequency component forming the meaningless sound, depending on the determined condition of the person.
A sound environment control method according to the present disclosure is a sound environment control method for controlling a sound environment in a room in which a person is present, the sound environment control method including: generating, by a computer, a sound having a plurality of frequency components; outputting, by the computer, the generated sound into the room; and sensing, using a sensor, biometric information of the person. The sound includes a meaningless sound that is meaningless to the person. Generating the sound includes determining a condition of the person, using the biometric information sensed by the sensor; and adjusting at least one of a frequency and a magnitude of at least one frequency component forming the meaningless sound, depending on the determined condition of the person.
ADVANTAGEOUS EFFECTS OF INVENTION
According to the present disclosure, a sound environment can be provided in which people's work efficiency or comfort can be improved, independent of individual preferences.
An embodiment according to the present disclosure will be described in detail, with reference to the accompanying drawings. Note that the same reference sign is used to refer to the same or corresponding component in the drawings, and description thereof will not be repeated.
As shown in
The sound environment control system 100 includes an information processing device 10, an output device 12, and a sensor 14. The information processing device 10 is connected to the output device 12 and the sensor 14 so as to be communicable by wire or wirelessly. The information processing device 10 may be installed inside or outside the room 200. The information processing device 10 may be communicatively connected to the output device 12 and the sensor 14 via a communication network (typically, the Internet) not shown.
The information processing device 10 generates a sound that has multiple frequency components. The frequency components include at least one frequency component in an audio frequency band. The audio frequency band is a frequency range audible to humans, and is, generally, a frequency band from 20 Hz to 20 kHz. The frequency components can further include a frequency component in an ultra-high frequency band (a frequency band higher than 20 kHz) that is not audible to humans.
The information processing device 10 is configured to generate a meaningful sound and a meaningless sound. The “meaningful sound,” as used herein, refers to a sound that is meaningful to a person. Examples of the meaningful sound include music, a person's talking, and reading aloud. The “meaningless sound,” as used herein, refers to a sound that is meaningless to a person. Examples of the meaningless sound include nature sounds such as the sound of sea waves, the sound of wind, the rustling of tree leaves, and the babbling of brooks; traffic sounds such as those of automobiles, trains, or aircraft; street sounds; people's footsteps; and the hum of air-conditioning equipment.
The information processing device 10 is configured to generate a sound which includes at least one of the meaningless sound and the meaningful sound, depending on an output of the sensor 14. This allows the sound environment control system 100 to have: a mode in which the room 200 is provided with a meaningful sound only; a mode in which the room 200 is provided with a synthetic sound obtained by synthesizing a meaningful sound and a meaningless sound; and a mode in which the room 200 is provided with a meaningless sound only. The sound environment control system 100 can switch among these modes.
The output device 12 is installed in the room 200, and outputs into the room 200 the sound generated by the information processing device 10. The output device 12 is, typically, a loudspeaker or headphones. The output device 12 converts an electrical signal received from the information processing device 10 into an audio signal, and outputs the audio signal into the room 200 as a sound. While
The sensor 14 senses biometric information of the person M in the room 200. The biometric information includes information indicating biological conditions and information indicating body activities or movements. Examples of the biometric information include eye movements (ocular movements, the number of blinks, pupil diameter, etc.), arm (in particular, hand) movements, a pulse rate, a heart rate, brainwaves, sweating, and the temperature of peripheral body parts. Any of these types of biometric information can be sensed using a well-known non-contact or contact sensor. The sensor 14 is, typically, a wearable device worn by the person or a camera.
In
The eye or arm movements of the person M can be sensed by analyzing the motion image captured by the camera. Specifically, when the person M is engaged in input work on the terminal device 202 as shown in
A person's pulse rate can be measured by photoplethysmography, using a light-emitting diode and an optical sensor (such as a phototransistor), for example.
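As a concrete illustration, pulse-rate estimation from a photoplethysmography waveform commonly reduces to detecting the pulse peaks and averaging the inter-beat intervals. Below is a minimal sketch, assuming a sampled signal `ppg` and sampling rate `fs`; the peak-based method and parameters are illustrative, not necessarily what a given sensor implements.

```python
import numpy as np
from scipy.signal import find_peaks

def pulse_rate_bpm(ppg: np.ndarray, fs: float) -> float:
    """Estimate pulse rate (beats per minute) from a PPG waveform.

    ppg: sampled photoplethysmography signal
    fs:  sampling rate in Hz
    """
    # Each heartbeat produces one pulse peak; enforce a refractory
    # interval of 0.4 s (i.e., at most 150 bpm) to reject ripples.
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))
    if len(peaks) < 2:
        return 0.0
    # Mean inter-beat interval in seconds -> beats per minute.
    ibi = np.mean(np.diff(peaks)) / fs
    return 60.0 / ibi
```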
A person's brainwaves can be sensed by near-infrared spectroscopy or by an electroencephalograph, for example. Near-infrared spectroscopy is an approach that observes variations in cerebral blood volume, using a light source and a light-receiving sensor. An electroencephalograph is a sensor that picks up, via electrodes on the scalp, the small currents generated by brain activity and amplifies them for measurement as brainwaves. The brainwave information includes data indicating background rhythms in frequency bands such as the α wave and the β wave.
Examples of a person's peripheral body parts include the wrists, the fingers, the ears, and the nose. The temperatures of these peripheral parts can be measured by, for example, sensors attached to the person's body.
The information processing device 10 obtains the biometric information of the person M sensed by the sensor 14. The information processing device 10 uses the obtained biometric information of the person M to determine the person M's conditions. The conditions of the person M include the person M's work efficiency and the person M's comfort. The “work efficiency” refers to the percentage of work the person M can do within a period of time. For example, when the person M is engaged in input work on the terminal device 202 as shown in
In the present embodiment, the information processing device 10 is configured to determine the person M's work efficiency, using the biometric information of the person M. As an example, the information processing device 10 can determine the person M's work efficiency from eye and/or hand movements of the person M within a period of time. In this case, the data indicating the relationship between the person M's eye and/or hand movements and the person M's work efficiency is previously obtained and stored in a memory device (see
Alternatively, the information processing device 10 can determine the person M's work efficiency from the person M's brainwaves within the period of time. Among the brainwaves, the α wave is a brainwave that often appears around the back of the head, generally when a person is relaxed, such as during closed-eye resting. The β wave is a brainwave that often appears when a person is awake. It is known that a person's alertness degree can be estimated from the state of the brainwaves. Work efficiency deteriorates with a reduction in the alertness degree; thus, the alertness degree can serve as an index that indicates the work efficiency. In this case, data indicating the relationship between the person M's brainwaves within a period of time and the person M's alertness degree is previously obtained and stored in the memory device (see
The information processing device 10 is further configured to determine the person M's comfort, using the biometric information of the person M. For example, the information processing device 10 can determine the person M's comfort from the temperatures of the person M's peripheral body parts (such as the wrists, the fingers, the ears, and the nose). Generally, fluctuations in the temperature of the peripheral body parts reflect each individual's thermoregulation at a given temperature, and are therefore used as an index suitable for estimating an individual's comfort. The lower the temperatures of the peripheral body parts are, the lower the person's comfort tends to be.
Depending on the determined conditions of the person M, the information processing device 10 controls at least one of a component, frequency, and magnitude (sound pressure level) of the sound output from the output device 12. Specifically, depending on the person M's conditions, the information processing device 10 adjusts at least one of the frequency and magnitude (sound pressure level) of at least one frequency component forming the meaningless sound. The information processing device 10 also adjusts at least one of the frequency and magnitude (sound pressure level) of a frequency component forming the meaningful sound, depending on the person M's conditions. This causes the output device 12 to output into the room 200 the meaningful sound only, the meaningless sound only, or a synthetic sound obtained by synthesizing the meaningful sound and the meaningless sound. The output device 12 can further reproduce various meaningless sounds.
As shown in
The CPU 20 deploys a program stored in the ROM 22 into the RAM 21 and executes the program. Processes that are executed by the information processing device 10 are written in the program stored in the ROM 22.
The I/F device 23 is an input/output device for exchange of signals and data with the output device 12 and the sensor 14. The I/F device 23 receives from the sensor 14 the biometric information of the person M sensed by the sensor 14. The I/F device 23 also outputs a sound (an electrical signal) generated by the information processing device 10 to the output device 12.
The memory device 24 is a storage storing various information, including the biometric information of the person M, the information indicating the person M's conditions, and the data indicating the relationship between the biometric information of the person M and the person M's conditions. The memory device 24 is, for example, a hard disk drive (HDD) or a solid state drive (SSD).
As shown in
The meaningful sound source unit 30 is a sound source unit for generating the meaningful sound. As noted above, the meaningful sound is a sound that is meaningful to a person, typically music. The meaningful sound source unit 30, for example, plays songs according to a playlist defining the order of the songs to be played. Alternatively, the meaningful sound source unit 30 repeatedly plays previously specified songs. The meaningful sound source unit 30 outputs the meaningful sound to the sound synthesis unit 34.
The meaningless sound source unit 32 is a sound source unit for generating the meaningless sound. The meaningless sound source unit 32 includes sound sources S1 through Sn (n is an integer greater than or equal to 2). The sound sources S1 through Sn are each configured to generate a sine wave (a sound wave) in the audio frequency band. The sine waves that are generated by the respective sound sources S1 through Sn have mutually different frequency components. The frequency of each sine wave varies temporally.
Specifically, a sound source Si (i is an integer greater than or equal to 1 and less than or equal to n) has an oscillator and is configured to generate a sine wave Xi(t)=sin(2πfi(t)*t) upon input of a frequency fi(t). fi(t) indicates that the frequency varies temporally. The meaningless sound source unit 32 adds up sine waves X1(t) through Xn(t) to generate a synthetic wave of the sine waves. The meaningless sound source unit 32 outputs the synthetic wave to the sound synthesis unit 34.
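As a concrete illustration, the sound sources S1 through Sn can be modeled as an oscillator bank whose frequencies drift slowly over time. Below is a minimal NumPy sketch of Xi(t) = sin(2πfi(t)·t) and of summing the sine waves into the synthetic wave; the sampling rate, the three base frequencies, and the drift trajectories are assumptions for illustration, not values from the disclosure.

```python
import numpy as np

fs = 48_000                        # assumed sampling rate (Hz)
t = np.arange(int(fs * 2.0)) / fs  # 2 seconds of time samples

def sine_source(f_of_t, t):
    """One sound source Si: Xi(t) = sin(2*pi*fi(t)*t),
    where fi(t) is a time-varying frequency."""
    return np.sin(2 * np.pi * f_of_t(t) * t)

# Three sources (n = 3) whose frequencies fi(t) drift slowly and
# independently by +/- 5 % around their base values.
freq_fns = [
    lambda t, f0=f0: f0 * (1 + 0.05 * np.sin(2 * np.pi * 0.2 * t))
    for f0 in (220.0, 440.0, 880.0)
]
X = [sine_source(f, t) for f in freq_fns]

# The meaningless sound source unit adds up X1(t) through Xn(t).
synthetic_wave = np.sum(X, axis=0)
```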
The sound synthesis unit 34 is controlled by the control unit 40 and synthesizes the meaningful sound generated by the meaningful sound source unit 30 and the synthetic wave generated by the meaningless sound source unit 32. Here, a sound (a synthetic sound) Y(t) generated by the sound synthesis unit 34 can be represented by Equation (1) in a simplified manner:

Y(t) = K0(t)·X0 + K1(t)·X1(t) + K2(t)·X2(t) + … + Kn(t)·Xn(t) … (1)
Here, X0 is the meaningful sound that is generated by the meaningful sound source unit 30. Xi(t) is a sine wave generated by the sound source Si of the meaningless sound source unit 32. K0(t) and Ki(t) are coefficients whose values vary temporally, provided that i satisfies 1≤i≤n.
The second term on the right side of Equation (1) represents the meaningless sound generated by the meaningless sound source unit 32. The meaningless sound is generated by multiplying the sine waves X1(t) through Xn(t) by the coefficients K1(t) through Kn(t), respectively, and adding up the resultant values. The values of the coefficients K1(t) through Kn(t) vary temporally, as described above. Varying the values of the coefficients K1(t) through Kn(t) varies the amplitudes of the sine waves X1(t) through Xn(t), respectively.
With this, at least one of the frequency and magnitude of the at least one frequency component forming the meaningless sound can be varied. Specifically, the frequency fi(t) of the sine wave Xi(t) varies temporally, and the amplitude of the sine wave Xi(t) varies temporally, depending on the value of the coefficient Ki(t). As at least one of the frequency and amplitude of each of the sine waves X1(t) through Xn(t) varies temporally, at least one of the frequency and magnitude of the at least one frequency component forming the meaningless sound varies. As a result, several types of meaningless sounds, such as street sounds and the sound of running water in a river, can be reproduced.
As indicated in Equation (1), the synthetic sound is obtained by superimposing the meaningless sound on the meaningful sound. Adjusting the values of the coefficients K0(t) through Kn(t) can vary the components of the synthetic sound. Note that the synthetic sound includes the meaningless sound only, if the coefficient K0(t), which the meaningful sound X0 is multiplied by, is set to zero. The synthetic sound includes the meaningful sound only, if the values of the coefficients K1(t) through Kn(t), which the sine waves X1(t) through Xn(t) are respectively multiplied by, are all set to zero, while the coefficient K0(t) is set to a positive value. The sound synthesis unit 34 outputs the synthetic sound to the tone control unit 36.
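Equation (1) is a time-varying weighted mix, so the mode switching just described amounts to zeroing coefficients. The following self-contained sketch uses stand-in signals for X0 and Xi(t); it illustrates the equation itself, not the device's implementation.

```python
import numpy as np

fs = 48_000
t = np.arange(int(fs * 2.0)) / fs
x0 = np.sin(2 * np.pi * 440.0 * t)  # stand-in meaningful sound X0
X = [np.sin(2 * np.pi * f * t) for f in (180.0, 350.0, 700.0)]  # sine waves Xi(t)

def mix(x0, X, k0, K):
    """Equation (1): Y(t) = K0(t)*X0 + sum_i Ki(t)*Xi(t)."""
    return k0 * x0 + sum(k * x for k, x in zip(K, X))

ones = np.ones_like(t)
y_both = mix(x0, X, 0.8 * ones, [0.2 * ones] * 3)         # meaningful + meaningless
y_meaningless = mix(x0, X, 0.0 * ones, [0.2 * ones] * 3)  # K0(t) = 0
y_meaningful = mix(x0, X, 0.8 * ones, [0.0 * ones] * 3)   # all Ki(t) = 0
```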
The tone control unit 36 is controlled by the control unit 40 and adjusts at least one of the frequency and magnitude (sound pressure level) of the synthetic sound output from the output device 12. The tone control unit 36 is further configured to add a frequency component in the ultra-high frequency band (a frequency band higher than 20 kHz) to the synthetic sound. Note that, while audible frequency ranges differ among individuals, it is known that, in general, frequency components in the ultra-high frequency band, i.e., the band higher than 20 kHz, become more difficult to hear with age. However, it has been found that the α wave in the brainwaves increases as frequency components in the ultra-high frequency band are transmitted to the brain through the skin and the ear bones in the vicinity of the ears.
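Carrying a component above 20 kHz requires the output chain to sample above the Nyquist limit of twice that frequency. A minimal sketch follows, assuming a 96 kHz output rate and an illustrative 25 kHz component added at low level to a stand-in synthetic sound; the values are assumptions, not from the disclosure.

```python
import numpy as np

fs = 96_000  # must exceed 2 x 25 kHz to represent the component (Nyquist)
t = np.arange(int(fs * 2.0)) / fs

audible_mix = 0.5 * np.sin(2 * np.pi * 440.0 * t)  # stand-in synthetic sound
ultra = 0.05 * np.sin(2 * np.pi * 25_000.0 * t)    # inaudible 25 kHz component
output = audible_mix + ultra  # synthetic sound with the ultra-high band added
```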
The conditions determination unit 38 obtains the biometric information of the person M sensed by the sensor 14. Using the obtained biometric information of the person M, the conditions determination unit 38 determines the person M's conditions. In the present embodiment, the conditions determination unit 38 uses the biometric information of the person M to determine the person M's work efficiency.
Specifically, the conditions determination unit 38 measures the person M's eye movements within a period of time from the motion image captured by the sensor 14 (e.g., the camera). The conditions determination unit 38 then refers to the data, stored in the memory device 24 (see
Based on the person M's work efficiency determined by the conditions determination unit 38, the control unit 40 controls the sound synthesis unit 34, the tone control unit 36, and the meaningless sound source unit 32. This allows the control unit 40 to vary the sound output from the output device 12 into the room 200, depending on the person M's work efficiency.
Specifically, the control unit 40 compares the index, representing the person M's work efficiency, provided by the conditions determination unit 38, with a predetermined threshold. Then, if the person M's work efficiency is lower than the threshold, the control unit 40 varies the components of the synthetic sound to be generated by the sound synthesis unit 34.
The synthetic sound is composed of the meaningful sound X0 and the meaningless sound, which consists of the sine waves X1(t) through Xn(t) having mutually different frequency components, as indicated in Equation (1). The control unit 40 controls the sound synthesis unit 34 to vary the value of the coefficient K0(t), which the meaningful sound X0 is multiplied by, and at least one of the values of the coefficients K1(t) through Kn(t), which the sine waves X1(t) through Xn(t) are respectively multiplied by. Specifically, the control unit 40 varies the ratio of the meaningful sound to the synthetic sound by varying the value of the coefficient K0(t). At this time, the meaningful sound can be removed from the synthetic sound by setting the coefficient K0(t) to zero.
The control unit 40 also varies the frequencies f1(t) through fn(t) of the sine waves X1(t) through Xn(t) and/or varies the values of the coefficients K1(t) through Kn(t), to vary at least one of the frequency and magnitude (amplitude) of the at least one frequency component forming the meaningless sound included in the synthetic sound. With this, the type of the meaningless sound can be varied. For example, the control unit 40 may be configured to prepare multiple patterns of the frequencies f1(t) through fn(t) and the coefficients K1(t) through Kn(t) corresponding to multiple types of meaningless sounds, and switch among the patterns. Alternatively, the control unit 40 can remove the meaningless sound from the synthetic sound by setting all the values of the coefficients K1(t) through Kn(t) to zero.
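One plausible realization of this pattern switching is to store, per meaningless-sound type, one set of frequency trajectories fi(t) and coefficients Ki(t) and swap the set as a unit. The pattern names ("crowd", "valley") and parameter values below are purely illustrative assumptions, not values from the disclosure.

```python
import numpy as np

fs = 48_000
t = np.arange(int(fs * 2.0)) / fs

# Hypothetical parameter patterns, one per meaningless-sound type.
PATTERNS = {
    "crowd":  {"base_freqs": (180.0, 350.0, 700.0),  "level": 0.40},
    "valley": {"base_freqs": (90.0, 520.0, 2300.0), "level": 0.25},
}

def render(name, t):
    """Sum the pattern's sine components with a slow drift in fi(t)."""
    p = PATTERNS[name]
    drift = 1 + 0.03 * np.sin(2 * np.pi * 0.3 * t)
    return p["level"] * sum(np.sin(2 * np.pi * f * drift * t)
                            for f in p["base_freqs"])

meaningless = render("crowd", t)  # switching to "valley" changes the type
```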
The control unit 40 further controls the tone control unit 36 to vary at least one of the frequency and magnitude (sound pressure level) of the synthetic sound adjusted by the sound synthesis unit 34. Varying the frequency of the synthetic sound varies the pitch of the synthetic sound. Specifically, the pitch of the synthetic sound increases with an increase of the frequency, and decreases with a decrease of the frequency.
The tone control unit 36 can adjust the magnitude of the sound at three levels, for example: small, medium, and large. Note that the tone control unit 36 may vary the frequencies and/or magnitudes of both the meaningful sound and the meaningless sound, or the frequency and/or magnitude of only one of them. The tone control unit 36 can further add a frequency component in the ultra-high frequency band to the synthetic sound.
The control unit 40 controls the sound synthesis unit 34 and the tone control unit 36 as described above, while monitoring the person M's work efficiency provided by the conditions determination unit 38, to adjust at least one of the component, frequency, and magnitude of the synthetic sound output from the output device 12 into the room 200. At this time, the control unit 40 is configured to adjust at least one of the component, frequency, and magnitude of the synthetic sound so that the person M's work efficiency is greater than or equal to the threshold. Varying the sound environment in the room 200 depending on the person M's work efficiency in this manner enables restoration of the person M's work efficiency.
Next, a sound environment control method according to the present embodiment is described.
As shown in the flowchart, the information processing device 10 first generates the meaningful sound (step S01).
Subsequently, the information processing device 10 generates the meaningless sound (step S02). In S02, the information processing device 10 generates the sine waves X1(t) through Xn(t) having mutually different frequency components, using the sound sources S1 through Sn. The frequencies f1(t) through fn(t) of the sine waves X1(t) through Xn(t), respectively, vary temporally. The information processing device 10 then adds up the sine waves X1(t) through Xn(t), thereby generating a synthetic wave of the sine waves.
Subsequently, the information processing device 10 synthesizes the meaningful sound generated in S01 and the meaningless sound (a synthetic wave) generated in S02 (step S03). In S03, the synthetic sound is generated, using Equation (1) described above. Note that, as a default of the synthetic sound, a synthetic sound of a song and a previously-specified meaningless sound (e.g., the sound of crowds) may be set. In this case, the coefficient K0(t), which the meaningful sound X0 is multiplied by, is set to a positive value in Equation (1) and the values of the coefficients K1(t) through Kn(t), which the sine waves X1(t) through Xn(t) are respectively multiplied by, are set to patterns in which a previously-specified meaningless sound (e.g., the sound of crowds) is reproduced.
The information processing device 10 then transmits an electrical signal indicative of the synthetic sound generated in S03 to the output device 12. The output device 12 converts the electrical signal received from the information processing device 10 into an audio signal and outputs the audio signal into the room 200 as a sound (step S04). The sensor 14 senses the biometric information of the person M in the room 200. As an example, the sensor 14 is a camera installed in the room 200.
Next, the information processing device 10 obtains the biometric information of the person M sensed by the sensor 14 (step S05). In S05, as an example, the information processing device 10 measures the person M's eye movements within the period of time from the motion image captured by the camera as the sensor 14.
The information processing device 10 then determines the person M's conditions, using the obtained biometric information of the person M (step S06). In S06, the information processing device 10 refers to the data, previously stored in the memory device 24 (see
Next, depending on the determined work efficiency of the person M, the information processing device 10 varies at least one of the component, frequency, and magnitude of the sound output from the output device 12 into the room 200.
Specifically, initially, the information processing device 10 compares the index representing the person M's work efficiency with the predetermined threshold (step S07). If the work efficiency is greater than or equal to the threshold (YES in S07), the information processing device 10 skips the subsequent process steps S08 through S10 to keep the sound output from the output device 12, thereby maintaining the sound environment in the room 200.
If the work efficiency is less than the threshold in step S07 (NO in S07), in contrast, the information processing device 10 adjusts the frequency and magnitude (amplitude) of the at least one frequency component forming the meaningless sound that is included in the sound output from the output device 12 (step S08). In S08, the information processing device 10 varies the frequencies f1(t) through fn(t) and/or the values of the coefficients K1(t) through Kn(t) in Equation (1) to change the type of the meaningless sound. For example, the information processing device 10 can change the patterns of the frequencies f1(t) through fn(t) and the coefficients K1(t) through Kn(t) corresponding to the sound of crowds into patterns corresponding to another meaningless sound (e.g., nature sounds in a valley). Alternatively, the information processing device 10 can remove the meaningless sound from the sound output from the output device 12 by setting all the values of the coefficients K1(t) through Kn(t) to zero.
Next, the information processing device 10 adjusts a ratio of the meaningful sound to the synthetic sound (step S09). In S09, the information processing device 10 varies the value of the coefficient K0(t) in Equation (1) to vary the ratio of the meaningful sound to the synthetic sound. At this time, the information processing device 10 can remove the meaningful sound from the sound output from the output device 12 by setting the value of the coefficient K0(t) to zero.
The information processing device 10 further adjusts at least one of the frequency and magnitude of the synthetic sound obtained by synthesizing the meaningless sound adjusted in S08 and the meaningful sound adjusted in S09 (step S10). In S10, the information processing device 10 may vary the frequencies of both the meaningful sound and the meaningless sound, or vary the frequency of only one of them. At this time, the information processing device 10 may add a frequency component in the ultra-high frequency band to the synthetic sound.
As the at least one of the component, frequency, and magnitude of the sound output from the output device 12 is varied through the process steps S08 through S10, the information processing device 10 returns to S06 and determines, again, the person M's work efficiency. Then, the information processing device 10 determines whether the determined work efficiency of the person M is greater than or equal to the threshold (step S07). If the work efficiency is improved to the threshold or greater (YES in S07), the information processing device 10 keeps the sound output from the output device 12, thereby maintaining the sound environment of the room 200. If the work efficiency is less than the threshold (NO in S07), in contrast, the information processing device 10 performs, again, the process steps S08 through S10 to vary the sound output into the room 200. The process steps S08 through S10 are repeatedly performed until the person M's work efficiency is greater than or equal to the threshold.
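Viewed abstractly, steps S06 through S10 form a feedback loop with the work-efficiency index as the controlled variable. The sketch below captures that loop; `measure_efficiency` and `apply_sound` are hypothetical stand-ins for the sensing/lookup pipeline and the synthesis settings, and the specific adjustments (cycling pattern types, decrementing the meaningful-sound ratio) are illustrative.

```python
import itertools

THRESHOLD = 0.7  # assumed work-efficiency threshold used in S07
MEANINGLESS_TYPES = itertools.cycle(["crowd", "valley", "waves"])  # hypothetical

def control_loop(measure_efficiency, apply_sound):
    """Repeat S06-S10 until the efficiency index reaches the threshold."""
    k0 = 0.8                               # ratio of the meaningful sound
    pattern = next(MEANINGLESS_TYPES)
    while True:
        efficiency = measure_efficiency()  # S06: determine the condition
        if efficiency >= THRESHOLD:        # S07: YES -> keep the current sound
            break
        pattern = next(MEANINGLESS_TYPES)  # S08: change the meaningless type
        k0 = max(0.0, k0 - 0.2)            # S09: vary (or remove) X0's ratio
        apply_sound(pattern, k0)           # S10: output the adjusted mix
```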
As described above, the sound environment control system 100 according to the present embodiment is configured to output into the room a synthetic sound of a meaningful sound that is meaningful to a person and a meaningless sound that is meaningless to a person. In this configuration, the information processing device 10 adjusts the components of the synthetic sound, depending on the person's work efficiency determined from the biometric information of the person in the room. Specifically, the information processing device 10 can adjust at least one of the frequency and magnitude of at least one frequency component forming the meaningless sound to change the type of the meaningless sound. The information processing device 10 can also remove one of the meaningful sound and the meaningless sound from the synthetic sound. The information processing device 10 can further vary at least one of the frequency and magnitude of the synthetic sound output into the room, depending on the person's work efficiency. With this, since the sound environment in the room can be varied depending on the work efficiency of the person in the room, the person's work efficiency can be improved, independent of individual preferences.
Next, experimental examples of a sound environment control which is performed using the sound environment control system 100 according to the present embodiment are described.
In this experiment, the subject in the room was asked to engage in input work on a terminal device (a notebook), and images of the eye movements of the subject being engaged in the input work were captured by the sensor 14 (e.g., a camera). Then, in order for the information processing device 10 to determine the subject's work efficiency, data indicating the relationship between the eye movements and work efficiency of the subject was previously obtained and stored in the memory device 24 within the information processing device 10.
The graph of
The meaningful sounds 1 to 3 were generated by the information processing device 10 setting the coefficient K0(t), which the meaningful sound was multiplied by, to a positive value and setting the values of the coefficients K1(t) through Kn(t), which the sine waves X1(t) through Xn(t) were respectively multiplied by, to zero. The meaningless sounds 1 to 3 were reproduced by the information processing device 10 setting the value of the coefficient K0(t) to zero and varying the frequencies of the sine waves X1(t) through Xn(t) and/or the values of the coefficients K1(t) through Kn(t) which the sine waves X1(t) through Xn(t) were respectively multiplied by.
The information processing device 10 analyzed the motion image captured by the sensor 14 and thereby measured the eye movements of the subject within a period of time. The information processing device 10 then referred to the data stored in the memory device 24 to calculate an index representing the subject's work efficiency based on measurements.
As can be seen from the graph of
According to the experimental result of
In this experiment, the subject in the room was asked to engage in input work on a terminal device (a notebook), and the subject's brainwaves were measured by the sensor 14 (e.g., an electroencephalograph) worn by the subject. Then, the information processing device 10 was used to determine the conditions (the alertness degree) of the subject, based on the brainwave information measured by the sensor 14.
The graph of
The meaningless sounds 1 and 2 were reproduced by the information processing device 10 setting the value of the coefficient K0(t) to zero and varying the frequencies of the sine waves X1(t) through Xn(t) and/or the values of the coefficients K1(t) through Kn(t) which the sine waves X1(t) through Xn(t) were respectively multiplied by.
The information processing device 10 detected the intensities of the α wave and the β wave included in the subject's brainwaves, based on the subject's brainwave information measured by the sensor 14 (the electroencephalograph). Note that the intensities of the α wave and the β wave are expressed as voltage values (μV). The information processing device 10 further calculated the ratio of the β-wave intensity to the α-wave intensity (β/α) to estimate the alertness degree of the subject.
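For reference, a β/α index of this kind can be computed from a single-channel EEG trace by comparing band powers. The sketch below assumes the conventional 8-13 Hz (α) and 13-30 Hz (β) band edges and Welch power-spectrum estimation, which is one common choice and not necessarily the method used in this experiment.

```python
import numpy as np
from scipy.signal import welch

def band_power(f, pxx, lo, hi):
    """Integrate the power spectral density over [lo, hi) Hz."""
    mask = (f >= lo) & (f < hi)
    return np.trapz(pxx[mask], f[mask])

def alertness_index(eeg: np.ndarray, fs: float) -> float:
    """Ratio of beta-band to alpha-band power (higher = more alert)."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-second segments
    alpha = band_power(f, pxx, 8.0, 13.0)
    beta = band_power(f, pxx, 13.0, 30.0)
    return beta / alpha
```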
As can be seen from
In addition, the meaningful sound 1 and the meaningful sound 2, although they are different tunes, showed no significant difference in either the α-wave intensity or the β-wave intensity. As a result, the ratio (β/α) also varied little.
As the sound in the room changed from the meaningful sound 2 to the meaningless sound 1, in contrast, both the α wave and the β wave increased, with a particularly noticeable increase in the β wave. Due to the increase of the β wave, the ratio (β/α) for the meaningless sound 1 increased, as compared to the meaningful sound 1 and the meaningful sound 2. Note that, for the meaningless sound 1, the variations in β-wave intensity also increased relative to the variations in the magnitude of the sound.
Furthermore, as the sound in the room changed from the meaningless sound 1 to the meaningless sound 2, the α wave and the β wave further increased, again with a particularly noticeable increase in the β wave. Due to the increase of the β wave, the ratio (β/α) for the meaningless sound 2 further increased, as compared to the meaningless sound 1. As with the meaningless sound 1, the variations in β-wave intensity also increased relative to the magnitude of the sound.
Here, it is known that α wave increases when a person is relaxed, and β wave increases when a person is in wakefulness. In addition, the higher the ratio (β/α) is, the higher the alertness degree is. In the experimental example of
The flowchart of
The information processing device 10 then determines the person M's conditions, using the obtained biometric information of the person M (step S06A). In S06A, the information processing device 10 refers to the data, previously stored in the memory device 24 (see
Next, depending on the determined person M's comfort, the information processing device 10 varies at least one of the component, frequency, and magnitude of the sound output from the output device 12 into the room 200.
Specifically, initially, the information processing device 10 compares the index representing the person M's comfort with a predetermined threshold (step S07A). If the comfort is greater than or equal to the threshold (YES in S07A), the information processing device 10 skips the process steps S08 through S10 to keep the sound output from the output device 12, thereby maintaining the sound environment of the room 200.
If the comfort is less than the threshold in step S07A (NO in S07A), in contrast, the information processing device 10 performs the same process steps S08 through S10 as those of
As described above, the sound environment control system 100 according to Variation 1 of the present embodiment can also vary the sound environment of a room, depending on a person's comfort determined from the biometric information of the person in the room, thereby improving the person's comfort, independent of individual preferences.
For example, as shown in
The sensor 14 senses the biometric information of the persons M1 to M3 in the room 200. The sensor 14 is, for example, a camera installed in the room 200 and arranged so that its field of view covers the eyes or arms (in particular, the hands) of the persons M1 to M3. The camera outputs a captured motion image to the information processing device 10. Note that the camera may be installed in the terminal device 202.
The information processing device 10 obtains the biometric information of the persons M1 to M3 sensed by the sensor 14 and uses the obtained biometric information of the persons M1 to M3 to determine the conditions (e.g., work efficiencies) of the persons M1 to M3. Depending on the determined conditions (work efficiencies) of the persons M1 to M3, the information processing device 10 controls at least one of the component, frequency, and magnitude (sound pressure level) of the sound output from the output device 12.
The flowchart of
Next, the information processing device 10 obtains the biometric information of the persons M1 to M3 sensed by the sensor 14 (step S05B). In S05B, as an example, the information processing device 10 measures the eye movements of the persons M1 to M3 within a period of time from motion images captured by a camera as the sensor 14.
The information processing device 10 then determines the conditions of the persons M1 to M3, respectively, using the obtained biometric information of the persons M1 to M3 (step S06B). In S06B, the information processing device 10 refers to the data, previously stored in the memory device 24 (see
Next, the information processing device 10 calculates an average of the work efficiencies of the persons M1 to M3 determined in S06B (step S06C). The information processing device 10, then, varies at least one of the component, frequency, and magnitude of the sound output from the output device 12 into the room 200, depending on the calculated average work efficiency.
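Step S06C reduces to a plain mean over the individual indices; a tiny sketch with hypothetical per-person values follows.

```python
import numpy as np

# Hypothetical work-efficiency indices determined in S06B.
indices = {"M1": 0.62, "M2": 0.81, "M3": 0.55}

# S06C: the controlled variable is the mean of the individual indices.
average_efficiency = float(np.mean(list(indices.values())))
print(average_efficiency)  # 0.66 -> compared with the threshold in S07B
```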
Specifically, initially, the information processing device 10 compares the average work efficiency with a predetermined threshold (step S07B). If the average work efficiency is greater than or equal to the threshold (YES in S07B), the information processing device 10 skips the process steps S08 through S10 to keep the sound output from the output device 12, thereby maintaining the sound environment in the room 200.
If the average work efficiency is less than the threshold in step S07B (NO in S07B), in contrast, the information processing device 10 performs the same process steps S08 through S10 as those of
As described above, the sound environment control system 100 according to Variation 2 of the present embodiment can also vary the sound environment in a room, depending on the work efficiencies of the people in the room determined from their biometric information, thereby improving each person's work efficiency, independent of individual preferences.
The configuration example has been described in which the sound environment in the room 200 is varied if the average work efficiency of the persons M1 to M3 is less than the threshold (NO in S07B) in the flowchart of
The presently disclosed embodiments should be considered in all aspects as illustrative and not restrictive. The technical scope of the present disclosure is defined by the appended claims, rather than by the description of the embodiments above. All changes which come within the meaning and range of equivalency of the appended claims are to be embraced within their scope.