Sound management systems for improving workplace efficiency

Information

  • Patent Grant
  • Patent Number
    10,038,952
  • Date Filed
    Tuesday, February 3, 2015
  • Date Issued
    Tuesday, July 31, 2018
Abstract
A sound management system for use by a user located within an environment includes a memory device for storing a selection of sounds. The sounds can be music and/or various “colors” of noise (e.g., white, pink, and brown). A controller is used to select one particular stored sound based on a measured biological condition of the user, such as stress or fatigue, or an environmental condition of the environment, such as ambient noise. According to one embodiment, the system is used in conjunction with a sit/stand desk and the sound selection is made in response to changes in the desk height. The selected sound is chosen to help abate or mitigate distracting sounds in the environment, such as people talking. The selected sounds are played to the user through headphones worn by the user, or through nearby speakers.
Description
BACKGROUND OF THE INVENTION
1) Field of the Invention

The present invention generally relates to devices and systems for benefiting the health and efficiency of workers in a workplace and, more particularly, to systems and devices for mitigating or otherwise controlling unwanted conversation and for managing structured sounds (music) and random sounds generated within a workplace.


2) Discussion of Related Art

Today's workplace environments often include “open spaces” wherein many employees work in close proximity to each other in spaces that are generally free of barriers, such as cubicle walls. This open working arrangement is intended to promote the sharing of ideas and collaboration, which in turn is supposed to improve work efficiency and creativity. It has the additional benefit of saving expenses by maximizing floor space and simplifying lighting requirements. Unfortunately, not everyone can work efficiently in such an open environment, and for many people, work efficiency and even morale can “take a hit.”


People are different: many simply need their space, do not appreciate sharing it with others, and do not appreciate the many distractions that have become common in such open-space environments. One such distraction is sound, both general office noise (a nearby photocopier, for example) and other people's conversations. People need to concentrate, and many cannot unless they can control the sounds within their work area. If two people seated next to a worker, for example, start a conversation with each other, the worker's brain will involuntarily listen in and try to comprehend what is being said nearby. This distracts the worker, and his or her concentration and work efficiency plummet until the conversation stops.


In an effort to maintain the benefits of an open-space workplace layout while helping to control noise and locally intelligible conversations, two basic sound-management approaches have been developed: sound cancellation and sound masking.


Sound cancellation—or “active noise control”—electronically changes a received incoming sound signal within an environment to minimize the signal before it can be received by a human ear. With sound cancellation, a sound signal is first received by a microphone that has been placed within a subject environment. The signal is analyzed by a microprocessor, and an inverted signal (a mirror image) is then sent to a speaker that has been positioned within the environment. The speaker broadcasts the exact opposite of the original sound signal, thus flattening out the original signal and effectively cancelling it. The end result is that a worker located within the environment will hear very little background sound.
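
By way of illustration only (the patent describes the principle, not an implementation), the phase inversion at the heart of active noise control can be sketched in a few lines of Python. The 440 Hz tone, the sample rate, and the use of NumPy are assumptions made for this demonstration:

```python
import numpy as np

def anti_phase(signal: np.ndarray) -> np.ndarray:
    """Return the inverted (mirror-image) signal used for cancellation."""
    return -signal

# A pure tone standing in for a sound picked up by the microphone ...
fs = 48_000                                  # sample rate in Hz (assumed)
t = np.arange(fs) / fs
mic = 0.5 * np.sin(2 * np.pi * 440 * t)      # 440 Hz "distraction"

# ... summed with its inverse cancels to silence in this idealized case.
residual = mic + anti_phase(mic)
print(np.max(np.abs(residual)))              # -> 0.0
```

In a real room, of course, the acoustic path delay between microphone, speaker, and listener must also be compensated, which is precisely why the technique degrades in large, complex spaces, as discussed next.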


Unfortunately, sound cancellation works best in a very controlled and small environment that has a uniform, consistent, and predictable shape. Therefore, such sound cancellation systems work best with headphones and are not suitable for controlling the many dynamic frequencies common in open areas, such as a large room that includes a myriad of sound sources, moving objects, and complex, unpredictable shapes of all sizes. Any attempt to mitigate the background sounds in such a complex environment would require massive processors and a multitude of microphones and speakers. This would be prohibitively expensive and would still likely produce only marginal sound control within the environment.


Sound masking, on the other hand, works on the principle that when background noise is added to an environment, speech becomes less intelligible. The “Articulation Index” (or “AI”), a measurement of speech intelligibility, is controlled within the environment. For sound masking to be effective, the AI must be lowered by a change in the signal-to-noise ratio. The “signal” is typically a person speaking within the environment, and the “noise” is the sound masking signal.


A high signal-to-noise ratio means that speech within the environment is very intelligible and workers will suffer by not being able to concentrate. By simply introducing select sounds (controlled noise) into the environment, the signal-to-noise ratio can be reduced significantly, to the point that the voice of anyone speaking within a subject environment becomes unintelligible a short distance away, so that even nearby workers will not become distracted and work efficiency will presumably remain unaffected. The generated noise used in such sound-masking systems is typically what is referred to as “white noise,” but so-called “pink noise” can be applied as well.
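
The effect of added masking noise on the signal-to-noise ratio can be made concrete with a small, hypothetical calculation; the Gaussian stand-ins and their levels below are invented for illustration, and NumPy is assumed:

```python
import numpy as np

def snr_db(speech: np.ndarray, noise: np.ndarray) -> float:
    """Signal-to-noise ratio in decibels, from mean signal powers."""
    return 10 * np.log10(np.mean(speech**2) / np.mean(noise**2))

rng = np.random.default_rng(0)
speech = 0.3 * rng.standard_normal(48_000)   # stand-in for a nearby talker
quiet = 0.01 * rng.standard_normal(48_000)   # ambient noise alone
masking = 0.2 * rng.standard_normal(48_000)  # added masking noise

# Roughly 30 dB without masking drops to a few dB with masking added.
print(f"before masking: {snr_db(speech, quiet):5.1f} dB")
print(f"after masking:  {snr_db(speech, quiet + masking):5.1f} dB")
```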


To be effective, sound masking systems must generate sound that is both random and within a specific range of frequencies and decibel levels. The human brain will actively process received sounds in which it can identify a recognizable pattern, such as speech, but will quickly tune out sounds it cannot make sense of, such as static. Sound masking works by injecting a random, low-level background noise into the environment that correlates in frequency to human speech. Control of the decibel level and frequency of the generated noise (white or pink) within the environment is critical. The noise should be just loud enough to make it difficult to understand conversations a predetermined distance away. However, should the noise become too loud, the human brain will no longer ignore it and the noise will begin to interfere with other human processing, such as working. Care must be taken to ensure that both the decibel level and the frequency of the sound masking system are appropriate for the particular environment. This is what a “white noise” system does to mask sound: it basically “fills in” the sound spectrum around the listener with barely perceptible “unstructured” noise. The sound of running water serves well as white noise because it masks human speech very well without distracting or annoying a listener. The running-water sound creates a random, yet relatively uniform, sound wave within a specific frequency spectrum.


Early sound-masking systems installed in buildings in the 1960s simulated the sound of air moving by electronically filtering random noise produced by gas-discharge vacuum tubes. Loudspeakers in the ceiling distributed the amplified noise signal throughout the office. However, making human speech unintelligible required a volume level so high that the sound masking itself became a distracting annoyance.


In the 1970s, electronically generated sound masking that employed frequency generators to shape sound to better mask speech became more practical and worked well when installed correctly. In the 1980s, researchers began using 1/f noise, the phenomenon also known as “flicker” or “pink” noise. Calibrating this “pink” noise to match the frequencies of human speech raised the threshold of audibility just enough to mask intelligibility without requiring the higher volumes used in earlier systems.


Music and Headphones:


Although the above sound masking techniques may work well in many situations, many people just cannot listen to the sounds of “ocean waves” all day long. It is not uncommon for workers to simply play some music through their headphones to help create a controlled sound environment and effectively mask surrounding sounds. Unfortunately, even a preset playlist of songs may not always match a particular user's needs throughout the day.


Although many of the above-described sound masking systems work generally well, Applicants have recognized areas of improvement with such systems.


OBJECTS OF THE INVENTION

It is therefore a first object of the invention to overcome the deficiencies of the prior art.


It is another object of the invention to provide a useful, effective sound masking system that can be altered automatically in response to select inputs.


It is another object of the invention to provide a sound controller for controlling the sounds played to a user in response to select inputs.


SUMMARY OF THE INVENTION

A sound management system for use by a user located within an environment includes a memory device for storing a selection of sounds. The sounds can be music and/or various “colors” of noise (e.g., white, pink, and brown). A controller is used to select one particular stored sound based on a measured biological condition of the user, such as stress or fatigue, or an environmental condition of the environment, such as ambient noise. According to one embodiment, the system is used in conjunction with a sit/stand desk and the sound selection is made in response to changes in the desk height. The selected sound is chosen to help abate or mitigate distracting sounds in the environment, such as people talking. The selected sounds are played to the user through headphones worn by the user, or through nearby speakers.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention can be more fully understood from the following detailed description taken in conjunction with the accompanying drawing, in which:



FIG. 1 is a block diagram illustrating a sound management system, according to a first embodiment of the invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring to FIG. 1 and according to a first embodiment of this invention, a sound-management system 10 is schematically illustrated and includes sound generator 12, power supply 14, sound controller 16, and sound output device 18. Sound output device 18 may be any of several devices, including open speakers, headphones, bone-conduction transducers, vibration generators, and any other device able to convert an electrical signal into a corresponding sound wave that can be heard or felt by a user in a particular environment. Applicants consider headphones to be the preferred sound output device for this invention.


Sound generator 12 produces sound, either structured, such as music, or random noise, such as “pink noise.” The sound can either be pre-recorded and provided as sound or music files (stored in an appropriate memory), or can be produced by a circuit-based device that includes electrical components arranged to generate electronic “noise” and other components that provide various filters for altering the generated noise before the signal is outputted to sound output device 18. As is well known to those of ordinary skill in the art, depending on the specifics of the filtering circuitry, various types of noise can be created, such as “white”, “brown”, “grey”, “violet”, “blue”, and “pink.” Each type of “noise” has different sound characteristics, as described below:


White:


White noise is a signal made of uncorrelated samples, such as the numbers produced by a random generator. When such randomness occurs, the signal contains all frequencies in equal proportion and its spectrum is flat. Most white noise generators use uniformly distributed random numbers because they are easy to generate. Some more expensive generators rely on a Gaussian distribution, as it represents a better approximation of many real-world random processes. To the human auditory system, white noise sounds much brighter than one would expect from a “flat” spectrum. This is because human hearing senses frequencies on a logarithmic scale (the octaves) rather than a linear scale. On the logarithmic scale, white noise packs more energy in the higher octaves, hence its bright sound. White noise sounds similar to TV channel static or “snow.”
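
The uniform-versus-Gaussian point above can be illustrated with a software generator; this is a minimal NumPy sketch (not the patent's circuit), with the spectrum check confirming the flat power distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2**16

# Uniformly distributed white noise (cheap to generate) ...
white_uniform = rng.uniform(-1.0, 1.0, n)
# ... versus Gaussian white noise (closer to many physical processes).
white_gauss = rng.standard_normal(n)

# Both have a flat spectrum: power is spread evenly across all bins.
spectrum = np.abs(np.fft.rfft(white_uniform))**2
half = len(spectrum) // 2
lo, hi = spectrum[:half].mean(), spectrum[half:].mean()
print(f"low-half / high-half power ratio: {lo / hi:.2f}")  # ~1.0 (flat)
```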


Brown:


“Brown” noise (also called “red” noise) is a random signal that has been filtered in order to generate a lot of energy at low frequencies. Its power density is inversely proportional to f^2 and decreases by 6 dB per octave. Brown noise produces a much warmer tone than white noise (0 dB/oct) or pink noise (−3 dB/oct). Brown noise packs a lot of energy in the lowest frequencies. Each octave packs as much energy as the two octaves above it. For example, the 20 Hz bandwidth between 20 Hz and 40 Hz (one octave) will contain the same sound power as the 120 Hz bandwidth between 40 Hz and 160 Hz (the next two octaves). Brown noise is very relaxing to listen to—sounding similar to powerful ocean waves.
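
One common way to approximate brown noise in software, consistent with the 1/f^2 density described above, is to integrate (cumulatively sum) white noise. A minimal NumPy sketch, offered as an illustration rather than the patent's method:

```python
import numpy as np

def brown_noise(n: int, seed: int = 0) -> np.ndarray:
    """Approximate brown (red) noise by integrating white noise."""
    rng = np.random.default_rng(seed)
    brown = np.cumsum(rng.standard_normal(n))  # integration -> 1/f^2 density
    brown -= brown.mean()                      # remove DC drift
    return brown / np.max(np.abs(brown))       # normalize to [-1, 1]

x = brown_noise(2**16)
# Power concentrates in the lowest frequencies (-6 dB per octave):
s = np.abs(np.fft.rfft(x))**2
print(s[1:9].mean() / s[-8:].mean())           # very large ratio: low >> high
```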


Grey:


Although white noise plays equally loudly at all frequencies, psychoacoustics prevent it from being perceived that way by the listener. Grey noise is created by passing white noise through a filter that inverts the ear's frequency-sensitivity curve. As a result, grey noise sounds relatively flat—a bit like muffled white noise.


Violet:


Violet noise is known as differentiated white noise because it is the result of differentiating a white noise signal. Violet noise generates very high energies at higher frequencies. Its power density is proportional to f^2 and increases by 6 dB per octave. Violet noise is also referred to as purple noise. This noise is sharp and not very soothing to listen to unless the volume is very low.


Blue:


Blue noise is a random signal that has been filtered in order to generate higher energies at higher frequencies. Its power density is proportional to the frequency and increases by 3 dB per octave.


Blue noise is also referred to as azure noise and packs a lot of energy in the highest frequencies: each octave packs as much energy as the two octaves below it. Blue noise sounds very sharp and is not very soothing to listen to.


Pink:


Pink noise has a spectral envelope that is not flat but instead rolls off at higher frequencies. Pink noise has a greater relative proportion of low-frequency energy than white noise and sounds less “hissy.”


To the human auditory system—which processes frequencies logarithmically—pink noise is supposed to sound even across all frequencies, and it therefore best approximates the average spectral distribution of music. In practice, however, it turns out that human ears are more sensitive to certain frequencies, such as those in the 2-4 kHz range. Pink noise, despite its even distribution on the logarithmic frequency scale, is perceived as “colored,” with a prominent peak perceived around 3 kHz.


Pink noise is a random signal, filtered to have equal energy per octave. In order to keep the energy constant over the octaves, the spectral density is required to decrease as the frequency (f) increases (“1/f noise”). In terms of decibels, this decrease corresponds to 3 dB per octave on the magnitude spectrum.
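
The 1/f shaping just described can be sketched in software by scaling the amplitude spectrum of white noise by 1/sqrt(f), so that power falls as 1/f (3 dB per octave). This is a minimal illustration assuming NumPy, not the filtering circuit the patent contemplates:

```python
import numpy as np

def pink_noise(n: int, seed: int = 0) -> np.ndarray:
    """Approximate pink (1/f) noise by spectral shaping of white noise."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                         # avoid division by zero at DC
    spectrum /= np.sqrt(f)              # amplitude ~ 1/sqrt(f) -> power ~ 1/f
    pink = np.fft.irfft(spectrum, n)
    return pink / np.max(np.abs(pink))  # normalize to [-1, 1]

x = pink_noise(2**16)                   # equal energy per octave
```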


A basic electronic noise-generating circuit is understood by those of ordinary skill in the art of electronics, and the details of such a circuit are generally beyond the scope of this invention. Of course, as is understood by those skilled in the art, different filter components will create different types of “noise”, including the several noise colors listed above.


Referring again to FIG. 1, and according to one embodiment of the present invention, sound generator controller 16 is connected to sound generator 12 and includes appropriate circuitry that, in response to receipt of input signal 20, causes sound generator 12 to either:

    • A) change the type of sound being outputted to sound output device 18; and/or
    • B) change a sound characteristic of the sound being outputted to sound output device 18.


The term “sound characteristic” includes volume or loudness, timbre, harmonics, rhythm, and pitch. The term “type of sound” refers to sound that is either random noise or structured sound, such as music. One example of “changing the type of sound” is changing the outputted noise from pink to grey, or from white to blue, etc., again in response to receipt of input signal 20. Another example is simply changing the song or the genre of music being played, in response to receipt of input signal 20.
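
The patent gives no circuit or program details for controller 16, so the following hypothetical Python sketch merely illustrates the two behaviors, A) and B), in response to an input signal 20. All names, signal labels, and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SoundState:
    sound_type: str = "pink"   # "pink", "grey", "white", "music-upbeat", ...
    volume: float = 0.4        # a sound characteristic, 0.0 .. 1.0

def on_input_signal(state: SoundState, signal: str) -> SoundState:
    """Change the type of sound and/or a sound characteristic per input signal 20."""
    if signal == "desk_raised":            # sit -> stand: change type (A)
        return SoundState("music-upbeat", state.volume)
    if signal == "ambient_noise_up":       # louder room: change characteristic (B)
        return SoundState(state.sound_type, min(state.volume + 0.1, 0.8))
    if signal == "user_button":            # manual override: cycle noise color (A)
        nxt = {"pink": "grey", "grey": "white", "white": "pink"}
        return SoundState(nxt.get(state.sound_type, "pink"), state.volume)
    return state

state = on_input_signal(SoundState(), "ambient_noise_up")
print(state)   # SoundState(sound_type='pink', volume=0.5)
```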


As is well understood, all circuitry described herein will be powered by an appropriate power supply 14, which is electrically connected to the circuitry, as required.


In operation, sound management system 10 can be incorporated into an environment for the purpose of managing nearby sounds through masking techniques. As mentioned above, the environment can be very small and controlled, such as a user wearing headphones, larger, such as a desk, or huge, such as a large room including several workers and desks. For the purpose of explanation however, Applicants are considering the environment to be small and controlled—headphones.


Headphones 18 are connected to sound generator 12, which is, in turn, connected to sound generator controller 16. As a user works, he or she wears the headphones while a predetermined electronic sound is played through them. The user will enjoy “sound protection” from the surrounding environment (outside the headphone area) because the electronic noise playing in his or her ears will effectively mask any outside noises that reach his or her ears. In this arrangement, people located near the worker can carry on a conversation without the worker hearing them.


According to a first embodiment of the present invention, the user (the person wearing the headphones) can create an input signal 20 that can, in turn, cause sound generator controller 16 to cause sound generator 12 to change the type of electronic sound being played in the headphones, or a sound characteristic of the sound being played in the headphones. The input signal can be created manually (for example, by simply pressing a switch), or automatically, in response to detection of a change of a preset condition. Examples of preset conditions include:

    • a) Detection of a change of software program being used by the worker;
    • b) At preset times during the day (such as every hour);
    • c) At random times throughout the day;
    • d) Upon detection of changes of the level of ambient noise measured in the room (such as by a microphone);
    • e) Detection of the user changing their position (such as sitting down from standing, or standing up from sitting);
    • f) Detection of a change of height of a sit/stand desk being used by the user;
    • g) Detection of changes of energy levels of the user (e.g., change in measured heart rate); and
    • h) Changes in measured ambient temperature readings.


By way of example, as a user “surfs the web”, a certain type of sound and sound characteristic will play in the user's headphones. If the user then starts to read an article, for example, or review spreadsheets, the present sound management system 10 will automatically detect this change in the type of work and will change either the type of sound being played on the headphones or some sound characteristic. The present sound management system 10 can use a simple algorithm that compares the programs being used on the user's computer against a prescribed list to help select an appropriate (or predetermined) type of sound and sound characteristic for the particular detected work or task.
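
A minimal sketch of such a “prescribed list” comparison follows; the application names and sound profiles are invented for illustration and are not taken from the patent:

```python
# Hypothetical mapping from detected foreground application to a
# (sound type, volume) profile.
PROFILE_BY_APP = {
    "browser":     ("music-upbeat", 0.5),
    "spreadsheet": ("pink-noise",   0.35),
    "reader":      ("brown-noise",  0.3),
}
DEFAULT_PROFILE = ("white-noise", 0.4)

def profile_for(app_name: str) -> tuple[str, float]:
    """Return the sound profile prescribed for the application now in use."""
    return PROFILE_BY_APP.get(app_name, DEFAULT_PROFILE)

print(profile_for("spreadsheet"))  # ('pink-noise', 0.35)
```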


In some instances, sound management system 10 can predict what type of work the user is doing while surfing the Internet by detecting the level of activity. For example, the present system can measure mouse movement to decide the type of music the user would benefit from. If the user is actively using his or her mouse (or other input device) and actively clicking on links as they surf, then one type of music could be playing in the user's headphones, for example an upbeat, high-energy type of music. However, if it is later determined that mouse use has stopped (or fewer clicks per minute are detected), then sound management system 10 may decide that the user is now likely reading or reviewing a selected article or webpage and may then change the type of music to benefit that particular activity, such as changing the music to something more soothing so the user can better concentrate on the article or webpage.
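
The clicks-per-minute heuristic could be sketched as follows; the window length and activity threshold are assumptions, not values from the patent:

```python
import time
from collections import deque

class ClickMonitor:
    """Classify the user as 'active' or 'reading' from recent click times."""

    def __init__(self, window_s: float = 60.0, active_cpm: int = 10):
        self.clicks: deque[float] = deque()
        self.window_s = window_s
        self.active_cpm = active_cpm

    def record_click(self, t: float | None = None) -> None:
        self.clicks.append(time.monotonic() if t is None else t)

    def mode(self, now: float | None = None) -> str:
        now = time.monotonic() if now is None else now
        while self.clicks and now - self.clicks[0] > self.window_s:
            self.clicks.popleft()               # drop clicks outside the window
        return "active" if len(self.clicks) >= self.active_cpm else "reading"

m = ClickMonitor()
for t in range(12):
    m.record_click(float(t))                    # 12 clicks in 12 seconds
print(m.mode(now=12.0))                         # -> 'active'
```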


By way of another example, when the present sound management system 10 is used in combination with a motorized sit/stand desk, such as one commercially available from Stirworks, Inc. of Pasadena, Calif., USA (www.Stirworks.com), accelerometers or other appropriate sensors located in the desk can be used to predict the type of work the user is doing. Perhaps the present system is programmed (following setup by the user) to change the music from soothing when the desk is at the “sit” height to high-energy when the desk moves to the “stand” height, as sketched below. Also, the vertical-displacement frequency of desk movement and the duration the desk spends at certain heights can be used to predict the level of stress or fatigue of the user, and the system can thereby select the best type of sound and sound characteristic to benefit the user and mitigate the effects of that stress and fatigue.
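
As a hypothetical illustration of the sit/stand rule just described (the 90 cm threshold and the profile names are invented; the patent specifies no particular values):

```python
SIT_MAX_HEIGHT_CM = 90.0   # assumed boundary between "sit" and "stand" heights

def sound_for_desk_height(height_cm: float) -> tuple[str, float]:
    """Soothing sound at the 'sit' height, high-energy at the 'stand' height."""
    if height_cm <= SIT_MAX_HEIGHT_CM:
        return ("music-soothing", 0.35)
    return ("music-high-energy", 0.5)

print(sound_for_desk_height(72.0))    # ('music-soothing', 0.35)
print(sound_for_desk_height(110.0))   # ('music-high-energy', 0.5)
```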


Other sensors that can be used as input signals for controlling the selected type and characteristic of the sound, according to this invention, include the user's computer (as it is being used), any electronic wearable device, such as a health-monitoring device (for measuring a user's blood pressure, body temperature, heart rate, and level of cumulative and recent activity), a microphone, a thermal sensor, a camera, a touch screen input device, and a computer mouse. These sensors, used alone or in combination, can detect levels of stress, fatigue, and activity, which in turn can be used to control the type of sound and its characteristic. Sound management system 10 can use this collected information to tune the music, for example, or the type of noise to “match” the measured type and level of work or stress. If the work is determined to be something like Internet surfing, or sketching, etc., the music selected can be active (e.g., rock) and its characteristic can increase in tempo and/or volume.


If the work is determined to be more stressful, perhaps indicated by a measured higher heart-rate of the user, then the music type could be more relaxing, such as jazz or classical, or the type of sound could be changed to an appropriate soothing background noise, such as quiet brown noise.


If sound management system 10 determines or predicts that the user is reading or doing an activity that requires focus and comprehension, then the system will automatically decrease the volume and tempo and select music (or other sounds) that include no words or lyrics.


The present system can also use microphones (not shown) to measure external noise and other potential audible distractions within earshot of the user and then moderate the overall volume and type of sounds played to the user in an effort to mask, or at least mitigate, the level of distraction that these nearby sounds may have on the user.


The sounds that are sent to the user's ears may be sent through headphones that the user wears, or may be sent via nearby speakers, including speakers that are incorporated into the construction of the desk itself (i.e., embedded).


When the present sound-management system 10 is used in combination with a work-desk, the system can “listen” to the local environment and use the measured information to adjust the type of white noise or music played to the user's ears and the characteristics of that noise or music. As the level of measured audible distractions in the environment increases, the amplitude of the noise or music sent to the user's ears likewise increases, but only up to a point. If the “corrective” noise or music sent to the user's ears becomes too loud, at some point this corrective sound will itself become a distraction to the user.
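
This “louder, but only up to a point” behavior can be sketched as a capped level-following rule. Every constant below (levels, gain, baseline) is an invented placeholder, since the patent prescribes none:

```python
MIN_LEVEL, MAX_LEVEL = 0.2, 0.7    # assumed normalized playback levels
GAIN = 0.01                        # assumed level step per dB of ambient rise

def masking_level(ambient_db: float, baseline_db: float = 40.0) -> float:
    """Track the ambient level upward, but never past the distraction ceiling."""
    level = MIN_LEVEL + GAIN * max(ambient_db - baseline_db, 0.0)
    return min(level, MAX_LEVEL)   # cap: louder is not always better

for db in (40, 55, 70, 90):
    print(db, "dB ->", round(masking_level(db), 2))   # 0.2, 0.35, 0.5, 0.7 (capped)
```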


As described above, the various “colors” of noise are predetermined and controlled spectrums of sound. According to another embodiment of the present invention, sound management system 10 can create its own specific noise with specific spectral characteristics based on the tones the system detects around the user and the user's desk. For example, if the present system detects a person nearby speaking in a low voice, the system can introduce a type of music or set of tones (or other generated custom noise) in a tonal range similar to that of the nearby distracting voice. By matching the audible distraction with similar tones (preferably noise or sounds without words or lyrics), the distracting conversation will become effectively masked and the user's brain will not bother to try to understand it, thereby keeping the user focused and unbothered. This allows the system to abate or mitigate the distracting sounds and lower the Articulation Index without having to generate a full spectrum of noise.
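
A hypothetical sketch of such spectrum-matched masking: shape random noise by the measured spectrum of the detected distraction, so the masking energy lands in the same tonal range rather than across the full spectrum. NumPy is assumed, and a pure 150 Hz tone stands in for the detected low voice; this is an illustration, not the patent's implementation:

```python
import numpy as np

def band_matched_noise(distraction: np.ndarray, n: int,
                       seed: int = 0) -> np.ndarray:
    """Generate noise whose spectrum mirrors the detected distraction."""
    envelope = np.abs(np.fft.rfft(distraction, n))   # distraction's spectral shape
    rng = np.random.default_rng(seed)
    shaped = np.fft.rfft(rng.standard_normal(n)) * envelope
    noise = np.fft.irfft(shaped, n)
    return noise / np.max(np.abs(noise))

fs, n = 48_000, 2**15
t = np.arange(n) / fs
low_voice = np.sin(2 * np.pi * 150 * t)              # stand-in for a low voice
mask = band_matched_noise(low_voice, n)              # energy concentrated near 150 Hz

peak_bin = np.argmax(np.abs(np.fft.rfft(mask)))
print(f"masking energy peaks near {peak_bin * fs / n:.0f} Hz")   # ~150 Hz
```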


Similarly, if a high squeaking noise is detected by the system of the present invention, a noise generated perhaps by a nearby door opening and closing, the corrective noise produced by the present system for the user's ears could be instantly shifted in that direction on the sound spectrum, functioning similarly to a conventional active noise-cancellation system. By focusing on correcting or balancing the detected and measured frequency-spectrum signature of the nearby distraction, the distracting noise can effectively be mitigated without a noticeable increase in amplitude in the user's ears.


The system can also be switched from random noise (white, pink, etc.) to music using local controls (a touch screen, etc.).


If the user is wearing a device that measures levels of energy expenditure (such as a Fitbit® device, which is manufactured by Fitbit, Inc. of San Francisco, Calif., USA), the present sound management system will be able to “read” the Fitbit® device and change the type of music in response to the data read. For example, if the user recently completed a lot of exercise, the present sound management system 10 will change the music being played in the user's headphones to perhaps something more upbeat to make sure that the user doesn't fall asleep.


The user may retain control of the present sound-management system to override any music or noise that the system automatically creates and to select their own from a list of selections, both of music and of different “colored” noises. The user can also adjust other characteristics of the generated sounds, as desired. The user can make these adjustments using an appropriate controlling device, such as a computer with a touch-screen, a tablet, or a smart phone. Sound management system 10 is also able to automatically mix music in with any of the colored noises, so that if a user has been listening to white noise for an hour or so, for example, the present system can automatically, and preferably slowly, mix in the sounds of recognizable music. This allows the user to avoid tiring of random noise over long periods of time. After a prescribed period of time, the system can slowly blend in other noise patterns to create an effective sound that helps the user abate or mitigate nearby audible distractions. Sound management system 10 thus allows the user a little variety in the sounds being played.


According to another embodiment of the invention, sound management system 10 includes a wireless connection (or a direct wired connection) to a user's computer or smart phone and is able to play specific sounds, such as music, in response to information on the user's calendar. For example, if the user's calendar shows free time between 3:00 and 3:30, the present system can play some upbeat music or some of the user's favorite songs. If the user will be going to a concert later that evening to see a particular band, the system will be able to obtain details of the band and the time of the concert from the user's smart phone, for example, and then use this information to play select songs by that band . . . to get the user in the right “mood” for the concert later that night. Similarly, the user may soon be traveling to another country, such as Mexico, and the present system will learn this from the connected smart phone or computer and perhaps play Mexican music and sounds at different moments leading up to the day of travel. The system will learn what music and specific songs and sounds the user likes (by keeping track of the frequency with which certain songs and sounds are played) and will play one or more of these preferred songs or sounds for the user at a specific time during the day and week. The present system will remember that the user loves to hear a particular song on Fridays at a given time and will play that song when the internal clock indicates that exact time and day.


The present system can detect which application the user is using on his or her computer or smart phone and could play sounds or music that are specific to particular detected applications. For example, when the user logs into Facebook®, the present system will detect this and will start playing a specific set of music tracks, perhaps “fun” and upbeat music, since the system will assume that the user is taking a break from work to enjoy reading their “wall.”


Also, the present system can read inputs or profile data or favorite song lists generated by music programs, including Spotify® and Pandora®, and use this information and biometric data to help select new sounds and music to be played for the user at different times.


The present system is adapted to absorb any reachable information that provides context, such as where the user has been, what the user has been doing, and perhaps where the user will be going in the future. The system can then use this information to help select the types of music and sounds appropriate for the user, given other information, such as time of day, and other environment and biological conditions, described above.


The present system can use immediate user feedback to help learn what the user likes and dislikes and use this information to help plan future sounds and music. For example, if the system selects a song and the user turns the music off, skips the song, or turns the music down, then the system will know that it selected a song or sound that the user likely does not like. It will then remove this particular song from the primary song list and move it to a secondary list. At random times, the system can play the song again from the secondary list and see how the user responds. If the response is still negative, the system will move the song to the “no-play” folder, where it will remain until the user moves it back to the primary list.
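
The primary/secondary/no-play behavior described above could be sketched as follows; the class and method names are invented for illustration:

```python
class SongLists:
    """Three-list feedback scheme: primary -> secondary -> no-play."""

    def __init__(self, primary: list[str]):
        self.primary = list(primary)
        self.secondary: list[str] = []
        self.no_play: list[str] = []

    def negative_feedback(self, song: str) -> None:
        """User skipped, muted, or turned the song down."""
        if song in self.primary:
            self.primary.remove(song)
            self.secondary.append(song)     # retried at a random later time
        elif song in self.secondary:
            self.secondary.remove(song)
            self.no_play.append(song)       # stays until the user restores it

lists = SongLists(["song-a", "song-b"])
lists.negative_feedback("song-a")           # demoted to secondary
lists.negative_feedback("song-a")           # still negative: no-play
print(lists.no_play)                        # ['song-a']
```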


According to another feature of the present invention, in a work environment where there are several desks, each using the present sound management system 10, the system can work with all the users by allowing each to effectively vote on the sound being played, either a random noise or a particular song, and cause the system to select the played sounds and music based on the consensus vote. The voting could be inputted by a touch screen connected to a smart phone or computer that is connected to the system 10 by Wi-Fi or another appropriate connection.


Also, each desk could include a microphone, as mentioned above, wherein all microphones would be connected to the sound management system 10. According to this feature, the microphones could be used to collect distracting sounds (sound levels at different frequencies). This information could be used to determine which areas within a work environment are quiet and which are loud. Workers could then move about within the environment to find the best work zone based on this information. This information would be time-sensitive and dynamic, but it is likely that trends could be formed to establish zones within the work environment that offer, on average, quiet or moderate sound levels within the range of volume and frequency considered distracting.

Claims
  • 1. A sound management system for use with a sit-stand desk including a height adjustable work surface, the sound management system comprising: a memory device for storing a plurality of sounds including at least a first sound and a second sound, wherein the first sound is audibly different than the second sound; and a controller for selecting at least one of the plurality of stored sounds as a selected sound based on at least a first input signal, the first input signal and the corresponding selected sound being generated at least in part as a function of a height of the sit-stand desk; wherein said controller sends a command to a speaker to audibly generate said selected sound; wherein the speaker is controlled to generate the selected sound until some other measured condition occurs; and wherein the controller selects the selected sound based on the first input signal and at least a second input signal that is different than the first input signal, the second input signal generated at least in part as a function of a change in a measured biological condition of a user and wherein said measured biological condition of said user is user-fatigue.
  • 2. The sound management system of claim 1, wherein said user-fatigue is determined by calorie expenditure of recent physical activity of said user.
  • 3. The sound management system of claim 1, wherein said first sound is one of music and random noise.
  • 4. The sound management system of claim 1, wherein said second sound is one of music and random noise.
  • 5. The sound management system of claim 1, wherein said first sound is a combination of music and random noise.
  • 6. The sound management system of claim 1, wherein said speaker is a set of headphones worn by the user.
  • 7. The sound management system of claim 1, wherein said speaker is built into said sit-stand desk.
  • 8. The sound management system of claim 1 wherein said controller is further configured to select one of the plurality of stored sounds as a selected sound in response to at least a first statistic related to the height of the work surface.
  • 9. The sound management system of claim 8, wherein said first sound is one of music and random noise and said second sound is the other of music and random noise.
  • 10. The system of claim 8 wherein the statistic related to height includes instantaneous height of the work surface.
  • 11. The system of claim 8 wherein the statistic related to height includes duration of time that the work surface is at a current height.
  • 12. The system of claim 8 wherein the statistic related to height includes frequency of work surface height adjustment.
  • 13. The sound management system of claim 1 wherein the other measured condition includes a change in the height of the sit-stand desk.
  • 14. The sound management system of claim 13 wherein the selected sound includes a sound type, the system further including at least one ambient condition sensor, the controller further controlling at least one characteristic of the sound type based on information from the ambient condition sensor.
  • 15. The sound management system of claim 14 wherein the at least one characteristic includes at least one of volume, loudness, timbre, harmonics, rhythm and pitch.
  • 16. A sound management system for use with a sit-stand desk including a height adjustable work surface, the sound management system comprising: a memory device for storing a plurality of sounds including at least a first sound and a second sound, wherein the first sound is audibly different than the second sound; and a controller for selecting at least one of the plurality of stored sounds as a selected sound based on at least a first input signal, the first input signal and the corresponding selected sound being generated at least in part as a function of a height of the sit-stand desk; wherein said controller sends a command to a speaker to audibly generate said selected sound; wherein the speaker is controlled to generate the selected sound until some other measured condition occurs; and wherein the controller selects the selected sound based on the first input signal and at least a second input signal that is different than the first input signal, the second input signal generated at least in part as a function of a change in a measured biological condition of said user and wherein said measured biological condition of said user is user-stress.
  • 17. The sound management system of claim 16, wherein said user-stress is determined by one of body temperature, blood pressure, and heart rate.
  • 18. A sound management system for use with a sit-stand desk including a height adjustable work surface, the sound management system comprising: a memory device for storing a plurality of sounds including at least a first sound and a second sound, wherein the first sound is audibly different than the second sound; and a controller for selecting at least one of the plurality of stored sounds as a selected sound based on at least a first input signal, the first input signal and the corresponding selected sound being generated at least in part as a function of a height of the sit-stand desk; wherein said controller sends a command to a speaker to audibly generate said selected sound; wherein the speaker is controlled to generate the selected sound until some other measured condition occurs; and wherein the controller selects the selected sound based on the first input signal and at least a second input signal that is different than the first input signal, the second input signal generated at least in part as a function of a change in an environment of a user of the sit-stand desk, and wherein said change in said environment is one of a change in ambient temperature, a change in ambient noise level, or a change in type of a software program in use by the user.
  • 19. A sound management system for use in a work space, the sound management system comprising: a memory device for storing a plurality of sounds, wherein each sound is audibly different than the other sounds in the plurality of sounds; and a controller for selecting at least one of the plurality of sounds as a selected sound in response to at least first and second different input signals; wherein said controller sends a command to a speaker to audibly generate said selected sound; wherein said first input signal is generated at least in part as a function of a change in a measured biological condition of a user and the second input signal is generated at least in part as a function of a measured non-biological condition; wherein the speaker is controlled to generate the selected sound until some other measured condition occurs; and wherein the measured non-biological condition is a sensed height of a height adjustable sit-stand desk.
  • 20. The sound management system of claim 19, wherein said measured biological condition is user fatigue.
  • 21. The sound management system of claim 19, wherein said measured condition other than a biological condition is activity level in the workspace.
  • 22. The sound management system of claim 21, further including at least a first sensor for sensing activity level.
  • 23. The system of claim 21 wherein the activity level is activity level of a computer associated with the user.
  • 24. The sound management system of claim 19, wherein said measured biological condition is user stress level.
  • 25. The sound management system of claim 19, wherein said speaker is a set of headphones worn by a user.
  • 26. The sound management system of claim 19, wherein at least a first sound in the plurality of sounds is one of music and random noise and at least a second sound in the plurality of sounds is the other of music and random noise.
  • 27. The sound management system of claim 19 wherein the measured biological condition is a current posture of the user.
  • 28. The sound management system of claim 19 wherein the measured condition other than a biological condition includes environmental sound within the ambient work space.
  • 29. The sound management system of claim 19 wherein the selected sound is a specific sound type based on one of the first and second measured conditions, the system further adjusting at least one sound characteristic of the specific sound type based on the other of the measured conditions.
  • 30. The sound management system of claim 29 wherein the specific sound type is based on the first measured condition.
  • 31. The system of claim 19 wherein the measured condition other than a biological condition includes detection of an application accessed by the user.
  • 32. The system of claim 19 wherein the measured condition other than a biological condition includes detection of an event on the user's schedule.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application No. 61/935,343, filed Feb. 4, 2014, entitled “Sound Management Systems for Improving Workplace Efficiency.”

US Referenced Citations (231)
Number Name Date Kind
3953770 Hayashi Apr 1976 A
4163929 Janu et al. Aug 1979 A
4440096 Rice et al. Apr 1984 A
4571682 Silverman et al. Feb 1986 A
4779865 Lieberman et al. Oct 1988 A
4821118 Lafreniere Apr 1989 A
4828257 Dyer et al. May 1989 A
4849733 Conigliaro Jul 1989 A
4894600 Kearney Jan 1990 A
5019950 Johnson May 1991 A
5022384 Freels et al. Jun 1991 A
5140977 Raffel Aug 1992 A
5224429 Borgman et al. Jul 1993 A
5259326 Borgman et al. Nov 1993 A
5305238 Starr, III et al. Apr 1994 A
5308296 Eckstein May 1994 A
5323695 Borgman et al. Jun 1994 A
5335188 Brisson Aug 1994 A
5371693 Nakazoe Dec 1994 A
5412297 Clark et al. May 1995 A
5435799 Lundin Jul 1995 A
5456648 Edinburg et al. Oct 1995 A
5583831 Churchill et al. Dec 1996 A
5612869 Letzt et al. Mar 1997 A
5666426 Helms Sep 1997 A
5765910 Larkin et al. Jun 1998 A
5853005 Scanlon Dec 1998 A
5870647 Nada et al. Feb 1999 A
5890997 Roth Apr 1999 A
5944633 Wittrock Aug 1999 A
6013008 Fukushima Jan 2000 A
6014572 Takahashi Jan 2000 A
6030351 Schmidt et al. Feb 2000 A
6032108 Sciple et al. Feb 2000 A
6075755 Zarchan Jun 2000 A
6135951 Richardson et al. Oct 2000 A
6142910 Heuvelman Nov 2000 A
6161095 Brown Dec 2000 A
6286441 Burdi et al. Sep 2001 B1
6296408 Larkin et al. Oct 2001 B1
6312363 Watterson et al. Nov 2001 B1
6360675 Jones Mar 2002 B1
6447424 Ashby et al. Sep 2002 B1
6458060 Watterson et al. Oct 2002 B1
6527674 Clem Mar 2003 B1
6595144 Doyle Jul 2003 B1
6622116 Skinner et al. Sep 2003 B2
6669286 Iusim Dec 2003 B2
6702719 Brown et al. Mar 2004 B1
6716139 Hosseinzadeh-Dolkhani et al. Apr 2004 B1
6746371 Brown et al. Jun 2004 B1
6749537 Hickman Jun 2004 B1
6790178 Mault et al. Sep 2004 B1
6793607 Neil Sep 2004 B2
6964370 Hagale et al. Nov 2005 B1
6977476 Koch Dec 2005 B2
6987221 Platt Jan 2006 B2
7030735 Chen Apr 2006 B2
7063644 Albert et al. Jun 2006 B2
7070539 Brown et al. Jul 2006 B2
7128693 Brown et al. Oct 2006 B2
7141026 Aminian et al. Nov 2006 B2
7161490 Huiban Jan 2007 B2
7172530 Hercules Feb 2007 B1
7301463 Paterno Nov 2007 B1
7327442 Fear et al. Feb 2008 B1
7439956 Albouyeh et al. Oct 2008 B1
7510508 Santomassimo et al. Mar 2009 B2
7523905 Timm et al. Apr 2009 B2
7538284 Nielsen et al. May 2009 B2
7594873 Terao et al. Sep 2009 B2
7614001 Abbott et al. Nov 2009 B2
7628737 Kowallis et al. Dec 2009 B2
7635324 Balis Dec 2009 B2
7637847 Hickman Dec 2009 B1
7640866 Schermerhom Jan 2010 B1
7652230 Baier Jan 2010 B2
7661292 Buitmann et al. Feb 2010 B2
7717827 Kurunmaki et al. May 2010 B2
7722503 Smith et al. May 2010 B1
7735918 Beck Jun 2010 B2
7857731 Hickman et al. Dec 2010 B2
7884808 Joo Feb 2011 B2
7892148 Stauffer et al. Feb 2011 B1
7909737 Ellis et al. Mar 2011 B2
7914468 Shalon et al. Mar 2011 B2
7931563 Shaw et al. Apr 2011 B2
7955219 Birrell et al. Jun 2011 B2
8001472 Gilley et al. Aug 2011 B2
8024202 Carroll et al. Sep 2011 B2
8047966 Dorogusker et al. Nov 2011 B2
8051782 Nethken et al. Nov 2011 B2
8052580 Saalasti et al. Nov 2011 B2
8092346 Shea Jan 2012 B2
8105209 Lannon et al. Jan 2012 B2
8109858 Redmann Feb 2012 B2
8159335 Cox, Jr. Apr 2012 B2
8167776 Lannon et al. May 2012 B2
8206325 Najafi et al. Jun 2012 B1
8257228 Quatrochi et al. Sep 2012 B2
8361000 Gaspard Jan 2013 B2
8381603 Peng et al. Feb 2013 B2
8432356 Chase Apr 2013 B2
8462921 Parker Jun 2013 B2
8522695 Ellegaard Sep 2013 B2
8540641 Kroll et al. Sep 2013 B2
8550820 Soltanoff Oct 2013 B2
8560336 Schwarzberg et al. Oct 2013 B2
8593286 Razoumov et al. Nov 2013 B2
8595023 Kirchhoff et al. Nov 2013 B2
8620617 Yuen et al. Dec 2013 B2
8668045 Cohen Mar 2014 B2
8688467 Harrison et al. Apr 2014 B2
8690578 Nusbaum et al. Apr 2014 B1
8690735 Watterson et al. Apr 2014 B2
8700690 Raghav et al. Apr 2014 B2
8771222 Kanderian, Jr. et al. Jul 2014 B2
8812096 Flaherty et al. Aug 2014 B2
8814754 Weast et al. Aug 2014 B2
8818782 Thukral et al. Aug 2014 B2
8821350 Maertz Sep 2014 B2
8825482 Hernandez-Abrego et al. Sep 2014 B2
8836500 Houvener et al. Sep 2014 B2
8847988 Geisner et al. Sep 2014 B2
8947215 Mandel et al. Feb 2015 B2
8965541 Martinez et al. Feb 2015 B2
8984685 Robertson et al. Mar 2015 B2
8997588 Taylor Apr 2015 B2
9049923 Delagey Jun 2015 B1
9084475 Hjelm Jul 2015 B2
9119568 Yin et al. Sep 2015 B2
9167894 DesRoches et al. Oct 2015 B2
9236817 Strothmann et al. Jan 2016 B2
9486070 Labrosse et al. Nov 2016 B2
20010013307 Stone Aug 2001 A1
20010028308 De La Huerga Oct 2001 A1
20020055419 Hinnebusch May 2002 A1
20020145512 Sleichter, III et al. Oct 2002 A1
20040010328 Carson et al. Jan 2004 A1
20040014014 Hess Jan 2004 A1
20040229729 Albert et al. Nov 2004 A1
20040239161 Lee Dec 2004 A1
20050058970 Perlman et al. Mar 2005 A1
20050113649 Bergantino May 2005 A1
20050165626 Karpf Jul 2005 A1
20050172311 Hjelt et al. Aug 2005 A1
20050182653 Urban et al. Aug 2005 A1
20050202934 Olrik et al. Sep 2005 A1
20050209051 Santomassimo et al. Sep 2005 A1
20060063980 Hwang et al. Mar 2006 A1
20060205564 Peterson Sep 2006 A1
20060241520 Robertson Oct 2006 A1
20060250524 Roche Nov 2006 A1
20060266791 Koch et al. Nov 2006 A1
20070135264 Rosenberg Jun 2007 A1
20070179355 Rosen Aug 2007 A1
20070200396 Baumann et al. Aug 2007 A1
20070219059 Schwartz et al. Sep 2007 A1
20070265138 Ashby Nov 2007 A1
20080030317 Bryant Feb 2008 A1
20080045384 Matsubara et al. Feb 2008 A1
20080051256 Ashby et al. Feb 2008 A1
20080077620 Gilley et al. Mar 2008 A1
20080098525 Doleschal et al. May 2008 A1
20080132383 Einav et al. Jun 2008 A1
20080245279 Pan Oct 2008 A1
20080255794 Levine Oct 2008 A1
20080256445 Olch Oct 2008 A1
20080300914 Karkanias et al. Dec 2008 A1
20080304365 Jarvis et al. Dec 2008 A1
20080306351 Izumi Dec 2008 A1
20090076335 Schwarzberg et al. Mar 2009 A1
20090078167 Ellegaard Mar 2009 A1
20090132579 Kwang May 2009 A1
20090156363 Guidi et al. Jun 2009 A1
20090195393 Tegeler Aug 2009 A1
20090229475 Bally et al. Sep 2009 A1
20090270227 Ashby et al. Oct 2009 A1
20090273441 Mukherjee Nov 2009 A1
20100049008 Doherty et al. Feb 2010 A1
20100073162 Johnson et al. Mar 2010 A1
20100135502 Keady Jun 2010 A1
20100185398 Berns et al. Jul 2010 A1
20100198374 Carson et al. Aug 2010 A1
20100201201 Mobarhan et al. Aug 2010 A1
20100205542 Walman Aug 2010 A1
20100234184 Le Page et al. Sep 2010 A1
20100323846 Komatsu et al. Dec 2010 A1
20110015041 Shea Jan 2011 A1
20110015495 Dothie et al. Jan 2011 A1
20110033830 Cherian Feb 2011 A1
20110054359 Sazonov et al. Mar 2011 A1
20110080290 Baxi et al. Apr 2011 A1
20110104649 Young et al. May 2011 A1
20110120351 Shoenfeld May 2011 A1
20110182438 Koike Jul 2011 A1
20110184748 Fierro et al. Jul 2011 A1
20110245979 Koch Oct 2011 A1
20110275939 Walsh et al. Nov 2011 A1
20110281248 Feenstra et al. Nov 2011 A1
20110281687 Gilley et al. Nov 2011 A1
20110296306 Oddsson et al. Dec 2011 A1
20120015779 Powch et al. Jan 2012 A1
20120051579 Cohen Mar 2012 A1
20120116550 Hoffman et al. May 2012 A1
20120173319 Ferrara Jul 2012 A1
20120316661 Rahman Dec 2012 A1
20130002533 Burroughs et al. Jan 2013 A1
20130012788 Horseman Jan 2013 A1
20130116092 Martinez et al. May 2013 A1
20130144470 Ricci Jun 2013 A1
20130199419 Hjelm Aug 2013 A1
20130199420 Hjelm Aug 2013 A1
20130207889 Chang et al. Aug 2013 A1
20130218309 Napolitano Aug 2013 A1
20130316316 Flavell et al. Nov 2013 A1
20130331993 Detsch et al. Dec 2013 A1
20140096706 Labrosse et al. Apr 2014 A1
20140137773 Mandel May 2014 A1
20140156645 Brust et al. Jun 2014 A1
20140245932 McKenzie, III Sep 2014 A1
20140249853 Proud Sep 2014 A1
20140270254 Oishi Sep 2014 A1
20150064671 Murville et al. Mar 2015 A1
20150071453 Po Mar 2015 A1
20150142381 Fitzsimmons May 2015 A1
20150302150 Mazar et al. Oct 2015 A1
20160051042 Koch Feb 2016 A1
20160309889 Lin et al. Oct 2016 A1
20170052517 Tsai et al. Feb 2017 A1
20170135636 Park May 2017 A1
Foreign Referenced Citations (29)
Number Date Country
202286910 Jul 2012 CN
19604329 Aug 1997 DE
10260478 Jul 2004 DE
102008044848 Mar 2010 DE
202014005160 Jul 2014 DE
1159989 Dec 2001 EP
1470766 Oct 2004 EP
2424084 Sep 2006 GB
H11178798 Jul 1999 JP
2001289975 Oct 2001 JP
2005267491 Sep 2005 JP
516479 Jan 2002 SE
0219603 Mar 2002 WO
02062425 Aug 2002 WO
2005032363 Apr 2005 WO
2005074754 Aug 2005 WO
2006042415 Apr 2006 WO
2006042420 Apr 2006 WO
2006065679 Jun 2006 WO
2007099206 Sep 2007 WO
2008008729 Jan 2008 WO
2008050590 Feb 2008 WO
2008101085 Aug 2008 WO
2010019644 Feb 2010 WO
2010023414 Mar 2010 WO
2011133628 Oct 2011 WO
2012061438 May 2012 WO
2012108938 Aug 2012 WO
2013033788 Mar 2013 WO
Non-Patent Literature Citations (24)
Entry
Anthro Corporation, Can Anthro “Walk the Talk”?: Employees Embark on a 30-Day Sit-Stand Challenge, Press Release Oct. 11, 2010, www.anthro.com/press-releases/2010/employees-embark-on-a-30-day-sit-stand-challenge, copyright 2016, 4 pages.
Benallal, et al., A Simple Algorithm for Object Location from a Single Image Without Camera Calibration, In International Conference on Computational Science and Its Applications, pp. 99-104. Springer Berlin Heidelberg, 2003.
Bendixen, et al., Pattern of Ventilation in Young Adults, Journal of Applied Physiology, 1964, 19(2):195-198.
BrianLaF, asp.net 4.0 TimePicker User Control, www.codeproject.com/articles/329011/asp-net-timepicker-user-control, Feb. 25, 2012, 5 pages.
EJAZ, Time Picker Ajax Extender Control, www.codeproject.com/articles/213311/time-picker-ajax-extender-control, Jun. 22, 2011, 8 pages.
Heddings, Stop Hitting Snooze: Change the Default Reminder Time for Outlook Appointments, www.howtogeek.com/howto/microsoft-office/stop-hitting-snooze-change-the-default-reminder-time-for-outlook-appointments, Apr. 25, 2008, 2 pages.
Hopkins Medicine, Vital Signs (Body Temperature, Pulse Rate, Respiration Rate, Blood Pressure). Source: Johns Hopkins Medicine Health Library, 2016, pp. 1-4.
Kriebel, How to Create a Two-Panel Column Chart in Tableau (And Save Lots of Time Compared to Excel), www.vizwiz.com/2012/02/how-to-create-two-panel-column-chart-in.html, 2012, 14 pages.
Microsoft, Automatically Adjust the Start and Finish Dates for New Projects, Applies To: Project 2007, Project Standard 2007, https://support.office.com/en-us/article/Automatically-adjust-the-start-and-finish-dates-for-new-projects-27c57cd1-44f3-4ea8-941a-dc5d56bdc540?ui=en-US&rs=en-US&ad=US&fromAR=1, Copyright 2017 Microsoft.
mrexcel.com, Forum: How to Calculate Percentage of Total Used Time, www.mrexcel.com/forum/excel-qestions/192521-how-calculate-percentage-total-used-time.html., Post Date: Mar. 20, 2006, 4 pages.
Paolo, Arduino Forum, Measuring Point To Point Distances With Accelerometer, http://forum.arduino.cc/index.php?action=printpage;topic=49902.0;images, Post Date: Jan. 26, 2011, 5 pages.
Process Dash, Using the Task & Schedule Tool, www.processdash.com/static/help/Topics/Planning/UsingTaskSchedule.html, Mar. 4, 2011.
Sun Microsystems, Lights Out Management Module, https://docs.oracle.com/cd/E19585-01/819-0445-10/lights_out.html, Copyright 2004, Sun Microsystems, Inc.
Wideman, Issues Regarding Total Time and Stage 1 Time, http://maxwideman.com/papers/resource/issues.html, 1994.
LINAK, Deskline Controls/Handsets User Manual, Copyright LINAK 2017.
LINAK, DPG Desk Panels—A New Way to Adjust Your Office Desk, Product News, May 19, 2017.
Steelcase, Inc., Airtouch Height-Adjustable Tables, Brochure, Copyright 2015 Steelcase Inc.
Steelcase, Inc., Migration Height-Adjustable Desk, Brochure, Copyright 2015 Steelcase Inc.
Steelcase, Inc., Ology Height-Adjustable Desk, Brochure, Copyright 2016 Steelcase Inc.
Steelcase, Inc., Series 5 Sit-To-Stand Height-Adjustable Tables, Brochure, Copyright 2015 Steelcase Inc.
Steelcase, Inc., Series 7 Enhanced Sit-To-Stand Height-Adjustable Tables, Brochure, Copyright 2015 Steelcase Inc.
Linak, Deskline Deskpower DB4/DL4 Systems User Manual, Copyright Linak 2007.
Linak, Deskline DL9/DB9/DL11 System User Manual, Copyright Linak 2007.
Office Details, Inc., Height AdjusTable Worksurfaces User Instructions, Copyright 2004 Office Details Inc.
Related Publications (1)
Number Date Country
20150222989 A1 Aug 2015 US
Provisional Applications (1)
Number Date Country
61935343 Feb 2014 US