Various methods and apparatus for remote audio communication are known, for example telephones, intercoms, radio transmitter/receiver pairs and listening devices such as baby monitors. While such apparatus is particularly suited to exchanging detailed or specific information, it makes no attempt to convey the audio environment at one location to another. This results in a feeling of remoteness between users, as the audio environment forms a large part of the ambiance of a location.
Without any idea of the audio environment, it can be hard for a listener to understand the situation at the remote location and/or to empathize with a person at that location. For example, it can be hard for neighbors to empathize with one another over ‘nuisance noise’. In other cases, a certain level and quality of noise can provide reassurance, for example, a carer listening in on young children need not be aware of the content of their conversation but will be reassured by an appropriate level of background noise.
The embodiments described below are not limited to implementations which solve any or all of the disadvantages of known communications devices.
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
The disclosure relates to communication devices which monitor an audio environment at a remote location and convey to a user a representation of that audio environment. The “representation” may be, for example, an abstraction of the audio environment at the remote location or may be a measure of decibels or some other quality or parameter of the audio environment. In some embodiments, the communication devices are two-way devices which allow users at remote locations to share an audio environment. In other embodiments, the communication devices are one-way devices.
As used herein, the term ‘abstraction’ should be understood in its sense of generalization by limiting the information content of the audio environment, leaving only the level of information required for a particular circumstance.
Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
Like reference numerals are used to designate like parts in the accompanying drawings.
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
Although the present examples are described and illustrated herein as being implemented in a wireless communication system, the system described is provided as an example and not a limitation. As those skilled in the art will appreciate, the present examples are suitable for application in a variety of different types of communication systems.
The processing circuitry 200 comprises a position sensor 202 which senses the position of the flap 104 and a microprocessor 204 which is arranged to receive inputs from the microphone 108 and the position sensor 202 and to control the speaker 106 and the indicator light 110. The processing circuitry 200 further comprises a transmitter/receiver 206 arranged to allow it to communicate with a local wireless network. The transmitter/receiver 206 provides inputs to the microprocessor 204 and is controlled thereby.
The position of the flap 104 acts as a selection means and controls the quality with which sound is transmitted and received by the device 100. If the flap 104 is fully closed (i.e. in its lowermost position), the microprocessor 204 detects this from the position sensor 202. The microprocessor 204 controls the microphone 108 and the speaker 106 such that no sound is transmitted or received by the communication device 100. If the flap 104 is in a middle position, the microprocessor 204 receives sound from the microphone 108 and (if, as is described further below, the device 100 is in communication with a second device 100) processes that sound using known algorithms to render it less clear or muffled. This processing results in an ‘abstraction’ of the audio environment as less information than is available is transmitted. Any sound received via the transmitter/receiver 206 will be played through the speaker 106, similarly muffled. If the flap 104 is fully open then sound is transmitted/received clearly, i.e. with no muffling. As the flap 104 is mounted as a roller blind, there is a large range of positions which it can occupy. The degree to which the sound is muffled, i.e. ‘abstracted’, is set by the position of the flap 104.
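The flap-controlled muffling described above can be sketched, purely illustratively, as a mapping from flap openness to a simple low-pass smoothing of the audio samples. The specification does not state the algorithm used; the one-pole filter and the `openness` parameterization below are assumptions for the sake of example.

```python
def abstract_samples(samples, openness):
    """Muffle audio in proportion to flap closure (illustrative sketch).

    samples  -- list of floats in [-1.0, 1.0]
    openness -- 0.0 (flap fully closed) .. 1.0 (flap fully open)
    """
    if openness <= 0.0:
        return [0.0] * len(samples)   # fully closed: no sound transmitted
    if openness >= 1.0:
        return list(samples)          # fully open: sound passed through clearly
    # Partially open: a one-pole low-pass filter strips high-frequency
    # detail (speech intelligibility) while preserving the overall level,
    # i.e. it 'abstracts' the audio environment.
    alpha = openness                  # more open -> less smoothing
    out, prev = [], 0.0
    for s in samples:
        prev = alpha * s + (1.0 - alpha) * prev
        out.append(prev)
    return out
```

A fully closed flap thus yields silence, a fully open flap yields the unmodified signal, and intermediate positions yield progressively muffled audio.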
The indicator light 110 is arranged to indicate when the device 100 is in communication with another similar device 100. In the embodiment now described, this will be a paired device 100 arranged to communicate over a local wireless network. If the flap 104 on the second device 100 is in any position other than fully closed, the indicator light 110 on the first device 100 will be lit, and vice versa.
A method of using the device 100 in conjunction with a second, similar device 100 is now described with reference to the flow chart of
In use of the paired devices 100, a user of the first device 100 may wish to listen in on the second device 100. The user of the first device 100 therefore opens the flap 104 (block 300) and the indicator light 110 on both devices is lit, indicating that the second device 100 is capable of communicating sound (block 302). The user can choose the level of detail in the communication between the rooms (block 304). For example, the user may be working in the study, but wants to be reassured that his or her children are playing quietly in the living room. In such a case, the user may choose to have the flap 104 only partially open, i.e. in a mostly closed position. The sound from the living room received by the second device 100 undergoes an abstraction process under the control of the microprocessor 204 of either device 100 and is presented to the user in a muffled form through the speaker 106 of the first device 100 (block 305). By looking at the device 100 in the living room the children will be able to see that the flap 104 is slightly open and that the indicator light 110 is on and will be aware that they can be heard. The user can continue with his or her work but can readily hear any dramatic changes in the sound levels from the living room, perhaps indicating that the children are arguing, have been injured or the like (block 306). In such an event, the user can opt to fully open the flap 104 on the first device 100 (block 308). This will result in sound being transmitted clearly (i.e. the sound data no longer undergoes an abstraction process) and will allow the user to obtain a clearer idea of what is occurring in the room and/or ask questions or communicate directly with the children. Of course, the user can choose to communicate clearly at any time.
A second embodiment of a communication device 101 is now described with reference to
The processing circuitry 500 comprises a microprocessor 502 arranged to receive inputs from the motion sensor 404, the proximity sensor 406 and the microphone 408 and to control the level indicator 412 and the speaker 410. The processing circuitry 500 further comprises a transmitter/receiver 504 arranged to allow it to communicate with a local wireless network. The transmitter/receiver 504 provides inputs to the microprocessor 502 and is controlled thereby.
The motion sensor 404 is arranged to detect movement within the room or area in which the device 101 is being used. If motion is detected, the proximity sensor 406 determines how far from the device 101 the moving object is. The proximity is used to determine the level of abstraction with which sound is transmitted to another paired device. This in turn allows a user to determine their level of privacy by choosing how close to stand to the communication device 101. This level of abstraction is displayed on the level indicator 412 of a paired device 101. The closer a user is, the more bars 413 will be lit up. In this embodiment, neither of the paired devices 101 is a slave.
A user of a first device 101 selects how clearly audio data is transmitted from the first device 101 to paired device(s) 101 by his or her physical distance therefrom. The user of the first device is able to determine how clearly a user of a paired (second) device 101 is willing to transmit data by observing the level indicator 412. If the user of the first device 101 is also willing to communicate clearly, he or she can approach the first device 101 and communicate through the microphone 408. However, unless he or she opts to approach the device 101, only muffled, abstracted sound will be heard through the speaker 410. In this embodiment, the user of a first device 101 will be notified of the increased proximity of a user of a second device 101 with an audible alarm played through the speaker 410 when all the bars 413 are lit.
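The proximity-to-clarity mapping described above can be illustrated as follows. The range, bar count and linear mapping are assumptions for the sake of example; the specification does not fix these parameters.

```python
def bars_lit(distance_m, max_range_m=4.0, num_bars=5):
    """Map a user's distance from the device to the number of bars lit
    on the paired device's level indicator (illustrative sketch).
    Closer -> clearer transmission -> more bars lit."""
    d = min(max(distance_m, 0.0), max_range_m)   # clamp to sensor range
    clarity = 1.0 - d / max_range_m              # 1.0 at the device, 0.0 at range limit
    return round(clarity * num_bars)

def alarm_needed(distance_m):
    """Audible alarm when all bars are lit, i.e. the remote user
    has approached the device to communicate clearly."""
    return bars_lit(distance_m) == 5
```

Standing at the device lights all bars (and triggers the alarm on the paired device); standing at the edge of the sensed range lights none.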
In some embodiments, the device 101 may not comprise a proximity sensor 406, but may instead be arranged to set the volume/clarity based on how many people there are in the room. In order to achieve this, the device 101 could comprise a detector across a doorway arranged to detect when people enter or leave the room.
A further embodiment is now described in which communication devices are used to convey information about sound levels which can be heard remotely, for example tracking the sound levels that can be heard by a neighbor.
In this embodiment, communication devices 103 such as those shown in
The processing circuitry 700 comprises a microprocessor 702, a memory 704, a transmitter/receiver 706, a sound analysis module 708 and a timer 710. The microprocessor 702 is arranged to receive inputs from the microphone 604 and the control buttons 610, 611, 612, and to control the speaker 608 and the LCD display panel 606, and can store data in and retrieve data from the memory 704. The transmitter/receiver 706 provides inputs to the microprocessor 702 and is controlled thereby.
In this embodiment, one of a pair of devices 103 is installed in each of two neighboring houses and are wall-mounted on either side of a party wall. The pair can communicate with one another wirelessly via their respective transmitter/receivers 706 to share data.
The process for setting up the pair of devices 103 is now described with reference to
During subsequent use of the pair of devices 103, the LCD panel 606 displays the sound level that can be heard by the neighbor of the user of that device 103. This allows a user to regulate their own sound levels to be below that which their neighbor has stated is the maximum he or she finds acceptable, so as not to adversely affect their neighbor's environment. In this embodiment, the LCD panel 606 is arranged to display a sound wave representing the sound level in the room. The sound wave is displayed in green provided that the stored maximum volume is not exceeded and in red if the volume is exceeded. If the maximum volume is exceeded for more than a predetermined period of time, in this example half an hour, an alarm is triggered and will be heard through the speaker 608.
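The display and alarm logic just described can be sketched as a small state function. The decibel threshold and half-hour period come from the text; the function name and interface are illustrative assumptions.

```python
def display_state(level_db, max_db, seconds_over, alarm_after_s=1800):
    """Return (color, alarm) for the LCD panel (illustrative sketch).

    level_db     -- current sound level in the room
    max_db       -- stored maximum the neighbor finds acceptable
    seconds_over -- how long the maximum has been continuously exceeded
    Green while below the agreed maximum, red above it; the alarm
    sounds once the maximum has been exceeded for half an hour (1800 s).
    """
    if level_db <= max_db:
        return ("green", False)
    return ("red", seconds_over > alarm_after_s)
```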
Each user can also experience the volume levels in the neighbor's house resulting from his or her own noise by pressing the auto-listener button 611. This results in the microprocessor 702 of the first device 103 retrieving the correction factor from its memory 704 and using this correction factor to process sound received by the microphone 604 such that a representation of what can be heard by the neighbor can be played back through the speaker 608.
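The auto-listener feature can be illustrated by applying the stored correction factor as a decibel attenuation. Treating the correction factor as a flat attenuation in dB is an assumption for the sake of example; a real party wall attenuates frequency-dependently.

```python
def as_heard_by_neighbor(samples, wall_attenuation_db):
    """Apply the stored correction factor so the user hears an
    approximation of what reaches the neighbor through the party
    wall (illustrative sketch; assumes a flat attenuation in dB).
    """
    gain = 10.0 ** (-wall_attenuation_db / 20.0)   # dB -> linear amplitude
    return [s * gain for s in samples]
```

For example, a stored correction factor of 20 dB reduces the amplitude played back through the speaker 608 to one tenth of the amplitude picked up by the microphone 604.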
In alternative embodiments, the sound could be played back through headphones or the like so that the user can distinguish the sound in their room from the sound they are causing in their neighbor's rooms.
The microprocessor 702 of each device 103 is also arranged to store historical data in relation to sound levels in its memory 704, using the timer 710 to keep track of the time and date and to determine, for example, when and for how long the maximum level of volume was exceeded. This may be used to help resolve neighborhood disputes over sound levels. This information is accessed by pressing the ‘display history’ button 612. The information can be presented at various levels of detail, e.g. by year, month, week, day or hour, depending on the requirements of a user.
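The presentation of stored exceedance history at different levels of detail can be sketched by aggregating timestamped records. The record format (ISO timestamp plus duration in minutes) and the prefix-based grouping are assumptions for the sake of example.

```python
from collections import defaultdict

def summarize_exceedances(events, granularity="day"):
    """Aggregate stored exceedance records (illustrative sketch).

    events      -- list of (iso_timestamp, minutes_over_limit) pairs,
                   e.g. ("2009-07-14T21:00", 35)
    granularity -- 'year', 'month', 'day' or 'hour'
    Returns a dict mapping each period to total minutes over the limit.
    """
    # An ISO timestamp prefix identifies the period at each granularity.
    cut = {"year": 4, "month": 7, "day": 10, "hour": 13}[granularity]
    totals = defaultdict(float)
    for stamp, minutes in events:
        totals[stamp[:cut]] += minutes
    return dict(totals)
```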
In another embodiment, instead of an alarm being sounded when acceptable levels are exceeded for too long, the device 103 may be arranged to cut off sound producing devices such as televisions or music players, in order to minimize noise. In addition, in some embodiments, it may be possible to store various acceptable sound levels such that, for example, a higher volume is acceptable during the day than after 2200 hrs, a higher volume may be acceptable at weekends or when a neighbor is away. In some cases, a higher volume could be agreed in advance of a party. Alternatively or additionally, one neighbor may always be allowed to be as loud as the other at any given time. The maximum acceptable volume may be preset, or set according to local regulations or laws, rather than being agreed by the parties. In addition, the devices 103 have been described as monitoring the sound through a wall. They could instead be arranged to monitor the sound through a door, floor or ceiling, or across a corridor or the like.
In other embodiments, a plurality of devices 103 could be assembled within a network and a shared visual display means could be arranged to display data on the noise produced at each. This embodiment could be used to track the noise produced in a community such as a collection of houses or a block of flats. This will encourage an individual to consider their neighbors as he or she will be able to compare his or her noise contribution to that of others. A social contract concerning sound levels could be formally or informally enforced, and a form of noise trading could result.
Of course, features of the embodiments could be combined as appropriate. Also, while the above embodiments have been described in relation to two paired devices, further devices could be included on the local network. In addition, the devices 100, 101, 103 need not be in the same building but could instead be remote from one another and able to communicate over an open network such as a traditional or a mobile telephone network, or via the Internet.
Although the above embodiments have been described in relation to a domestic environment, the disclosure is not limited to such an environment.
In other embodiments, the devices could be arranged between two houses to help create a feeling of proximity. One example would be to have one device in a family house and another in a grandparent's house. The grandparent would experience the audio environment of the family house as a general background babble and would therefore feel connected with events in the family house and less lonely. Other embodiments may have a web interface such that a user could utilize their computer as one communication device 100, 101, 103, capable of communicating with another computer configured to act as a communication device 100 or with a dedicated communication device 100, 101, 103.
In the above embodiments, two-way communication devices were described. In alternative embodiments now described, the communication devices may be arranged for one-way communication. In one such embodiment, a speaker unit provides a ‘virtual window’ to allow sound from a remote location to be brought into a specific area in the same manner as if it were occurring outside of a window. Such an embodiment is now described with reference to
The microphones 912 are arranged at various remote locations and are capable of transmitting sound received at their locations to the sound window unit 900 via a wireless network, in this example, the mobile telephone network 914.
The processing circuitry 150 comprises a microprocessor 152, a position sensor 154, arranged to sense the position of the moveable panel 904, and a transmitter/receiver 156. The microprocessor 152 is arranged to receive inputs from the position sensor 154 and the selection dial 908 and to control the output of the speaker 906 based on these inputs.
As is described in relation to
The microprocessor 152 detects the position of the selection dial 908 and makes a wireless connection with the microphone 912 at that location using known mobile telephony techniques (block 162). The sound from that selected microphone 912 is then transmitted to the unit 900 and is received by the transmitter/receiver 156.
A user may then select the volume at which sound is played by selecting the position of the moveable panel 904 (block 164). This is detected by the position sensor 154 and the microprocessor 152 determines the volume at which the sound transmitted from the microphone 912 is played through the speaker 906 (block 166). The higher the panel 904 is lifted (i.e. the more open the ‘sash window’), the louder the sound. The effect mimics the behavior of a real window in that the amount of sound received through a real window depends on how open the window is.
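The sash-window volume mapping can be sketched as follows. The travel of the panel and the linear mapping are assumptions for the sake of example; the specification only requires that a more open panel yields a louder sound.

```python
def playback_volume(panel_height_mm, max_height_mm=300):
    """Map the sensed height of the moveable panel to a playback
    volume, 0.0 (closed) .. 1.0 (fully open) -- illustrative sketch
    of the sash-window metaphor: the further the panel is lifted,
    the louder the remote sound."""
    h = min(max(panel_height_mm, 0), max_height_mm)   # clamp to panel travel
    return h / max_height_mm
```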
It will be appreciated that there are a number of variations which could be made to the above described exemplary sound window embodiment without departing from the scope of the invention. For example, the moveable panel 904 may not be mounted as a vertical sash window but may instead be a horizontal sash window, be mounted in the manner of a roller blind, open on a hinge or in some other manner.
The microphones 912 may be moveable or may be arranged in a number of locations which are near the unit 900 (for example in different rooms of the house in which the unit 900 is situated). There could be only one microphone 912, or two or many microphones 912. The network may comprise a wired network, the Internet, a WiFi network or some other network. The network may be arranged to provide a user with a ‘virtual presence’ in another location.
In one embodiment, the microprocessor 152 may be arranged to modify or provide an abstraction of the sound received by the microphone. As explained above, the term ‘abstraction’ as used herein should be understood in its sense of generalization by limiting the information content of the audio environment, leaving only the level of information required for a particular circumstance.
The unit 900 could be provided with a visual display means arranged to display data relating to the audio environment at the location of the microphones 912.
Some embodiments may include a sound recognition means and could, for example, replace the sound with a visual abstraction based on the source of the noise, e.g. a pot to represent cooking sounds. As will be familiar to the person skilled in the art, there are known methods of sound recognition, for example using probabilistic sound models or recognition of features of an audio signal (which can be used with statistical classifiers to recognize and characterize sound). Such systems may, for example, be able to distinguish music from conversation from cooking sounds depending on characteristics of the audio signal.
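A toy illustration of such feature-based classification is given below. The two features (zero-crossing rate, energy variance), the thresholds, and the icon names are all hypothetical; a real system would use trained statistical models rather than hand-picked rules.

```python
def classify_sound(zero_crossing_rate, energy_variance):
    """Classify a short audio frame from two assumed hand-picked
    features (toy sketch; thresholds are illustrative only)."""
    if zero_crossing_rate > 0.3 and energy_variance < 0.1:
        return "music"          # steady, tonal content
    if energy_variance > 0.5:
        return "cooking"        # impulsive clatter
    return "conversation"

# Hypothetical mapping from recognized source to a visual abstraction,
# e.g. a pot icon representing cooking sounds.
ICONS = {"cooking": "pot", "music": "note", "conversation": "speech bubble"}
```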
The computing-based communication device comprises one or more inputs in the form of transmitter/receivers which are of any suitable type for receiving media content, Internet Protocol (IP) input, and the like.
The computing-based communications device also comprises one or more processors which may be microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device. Platform software comprising an operating system or any other suitable platform software may be provided at the computing-based device to enable application software to be executed on the device.
Computer executable instructions may be provided using any computer-readable media, such as memory. The memory is of any suitable type such as random access memory (RAM), a disk storage device of any type such as a magnetic or optical storage device, a hard disk drive, or a CD, DVD or other disc drive. Flash memory, EPROM or EEPROM may also be used.
An output is also provided such as an audio and/or video output to a display system integral with or in communication with the computing-based device. The display system may provide a graphical user interface, or other user interface of any suitable type although this is not essential.
Conclusion
The terms ‘computer’ and ‘processing circuitry’ are used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes PCs, servers, mobile telephones, personal digital assistants and many other devices.
The methods described herein may be performed by software in machine readable form on a tangible storage medium. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.
This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software which runs on or controls “dumb” or standard hardware to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to ‘an’ item refers to one or more of those items.
The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
It will be understood that the above description of preferred embodiments is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments of the invention. Although various embodiments of the invention have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this invention.
Related Publication: US 2009/0180623 A1, published Jul 2009.