This application generally relates to navigation, and more particularly to providing audio aided navigation.
While it is possible to track a device along short routes to destinations, the tracking normally available on devices such as mobile devices is more accurate over large distances, where the device travels farther than it does on short routes, such as inside a building. Therefore, it is necessary to offer an application that verifies that a device, and the person associated with the device, is on a correct route to a destination via other methods, such as comparing audio recorded by the device at its current location with audio previously recorded at previously determined locations.
An example operation may include a method comprising one or more of recording a first audio sample at a set of waypoints, traveling, by a device, down a route, reaching a first waypoint, notifying the device to record a second audio sample when the device is not at a destination, comparing the second audio sample with each first audio sample, notifying the device to return to a previous waypoint when the comparison does not match, and notifying the device to continue when the comparison does match.
Another example operation may include a system comprising a device containing a processor and memory, wherein the processor is configured to perform one or more of record a first audio sample at a set of waypoints, travel, by the device, down a route, reach a first waypoint, notify the device to record a second audio sample when the device is not at a destination, compare the second audio sample with each first audio sample, notify the device to return to a previous waypoint when the comparison does not match, and notify the device to continue when the comparison does match.
A further example operation may include a non-transitory computer readable medium comprising instructions, that when read by a processor, cause the processor to perform one or more of recording a first audio sample at a set of waypoints, traveling, by a device, down a route, reaching a first waypoint, notifying the device to record a second audio sample when the device is not at a destination, comparing the second audio sample with each first audio sample, notifying the device to return to a previous waypoint when the comparison does not match, and notifying the device to continue when the comparison does match.
It will be readily understood that the instant components and/or steps, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of at least one of a method, system, component and non-transitory computer readable medium, as represented in the attached figures, is not intended to limit the scope of the application as claimed but is merely representative of selected embodiments.
The instant features, structures, or characteristics as described throughout this specification may be combined in any suitable manner in one or more embodiments. For example, the usage of the phrases “example embodiments”, “some embodiments”, or other similar language, throughout this specification refers to the fact that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. Thus, appearances of the phrases “example embodiments”, “in some embodiments”, “in other embodiments”, or other similar language, throughout this specification do not necessarily all refer to the same group of embodiments, and the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In addition, while the term “message” may have been used in the description of embodiments, the application may be applied to many types of network data, such as, packet, frame, datagram, etc. The term “message” also includes packet, frame, datagram, and any equivalents thereof. Furthermore, while certain types of messages and signaling may be depicted in exemplary embodiments they are not limited to a certain type of message, and the application is not limited to a certain type of signaling.
As a person navigates to a destination, there will be certain audio patterns emanating from various locations during the route. Some of these audio patterns should be consistently similar according to a path taken to a specific destination.
As a person is walking along a path to a destination, there is a certain noise pattern that is usually present. For example, if a user is walking through an office environment, noises will be present that may be used as cues to verify if the path to the desired destination is being followed. These noises may be unnoticeable to a person if they are not attempting to sense them, but various locations may have a consistent noise stamp that is recorded and can be compared to audio samples that are taken.
As a person is arriving at a location, or passing a location, such as a break room, for example, the noise inside the break room will begin to rise and lessen as the person leaves the break room area. Noises are usually emitted from devices that are usually unnoticed by people yet are there none-the-less. The noises may be from a refrigerator, an icemaker, or other appliances in the break room or from individuals in the break room at certain times.
Other noises may be present in particular areas of an environment, for example, from air conditioner vents, outside equipment, the hum of fluorescent lights, etc. These noise patterns may be picked up by devices recording audio at particular coordinates.
Various locations (such as cubes where people work) also emit particular noises. For example, in a row of 10 cubes along a path, a noise pattern may be detected according to whether or not people are working in the cubes.
In one embodiment, a device that is recording audio normalizes the audio patterns recorded along a particular path (for example, a path of cubes). Audio is recorded at areas within the cubes on the path and the audio samples are normalized. The client device 102 recording the audio may send the audio samples to a server 106 wherein the audio samples are normalized.
The averaging out process may average recorded audio taken at various coordinates, wherein various sounds are used to validate an audio sample match. For example, the process averages recorded audio taken at various coordinates along the path of cubes, wherein the noises of people working in the cubes are used to validate an audio sample match.
The averaging out is functionally implemented by recording audio, via the current application executing on the device 102, during different parts of the day along a path of cubes, for example 7 cubes; the resulting recordings are henceforth referred to as the “multiple path samples”. The audio is not normalized. Normalization of audio is utilized for two reasons:
In this situation, it is desirable to obtain the possible audio along the path of cubes without knowing beforehand if the cubes are empty, partially occupied, or fully occupied. Therefore, the multiple path samples are analyzed such that a number of samples along the path, for example the 2 samples with the highest amount of audio, are utilized for the final audio sample along the path.
Therefore, if anyone is working in any of the cubes, the sample may more likely match the audio sample taken anywhere along the path.
Therefore, if for example no one is working in the first 3 cubes, but a person is working in the 4th and 5th cube, the audio sample taken along the path of cubes may match because the audio samples along the path were averaged out such that a person working in any of the cubes would match the recorded audio, even though no one was working in the first 3 cubes.
In addition, if no one is working in the first 3 cubes, but a person is working in the 4th and 5th cube, the audio sample taken along the path of cubes may match the audio sample(s) known to emanate from the 4th cube and the 5th cube, such that a person walking along the path with a device capable of receiving and/or processing audio, could receive validation (or not) that they were on the correct path.
The current process of obtaining audio along the path of the cubes (7 cubes for example) will record the audio at cube 3 and obtain noises at that location coming from cubes 4 and 5 even though no one is in cubes 1, 2 and 3. This process continues as the recording device moves closer to the first occupied cube, cube 4.
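The selection of the most energetic samples described above might be sketched as follows; the class and method names are illustrative assumptions, and each recorded sample is represented as an array of PCM amplitudes in [-1, 1]:

```java
import java.util.ArrayList;
import java.util.List;

public class PathSampleSelector {
    // Root-mean-square energy of one recorded sample.
    static double rms(double[] samples) {
        double sum = 0;
        for (double s : samples) sum += s * s;
        return Math.sqrt(sum / samples.length);
    }

    // From the "multiple path samples", keep the n samples with the
    // highest audio energy for the final audio sample along the path.
    static List<double[]> loudest(List<double[]> pathSamples, int n) {
        List<double[]> sorted = new ArrayList<>(pathSamples);
        sorted.sort((a, b) -> Double.compare(rms(b), rms(a))); // descending energy
        return sorted.subList(0, Math.min(n, sorted.size()));
    }
}
```

With this selection, a quiet recording from an empty cube is discarded in favor of the occupied cubes' noise, matching the behavior described for cubes 4 and 5 above.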
In another embodiment, the final audio sample is compared against audio stored (for example, at server 106) from different audio samples. The received audio is compared against the audio stored:
For example, noise or sound can be picked up from the low hum of machines such as printers, refrigerators, computers, etc. All of these machines, which are generally static in nature, emit sounds mostly from fans utilized to cool their motors.
One embodiment of the current application seeks to compare audio as received by a device on the user to previously recorded noises to validate the current path of the user.
Referring to
The system includes a client device 102. A client device may be at least one of a mobile device, a tablet, a laptop device, and/or a personal desktop computer. The client device is communicably coupled to the network 104. It should be noted that other types of devices may be used with the present application, for example, a PDA, an MP3 player or any other wireless device, a gaming device (such as a handheld system or home-based system), any computer wearable device, and the like (including a P.C. or other wired device) that may transmit and receive information. The client device may execute a user browser used to interface with the network 104, an email application used to send and receive emails, a text application used to send and receive text messages, and many other types of applications. Communication between the client device and the network 104 may occur via applications executing on the device, which may be downloaded via an application store or may reside on the client device by default. Additionally, communication may occur wherein the client device's operating system performs the logic to communicate without the use of either an inherent or a downloaded application.
The system 100 includes a network 104 (e.g., the Internet or Wide Area Network (WAN)). The network may be the Internet or any other suitable network for the transmitting of data from a source to a destination.
A server 106 exists in the system 100, communicably coupled to the network 104, and may be implemented as multiple instances joined in a redundant network, or may be singular in nature. Furthermore, the server may be connected to a database 108, wherein tables in the database contain the elements of the data stored by the current application, accessed via Structured Query Language (SQL), for example. The database may reside remotely from the server coupled to the network 104 and may be redundant in nature.
Referring to
Computer system 200 may also include main memory 208, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 206 for storing information and instructions to be executed by a processor 205. Main memory 208 also may be used for storing temporary variables or other intermediate information during the execution of instructions to be executed by a processor 205. Such instructions, when stored in the non-transitory storage media accessible to processor 205, may render computer system 200 into a special-purpose machine that is customized to perform the operations specified in the previously stored instructions.
Computer system 200 may also include a read only memory (ROM) 207 or other static storage device which is coupled to bus 206 for storing static information and instructions for processor 205. A storage device 209, such as a magnetic disk or optical disk, may be provided and coupled to bus 206 which stores information and instructions.
Computer system 200 may also be coupled via bus 206 to a display 212, such as a cathode ray tube (CRT), a light-emitting diode (LED) display, etc., for displaying information to a computer user. An input device 211, such as a keyboard including alphanumeric and other keys, is coupled to bus 206 and communicates information and command selections to processor 205. Other types of user input devices may be present, including cursor control 210, such as a mouse, a trackball, or cursor direction keys, which communicates direction information and command selections to processor 205 and controls cursor movement on display 212.
According to one embodiment, the techniques herein are performed by computer system 200 in response to a processor 205 executing one or more sequences of one or more instructions which may be contained in main memory 208. These instructions may be read into main memory 208 from another storage medium, such as storage device 209. Execution of the sequences of instructions contained in main memory 208 may cause processor 205 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry or embedded technology may be used in place of or in combination with software instructions.
The term “storage media” as used herein refers to any non-transitory media that may store data and/or instructions causing a machine to operate in a specific fashion. These storage media may comprise non-volatile media and/or volatile media. Non-volatile media may include, for example, optical or magnetic disks, such as storage device 209. Volatile media may include dynamic memory, such as main memory 208. Common forms of storage media include, for example, a hard disk, a solid state drive, magnetic tape or other magnetic data storage medium, a CD-ROM or any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
Various forms of media may be involved in carrying one or more sequences of one or more of the instructions to processor 205 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer may load the instructions into its dynamic memory and send the instructions over a medium such as the Internet 202.
Computer system 200 may also include a communication interface 204 coupled to bus 206. The communication interface may provide two-way data communication coupling to a network link, which is connected to a local network 201.
A network link typically provides data communication through one or more networks to other data devices. For example, the network link may provide a connection through local network 201 to data equipment operated by an Internet Service Provider (ISP) 202. ISP 202 provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 202. Local network 201 and Internet 202 both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link and through communication interface 204, carrying the digital data to and from computer system 200, are example forms of transmission media.
Computer system 200 can send messages and receive data, including program code, through the network(s) 202, the network link, and the communication interface 204. In the Internet example, a server 203 may transmit a requested code for an application program through Internet 202, local network 201 and communication interface 204.
The received code can be executed by processor 205 as it is received, and/or stored in storage device 209, or other non-volatile storage for execution at a later time.
Every action or step described herein is fully and/or partially performed by at least one of any element depicted and/or described herein. Additionally, any step depicted may be performed in any order, other than presented.
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, that the present disclosure may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present disclosure. Modifiers such as “first”, “second”, and “third” may be used to differentiate elements, but the modifiers do not necessarily indicate any particular order. For example, a first party may be so named although, in reality, it may be a second, third, and/or fourth party.
US patent application 2016/0314209, entitled “Navigating with A Camera Device” and henceforth referred to as the '209 application, offers a system and method that allows the use of a camera or camcorder on a device to aid in a navigation system. As the device geographically moves from a starting location to a destination, images and/or video taken by a device associated with the user are analyzed, wherein it is determined whether or not the device is on a valid path to the destination.
The movement of the user may be determined in a variety of ways in the '209 application, including the device being tracked using at least one of GPS technology, an indoor positioning system (e.g., Wi-Fi), an accelerometer on the device, or an altimeter on the device.
Images taken from the device in the '209 application are stored and compared to previously taken images along the path of the device from a starting location to the destination. In some embodiments, the image data is sent to a navigation module, which may search through stored images of locations to find an image sufficiently similar to the image obtained by the device.
The current application seeks to offer a solution that utilizes data other than image data (e.g. audio) to ascertain the current location of the device and determine if the device is properly routing to a destination.
To record audio from a device, the software initiates recording functionality. For example, in the Android operating system, the following Java code is utilized to initiate the recording of audio on the device:
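A minimal sketch of such recording code, using Android's MediaRecorder API, might look as follows; the output path, class name, and format choices are illustrative assumptions, and error handling is omitted:

```java
import android.media.MediaRecorder;
import java.io.IOException;

public class AudioCapture {
    private MediaRecorder recorder;

    // Begin recording from the device microphone to a 3GPP file.
    // The output path (e.g., "/sdcard/sample.3gp") is a placeholder.
    public void startRecording(String outputPath) throws IOException {
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(outputPath);
        recorder.prepare();
        recorder.start();
    }

    // Stop after the predetermined sample length (e.g., 5 seconds) has elapsed.
    public void stopRecording() {
        recorder.stop();
        recorder.release();
        recorder = null;
    }
}
```

Note that recording also requires the RECORD_AUDIO permission in the application manifest; this sketch runs only within an Android runtime.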
In one embodiment, the recorded media is stored locally on the device 102, then sent to a server 106 for processing. In another embodiment, the recorded media is stored only on the device 102.
Audio is recorded and stored in an environment. A recording device is used to record sounds in the environment wherein the audio is stored with the precise geographical location of the device. The data is then stored in a database, such as database 108.
Referring to
A UID column is present as the first column and may be used to index the row, in some implementations. A second and third column provide the exact geographic location where the audio was taken. A fourth and final column is a link to the audio file, or the actual audio file. In many database systems, the actual file is stored in a server's file system, and the database simply points to the file.
Each audio file is sample audio taken at the corresponding geographic location. The audio file is not long, but is long enough to provide an example of the audio conditions at the location, for example 5 seconds.
In one embodiment, audio samples are taken on a device, such as device 102 wherein an audio sample is recorded on the device at particular intervals. The device begins at a starting point and moves to the destination. This path may be provided via a mapping application wherein the starting location and the destination location is determined. As the device moves to different points along the shortest or best path to the destination, audio recordings are made.
The recording of the path begins via the execution of the current application on the device 102. Along the path, a notification is made on the device wherein the user is notified to stop while audio is recorded. This notification may be an alert sound on the device, a notification message on the GUI of the current application executing on the device, or a combination of both. The audio at each interval is recorded for a predetermined amount of time, for example 5 seconds. When the audio sample is complete, another notification is presented on the device, wherein the user then moves along the path until the next notification is presented.
The distance between notifications alerting the user that an audio sample is needed is determined by dividing the distance between the starting location and the destination location by a predetermined value that changes based on the total distance between the starting location and the destination location.
For example, if the distance is 1000 yards, then audio samples are taken every 30 yards, such that a total of approximately 33 audio samples are taken. As another example, if the distance is 180 yards, then audio samples are taken every 9 yards, such that a total of 20 samples are taken.
The exact number of samples required for a given distance is programmatically determined, and one versed in common programming techniques may easily determine the best number of samples over a given route length without deviating from the scope of the current invention.
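One possible heuristic for programmatically determining the sampling interval, consistent with the examples above, might be sketched as follows; the distance breakpoints are illustrative assumptions, not values from the application:

```java
public class SampleSpacing {
    // Hypothetical heuristic: longer routes use wider spacing so the
    // user is not stopped too often; breakpoints are illustrative only.
    static int intervalYards(int routeYards) {
        if (routeYards >= 500) return 30;
        if (routeYards >= 100) return 9;
        return 5;
    }

    // Number of audio samples taken along the route.
    static int sampleCount(int routeYards) {
        return routeYards / intervalYards(routeYards);
    }
}
```

For the examples in the text, a 1000-yard route yields samples every 30 yards (33 samples) and a 180-yard route yields samples every 9 yards (20 samples).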
Taking an audio sample along a route to a destination requires additional time. Each audio sample requires the user to stop and record, which can take up to 5 seconds, for example; then the audio sample obtained at the device 102 must be compared against the audio sample stored in the server 106. Requiring samples at intervals that are too close together may render the current application impractical; therefore, it is important to require samples at intervals such that the path to the destination is assured, yet the user is not bothered with having to take audio samples too often.
As an example, setting points along a route to a destination (otherwise known in the art as waypoints) allows for actions to occur at each point reached along the destination. The following code utilizes a mapping API to set two waypoints along a route:
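Since no particular mapping API is identified, the sketch below uses stand-in types (the Waypoint and LatLng names are hypothetical) to illustrate setting two waypoints along a route, each with an arrival radius:

```java
import java.util.ArrayList;
import java.util.List;

public class RouteBuilder {
    // Minimal stand-ins for a mapping API's types; names are hypothetical.
    record LatLng(double lat, double lng) {}
    record Waypoint(LatLng position, double radiusYards) {}

    // Set two waypoints along a route; coordinates are placeholders.
    static List<Waypoint> buildRoute() {
        List<Waypoint> route = new ArrayList<>();
        route.add(new Waypoint(new LatLng(40.7130, -74.0060), 10.0));
        route.add(new Waypoint(new LatLng(40.7141, -74.0052), 10.0));
        return route;
    }
}
```

In a production system these calls would be replaced by the chosen mapping API's own waypoint methods.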
When a waypoint along the route to a destination is reached, a notification is presented on the GUI of the current application to request another audio sample.
Due to the difficulty of determining the exact location of the device inside a building, the approximate location of the device is used as the waypoint. For example, if the device is within a radius of 10 yards from the set waypoint, an audio sample is requested. The server 106 calculates the waypoint with a radius equal to the set radius (e.g. 10 yards) and uses the radius coordinates to indicate that the device has arrived at the waypoint.
In another embodiment, waypoints are not used along the route to the destination. The current geographic location of the device 102 is compared against the geographic location where the previous sample was taken, and the delta between the two points is used, wherein the delta must be equal to or greater than a previously determined distance. The previously determined distance is hardcoded in the software of the current application, for example 10 yards.
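Both the radius check and the delta check reduce to a distance computation between two latitude/longitude points. The following sketch uses the haversine formula; the class and method names are illustrative:

```java
public class WaypointArrival {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Great-circle distance in meters between two latitude/longitude points.
    static double distanceMeters(double lat1, double lng1, double lat2, double lng2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLng = Math.toRadians(lng2 - lng1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLng / 2) * Math.sin(dLng / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // Arrived when the device is within the waypoint's radius
    // (e.g., 10 yards is approximately 9.14 meters).
    static boolean arrived(double devLat, double devLng,
                           double wpLat, double wpLng, double radiusM) {
        return distanceMeters(devLat, devLng, wpLat, wpLng) <= radiusM;
    }
}
```

The same distanceMeters helper can test whether the delta from the previous sample point meets the hardcoded threshold.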
Audio samples are compared against stored audio files taken at the same geographic location to determine similarities. The process of analyzing audio files in this manner is called acoustic fingerprinting. An acoustic fingerprint is a condensed digital summary, deterministically generated from an audio signal, that can be used to identify an audio sample or quickly locate similar items in an audio database.
A robust acoustic fingerprint algorithm takes into account the perceptual characteristics of the audio. If two files sound alike to the human ear, their acoustic fingerprints should match, even if their binary representations are quite different. Acoustic fingerprints are not bitwise fingerprints, which must be sensitive to any small changes in the data. Acoustic fingerprints are more analogous to human fingerprints where small variations that are insignificant to the features the fingerprint uses are tolerated. A smeared human fingerprint impression can accurately be matched to another fingerprint sample in a reference database; acoustic fingerprints work in a similar way.
Perceptual characteristics often exploited by audio fingerprints include average zero crossing rate, estimated tempo, average spectrum, spectral flatness, prominent tones across a set of frequency bands, and bandwidth.
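As an illustration, one of these characteristics, the zero-crossing rate, can be computed directly from raw samples; the class name is illustrative:

```java
public class ZeroCrossingRate {
    // Fraction of adjacent sample pairs whose signs differ: a crude
    // but cheap perceptual feature used in audio fingerprinting.
    static double zcr(double[] samples) {
        int crossings = 0;
        for (int i = 1; i < samples.length; i++) {
            if ((samples[i - 1] >= 0) != (samples[i] >= 0)) crossings++;
        }
        return (double) crossings / (samples.length - 1);
    }
}
```

A noisy, high-frequency environment yields a high rate, while a steady hum yields a low one, which is what makes the feature useful as one component of a fingerprint.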
Most audio compression techniques (AAC, MP3, WMA, Vorbis) will make radical changes to the binary encoding of an audio file, without radically affecting the way it is perceived by the human ear. A robust acoustic fingerprint will allow a recording to be identified after it has gone through such compression, even if the audio quality has been reduced significantly. For use in radio broadcast monitoring, acoustic fingerprints should also be insensitive to analog transmission artifacts.
There are tools available that compare two audio files for similarities. For example, AcoustID is an open-source application that offers the advantage of being actively developed. Many acoustic fingerprint applications in today's market are utilized for music recognition, but these offerings may also be used to determine similarities in other audio files.
As an example, a Java class called fingerprintSimilarity may be used to obtain a score of the comparison of two audio files. The following code compares the audio samples of two WAV files:
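Assuming the open-source musicg library, which provides a FingerprintSimilarity class, such a comparison might be sketched as follows; the file paths are placeholders, and the library must be on the classpath:

```java
import com.musicg.wave.Wave;
import com.musicg.fingerprint.FingerprintSimilarity;

public class AudioMatch {
    // Compare two WAV files and return a similarity score; the paths
    // stand in for the recorded sample and the stored recording.
    static float compare(String recordedPath, String storedPath) {
        Wave recorded = new Wave(recordedPath);
        Wave stored = new Wave(storedPath);
        FingerprintSimilarity similarity = recorded.getFingerprintSimilarity(stored);
        return similarity.getScore();
    }
}
```

The returned score can then be mapped onto the match thresholds discussed elsewhere in this application.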
In another embodiment, the expected audio is stored along with the latitude/longitude of where the audio is recorded; then a delta is computed between the audio stored and the audio received at the same geographic location. The delta is determined via logic in the code, for example utilizing the functionality of audio fingerprinting and/or spectral analysis or the like. For example, the following table contains the two audio files, and a score of the similarities of the two:
Referring to
If the response is between 3 and 7, it is not a verifiable comparison; the data in the message informs the user to take another recording at a nearby location, and another comparison is made with the new file.
If the response is below 2, text in the notification informs the user that they may be on an incorrect route and should return to the last verified waypoint and continue along another path.
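The score thresholds above might be mapped to notifications as in the following sketch; a 0-10 score scale is assumed from the examples, and scores falling between the stated bands (e.g., between 2 and 3) are treated conservatively here:

```java
public class MatchDecision {
    enum Action { CONTINUE, RETAKE_NEARBY, RETURN_TO_WAYPOINT }

    // Map a comparison score to the notification shown to the user.
    static Action decide(double score) {
        if (score > 7) return Action.CONTINUE;        // verified match
        if (score >= 3) return Action.RETAKE_NEARBY;  // inconclusive: record again nearby
        return Action.RETURN_TO_WAYPOINT;             // likely off-route
    }
}
```

The enum value would drive which notification text the GUI displays at the waypoint.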
Referring to
Two buttons at the bottom of the notification allow the user to either press “Begin” to initiate the recording of audio at the current location, or “Cancel” to remove the notification window and continue.
In another embodiment, an audio notification is made, such as a beep, informing the user of the client device 102 to begin recording an audio sample. When the audio sample is complete, two beeps are made informing the user to walk to the next waypoint along the route.
Referring to
In another embodiment, other verifiers are used via the current application to assure the client device is on the correct route, for example, passing by a location where a particular sound is expected, set at a specific geographical latitude and longitude. This sound may not have been previously recorded at that location, but obtained via another source, such as a file accessed via the network 104, or a similar source.
In another embodiment, a client device may capture a photo at a specific latitude/longitude, wherein the photo is sent to the server 106 in a similar fashion as the previously mentioned audio sample. A set of photos tied to latitude/longitude is stored either locally in the server 106, in a database such as database 108, or remotely in the network 104. The received photo is compared against the stored photo at the same or similar geographic location, and feedback is provided therein.
In yet another embodiment, a device containing a sensor, such as a motion detector, is set at a geographic location such that a message is sent to the server 106 upon detecting the client device 102, wherein the server responds back to said client device that the path is indeed the correct path.
Referring to
In another embodiment, when the user returns to the previous, validated point, the current application executing on the client device 102 does not validate audio, as the audio would have previously been validated at that location. The next notification is triggered upon the arrival of the device at the next waypoint.
Referring to
A starting point 402 is determined, as well as a final destination 412. There are 3 waypoints along the route: 406, 408, and 410. Each waypoint has a radius (e.g. 2 yards), wherein the device is determined to have arrived at a waypoint if the device enters that waypoint's radius.
The waypoints 404 and 406 may be set as distances from the previous point (e.g., the distance between the starting point 402 and the first waypoint 404 is 10 yards, the distance between the second waypoint 406 and the previous waypoint 404 is 10 yards, etc.).
In another embodiment, the current application may provide not the shortest route between the starting location and the ending location, but a route that is longer in length yet allows for additional interactions.
The current application executing on the client device 102 may determine, through interactions with the scheduling application on the user's (User A) client device, that the user has an upcoming meeting with another user (User B) who is located along an alternative route to the final destination. This interaction occurs through an API of the scheduling application, for example. Therefore, the user may wish to discuss the agenda of the meeting with User B to avoid a formal meeting as currently scheduled.
Therefore, the current application will map a route in a similar fashion to the many routes a navigation application may offer a driver to a final destination.
The current application issues a notification to the user's (User A) client device 102 executing the current application. The notification has the following text:
If the response is “Yes”, then the current application executing on the client device 102 routes the device along the path to User B. If the response is “No”, then the route is not altered.
In another embodiment, the alternative route may include tasks such as a route by a printer if the current application determines that a printout has been requested via a printer on the route. The current application interacts with the enterprise's printer queue software to determine if a printout has been requested.
In another embodiment, the current application executing on the device 102, tracking the progress of the device on either the route or alternative route, may calculate the time of arrival to the printing device such that the document(s) to be printed are queued wherein the printout will be at the top of the print stack, allowing the user to easily remove the printout from the printer.
Referring to
The figure depicts the normal route as indicated via a solid line, and the alternative route 422. On the alternative route, there exists a waypoint set to a printer 424, and a waypoint set to the location of User B 426, as further discussed herein.
Referring to
The route begins, wherein the device travels along a path to the final destination. A waypoint is reached 504. This waypoint may be a set of coordinates as previously determined, or a distance traveled since the previous point.
A check is made whether the device 102 is at the destination 504. If the device is at the final destination, the process ends 506. If the destination is not reached, a notification is presented on the display of the client device 102 in the current application executing therein 508 to begin recording an audio sample 300.
The sample is completed 510, wherein the recorded audio sample is compared against the audio sample of the current waypoint 512. A check is made whether the audio samples match 514. If the samples match, a notification is presented on the display of the client device 102 in the current application executing therein 516 for the user to continue 310.
If the samples do not match, a notification is presented on the display of the client device 102 in the current application executing therein 510 for the user to return to the previous point 320.
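The flow described above, reaching a waypoint, checking for the destination, recording a sample, comparing, and notifying the user to continue or turn back, can be sketched as follows. The helper names, and the use of simple string equality in place of a real audio comparison, are illustrative assumptions.

```python
# Illustrative sketch of the waypoint loop above. Simple string equality
# stands in for the audio comparison, and notify is any callback that
# displays a notification on the client device.
def traverse(waypoints, destination, record_sample, reference_samples, notify):
    for waypoint in waypoints:
        if waypoint == destination:        # destination check: route complete
            notify("arrived")
            return True
        sample = record_sample(waypoint)   # record an audio sample here
        if sample == reference_samples.get(waypoint):  # compare with stored
            notify("continue")             # match: keep going
        else:
            notify("return to previous waypoint")      # mismatch: turn back
            return False
    return False


log = []
refs = {"lobby": "hum", "hall": "chatter"}
reached = traverse(["lobby", "hall", "office"], "office",
                   lambda w: refs.get(w, ""), refs, log.append)
```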
Referring to
The client device 102 begins traveling down a path to the final destination, wherein a waypoint is reached 602. The client is notified to take an audio sample 602. When the sample is complete, an audio sample message 606 is sent to the server 106. The audio sample message contains at least two parameters: the sample of audio (audio1) and the current coordinates of the device (coordinates).
The server 106 may query the database 108 in a query message 608 to obtain the audio sample previously recorded at the given coordinates (coordinates). A response message 610 is returned with the audio sample (audio2) previously recorded at those coordinates.
The server 106 compares the two audio samples 612 (audio1 and audio2) wherein it is determined if the received audio sample from the device (audio1) matches the previously stored audio sample at that coordinate (audio2) as further disclosed herein.
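The text does not specify how the two audio samples are compared. One common approach is a normalized correlation of the sample vectors against a threshold; the sketch below is an assumption-laden stand-in for the disclosed matching step, not the actual method.

```python
# Illustrative stand-in for the comparison step: a normalized correlation
# of two equal-length sample vectors, matched against a threshold. The
# actual comparison method is not specified in the text.
import math


def normalized_correlation(a, b):
    """Cosine-style similarity in [-1, 1] of two sample vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)


def samples_match(audio1, audio2, threshold=0.9):
    """True when the recorded sample correlates strongly with the stored one."""
    return normalized_correlation(audio1, audio2) >= threshold
```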
A response is sent to the client device 102 with a notification (notification) of the success of the audio comparison 614. The notification will either reflect that a match was made when the audio was compared 310, or a match was not made, and the user should return to the previous point and continue 320.
The client device 102, upon receiving the response message 614 displays the notification 616 and the process continues.
In an alternate embodiment, the stored audio files at predetermined coordinates are stored in the server 106 wherein messaging 608 and 610 does not occur.
In another embodiment, transports have an entertainment system, henceforth referred to as a “radio”. These radios may contain multiple sources of data:
Many transports have multiple speakers throughout the interior such that regardless of where a person is sitting in the transport, they are able to hear sounds from the radio.
A problem arises when an occupant moves their head: regardless of the new position of the occupant's head, the volume of each speaker remains the same.
The current application offers a system and methods in a transport, wherein a transport may be an automobile, airplane, train, bus, boat, or any other type of vehicle that normally transports people from one place to another. The system allows the speaker(s) near an occupant's ears to be modified, either by an automatic change in the volume or by movement of the speaker(s), according to the position of the occupant's ears.
Referring to
The transport 702 contains a device, such as an in-transport navigation system, an entertainment system, or any other device including a processor and memory 704, henceforth referred to as the transport system, which acts as the main communication device for the current application; a client device 706; and an image/video camera 707, which communicates with the transport system 704. The client device may communicate with the transport system 704 or may connect directly with the network 708. The transport system 704 contains a processor and memory. The processor receives input, analyzes/parses the input, and provides an output to one or more systems, modules, and/or devices.
The client device may be at least one of a mobile device, a tablet, or a laptop device. It should be noted that other types of devices may be used with the present application. For example, a PDA, an MP3 player, any other wireless device, a gaming device (such as a handheld system or home-based system), any computer wearable device, and the like (including a personal computer or other wired device) that may transmit and receive information may be used with the present application. The client device and/or the in-transport navigation system may execute a user browser used to interface with the network 708, an email application used to send and receive emails, a text application used to send and receive text messages, and many other types of applications. Communication may occur between the client device and/or the in-transport navigation system and the network 708 via applications executing on said device; these applications may be downloaded via an application store or may reside on the client device by default. Additionally, communication may occur on the client device wherein the client device's operating system performs the logic to communicate without the use of either an inherent or downloaded application.
A server 710 exists in the system, communicably coupled to the network 708, and may be implemented as multiple instances, wherein the multiple instances may be joined to form a complete system, or may be singular in nature. Furthermore, the server may be connected to a database (not depicted), wherein tables in the database are utilized to contain the elements of the system and may be accessed via queries to the database, such as Structured Query Language (SQL) queries, for example. The database may reside remotely to the server coupled to the network 708 and may be redundant in nature.
The modification of the volume of the headrest speakers, as well as the movement of the speakers, is dependent on the initial position and movement of the occupant's head.
To track the position and movement of the occupant's head, the current application uses a camera 707 mounted such that all occupants are tracked.
In one embodiment, an image and/or video camera (henceforth referred to as a “monitoring camera”) is mounted inside the transport 702 such that all people in the transport may be captured and monitored. For a mounted camera, an image is taken at a determined interval. The interval is hardcoded in the logic of the transport system 704 and set to a determined value; for example, an image is taken every 1.5 seconds. The image may be stored by the transport system 704 of the current application either locally inside the transport, or in a remote server 710 such that communication between the transport system and the remote server is through the network 708.
In another embodiment, the camera is mounted in the interior roof of the transport, around the rear-view mirror, and points towards the back such that all occupants in the transport can be monitored. The tracking of the occupants' heads is performed via the transport system 704 through tracking software. For example, the code below utilizes a popular camera in a gaming system to track the movement of a player's head.
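The gaming-system listing referenced above is not reproduced here. As a hedged stand-in, the sketch below shows how per-frame head positions, as any tracking library might report them, could be converted into frame-to-frame movement deltas for the transport system to act on; the function name and data shapes are illustrative assumptions.

```python
# The referenced gaming-camera listing is not reproduced here. This
# illustrative helper converts per-frame head centers, as any tracker
# might report them, into frame-to-frame movement deltas.
def head_deltas(positions):
    """Given (x, y) head centers, one per frame, return (dx, dy) per step."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
```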
This logic is utilized to track the occupants' head movements, thereby allowing commands to be sent to the speakers to modify the volume or change the direction of each speaker.
The headrest speaker(s) are moveable wherein the controls of the movements are determined by the transport system 704. The transport system sends messages to the headrest speaker informing the speaker(s) of the direction of movement and amount of movement. The transport system may also send a command to return the speaker to a default position, such as pointing straight.
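A minimal sketch of such messages, assuming a hypothetical dictionary-based command format (the real message protocol between the transport system and the speakers is not disclosed):

```python
# Hypothetical command format (an assumption, not a disclosed protocol):
# the transport system tells a headrest speaker to move by some amount in
# a direction, or to reset to its default straight-ahead position.
def make_speaker_command(speaker_id, direction=None, amount=0, reset=False):
    if reset:
        return {"speaker": speaker_id, "action": "reset"}
    return {"speaker": speaker_id, "action": "move",
            "direction": direction, "amount": amount}
```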
Referring to
In one embodiment, the movement of the headrest speaker tracks the ears of the occupants 1002, 1004, 1006, 1008. The movement signals are derived from the monitoring camera 707, which sends images/video to the transport system 704 for processing. The transport system tracks the movements of the occupants' heads in real-time, providing the ability to alter the direction of the headrest speakers as the occupants' heads move.
Each seat in the transport has a pair of speakers near the occupant's head, henceforth referred to as “headrest speakers”. Headrest speakers are on the left and right side of the occupant's head when the occupant is sitting in the seat.
Referring to
In another embodiment, the headrest speakers are along a track 904 wherein they may be moved vertically to accommodate shorter or taller occupants. A lever 906 placed along the side of the seat allows each headrest speaker to be moved inside the seat. The seat lining 904 over the speaker area is made of mesh such that, regardless of where the speaker is placed along the track, it is able to produce full sound through the mesh covering.
In another embodiment, if the occupant does not occupy the seat, the headrest speakers 902 turn off.
In another embodiment, the speakers turn on and are ready to receive signals from the transport system when the occupant enters the transport and sits in the seat with the headrest speakers 902.
Referring to
There are four occupants in the transport 1002, 1004, 1006, and 1008 in the figure. The driver may be in the driver's seat 1004, which may be on either side of the transport depending on the region, in some implementations. All occupants are sitting in seats. Each occupant has speakers of the current application 1010 on both sides of the head.
The monitoring camera 707 captures video that is sent to the transport system 704, where the images are analyzed as further disclosed herein. The transport system then, according to the determined motion of the occupant(s), raises/lowers the volume of the headrest speaker(s) and/or issues commands to change the direction of the headrest speaker(s).
In one embodiment, the headrest speakers 1010 automatically adjust for head movements. Normally, as a person brings the ear closer to the speaker, the volume appears to increase. In the current application, this phenomenon is accounted for. For example, as an occupant turns to the left, the left headrest speaker is lowered, and the right headrest speaker is raised, thus giving the impression that the volume of the audio has not been altered. In this scenario, the location of the left ear is closer to the headrest speaker and the position of the right ear is further from the headrest speaker.
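One way to model this compensation, purely as an illustrative assumption rather than the disclosed algorithm, is to scale each headrest speaker's gain with the square of the ear-to-speaker distance relative to a reference distance, so the nearer speaker is lowered and the farther one is raised:

```python
# Illustrative model only (not the disclosed algorithm): scale a speaker's
# gain with the square of the ear-to-speaker distance relative to a
# reference distance, approximating inverse-square loudness falloff.
def compensated_gain(base_gain, distance_m, reference_distance_m):
    """Nearer ear -> lower gain; farther ear -> higher gain."""
    return base_gain * (distance_m / reference_distance_m) ** 2


# Head turned left: left ear at 0.10 m (gain lowered), right ear at
# 0.30 m (gain raised), against a 0.20 m reference distance.
left_gain = compensated_gain(1.0, 0.10, 0.20)
right_gain = compensated_gain(1.0, 0.30, 0.20)
```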
In another embodiment, a headrest speaker 1010 may be turned off, or the volume of the speaker set to 0 (zero), when the position of an ear is a particular distance from the headrest speaker.
The volume of each headrest speaker 1010 is independently controlled via the current system. Furthermore, the system tracks the occupant's head at all times such that each headrest speaker may adjust according to the position of the occupant's head. Therefore, if the occupant moves down in the seat, the speakers point down. If the occupant moves to the left or right, the speakers move left or right, respectively.
In another embodiment, depending on the position of the occupant's head, the direction of the headrest speakers is modified, as is the overall volume of the system.
In another embodiment, the volume of each speaker in the transport is modified by the transport system 704, except for a separate sub-speaker, as the volume of the sub-speaker does not greatly affect the occupants' ability to carry on a conversation.
In another embodiment, the headrest speaker 1010, speaker housing, or other device emits a low-grade laser, echo, or signal to detect the object in front of the speaker. This technology is similar to sonar functionality, wherein a pulse is sent out from the headrest speaker and the time taken for the echo to return is utilized to determine how far an object is from the headrest speaker. The data collected by the headrest speaker utilizing this sonar technology is sent to the transport system 704 to determine any modification of either the volume or the movement of the headrest speaker.
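The echo-ranging idea can be sketched with the standard time-of-flight relation, where distance is half the round trip at the speed of sound; the function below is illustrative only:

```python
# Time-of-flight sketch: a pulse is emitted, and the round-trip echo time
# gives the distance to the object (half the round trip, out and back).
# 343 m/s is the approximate speed of sound in air at room temperature.
SPEED_OF_SOUND_M_S = 343.0


def distance_from_echo(round_trip_seconds):
    return SPEED_OF_SOUND_M_S * round_trip_seconds / 2.0


# A 2 ms round trip corresponds to roughly 0.34 m from ear to speaker.
```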
Referring to
In some embodiments, the monitoring camera 707 is replaced by another method of monitoring the occupants' movements, for example, a motion detection device placed inside the transport, or any other device used to track and record motion.
The transport system 704 may be part of the computer coupled with the transport or may exist entirely or partially in a client device 706 such as a mobile device, or any other device containing a processor and memory.
Data is received from the monitoring camera 1102, which may be images and/or video. This data is sent to the transport system 1104.
The transport system 704 receives the data 1106, which may be stored locally or in a remote database. The data is analyzed 1108 to determine (as further explained herein), among other functions:
The headrest speakers 1010 are modified according to the analysis of the data 1110. This may result in the modification of the volume of the headrest speakers 1112 and/or the modification of the direction of the headrest speakers 1114.
Number | Date | Country
---|---|---
62677144 | May 2018 | US