The present disclosure is related to consumer goods and, more particularly, to methods, systems, products, features, services, and other elements directed to media playback or some aspect thereof.
Options for accessing and listening to digital audio in an out-loud setting were limited until 2002, when SONOS, Inc. began development of a new type of playback system. Sonos then filed one of its first patent applications in 2003, entitled “Method for Synchronizing Audio Playback between Multiple Networked Devices,” and began offering its first media playback systems for sale in 2005. The Sonos Wireless Home Sound System enables people to experience music from many sources via one or more networked playback devices. Through a software control application installed on a controller (e.g., smartphone, tablet, computer, voice input device), one can play what she wants in any room having a networked playback device. Media content (e.g., songs, podcasts, video sound) can be streamed to playback devices such that each room with a playback device can play back corresponding different media content. In addition, rooms can be grouped together for synchronous playback of the same media content, and/or the same media content can be heard in all rooms synchronously.
Features, aspects, and advantages of the presently disclosed technology may be better understood with regard to the following description, appended claims, and accompanying drawings, as listed below. A person skilled in the relevant art will understand that the features shown in the drawings are for purposes of illustration, and variations, including different and/or additional features and arrangements thereof, are possible.
The drawings are for the purpose of illustrating example embodiments, but those of ordinary skill in the art will understand that the technology disclosed herein is not limited to the arrangements and/or instrumentality shown in the drawings.
Music and other media content can significantly affect a user's emotional state. Various attempts have been made to curate playlists of media content that are intended to direct a user's mood or other mental states (e.g., a mood-lifting playlist intended to raise a user's spirits, a study playlist intended to increase a user's concentration, etc.). However, the effects of a particular song or other media content may depend greatly on a user's present emotional state. For example, a user in a current state of severe depression may be unmoved or even annoyed at hearing a cheerful, upbeat pop song. Accordingly, it can be useful to select songs or other media items that affect a user's mood incrementally or gradually, with each subsequent song intended to shift the user's mood closer toward a desired emotional state.
Additionally, current devices for influencing a user's emotional state via audio playback do not consider changes in the user's current emotional state during the playback. As such, current devices are unable to determine in real time the effect the audio playback is having on the user, and thus whether the playback is having the intended effect on the user's emotional state. Moreover, different users may respond differently to the same media content. For example, one user's mood may improve markedly upon listening to “Wake Up” by Arcade Fire, while another user's mood may darken in response to the same song. As a result, it can be useful to monitor a user's emotional state in real time during playback of media content intended to induce a desired emotional state in a user. Depending on the detected shifts in the user's emotional state, the playlist may be updated dynamically to achieve the desired gradual shifts in the user's mood.
Embodiments of the present technology address at least some of the above-described issues, and generally relate to improved systems and methods for generating a playlist of media content to be played via a playback device. The generated playlist is based at least in part on a current emotional state of one or more users and/or a desired emotional state. The generated playlist can be configured to influence and/or gradually transition the emotional state of the one or more users from the current emotional state to the desired emotional state.
Some embodiments of the present technology relate to receiving a first signal indicative of a current emotional state of a user, receiving a second signal corresponding to a desired emotional state of the user, and, based at least in part on the first and second signals, generating a playlist of media content. The first signal can be received from a sensor (e.g., a wearable brain sensing band) worn by the user. In some embodiments, generating the playlist comprises selecting items of media content including at least (i) a first item of media content having a first parameter corresponding to the current emotional state of the user, (ii) a second item of media content having a second parameter different than the first parameter, and (iii) an nth item of media content having an nth parameter corresponding to the desired emotional state of the user. The generated playlist is arranged in a sequential order such that the playlist transitions from the first item toward the nth item. The playlist can then be played back via a playback device. During playback, the user's current emotional state may be received, e.g., to determine whether the playlist is having an intended effect on the user and/or if the user's emotional state is gradually transitioning toward the desired emotional state. If the user's emotional state is transitioning away from the desired emotional state or in an unexpected manner, the playlist may be updated.
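By way of illustration, the playlist-generation step described above might be sketched as follows. This is a minimal sketch under stated assumptions rather than the claimed method: it assumes each media item carries a single scalar mood parameter (here called valence) and that the playlist steps linearly between the two states; neither assumption is prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    title: str
    valence: float  # hypothetical mood parameter in [-1.0, 1.0]

def generate_playlist(current: float, desired: float,
                      library: list[MediaItem], steps: int = 5) -> list[MediaItem]:
    """Select items whose mood parameters step from `current` toward `desired`."""
    playlist = []
    for i in range(steps):
        # Interpolate an intermediate target state between the two endpoints;
        # the first target matches the current state, the last the desired one.
        target = current + (desired - current) * i / (steps - 1)
        # Pick the library item whose parameter lies closest to that target.
        playlist.append(min(library, key=lambda m: abs(m.valence - target)))
    return playlist
```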
As explained in more detail below, generating the playlist in such a manner, and playing back the playlist to the user, provides an improved ability to influence the emotional state of the user, e.g., from the current emotional state toward the desired emotional state. Unlike current devices or methods for influencing a user's emotional state, embodiments of the present technology consider the current and desired emotional states of the user, and play media content of the generated playlist to gradually influence the user's emotional state along a pathway that includes the current and desired emotional states. In doing so, the user's current emotional state is continuously and/or iteratively considered such that the playlist can be continuously updated during playback as necessary to ensure the user's emotional state gradually transitions toward the desired emotional state.
Some embodiments of the present technology can further relate to providing media content that includes generative audio. As explained elsewhere herein, generative audio can be created at least in part via an algorithm and/or non-human system that utilizes a rule-based calculation. As such, generative audio can be endless and/or dynamic audio that is altered in real time as inputs (e.g., parameters associated with the first, second, and/or other signals described herein) to the algorithm change. For example, generative audio can be used to direct a user's mood toward a desired emotional state, with one or more characteristics of the generative audio varying in response to real-time measurements reflective of the user's emotional state. In embodiments of the present technology, the system can provide generative audio based on the current and/or desired emotional states of a user. For example, an exemplary method can include receiving a first signal indicative of a current emotional state of one or more users, receiving a second signal corresponding to a desired emotional state, and causing a playback device to play back generative audio. After receiving the first signal and the second signal, one or more audio characteristics of the generative audio can be adjusted to provide at least (i) a first portion of media content having a first parameter corresponding to the current emotional state, (ii) a second portion of media content having a second parameter different than the first parameter, and (iii) an nth portion of media content having an nth parameter corresponding to the desired emotional state. In doing so, embodiments of the present technology can provide generative audio and/or alter acoustic characteristics of existing media items to provide media content that gradually transitions a user toward a desired emotional state.
While some examples described herein may refer to functions performed by given actors such as “users,” “listeners,” and/or other entities, it should be understood that this is for purposes of explanation only. The claims should not be interpreted to require action by any such example actor unless explicitly required by the language of the claims themselves.
In the Figures, identical reference numbers identify generally similar, and/or identical, elements. To facilitate the discussion of any particular element, the most significant digit or digits of a reference number refers to the Figure in which that element is first introduced. For example, element 110a is first introduced and discussed with reference to
As used herein the term “playback device” can generally refer to a network device configured to receive, process, and/or output data of a media playback system. For example, a playback device can be a network device that receives and processes audio content. In some embodiments, a playback device includes one or more transducers or speakers powered by one or more amplifiers. In other embodiments, however, a playback device includes one of (or neither of) the speaker and the amplifier. For instance, a playback device can comprise one or more amplifiers configured to drive one or more speakers external to the playback device via a corresponding wire or cable.
Moreover, as used herein the term NMD (i.e., a “network microphone device”) can generally refer to a network device that is configured for audio detection. In some embodiments, an NMD is a stand-alone device configured primarily for audio detection. In other embodiments, an NMD is incorporated into a playback device (or vice versa).
The term “control device” can generally refer to a network device configured to perform functions relevant to facilitating user access, control, and/or configuration of the media playback system 100.
Each of the playback devices 110 is configured to receive audio signals or data from one or more media sources (e.g., one or more remote servers or one or more local devices) and play back the received audio signals or data as sound. The one or more NMDs 120 are configured to receive spoken word commands, and the one or more control devices 130 are configured to receive user input. In response to the received spoken word commands and/or user input, the media playback system 100 can play back audio via one or more of the playback devices 110. In certain embodiments, the playback devices 110 are configured to commence playback of media content in response to a trigger. For instance, one or more of the playback devices 110 can be configured to play back a morning playlist upon detection of an associated trigger condition (e.g., presence of a user in a kitchen, detection of a coffee machine operation). In some embodiments, for example, the media playback system 100 is configured to play back audio from a first playback device (e.g., the playback device 110a) in synchrony with a second playback device (e.g., the playback device 110b). Interactions between the playback devices 110, NMDs 120, and/or control devices 130 of the media playback system 100 configured in accordance with the various embodiments of the disclosure are described in greater detail below with respect to
In the illustrated embodiment of
The media playback system 100 can comprise one or more playback zones, some of which may correspond to the rooms in the environment 101. The media playback system 100 can be established with one or more playback zones, after which additional zones may be added, or removed to form, for example, the configuration shown in
In the illustrated embodiment of
In some aspects, one or more of the playback zones in the environment 101 may each be playing different audio content. For instance, a user may be grilling on the patio 101i and listening to hip hop music being played by the playback device 110c while another user is preparing food in the kitchen 101h and listening to classical music played by the playback device 110b. In another example, a playback zone may play the same audio content in synchrony with another playback zone. For instance, the user may be in the office 101e listening to the playback device 110f playing back the same hip hop music being played back by playback device 110c on the patio 101i. In some aspects, the playback devices 110c and 110f play back the hip hop music in synchrony such that the user perceives that the audio content is being played seamlessly (or at least substantially seamlessly) while moving between different playback zones. Additional details regarding audio playback synchronization among playback devices and/or zones can be found, for example, in U.S. Pat. No. 8,234,395 entitled, “System and method for synchronizing operations among a plurality of independently clocked digital data processing devices,” which is incorporated herein by reference in its entirety.
a. Suitable Media Playback System
The links 103 can comprise, for example, one or more wired networks, one or more wireless networks, one or more wide area networks (WAN), one or more local area networks (LAN), one or more personal area networks (PAN), one or more telecommunication networks (e.g., one or more Global System for Mobile Communications (GSM) networks, Code Division Multiple Access (CDMA) networks, Long-Term Evolution (LTE) networks, 5G communication networks, and/or other suitable data transmission protocol networks), etc. The cloud network 102 is configured to deliver media content (e.g., audio content, video content, photographs, social media content) to the media playback system 100 in response to a request transmitted from the media playback system 100 via the links 103. In some embodiments, the cloud network 102 is further configured to receive data (e.g., voice input data) from the media playback system 100 and correspondingly transmit commands and/or media content to the media playback system 100.
The cloud network 102 comprises computing devices 106 (identified separately as a first computing device 106a, a second computing device 106b, and a third computing device 106c). The computing devices 106 can comprise individual computers or servers, such as, for example, a media streaming service server storing audio and/or other media content, a voice service server, a social media server, a media playback system control server, etc. In some embodiments, one or more of the computing devices 106 comprise modules of a single computer or server. In certain embodiments, one or more of the computing devices 106 comprise one or more modules, computers, and/or servers. Moreover, while the cloud network 102 is described above in the context of a single cloud network, in some embodiments the cloud network 102 comprises a plurality of cloud networks comprising communicatively coupled computing devices. Furthermore, while the cloud network 102 is shown in
The media playback system 100 is configured to receive media content from the networks 102 via the links 103. The received media content can comprise, for example, a Uniform Resource Identifier (URI) and/or a Uniform Resource Locator (URL). For instance, in some examples, the media playback system 100 can stream, download, or otherwise obtain data from a URI or a URL corresponding to the received media content. A network 104 communicatively couples the links 103 and at least a portion of the devices (e.g., one or more of the playback devices 110, NMDs 120, and/or control devices 130) of the media playback system 100. The network 104 can include, for example, a wireless network (e.g., a WiFi network, a Bluetooth network, a Z-Wave network, a ZigBee network, and/or another suitable wireless communication protocol network) and/or a wired network (e.g., a network comprising Ethernet, Universal Serial Bus (USB), and/or another suitable wired communication protocol). As those of ordinary skill in the art will appreciate, as used herein, “WiFi” can refer to several different communication protocols including, for example, Institute of Electrical and Electronics Engineers (IEEE) 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11ad, 802.11af, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax, 802.11ay, 802.15, etc. transmitted at 2.4 Gigahertz (GHz), 5 GHz, and/or another suitable frequency.
In some embodiments, the network 104 comprises a dedicated communication network that the media playback system 100 uses to transmit messages between individual devices and/or to transmit media content to and from media content sources (e.g., one or more of the computing devices 106). In certain embodiments, the network 104 is configured to be accessible only to devices in the media playback system 100, thereby reducing interference and competition with other household devices. In other embodiments, however, the network 104 comprises an existing household communication network (e.g., a household WiFi network). In some embodiments, the links 103 and the network 104 comprise one or more of the same networks. In some aspects, for example, the links 103 and the network 104 comprise a telecommunication network (e.g., an LTE network, a 5G network). Moreover, in some embodiments, the media playback system 100 is implemented without the network 104, and devices comprising the media playback system 100 can communicate with each other, for example, via one or more direct connections, PANs, telecommunication networks, and/or other suitable communication links.
In some embodiments, audio content sources may be regularly added or removed from the media playback system 100. In some embodiments, for example, the media playback system 100 performs an indexing of media items when one or more media content sources are updated, added to, and/or removed from the media playback system 100. The media playback system 100 can scan identifiable media items in some or all folders and/or directories accessible to the playback devices 110, and generate or update a media content database comprising metadata (e.g., title, artist, album, track length) and other associated information (e.g., URIs, URLs) for each identifiable media item found. In some embodiments, for example, the media content database is stored on one or more of the playback devices 110, NMDs 120, and/or control devices 130.
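One way the indexing pass described above could look is sketched below; the file extensions and URI scheme are assumptions, and a real implementation would read tag metadata (title, artist, album, track length) from each file rather than deriving a title from the filename.

```python
import os

AUDIO_EXTENSIONS = {".mp3", ".flac", ".aac", ".wav"}  # illustrative set

def index_media(roots: list[str]) -> list[dict]:
    """Scan accessible folders for identifiable media items."""
    database = []
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if os.path.splitext(name)[1].lower() in AUDIO_EXTENSIONS:
                    path = os.path.join(dirpath, name)
                    database.append({
                        "uri": "file://" + path,              # for later retrieval
                        "title": os.path.splitext(name)[0],   # placeholder metadata
                    })
    return database
```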
In the illustrated embodiment of
The media playback system 100 includes the NMDs 120a and 120d, each comprising one or more microphones configured to receive voice utterances from a user. In the illustrated embodiment of
b. Suitable Playback Devices
The playback device 110a, for example, can receive media content (e.g., audio content comprising music and/or other sounds) from a local audio source 105 via the input/output 111 (e.g., a cable, a wire, a PAN, a Bluetooth connection, an ad hoc wired or wireless communication network, and/or another suitable communication link). The local audio source 105 can comprise, for example, a mobile device (e.g., a smartphone, a tablet, a laptop computer) or another suitable audio component (e.g., a television, a desktop computer, an amplifier, a phonograph, a Blu-ray player, a memory storing digital media files). In some aspects, the local audio source 105 includes local music libraries on a smartphone, a computer, a network-attached storage (NAS), and/or another suitable device configured to store media files. In certain embodiments, one or more of the playback devices 110, NMDs 120, and/or control devices 130 comprise the local audio source 105. In other embodiments, however, the media playback system omits the local audio source 105 altogether. In some embodiments, the playback device 110a does not include an input/output 111 and receives all audio content via the network 104.
The playback device 110a further comprises electronics 112, a user interface 113 (e.g., one or more buttons, knobs, dials, touch-sensitive surfaces, displays, touchscreens), and one or more transducers 114 (referred to hereinafter as “the transducers 114”). The electronics 112 is configured to receive audio from an audio source (e.g., the local audio source 105) via the input/output 111, one or more of the computing devices 106a-c via the network 104 (
In the illustrated embodiment of
The processors 112a can comprise clock-driven computing component(s) configured to process data, and the memory 112b can comprise a computer-readable medium (e.g., a tangible, non-transitory computer-readable medium, data storage loaded with one or more of the software components 112c) configured to store instructions for performing various operations and/or functions. The processors 112a are configured to execute the instructions stored on the memory 112b to perform one or more of the operations. The operations can include, for example, causing the playback device 110a to retrieve audio data from an audio source (e.g., one or more of the computing devices 106a-c (
The processors 112a can be further configured to perform operations causing the playback device 110a to synchronize playback of audio content with another of the one or more playback devices 110. As those of ordinary skill in the art will appreciate, during synchronous playback of audio content on a plurality of playback devices, a listener will preferably be unable to perceive time-delay differences between playback of the audio content by the playback device 110a and the other one or more other playback devices 110. Additional details regarding audio playback synchronization among playback devices can be found, for example, in U.S. Pat. No. 8,234,395, which was incorporated by reference above.
In some embodiments, the memory 112b is further configured to store data associated with the playback device 110a, such as one or more zones and/or zone groups of which the playback device 110a is a member, audio sources accessible to the playback device 110a, and/or a playback queue that the playback device 110a (and/or another of the one or more playback devices) can be associated with. The stored data can comprise one or more state variables that are periodically updated and used to describe a state of the playback device 110a. The memory 112b can also include data associated with a state of one or more of the other devices (e.g., the playback devices 110, NMDs 120, control devices 130) of the media playback system 100. In some aspects, for example, the state data is shared during predetermined intervals of time (e.g., every 5 seconds, every 10 seconds, every 60 seconds) among at least a portion of the devices of the media playback system 100, so that one or more of the devices have the most recent data associated with the media playback system 100.
The network interface 112d is configured to facilitate a transmission of data between the playback device 110a and one or more other devices on a data network such as, for example, the links 103 and/or the network 104 (
In the illustrated embodiment of
The audio processing components 112g are configured to process and/or filter data comprising media content received by the electronics 112 (e.g., via the input/output 111 and/or the network interface 112d) to produce output audio signals. In some embodiments, the audio processing components 112g comprise, for example, one or more digital-to-analog converters (DACs), audio preprocessing components, audio enhancement components, one or more digital signal processors (DSPs), and/or other suitable audio processing components, modules, circuits, etc. In certain embodiments, one or more of the audio processing components 112g can comprise one or more subcomponents of the processors 112a. In some embodiments, the electronics 112 omits the audio processing components 112g. In some aspects, for example, the processors 112a execute instructions stored on the memory 112b to perform audio processing operations to produce the output audio signals.
The amplifiers 112h are configured to receive and amplify the audio output signals produced by the audio processing components 112g and/or the processors 112a. The amplifiers 112h can comprise electronic devices and/or components configured to amplify audio signals to levels sufficient for driving one or more of the transducers 114. In some embodiments, for example, the amplifiers 112h include one or more switching or class-D power amplifiers. In other embodiments, however, the amplifiers include one or more other types of power amplifiers (e.g., linear gain power amplifiers, class-A amplifiers, class-B amplifiers, class-AB amplifiers, class-C amplifiers, class-D amplifiers, class-E amplifiers, class-F amplifiers, class-G amplifiers, and/or class-H amplifiers, and/or another suitable type of power amplifier). In certain embodiments, the amplifiers 112h comprise a suitable combination of two or more of the foregoing types of power amplifiers. Moreover, in some embodiments, individual ones of the amplifiers 112h correspond to individual ones of the transducers 114. In other embodiments, however, the electronics 112 includes a single one of the amplifiers 112h configured to output amplified audio signals to a plurality of the transducers 114. In some other embodiments, the electronics 112 omits the amplifiers 112h.
The transducers 114 (e.g., one or more speakers and/or speaker drivers) receive the amplified audio signals from the amplifiers 112h and render or output the amplified audio signals as sound (e.g., audible sound waves having a frequency between about 20 Hertz (Hz) and 20 kilohertz (kHz)). In some embodiments, the transducers 114 can comprise a single transducer. In other embodiments, however, the transducers 114 comprise a plurality of audio transducers. In some embodiments, the transducers 114 comprise more than one type of transducer. For example, the transducers 114 can include one or more low frequency transducers (e.g., subwoofers, woofers), mid-range frequency transducers (e.g., mid-range transducers, mid-woofers), and one or more high frequency transducers (e.g., one or more tweeters). As used herein, “low frequency” can generally refer to audible frequencies below about 500 Hz, “mid-range frequency” can generally refer to audible frequencies between about 500 Hz and about 2 kHz, and “high frequency” can generally refer to audible frequencies above 2 kHz. In certain embodiments, however, one or more of the transducers 114 comprise transducers that do not adhere to the foregoing frequency ranges. For example, one of the transducers 114 may comprise a mid-woofer transducer configured to output sound at frequencies between about 200 Hz and about 5 kHz.
By way of illustration, SONOS, Inc. presently offers (or has offered) for sale certain playback devices including, for example, a “SONOS ONE,” “PLAY:1,” “PLAY:3,” “PLAY:5,” “PLAYBAR,” “PLAYBASE,” “CONNECT:AMP,” “CONNECT,” and “SUB.” Other suitable playback devices may additionally or alternatively be used to implement the playback devices of example embodiments disclosed herein. Additionally, one of ordinary skill in the art will appreciate that a playback device is not limited to the examples described herein or to SONOS product offerings. In some embodiments, for example, one or more playback devices 110 comprises wired or wireless headphones (e.g., over-the-ear headphones, on-ear headphones, in-ear earphones). In other embodiments, one or more of the playback devices 110 comprise a docking station and/or an interface configured to interact with a docking station for personal mobile media playback devices. In certain embodiments, a playback device may be integral to another device or component such as a television, a lighting fixture, or some other device for indoor or outdoor use. In some embodiments, a playback device omits a user interface and/or one or more transducers. For example,
c. Suitable Network Microphone Devices (NMDs)
In some embodiments, an NMD can be integrated into a playback device.
Referring again to
After detecting the activation word, voice processing 124 monitors the microphone data for an accompanying user request in the voice input. The user request may include, for example, a command to control a third-party device, such as a thermostat (e.g., NEST® thermostat), an illumination device (e.g., a PHILIPS HUE® lighting device), or a media playback device (e.g., a Sonos® playback device). For example, a user might speak the activation word “Alexa” followed by the utterance “set the thermostat to 68 degrees” to set a temperature in a home (e.g., the environment 101 of
d. Suitable Control Devices
The control device 130a includes electronics 132, a user interface 133, one or more speakers 134, and one or more microphones 135. The electronics 132 comprise one or more processors 132a (referred to hereinafter as “the processors 132a”), a memory 132b, software components 132c, and a network interface 132d. The processors 132a can be configured to perform functions relevant to facilitating user access, control, and configuration of the media playback system 100. The memory 132b can comprise data storage that can be loaded with one or more of the software components executable by the processors 132a to perform those functions. The software components 132c can comprise applications and/or other executable software configured to facilitate control of the media playback system 100. The memory 132b can be configured to store, for example, the software components 132c, media playback system controller application software, and/or other data associated with the media playback system 100 and the user.
The network interface 132d is configured to facilitate network communications between the control device 130a and one or more other devices in the media playback system 100, and/or one or more remote devices. In some embodiments, the network interface 132d is configured to operate according to one or more suitable communication industry standards (e.g., infrared, radio, wired standards including IEEE 802.3, wireless standards including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G, LTE). The network interface 132d can be configured, for example, to transmit data to and/or receive data from the playback devices 110, the NMDs 120, other ones of the control devices 130, one of the computing devices 106 of
The user interface 133 is configured to receive user input and can facilitate control of the media playback system 100. The user interface 133 includes media content art 133a (e.g., album art, lyrics, videos), a playback status indicator 133b (e.g., an elapsed and/or remaining time indicator), media content information region 133c, a playback control region 133d, and a zone indicator 133e. The media content information region 133c can include a display of relevant information (e.g., title, artist, album, genre, release year) about media content currently playing and/or media content in a queue or playlist. The playback control region 133d can include selectable (e.g., via touch input and/or via a cursor or another suitable selector) icons to cause one or more playback devices in a selected playback zone or zone group to perform playback actions such as, for example, play or pause, fast forward, rewind, skip to next, skip to previous, enter/exit shuffle mode, enter/exit repeat mode, enter/exit cross fade mode, etc. The playback control region 133d may also include selectable icons to modify equalization settings, playback volume, and/or other suitable playback actions. In the illustrated embodiment, the user interface 133 comprises a display presented on a touch screen interface of a smartphone (e.g., an iPhone™, an Android phone). In some embodiments, however, user interfaces of varying formats, styles, and interactive sequences may alternatively be implemented on one or more network devices to provide comparable control access to a media playback system.
The one or more speakers 134 (e.g., one or more transducers) can be configured to output sound to the user of the control device 130a. In some embodiments, the one or more speakers comprise individual transducers configured to correspondingly output low frequencies, mid-range frequencies, and/or high frequencies. In some aspects, for example, the control device 130a is configured as a playback device (e.g., one of the playback devices 110). Similarly, in some embodiments the control device 130a is configured as an NMD (e.g., one of the NMDs 120), receiving voice commands and other sounds via the one or more microphones 135.
The one or more microphones 135 can comprise, for example, one or more condenser microphones, electret condenser microphones, dynamic microphones, and/or other suitable types of microphones or transducers. In some embodiments, two or more of the microphones 135 are arranged to capture location information of an audio source (e.g., voice, audible sound) and/or configured to facilitate filtering of background noise. Moreover, in certain embodiments, the control device 130a is configured to operate as a playback device and an NMD. In other embodiments, however, the control device 130a omits the one or more speakers 134 and/or the one or more microphones 135. For instance, the control device 130a may comprise a device (e.g., a thermostat, an IoT device, a network device) comprising a portion of the electronics 132 and the user interface 133 (e.g., a touch screen) without any speakers or microphones.
A playback device can be configured to generate a playlist of media content based at least in part on (i) a received first signal indicative of a user's current emotional state and (ii) a received second signal indicative of a user's desired emotional state. The generated playlist can be played back via a playback device, as previously described, to influence the user's emotional state from the current emotional state toward and/or to the desired emotional state. As explained in detail elsewhere herein, the user's emotional state can be constantly or periodically monitored and considered by the system as media content is played for the user. In doing so, the system can determine whether the playlist is having an intended effect on the user and/or whether the playlist needs to be updated. Although several embodiments of the present technology relate to methods for generating such a playlist via a playback device, in some embodiments a control device (e.g., the control device 130;
As shown in
The sensor 202 is configured to generate information generally corresponding to a user's mood or emotional state.
Referring back to
As described elsewhere herein, embodiments of the present technology are configured to process sensor data 203 (
For example, in some embodiments the horizontal axis 401a of the plane 400 corresponds to a range of values associated with a ratio of the “left channel” signals and “right channel” signals, with a higher ratio corresponding to a more positive value on the horizontal axis 401a and a lower ratio corresponding to a more negative value on the horizontal axis 401a. The left channel can be indicative of brain activity levels in the user's left hemisphere or a portion thereof, and the right channel can be indicative of brain activity levels in the user's right hemisphere or a portion thereof. Because positive emotional states are associated with relatively higher activity levels in the right hemisphere (and conversely negative emotional states are associated with relatively higher activity levels in the left hemisphere), the ratio of these channels can be used to calculate an associated valence value along the horizontal axis 401a of the plane 400. Although the illustrated example utilizes left and right channels to calculate a valence score, in other embodiments different techniques can be used to assign values indicative of the valence of a user's emotional state. For example, the valence can be calculated by comparing certain frequency bands, by evaluating brain activity in different anatomical regions beyond the left and right hemispheres, or by any other suitable technique.
Additionally or alternatively, in some embodiments the vertical axis 401b of the plane 400 corresponds to a range of values associated with a ratio of (i) the relative power of the beta band signals, (ii) the relative power of the theta band signals, and (iii) the relative power of the alpha band signals. A higher ratio of these signals may correspond to a higher value on the vertical axis 401b. Because high levels of theta and alpha band signals are associated with low-arousal states (e.g., deep relaxation or sleepiness) and high levels of beta band signals are associated with high-arousal states (e.g., focused and alert), the ratio of these signals can be used to calculate an associated arousal value along the vertical axis 401b of the plane 400. Although the illustrated example utilizes beta, theta, and/or alpha signals to calculate an arousal score, in other embodiments different techniques can be used to assign values indicative of the arousal of a user's emotional state. For example, the arousal can be calculated by evaluating brain activity in one or more anatomical regions, by comparing certain frequency bands other than beta, theta, or alpha, or by using any other suitable technique.
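The description above leaves the exact formulas open; the sketch below adopts one plausible convention, a log-scaled right/left power ratio for valence and a log-scaled beta/(theta + alpha) ratio for arousal, so that a balanced ratio falls at the origin of the plane 400. The orientation and scaling are assumptions, and a per-user baseline (per the calibration discussed below) could be subtracted from each output.

```python
import math

def valence_arousal(left_power: float, right_power: float,
                    beta: float, theta: float, alpha: float) -> tuple[float, float]:
    # Valence: hemispheric power ratio, oriented so that relatively higher
    # right-hemisphere activity maps to a more positive value, matching the
    # association described above; log-scaling centers a balanced ratio at 0.
    valence = math.log(right_power / left_power)
    # Arousal: beta power relative to theta + alpha power, so that alert,
    # beta-dominant states map to higher values than relaxed, sleepy states.
    arousal = math.log(beta / (theta + alpha))
    return valence, arousal
```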
In some embodiments, the plane 400 is tailored to correspond to a particular user. Because the sensor data associated with a given emotional state may be slightly different for each user, adjusting the plane 400 (e.g., the origin of the plane 400) for each user can beneficially improve the accuracy of determining a user's current emotional state and/or the pathway needed to obtain a desired emotional state. Stated differently, a state of calm for a first user may correspond to a first parameter value, whereas a state of calm for a second user may correspond to a second parameter value different than the first parameter value.
The method 600 further includes receiving a second signal indicative of a desired emotional state of the user (process portion 604). In some embodiments, the second signal is received from a control device (e.g., the control device 130;
The method 600 further includes, based at least in part on the first and second signals, generating a playlist of media content (process portion 606). The playlist can include items (e.g., audio content, songs, podcasts, video sounds, videos, etc.) that, when played back to the user, are configured to influence the user's emotional state and gradually transition the user from the current emotional state toward and/or to the desired emotional state. Stated differently, the items of the playlist are configured to transition the user from the current emotional state, to one or more intermediate emotional states, and then to the desired emotional state.
As explained in detail elsewhere herein (e.g., with reference to
Each of the playlist's items can be selected based on an association with a particular emotion or set of emotions. For example, the items may be selected from a database of media content items, and may include metadata linking a particular item to a particular emotion. Additionally or alternatively, each of the playlist's items can be selected based on factors associated with the user, such as the user's musical interest and/or demographic (e.g., age, gender, personality type, nationality, etc.). In some embodiments, the user's musical interest can be determined based on the user's profile on a media content mobile application (e.g., Spotify®, YouTube®, YouTube Music®, Apple Music®, Amazon Music®, etc.). Additionally or alternatively to the above-mentioned factors, the playlist's items may be based at least in part on a temporary or permanent condition (e.g., a medical condition) of the user. For example, if the user experiences depression, sleep deprivation, hyperactivity, attention-deficit, and/or other symptoms, the system may account for the condition when generating the playlist, e.g., by selecting items known or expected to at least partially mitigate the associated symptoms.
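A hedged sketch of that selection step appears below; the field names ("genre", "therapeutic_tags") and the fallback rule are illustrative only, since the disclosure does not prescribe a database schema.

```python
def candidate_pool(library: list[dict], user_genres: set[str],
                   mitigate: str | None = None) -> list[dict]:
    """Filter the media database by user factors before matching items to emotions."""
    pool = [m for m in library if m.get("genre") in user_genres]
    if mitigate:
        # Prefer items tagged as easing a given symptom (e.g., "sleep_deprivation"),
        # falling back to the genre-filtered pool if nothing is tagged.
        tagged = [m for m in pool if mitigate in m.get("therapeutic_tags", ())]
        pool = tagged or pool
    return pool
```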
The method 600 can further include playing back, via a playback device (e.g., the playback device 110c;
As shown in
As described elsewhere herein, in some embodiments the user's current emotional state is constantly and/or iteratively monitored or measured (e.g., at predetermined intervals) to ensure the user's current emotional state is transitioning toward the desired emotional state or at least not in a direction opposite the desired emotional state. In such embodiments, as items of the playlist are played back via the playback device to the user in the arranged order defined by the pathway, the user's current emotional state is measured simultaneously. As explained in detail elsewhere herein (e.g., with reference to
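The monitoring loop described above might look roughly like the following, reusing generate_playlist from the earlier sketch. The player and sensor objects are hypothetical interfaces, and the drift threshold and polling interval are arbitrary choices, not values taken from the disclosure.

```python
import time

def play_with_monitoring(playlist, player, sensor, desired,
                         tolerance=0.25, poll_seconds=10):
    """Play the pathway in order, re-planning if the user's state drifts."""
    queue = list(playlist)
    while queue:
        item = queue.pop(0)
        player.play(item)
        while player.is_playing():
            measured = sensor.read_emotional_state()  # e.g., a valence estimate
            # If the measured state sits far from the state this item was
            # chosen for, rebuild the remaining pathway from where the user is.
            if abs(measured - item.valence) > tolerance:
                queue = generate_playlist(measured, desired, player.library)
                break
            time.sleep(poll_seconds)
```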
Referring next to
Referring next to
In some instances, the media content played back to influence a user's emotional state can include generative content (e.g., generative audio content and/or generative visual content). As used herein, generative content includes media content that is generated in real-time or near-real-time using an algorithm or other non-human system utilizing rule-based computations to create a bespoke or customized composition. In various examples, the generative media content can be varied based on input from sensors or other parameters to influence a user's emotional state over time. Because generative media can be adjusted on a number of different characteristics, this approach may enable more fine-tuned control of a user's emotional state than selecting pre-recorded media content for a playlist.
The method 1000 further includes receiving a second signal indicative of a desired emotional state of the user (process portion 1004). The second signal may be similar or identical to the second signal described with reference to
The method 1000 can further include providing generative audio (process portion 1006). The provided audio may be a playlist of distinct items or portions of generative audio, each having different acoustic characteristics than one another. In such embodiments, the generative audio can be configured to be played sequentially to gradually transition the user from the current emotional state to or toward the desired emotional state. Generative audio can be any audio generated at least in part by an algorithm and/or non-human system utilizing rule-based computation. As such, in some embodiments generative audio can be endless and/or dynamic audio that is altered (e.g., altered in real time) as inputs (e.g., parameters associated with the first, second, and/or other signals described herein) to the algorithm change. Additionally or alternatively, in some embodiments generative audio may be a media item (e.g., a song) with altered acoustic characteristics. Moreover, the generative audio may utilize one or more existing media items as a template that is altered based on inputs corresponding, for example, to the one or more users' current and/or desired emotional states. As an example, the generative audio may be derived from a media item that generally corresponds to an emotional state of the user.
The method 1000 can further include adjusting one or more audio characteristics of the generative audio, based on at least one of the first signal or the second signal (process portion 1008). Stated differently, adjusting the one or more audio characteristics of the generative audio can be based on the first signal, the second signal, or the first and second signals. The audio characteristics can include at least one of tempo, scale, pitch, beats per minute, bass, treble, mid-range volume, length, key, genre, or frequency content. As an example, a given media item can be adjusted to increase or decrease one or more of these audio characteristics to achieve a desired output. In some embodiments, adjusting the one or more audio characteristics may be based on an algorithm, and/or performed via a cloud network or remote computing device (e.g., a device other than a playback device of the user) or a local device (e.g., without utilizing a cloud network).
In some embodiments, the one or more audio characteristics of the generative audio can be adjusted to provide at least (i) a first portion of media content having a first parameter corresponding to the current emotional state (e.g., from the first signal referenced in process portion 1002), (ii) a second portion of media content having a second parameter different than the first parameter, and (iii) an nth portion of media content having an nth parameter corresponding to the desired emotional state. Each of the first portion, second portion, and nth portion can correspond to a single media item. Additionally or alternatively, the transition between each of the portions can be seamless or without any substantial change in audio output. That is, unlike the transition between two songs in which there is a gap in audio output between one song ending and the other beginning, the transition between portions of generative audio need not have a gap in audio output. Instead, the transition may comprise altering a different acoustic characteristic or altering the same acoustic characteristic as the previous portion in a different manner. In such embodiments, each portion may correspond to a particular block of time during which the corresponding media item is played back to one or more users. For example, the first portion may correspond to a media item in which one or more acoustic characteristics (e.g., tempo) of the original media item have been adjusted (e.g., increased or decreased) based on the current emotional state, the second portion may correspond to the media item in which one or more acoustic characteristics (e.g., tempo and/or bass) have been adjusted relative to that of the first portion, and the third portion may correspond to the media item in which one or more acoustic characteristics (e.g., treble) have been adjusted relative to that of at least one of the first portion, second portion, or other previously provided portions.
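As one concrete, hypothetical reading of this adjustment step, the per-portion targets for a single characteristic such as tempo could be interpolated between the current and desired states; the arousal-to-BPM mapping below is an assumption for illustration, not a value taken from the disclosure.

```python
def portion_tempos(current_arousal: float, desired_arousal: float,
                   n_portions: int) -> list[float]:
    """Interpolate a tempo target (BPM) for each generative-audio portion."""
    def tempo(arousal: float) -> float:
        # Assumed mapping: arousal in [-1, 1] -> roughly 60 to 140 BPM.
        return 100.0 + 40.0 * arousal
    return [tempo(current_arousal +
                  (desired_arousal - current_arousal) * i / (n_portions - 1))
            for i in range(n_portions)]

# For a user moving from excited (arousal 0.8) toward calm (-0.6), four
# portions yield steadily slower targets: [132.0, 113.33..., 94.66..., 76.0].
```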
Each portion (e.g., the first portion, second portion, etc.) provided to or played back for the user(s) can be configured to alter the emotional state of the user(s) generally toward the desired emotional state. As but one example, if a user has a current emotional state of bored and a desired emotional state of excited or happy, the first portion may correspond to the media item having a first tempo, the second portion may correspond to the media item having a second tempo greater than the first tempo, and the nth portion may correspond to the media item having a third tempo greater than the second tempo. In such examples, other acoustic characteristics in addition to tempo may also change for one or more of the portions. As another example, if a user has a current emotional state of excited and a desired emotional state of calm, the first portion may correspond to the media item having a first tempo, the second portion may correspond to the media item having a second tempo less than the first tempo, and the nth portion may correspond to the media item having a third tempo less than the second tempo.
In some embodiments, at least one of the first portion, second portion, or nth portion can correspond to a different media item. For example, the first portion may correspond to a first media item having a first parameter associated with the current emotional state, the second portion may correspond to a second media item having a second parameter, and the nth portion may correspond to an nth media item having an nth parameter associated with the desired emotional state, in which the media items are different from one another. In such embodiments, transition between the portions may still be without any gap in audio output, as previously described. Additionally or alternatively, in such embodiments each of the media items may comprise generative audio in that each media item includes at least one acoustic characteristic different than that of the original (e.g., unaltered) media item. For example, the first media item, chosen because it generally has a first parameter corresponding to the current emotional state, may have at least one of its acoustic characteristics altered to match the current emotional state more closely.
In some embodiments, generative audio may be played back (e.g., only played back) when the emotional state of the user does not correspond to any particular media item. That is, the system may generally select or determine media items that each have parameters corresponding to a corresponding emotional state of the user, and utilize or create generative audio only when no media item is available that corresponds to a particular emotional state. In such embodiments, the generative audio may be utilized to bridge the gap between two media items that correspond to different emotional states. For example, given a current emotional state, desired emotional state, and/or neutral state of a user, a generated playlist for the user may include a first item corresponding to the current emotional state, a second item corresponding to the desired emotional state, and generative audio to be played back between the first and second items to help transition the user from the current emotional state to the desired emotional state. In such embodiments, the generative audio may comprise portions of the first item, the second item, or the first and second items. Moreover, the generative audio may alter audio characteristics of the first item, second item, or first and second items.
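A sketch of the bridging arrangement is given below. The generator callable stands in for the rule-based composition engine, which the disclosure does not specify, and the parameter names are illustrative.

```python
def playlist_with_bridge(first_item: dict, second_item: dict, generator) -> list:
    """Insert generative audio between two catalog items when no catalog
    item covers the intermediate emotional states."""
    bridge = generator(
        start={"tempo": first_item["tempo"], "key": first_item["key"]},
        end={"tempo": second_item["tempo"], "key": second_item["key"]},
    )
    return [first_item, bridge, second_item]
```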
In some embodiments, the system may generate a playlist that includes generative media that is layered on top of or integrated with existing media content. For example, nature sounds, melodious tunes, or the like may be layered over existing media content, and thereby played in a synchronous manner along with media items (e.g., media items with unaltered audio characteristics). In some embodiments, the generative media may be layered on top of other generative audio, as previously described. The layered generative audio may help progress the user toward the desired emotional state more effectively than playback of just the media content. Additionally or alternatively, in some embodiments the generative media may correspond to visual media or other non-audio media. For example, the generative media may be associated with a display of lights configured to be coupled to a lighting device and/or synchronously provided with the first, second, and nth portions or items.
In some embodiments, adjusting one or more audio characteristics of the generative audio may further be based on a third signal received while playing back one of the portions and that is indicative of an updated emotional state of the user. In doing so, the device or system can determine whether the emotional state of the user is progressing as expected toward the desired emotional state. If the emotional state of the user is not progressing toward the desired emotional state, as indicated by the third signal, further adjustments may be made to a subsequent portion of the generative audio to be played back to the user. In such embodiments, for example, the second portion may be adjusted to have a third parameter which corresponds to the updated emotional state of the user and which is different than the first and second parameters. As described elsewhere herein, by receiving updated emotional states from the user as the user is exposed to certain media, adjustments may be made to the media items or portions played back to the user to better ensure the emotional state of the user continues to progress toward the desired emotional state.
As shown in
In some embodiments, the plurality of first signals may originate from multiple sensors. For example, one of the first signals may originate from a first sensor worn by a first user, another of the first signals may originate from a second sensor worn by a second user, and yet another of the first signals may originate from a third sensor not worn by a user and/or attached to a stationary structure. As described elsewhere herein, the plurality of first signals may be utilized to generate a playlist, e.g., one that includes generative audio. In some embodiments, the plurality of first signals can be used to generate a single playlist for multiple users or multiple playlists for multiple users.
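The disclosure leaves the aggregation rule open; a weighted mean over per-sensor readings, as sketched below, is one plausible way to collapse the plurality of first signals into a single group state. The weights (e.g., favoring worn sensors over a stationary room sensor) are assumptions.

```python
import statistics

def group_state(readings: dict[str, float],
                weights: dict[str, float] | None = None) -> float:
    """Collapse per-sensor emotional-state readings into one group estimate."""
    if weights is None:
        return statistics.fmean(readings.values())
    total = sum(weights.get(sensor_id, 1.0) for sensor_id in readings)
    return sum(value * weights.get(sensor_id, 1.0)
               for sensor_id, value in readings.items()) / total
```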
The method 1100 can further include receiving a second signal indicative of a desired emotional state (process portion 1104). The second signal may be similar or identical to the second signal described with reference to
The method 1100 can further include providing generative audio (process portion 1106). Process portion 1106 may be similar or identical to the process portion 1006 described with reference to
The method 1100 can further include adjusting one or more audio characteristics of the generative audio, based on at least one of the first signal or the second signal (process portion 1108). The process portion 1108 can be similar or identical to the process portion 1008 described with reference to
As previously described, in some embodiments, generative audio may only be played back when the emotional state of the user does not correspond to any particular media item. In such embodiments, generative audio may be utilized as a bridge between two media items. For example, given a current emotional state, desired emotional state, and/or neutral state of a user, a generated playlist for the user may include a first item corresponding to the current emotional state, a second item corresponding to the desired emotional state, and generative audio to be played back between the first and second items. In such embodiments, the generative audio may comprise portions of the first item, the second item, or the first and second items.
Referring first to
As shown in
Referring next to
As previously described with reference to
As an example of the embodiments described herein, the users 5, 10 may be part of an exercise class. As the class begins, the users 5, 10 may be provided with a first media item or portion (as previously described) that corresponds to their individual or collective current emotional states. As the users begin to exercise, individual sensors on each of the users 5, 10 can provide data to the network 102 which can be used to determine changes in, for example, heart rate, perspiration, or the like. Based on the data and other signals, such as the desired emotional state of each of the users 5, 10, the system 1300 (e.g., the playback device 1210 or remote computing device) can adjust the media content accordingly. Other examples include dance parties, art installations, religious or cultural celebrations, or other such group activities. Various other examples and use cases will be readily apparent to one of ordinary skill in the art.
The above discussions relating to playback devices, controller devices, playback zone configurations, and media content sources provide only some examples of operating environments within which functions and methods described below may be implemented. Other operating environments and configurations of media playback systems, playback devices, and network devices not explicitly described herein may also be applicable and suitable for implementation of the functions and methods.
The description above discloses, among other things, various example systems, methods, apparatus, and articles of manufacture including, among other components, firmware and/or software executed on hardware. It is understood that such examples are merely illustrative and should not be considered as limiting. For example, it is contemplated that any or all of the firmware, hardware, and/or software aspects or components can be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only ways to implement such systems, methods, apparatus, and/or articles of manufacture.
Additionally, references herein to “embodiment” mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one example embodiment of an invention. The appearances of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. As such, the embodiments described herein, explicitly and implicitly understood by one skilled in the art, can be combined with other embodiments.
The specification is presented largely in terms of illustrative environments, systems, procedures, steps, logic blocks, processing, and other symbolic representations that directly or indirectly resemble the operations of data processing devices coupled to networks. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it is understood by those skilled in the art that certain embodiments of the present technology can be practiced without certain specific details. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the embodiments. Accordingly, the scope of the present disclosure is defined by the appended claims rather than the foregoing description of embodiments.
When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one of the elements in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, and so on, storing the software and/or firmware.
The present technology is illustrated, for example, according to various aspects described below. Various examples of aspects of the present technology are described as numbered examples (1, 2, 3, etc.) for convenience. These are provided as examples and do not limit the present technology. It is noted that any of the dependent examples may be combined in any combination, and placed into a respective independent example. The other examples can be presented in a similar manner.
Example 1: A playback device comprising: one or more amplifiers configured to drive one or more transducers; one or more processors; and tangible, non-transitory, computer-readable media storing instructions executable by the one or more processors to cause the playback device to perform operations comprising: receiving a first signal from a sensor, the first signal being indicative of a current emotional state of a user; receiving a second signal corresponding to a desired emotional state, the desired emotional state differing from the current emotional state; playing back generative audio via the one or more amplifiers; and after receiving the first signal and the second signal, adjusting one or more audio characteristics of the generative audio to provide at least (i) a first portion of media content having a first parameter corresponding to the current emotional state, (ii) a second portion of media content having a second parameter different than the first parameter, and (iii) an nth portion of media content having an nth parameter corresponding to the desired emotional state.
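One way to picture the sequence of operations in Example 1 is the hypothetical control-flow sketch below; the callables standing in for the sensor, the controller, and the audio engine, as well as the linear parameter path, are all assumptions made for illustration.

```python
# Hypothetical sketch of the Example 1 operation sequence. The callables
# stand in for unspecified sensor/controller/audio-engine interfaces.

from typing import Callable

def run_session(read_current: Callable[[], float],
                read_desired: Callable[[], float],
                play_portion: Callable[[float], None],
                n_portions: int = 4) -> None:
    current = read_current()   # first signal: current emotional state
    desired = read_desired()   # second signal: desired emotional state
    for i in range(n_portions):
        # Portion 0 tracks the current state; the nth portion tracks the
        # desired state, with the intermediate portions stepping between.
        t = i / max(n_portions - 1, 1)
        play_portion(current + t * (desired - current))

# e.g., moving a user from a low-valence reading (0.3) toward 0.9
run_session(lambda: 0.3, lambda: 0.9, lambda p: print(f"portion at {p:.2f}"))
```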
Example 2: The device of any one of the clauses herein, wherein the adjusting one or more audio characteristics comprises adjusting at least one of tempo, scale, pitch, beats per minute, bass, treble, mid-range volume, length, key, genre, or frequency content.
Example 3: The device of any one of the clauses herein, wherein the adjusting the one or more audio characteristics is based at least in part on the first signal, the second signal, or the first and second signals.
Example 4: The device of any one of the clauses herein, wherein the operations further comprise: playing back the first portion; while playing back the first portion, receiving a third signal indicative of an updated emotional state of the user; and modifying the second portion to have a third parameter corresponding to the updated emotional state of the user, wherein the third parameter is different from the first and second parameters.
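A possible reading of the mid-playback update in Example 4 is sketched below, under the assumptions that each portion is summarized by a single scalar parameter and that only the upcoming portion is replanned; the replanning rule itself is illustrative.

```python
# Illustrative sketch of Example 4: while the first portion plays, a third
# signal (updated emotional state) arrives and the second portion's
# parameter is replanned. The scalar parameters are an assumption.

def replan_second_portion(planned: list[float],
                          updated_state: float,
                          desired_state: float) -> list[float]:
    """Replace the second portion's parameter with a step from the user's
    updated state toward the desired state; later portions are kept."""
    revised = planned.copy()
    remaining = len(planned) - 1  # portions left after the one now playing
    if remaining > 0:
        revised[1] = updated_state + (desired_state - updated_state) / remaining
    return revised

# planned trajectory 0.2 -> 0.9, but the user's state has drifted to 0.4
print(replan_second_portion([0.2, 0.375, 0.55, 0.725, 0.9], 0.4, 0.9))
```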
Example 5: The device of any one of the clauses herein, wherein the operations further comprise: obtaining a neutral emotional state of the user; and sending two or more of the neutral emotional state, the first signal, and the second signal to one or more remote computing devices, wherein adjusting one or more audio characteristics of the generative audio comprises receiving the adjusted generative audio from the one or more remote computing devices.
Example 6: The device of any one of the clauses herein, the operations further comprising obtaining a neutral emotional state of the user by: providing one or more items of audio content to the user; and receiving from the user a response to each of the provided items of audio content.
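The calibration in Examples 5 and 6 might look something like the sketch below, which plays a few probe clips and averages the user's self-reported responses into a baseline; the probe file names, the 1-5 response scale, and the averaging rule are all illustrative assumptions.

```python
# Illustrative sketch of obtaining a neutral emotional state (Example 6):
# play probe clips, collect a 1-5 response per clip, and normalize the
# mean response to 0.0-1.0. All names and scales are assumptions.

from typing import Callable

def calibrate_neutral_state(probe_clips: list[str],
                            get_response: Callable[[str], int]) -> float:
    responses = []
    for clip in probe_clips:
        # A real device would play `clip` here before prompting the user.
        responses.append(get_response(clip))
    return (sum(responses) / len(responses) - 1) / 4  # map 1-5 onto 0-1

baseline = calibrate_neutral_state(
    ["calm_piano.flac", "uptempo_pop.flac"],  # hypothetical probe clips
    get_response=lambda clip: 3,              # stand-in for a user prompt
)
print(baseline)  # 0.5
```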
Example 7: The device of any one of the clauses herein, wherein the sensor is configured to be worn by the user and to detect at least one of brain activity, voice, location, movement, heart rate, heart rate variation, body temperature, or perspiration of the user.
Example 8: The device of any one of the clauses herein, wherein adjusting the one or more audio characteristics of the generative audio comprises receiving data associated with the user's audio playback history, the history including at least one of items played back, time of playback, or location of the device during playback.
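For the playback-history input of Example 8, one conceivable weighting is sketched below: items played around the current hour of day count more toward a valence bias. The record fields and the inverse-distance weighting are assumptions for illustration.

```python
# Illustrative sketch of Example 8: bias generative-audio parameters using
# playback history, weighting items played near the current hour of day.
# The "hour"/"valence" record fields are hypothetical.

from datetime import datetime

def history_bias(history: list[dict], now: datetime) -> float:
    weighted, total = 0.0, 0.0
    for item in history:
        gap = abs(item["hour"] - now.hour)
        gap = min(gap, 24 - gap)               # circular hour distance
        weight = 1.0 / (1 + gap)
        weighted += weight * item["valence"]
        total += weight
    return weighted / total if total else 0.5  # neutral default

history = [{"hour": 8, "valence": 0.6}, {"hour": 21, "valence": 0.3}]
print(round(history_bias(history, datetime(2021, 8, 24, 20)), 2))  # 0.34
```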
Example 9: The device of any one of the clauses herein, wherein the sensor is a first sensor and the first signal is a first sensor signal, the operations further comprising receiving a second sensor signal from a second sensor, and wherein adjusting one or more audio characteristics of the generative audio is based on the first and second sensor signals.
Example 10: The device of any one of the clauses herein, wherein the first sensor is a first type of sensor, and wherein the second sensor is a second, different type of sensor.
Example 11: The device of any one of the clauses herein, wherein the first sensor is configured to be worn by the user, and wherein the second sensor is configured to be attached to a fixed structure.
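Examples 9 through 11 contemplate fusing a worn sensor with one attached to a fixed structure; a toy fusion rule is sketched below. The normalization ranges and the 70/30 weighting are invented for illustration.

```python
# Illustrative sketch of fusing a wearable reading (heart rate) with a
# fixed-structure reading (ambient noise) into one 0.0-1.0 arousal
# estimate. Ranges and weights are assumptions, not specified values.

def fused_arousal(heart_rate_bpm: float, room_noise_db: float) -> float:
    hr_norm = min(max((heart_rate_bpm - 50) / 130, 0.0), 1.0)   # 50-180 BPM
    noise_norm = min(max((room_noise_db - 30) / 60, 0.0), 1.0)  # 30-90 dB
    return 0.7 * hr_norm + 0.3 * noise_norm                     # favor wearable

print(round(fused_arousal(120, 70), 2))  # 0.58
```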
Example 12: The device of any one of the clauses herein, wherein the user is a first user and the sensor is a first sensor worn by the first user, the operations further comprising: receiving a signal from a second sensor worn by a second user; and receiving a signal from a third sensor worn by a third user, wherein adjusting one or more audio characteristics of the generative audio is further based on the received signals from the second and third sensors.
Example 13: The device of any one of the clauses herein, the operations further comprising generating a playlist of visual content to be synchronously played back with the generative audio.
Example 14: A method comprising: receiving a first signal from a sensor, the first signal being indicative of a current emotional state of a user; receiving a second signal corresponding to a desired emotional state, the desired emotional state differing from the current emotional state; causing a playback device to play back generative audio; and after receiving the first signal and the second signal, adjusting one or more audio characteristics of the generative audio to provide at least (i) a first portion of media content having a first parameter corresponding to the current emotional state, (ii) a second portion of media content having a second parameter different than the first parameter, and (iii) an nth portion of media content having an nth parameter corresponding to the desired emotional state.
Example 15: The method of any one of the clauses herein, wherein adjusting one or more audio characteristics comprises adjusting at least one of tempo, scale, pitch, beats per minute, bass, treble, mid-range volume, length, key, genre, or frequency content.
Example 16: The method of any one of the clauses herein, wherein adjusting the one or more audio characteristics is based at least in part on the first signal, the second signal, or the first and second signals.
Example 17: The method of any one of the clauses herein, further comprising: playing back the first portion; and while playing back the first portion, receiving a third signal indicative of an updated emotional state of the user.
Example 18: The method of any one of the clauses herein, further comprising: obtaining a neutral emotional state of the user; and sending two or more of the neutral emotional state, the first signal, and the second signal to one or more remote computing devices, wherein adjusting one or more audio characteristics of the generative audio comprises receiving the adjusted generative audio from the one or more remote computing devices.
Example 19: The method of any one of the clauses herein, wherein the sensor is configured to be worn by the user, and wherein the first signal provided by the sensor corresponds to at least one of brain activity, voice, location, movement, heart rate, heart rate variation, body temperature, or perspiration of the user.
Example 20: The method of any one of the clauses herein, wherein adjusting the one or more audio characteristics of the generative audio comprises receiving data associated with the user's audio playback history, the history including at least one of items played back, time of playback, or location of the device during playback.
Example 21: The method of any one of the clauses herein, wherein obtaining a neutral emotional state comprises: providing one or more items of audio content to the user; and receiving from the user a response to each of the provided items of audio content.
Example 22: The method of any one of the clauses herein, wherein the sensor is a first sensor and the first signal is a first sensor signal, the method further comprising receiving a second sensor signal from a second sensor, and wherein adjusting one or more audio characteristics of the generative audio is based on the first and second sensor signals.
Example 23: The method of any one of the clauses herein, wherein the first sensor is a first type of sensor, and wherein the second sensor is a second, different type of sensor.
Example 24: The method of any one of the clauses herein, wherein the first sensor is configured to be worn by the user, and wherein the second sensor is configured to be attached to a fixed structure.
Example 25: The method of any one of the clauses herein, wherein the user is a first user and the sensor is a first sensor worn by the first user, the method further comprising: receiving a signal from a second sensor worn by a second user; and receiving a signal from a third sensor worn by a third user, wherein adjusting one or more audio characteristics of the generative audio is further based on the received signals from the second and third sensors.
Example 26: The method of any one of the clauses herein, further comprising generating a playlist of visual content to be synchronously played back with the generative audio.
Example 27: The method of any one of the clauses herein, wherein the playlist comprises visual content.
Example 28: Tangible, non-transitory computer-readable media comprising instructions that, when executed by one or more processors of a playback device, cause the playback device to perform operations comprising: receiving a first signal from a sensor, the first signal being indicative of a current emotional state of a user; receiving a second signal corresponding to a desired emotional state, the desired emotional state differing from the current emotional state; playing back generative audio via the playback device; and after receiving the first signal and the second signal, adjusting one or more audio characteristics of the generative audio to provide at least (i) a first portion of media content having a first parameter corresponding to the current emotional state, (ii) a second portion of media content having a second parameter different than the first parameter, and (iii) an nth portion of media content having an nth parameter corresponding to the desired emotional state.
Example 29: The computer-readable media of any one of the clauses herein, wherein adjusting one or more audio characteristics comprises adjusting at least one of tempo, scale, pitch, beats per minute, bass, treble, mid-range volume, length, key, genre, or frequency content.
Example 30: The computer-readable media of any one of the clauses herein, wherein adjusting the one or more audio characteristics is based at least in part on the first signal, the second signal, or the first and second signals.
Example 31: The computer-readable media of any one of the clauses herein, the operations further comprising: playing back the first portion; and while playing back the first portion, receiving a third signal indicative of an updated emotional state of the user.
Example 32: The computer-readable media of any one of the clauses herein, the operations further comprising: obtaining a neutral emotional state of the user; and sending two or more of the neutral emotional state, the first signal, and the second signal to one or more remote computing devices, wherein adjusting one or more audio characteristics of the generative audio comprises receiving the adjusted generative audio from the one or more remote computing devices.
Example 33: The computer-readable media of any one of the clauses herein, wherein the sensor is configured to be worn by the user, and wherein the first signal provided by the sensor corresponds to at least one of brain activity, voice, location, movement, heart rate, heart rate variation, body temperature, or perspiration of the user.
Example 34: The computer-readable media of any one of the clauses herein, wherein adjusting the one or more audio characteristics of the generative audio comprises receiving data associated with the user's audio playback history, the history including at least one of items played back, time of playback, or location of the device during playback.
Example 35: The computer-readable media of any one of the clauses herein, wherein obtaining a neutral emotional state comprises: providing one or more items of audio content to the user; and receiving from the user a response to each of the provided items of audio content.
Example 36: The computer-readable media of any one of the clauses herein, wherein the sensor is a first sensor and the first signal is a first sensor signal, the operations further comprising receiving a second sensor signal from a second sensor, and wherein adjusting one or more audio characteristics of the generative audio is based on the first and second sensor signals.
Example 37: The computer-readable media of any one of the clauses herein, wherein the first sensor is a first type of sensor, and wherein the second sensor is a second, different type of sensor.
Example 38: The computer-readable media of any one of the clauses herein, wherein the first sensor is configured to be worn by the user, and wherein the second sensor is configured to be attached to a fixed structure.
Example 39: The computer-readable media of any one of the clauses herein, wherein the user is a first user and the sensor is a first sensor worn by the first user, the operations further comprising: receiving a signal from a second sensor worn by a second user; and receiving a signal from a third sensor worn by a third user, wherein adjusting one or more audio characteristics of the generative audio is further based on the received signals from the second and third sensors.
Example 40: The computer-readable media of any one of the clauses herein, the operations further comprising generating a playlist of visual content to be synchronously played back with the generative audio.
Example 41: The computer-readable media of any one of the clauses herein, wherein the playlist comprises visual content.
Example 42: A playback device comprising: one or more amplifiers configured to drive one or more transducers; one or more processors; and tangible, non-transitory, computer-readable media storing instructions executable by the one or more processors to cause the playback device to perform operations comprising: receiving a plurality of first signals from multiple sensors, at least one of the first signals being indicative of a current emotional state of one or more users; receiving a second signal corresponding to a desired emotional state, the desired emotional state differing from the current emotional state; playing back generative audio via the one or more amplifiers; and after receiving the first signals and the second signal, adjusting one or more audio characteristics of the generative audio.
Example 43: The device of any one of the clauses herein, wherein the one or more users include a first user and a second user, and the multiple sensors include a first sensor worn by the first user and a second sensor worn by the second user, wherein adjusting one or more audio characteristics of the generative audio is based on the received first signals from the first and second sensors.
Example 44: The device of any one of the clauses herein, wherein the generative audio is part of a playlist configured to alter the emotional state(s) of the first and second users to the desired emotional state.
Example 45: The device of any one of the clauses herein, wherein playing back the generative audio comprises playing back (i) a first playlist configured to alter the emotional state of the first user to the desired emotional state, and (ii) a second playlist, different than the first playlist, configured to alter the emotional state of the second user to the desired emotional state.
Example 46: The device of any one of the clauses herein, wherein receiving the second signal comprises receiving a plurality of second signals corresponding to a first desired emotional state of the first user and a second desired emotional state of the second user, and wherein playing back the generative audio comprises playing back (i) a first playlist configured to alter the emotional state of the first user to the first desired emotional state, and (ii) a second playlist, different than the first playlist, configured to alter the emotional state of the second user to the second desired emotional state.
Example 47: The device of any one of the clauses herein, wherein the multiple sensors include a third sensor attached to a stationary structure, and wherein adjusting one or more audio characteristics of the generative audio is further based on the first signal received from the third sensor.
Example 48: The device of any one of the clauses herein, wherein adjusting comprises adjusting one or more audio characteristics of the generative audio to provide at least (i) a first portion of media content having a first parameter corresponding to the current emotional state, (ii) a second portion of media content having a second parameter different than the first parameter, and (iii) an nth portion of media content having an nth parameter corresponding to the desired emotional state.
This application claims the benefit of priority to U.S. patent application Ser. No. 62/706,544, filed Aug. 24, 2020, which is incorporated herein by reference in its entirety.
Filing Document | Filing Date | Country
---|---|---
PCT/US2021/071260 | 8/24/2021 | WO

Number | Date | Country
---|---|---
62/706,544 | Aug 2020 | US