One or more embodiments relate generally to wearable audio devices, and in particular, to configurable wearable devices and services based on wearable device configuration.
Personal listening devices, such as headphones, headsets, and ear buds, are used to reproduce sound for users from electronic devices, such as music players, recorders, cell phones, etc. Most personal listening devices simply pass sound from a sound producing electronic device to the speaker portions of the listening device.
One or more embodiments relate to a configurable wearable audio device and services based on wearable device configuration. In one embodiment, a method provides a notification on a wearable audio device. The method includes detecting a physical configuration of the wearable audio device. The physical configuration is determined using information provided by one or more sensors on the wearable audio device. At least one notification routed from a mobile device which is connected with the wearable audio device is provided in a manner corresponding to the determined physical configuration.
In another embodiment, a system provides a host device including a manager that is configured for providing at least one notification to a connected wearable audio device in a manner corresponding to a detected physical configuration of the wearable device.
In one embodiment, a non-transitory computer-readable medium has instructions which, when executed on a computer, perform a method comprising detecting a physical configuration of a wearable audio device. In one embodiment, the physical configuration is determined using information provided by one or more sensors on the wearable audio device. At least one notification routed from a mobile device which is connected with the wearable audio device is provided in a manner corresponding to the determined physical configuration.
These and other features, aspects and advantages of the one or more embodiments will become understood with reference to the following description, appended claims and accompanying figures.
The following description is made for the purpose of illustrating the general principles of one or more embodiments and is not meant to limit the inventive concepts claimed herein. Further, particular features described herein can be used in combination with other described features in each of the various possible combinations and permutations. Unless otherwise specifically defined herein, all terms are to be given their broadest possible interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and/or as defined in dictionaries, treatises, etc.
One or more embodiments relate to a configurable wearable audio device and services based on wearable audio device configuration. In one embodiment, a method provides a notification on a wearable audio device. The method includes detecting a physical configuration of the wearable audio device. The physical configuration may be determined using information provided by one or more sensors on the wearable audio device. At least one notification routed from a mobile device which is connected with the wearable audio device is provided in a manner corresponding to the determined physical configuration.
One or more embodiments provide managed services based on detected wearable configuration information and other context. The services may include readouts of important information, notifications, and enhanced voice commands. Other features may include multi-device coordination/intelligent routing, information aggregation, and intelligent notification population on one or more devices, or on devices that are in focused use by a user (i.e., reading a screen display, listening to a device, manipulating a display on a screen, etc.). One embodiment provides for device state detection through sensors (e.g., light sensor, touch sensor, movement sensor, location sensor, etc.), which may assist in controlling various modes of device usage. One embodiment provides for screen detection and smart routing to a current device in use. One or more embodiments provide for multi-service shuffle or for aggregation to facilitate action (e.g., across multiple applications). Intelligent population of notifications received on a display or through audio (which may be limited to useful notifications) and intelligent management of information (e.g., news of interest, weather, calendar events, traffic, etc.) are provided and may be limited to a device in focused use.
In one embodiment, the cord or cable 116 may include a wire running through the cord or cable for communication between the audio modules 110 and 112. In one embodiment, the cord or cable 116 may be overmolded with other soft material (e.g., foam, gel, plastic, other molded material, etc.) for wearable comfort. In one example, the cord or cable 116 may be shaped for a comfortable fit when placed against a user's neck. In one embodiment, the cord or cable 116 is designed based on specific uses, such as being water resistant or waterproof for watersport use, or including additional padding or material for jogging or sports/activities that would cause the cable or cord 116 to move when the wearable device 105 is in use (e.g., ear buds deployed in a user's ears, or worn as a necklace with the audio modules 110 and 112 powered on, in stand-by, or operational, etc.). In one embodiment, the cord or cable 116 may include shape-memory alloy or superelastic (or pseudoelastic) material, such as nitinol.
In one embodiment, the wearable device 105 has a weight that is ergonomically distributed between the cable or cord 116 and the ear buds 111 and 113 when worn by a user (either as a necklace, worn in one ear, or worn in both ears).
In one example, the audio module 110 may include a battery (e.g., rechargeable battery, replaceable battery, etc.), indicator LED(s), voice activation button (e.g., digital assistant activation, voice command acceptance trigger, etc.) or touch activated device (e.g., resistive digitizer, touchscreen button, capacitive area or button, etc.), power button or touch activated device, and an audio driver. In one example, the audio module 110 may include a capacitive area or button and resistive digitizer, which may be programmable to serve as controls (e.g., volume, power, microphone control, mute, directional control (forward/back), etc.).
In one example, the cord or cable 116 may include one or more haptic elements including a haptic motor for haptic notifications (e.g., low battery warning, incoming messages (e.g., voicemail or text message), incoming calls, specific caller, timer notifications, distance notification, etc.). In one example, the haptic element(s) may be located behind the neck when the wearable device 105 is worn by a user, spread out around the cable or cord 116, or a single haptic element placed in a desired or configurable location on the wearable device 105.
In one example, the audio module 112 may include a controller module, connection module, volume buttons or touch sensitive controls, play button or touch control, a Hall-effect sensor, one or more microphones, and an audio driver. In one example, the audio modules 110 and 112 may include other sensors, such as a motion sensor, pressure sensor, touch sensor, temperature sensor, barometric sensor, biometric sensor, gyroscopic sensor, global positioning system (GPS) sensor or module, light sensor, etc.
In one example, the connection module of one audio module (e.g., audio module 112) may comprise a wireless antenna (e.g., a BLUETOOTH® antenna, Wi-Fi antenna, cellular antenna, etc.) to wirelessly connect to a host device 120. Other components may include a controller module, physical buttons (configured to control volume, play music, etc.), transducers (such as a Hall-effect sensor), a microphone, or an audio driver. The other audio module (e.g., audio module 110) with ear bud 111 may comprise a battery for powering the wearable device 105, along with one or more indicator LEDs, physical buttons (configured as a power button, virtual assistant activation, etc.), and an audio driver.
In one example, the ear buds 111 and 113 may have any of various configurations for in-ear placement or over-ear loops or flanges, in assorted sizes and materials (e.g., silicone, elastomer, foam, etc.). In one embodiment, the material of the inner ear portion of the ear buds 111 and 113 may be sized for noise cancellation along with electronic noise cancellation of the audio module 112.
In one example, the audio module 110 may include a rechargeable battery and ear bud 111, with the battery connected for charging the wearable device 105 for audio communication.
In one example, the audio module 110 with ear bud 111 may include a magnet (or one or more magnetic elements) for mating with the audio module 112 with ear bud 113, which includes another magnet, forming the wearable device 105 for audio communication. In one example, the audio modules 110 and 112 include magnets for magnetically attracting one another, mating the audio modules 110, 112 and ear buds 111, 113, and forming a necklace. In one example, the wearable device 105 communicates with the host device 120. The user may utilize physical control buttons, touch sensitive areas, or provide voice commands to the wearable device 105 for control and use. In one example, the wearable device 105 is wirelessly connected to a host device 120. In one embodiment, the wearable device 105 includes a clip (e.g., a collar clip) for reducing movement when worn by a user (e.g., when jogging, horseback riding, etc.).
In one example, instead of magnetic elements or magnets, other coupling elements may be used, such as removable (or breakaway) locking elements, electromagnets, a clasp, hook and loop fastening elements, etc.
In one example, the wearable audio modules 110 and 112 with ear buds 111 and 113, respectively, may comprise one or more sensors enabling the wearable device 105 to detect the configuration of the device (i.e., configuration detection). For example, the sensors may assist the wearable device 105 in determining a state of configuration of the wearable device (e.g., whether an ear bud is in one ear, both ear buds are in respective ears, the wearable device is in a necklace configuration, or the wearable device is not worn by a user).
In one example, each audio module 110 and 112 for an ear bud has an accelerometer which senses a user's motion or the audio module and ear bud orientation. In some embodiments, the worn audio modules 110 and 112 with ear buds 111 and 113 will be in some level of constant motion or have the cord 116 pointed roughly downwards, thus allowing determination of whether one, both, or no ear buds 111 and 113 are in use. In other embodiments, the audio modules 110 and 112 and ear buds 111 and 113 may be configured to respond to various gestures, such as a double-tap, shake, or other similar gestures or movements that can be registered by the accelerometer.
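By way of a non-limiting sketch (in Python), the accelerometer-based determination may operate as follows; the threshold value and function names are illustrative assumptions, not part of any particular embodiment:

```python
from statistics import pstdev

def ear_bud_in_use(accel_magnitudes, motion_threshold=0.05):
    """Classify one ear bud as worn based on accelerometer readings.

    A worn ear bud exhibits some level of constant motion, so the
    spread of recent acceleration magnitudes stays above a small
    threshold. The threshold value here is illustrative only.
    """
    return pstdev(accel_magnitudes) > motion_threshold

def device_state(left_samples, right_samples):
    """Map per-ear-bud detections to a wearable configuration state."""
    left = ear_bud_in_use(left_samples)
    right = ear_bud_in_use(right_samples)
    if left and right:
        return "dual in-ear"
    if left or right:
        return "single in-ear"
    return "not worn"
```

A stationary ear bud yields near-zero deviation and is classified as not in use, while normal head and body movement keeps the deviation above the threshold.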
In one example, each audio module 110 and 112 for an ear bud 111 and 113 comprises two microphones: one microphone that samples the outside environment, and one microphone that samples inside the ear bud. The signals are compared for selecting the best signal and for further audio processing. For example, the signal comparison using a microphone differential may register a muffled noise on the microphone inside the ear bud to determine whether the ear bud is in use (e.g., in a user's ear). Optionally, the microphones may be used to perform audio processing, such as noise cancellation, or to "listen" for voice commands. In some embodiments, the microphones may be subminiature microphones, but other microphones may be utilized as well.
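The microphone-differential comparison may be sketched, without limitation, as a simple level-ratio test; the ratio value and names below are illustrative assumptions:

```python
def ear_bud_inserted(outside_rms, inside_rms, muffle_ratio=0.5):
    """Infer whether the ear bud is in a user's ear by comparing the
    outside-environment microphone level with the in-ear microphone
    level. When inserted, the in-ear microphone registers a muffled
    (attenuated) version of ambient noise. The ratio is illustrative.
    """
    if outside_rms == 0:
        return False  # no ambient signal to compare against
    return (inside_rms / outside_rms) < muffle_ratio
```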
In one embodiment, each audio module 110 and 112 for an ear bud includes a pressure sensor. In one example, when an ear bud 111, 113 is inserted into an ear or removed from an ear, an event shows up as a pressure spike or valley. The pressure spike or valley may then be used for determining the state of the wearable device.
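The spike/valley detection may be sketched, by way of non-limiting example, as a comparison against a baseline sample; the threshold and units are illustrative assumptions:

```python
def pressure_event(samples, spike_threshold=5.0):
    """Detect an insertion (pressure spike) or removal (pressure valley)
    event from a short window of pressure-sensor samples, relative to a
    baseline taken from the first sample. Threshold units are illustrative.
    """
    baseline = samples[0]
    peak = max(samples)
    trough = min(samples)
    if peak - baseline > spike_threshold:
        return "inserted"   # pressure spike
    if baseline - trough > spike_threshold:
        return "removed"    # pressure valley
    return "no event"
```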
In one example, each audio module 110, 112 for ear buds 111 and 113 comprises an optical proximity sensor, such that when worn, a steady proximity signal is generated. In one embodiment, the optical proximity sensor may be located within the housing for the ear bud 111 and/or 113, such that when the ear buds are worn, the optical proximity sensor lies against a user's skin. In one example, the optical proximity sensors provide for determination of whether one, both or no ear buds are in use.
In one embodiment, each audio module 110 and 112 for an ear bud includes a housing element that is sensitive to touch (capacitive sensing). For example, each ear bud housing structure may comprise capacitive touch rings near the flexible ear bud portion of ear buds 111 and 113 that is inserted in a user's ear. Such structure may contact or touch a user's skin allowing determination of whether one, both or no ear buds are in use.
In one embodiment, each audio module 110 and 112 for an ear bud has a mechanical conversion interface to hide the ear buds 111 and 113 in a necklace state. For example, the conversion interface may comprise a magnetic snap which activates a limit switch (e.g., using a hinge) depending on whether the ear bud is in an open or closed position allowing determination of whether one, both or no ear buds are in use.
In one example, electronic components are concentrated in the audio module 110 connected with the left ear bud 111 and in the audio module 112 connected with the right ear bud 113. In one example, one or more LEDs may be distributed around a band or cover of the swappable cord 116 for different functions. In one example, the LEDs may be used for informing a user by using light for alerting to received messaging and notifications. For example, different light patterns or colors may be used for different notifications and messaging (e.g., alerting of particular users based on color or pattern, alerting based on type of message, alerting based on urgency, etc.). In another example, the LEDs may be used for providing light for assisting a user to see the wearable device 105 or elements thereof, such as buttons or control areas, instructions or indications on attaching elements, etc. In one example, the LEDs may be used for providing illumination for seeing the surrounding area (e.g., similar to a flashlight). In another example, the LEDs may be used for identifying particular users in the dark (e.g., when in a crowd, a particular user may be associated with a particular pattern of lights, colors, etc.).
In one embodiment, the electronic wearable device may be directly connected with each host device through a communication module (e.g., Bluetooth®, Wi-Fi, Infrared Wireless, Ultra Wideband, Induction wireless, etc.). In another embodiment, the electronic wearable device may interact with other devices through a single host device (e.g., smartphone).
In one embodiment, the connection between the electronic wearable (audio) device and the host device (e.g., a smartphone) may be wireless with the interface between the host device and the rest of the ecosystem occurring over a wired or wireless communication. In one embodiment, available services or processes performed through the electronic wearable device may be performed in several ways. In one embodiment, the processes for the electronic wearable device may be managed by a manager application or module located on a host device. In one embodiment, the processes may be incorporated as extensions of other features of a mobile operating system. Some embodiments may include: the processes solely run/executed from the electronic wearable device; a more robust process run from a host device with a limited version run from the electronic wearable device if there are no host devices to connect to; processes run solely from a cloud platform, etc. Content may be provided or pulled from various applications or content providers and aggregated before presentation to an end user through a display on a host device, other wearable device (e.g., an electronic wearable bracelet or watch device, a pendant, etc.), or through audio from the electronic wearable device.
In one embodiment, the electronic wearable device 105 may comprise a suggestion application or function 772. The suggestion application or function 772 may be triggered by a physical button and provide relevant information based on location, time of day, context and activity (e.g., walking, driving, listening, talking, etc.), calendar information, weather, etc. The suggestion application or function 772 may interact with functions in connected host devices to obtain appropriate information. In one embodiment, the suggestion application provides appropriate information based on information learned about the user from context, interactions with others, interaction with the electronic wearable device 105, personal information, interactions with applications (e.g., obtaining information from social media platforms, calendar applications, email, etc.), location, time of day, etc.
In one embodiment, the companion application (e.g., companion app 712, 722) enables a user to choose services that the user desires. The companion application may also gather content from various sources, such as smartphone applications and cloud services. For example, for a "morning readout," today's calendar events and weather are gathered prior to being called out so that a playback may be performed by the suggestion application or function 772 on the wearable device 105 immediately and smoothly without any time lag. The companion application may also facilitate other functions, such as controlling a media/music player (e.g., media/music player 762 or 713), location service applications 714, 763, fitness applications 715, news/podcast applications 716, etc.
In one embodiment, the companion application may be implemented on a host device (e.g., smartphone, tablet, etc.) and may query other devices in the ecosystem. In one example, a smart phone 120 may include functions for voice command 711 (e.g., recognition, interactive assistant, etc.), location services 714, fitness applications 715 and news/podcast 716. The computing device or tablet device 720 may include voice command functionality 721 that operates with the companion app 722.
In one embodiment, the cloud information platform (info platform) 704 comprises a cloud based service platform that may connect with other devices in the ecosystem. The cloud information platform 704 may comprise information push 751 functions to push information to the electronic wearable device 105 or other host devices or assist with context/state detection through a context/state detection function 752.
In one embodiment, an audio manager function may be implemented as a component of the voice assistant function or the companion application 712, 722. The audio manager may be implemented on a host device (e.g., smartphone, tablet, etc.). In one embodiment, the audio manager manages incoming information from other devices in the ecosystem and selectively routes the information to the appropriate device.
In one embodiment the host device may be a smart appliance 702 or the electronic wearable device may interact with a smart appliance through a host device. The smart appliance 702 may comprise functions allowing interaction with the electronic wearable device 105. For example, the functions may allow for execution of voice commands (e.g., voice command function 731) from the electronic wearable device 105, such as temperature control 732 (raise/lower temperature, turn on/off heat/air conditioning/fan, etc.), lighting control 733 (turn on/off lights, dim lights, etc.), provide current status 734 (e.g., time left for a dishwasher/washing machine/dryer load, oven temperature or time left for cooking, refrigerator door status, etc.), electronic lock control 735 (e.g., lock/unlock doors or windows adapted to be wirelessly opened/locked), or blind/shade control 736 (e.g., open/close/adjust blinds in windows adapted for wireless control).
In one embodiment, the electronic wearable device 105 may interact with an automobile or vehicle 780 as a host device or through another host device. The automobile or vehicle 780 may comprise functions to facilitate such an interaction. For example, the functions may allow for voice commands 781 to control navigation 782 (e.g., determining directions, route options, etc.), obtain real-time traffic updates 784, control temperature or climate adjustments 783, provide for keyless entry 785 or remote ignition/starting 786, alarm actions (e.g., horn/lights), emergency tracking via GPS, etc.
In one embodiment the electronic wearable device 105 may interface with a smart TV 703 host device or interact with a smart TV through another host device. The smart TV 703 may comprise functions to facilitate the interaction with the electronic wearable device 105. For example, the functions may allow for voice commands to power on or off the TV 742, control channel selection 741, control volume 743, control the input source 744, control TV applications, communicate with a viewer of the smart TV 703, control recordings, etc.
In one embodiment the electronic wearable device 105 may interface with another electronic wearable device 705 (e.g., a wearable wrist device, pendant, etc.) host device or interact with a wearable device through another host device. Such connections or interactions may occur similarly to the computing environment or ecosystem 700 (
The content manager application 1110 may aggregate content from various sources (content on device, other devices owned by a user, a user's personal cloud, third party content providers (from applications, cloud services, hot spots, beacons, etc.), live audio feed, etc.). In one embodiment, the aggregation may be performed through user selection in a device configuration setting. In one embodiment, the content manager application 1110 may evolve or iterate to add content for aggregation. Such inclusion may utilize various machine learning algorithms to determine or predict content that a user may desire to include. The prediction of content may be based on content currently selected as desired, the frequency of content accessed by the user (either through the electronic wearable device, on a host device, or on another device in the ecosystem) in the past or ongoing, suggestions by those having similar interests (e.g., friends, others in social network or circles, family, demographic, etc.), etc. Other examples for suggestions may involve major news or events, emergency information, etc. In one embodiment, the predicted content may be suggested to a user for inclusion through an audio prompt, pop-up notification, automatically included with a feedback request, or through other similar methods that may iterate or learn of user preferences.
In one embodiment, for content aggregation, the content manager application 1110 may limit content to a subset of the compiled or received information. For example, reading out only desired content or providing important notifications. The determination of a subset of information may be manually configured or curated by a user, or intelligently determined through machine learning. In one example, machine learning may gradually populate notifications based on notifications received (either from preloaded or third party applications) and may also learn based on whether the user took action (e.g., responded to the notification, dismissed/cleared, ignored, etc.). In one embodiment, the curation or configuration may be location based (e.g., utilizing GPS location, world region, etc.).
In one embodiment, the content manager application 1110 may control the connection with the electronic wearable device (e.g., the type of connection (wired, wireless, type of wireless), pairing, refreshing/resetting the connection, disconnecting, etc.). In one embodiment, the content manager application 1110 may be able to control certain aspects of the electronic wearable device (e.g., turning the device on or off, turning haptic elements on or off, adjusting volume, etc.). The electronic wearable device may have multiple states or configurations (e.g., a necklace mode, mono audio mode, stereo audio mode, etc.). The content manager application 1110 may receive state information from the electronic wearable device sensors to determine the appropriate process or service to provide. For example, whether the device is in necklace mode (e.g., from a Hall-effect sensor determining if the magnets are connected, other sensors, etc.) or whether one or both of the ear buds are detected as being in a user's ear (e.g., pressure sensor, in-use sensor, location sensor, etc.). In one embodiment, the state configuration may also determine whether the device is being worn (e.g., detecting motion from sensors, such as one or more accelerometers).
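By way of a non-limiting sketch, the mapping from sensor-derived signals to a device state may be expressed as follows; the signal names are assumptions for illustration:

```python
def wearable_state(hall_magnet_closed, left_in_ear, right_in_ear):
    """Combine sensor-derived signals into one of the wearable device's
    states: necklace mode (magnets mated, per the Hall-effect sensor),
    stereo mode (both ear buds in), mono mode (one ear bud in), or
    not worn. Signal names are illustrative assumptions.
    """
    if hall_magnet_closed:
        return "necklace"
    if left_in_ear and right_in_ear:
        return "stereo"
    if left_in_ear or right_in_ear:
        return "mono"
    return "not worn"
```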
In one embodiment, the content manager application 1110 may provide voice delivery where the audio information is delivered in a natural sounding way. The information may be performed using scripts, templates, etc. In one embodiment, the content manager application 1110 may utilize an engine to perform grammar or format determination in real-time or near real time. In one embodiment, the content manager application 1110 may utilize the engine to determine the mood of the information, allowing different voice personalities or profiles along with an appropriate tone relating to the information. For example, sport scores may be provided with the inflection of a sports caster or announcer, while a news headline may be presented with a more reserved or conservative inflection. As further examples, sports scores may also be presented in an excited tone, a good news headline may be presented with a happy or cheerful tone, a bad news headline in a serious, somber, or controlled tone, breaking news may be provided in a tone that conveys urgency, etc.
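The selection of a voice personality and tone may be sketched, without limitation, as a category-to-profile mapping; the specific categories, sentiments, and profile names below are illustrative assumptions:

```python
def voice_profile_for(category, sentiment="neutral"):
    """Choose a (personality, tone) pair for a piece of information,
    e.g., sports read with an announcer's inflection, bad news in a
    somber tone, breaking news with urgency. Mapping is illustrative.
    """
    if category == "sports":
        return ("announcer", "excited")
    if category == "news":
        if sentiment == "bad":
            return ("anchor", "somber")
        if sentiment == "breaking":
            return ("anchor", "urgent")
        return ("anchor", "reserved")
    return ("default", "neutral")
```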
In one embodiment, the content manager application 1110 may also handle various processes or services which may be triggered through voice control or commands, activating a hardware control, etc. The content manager application 1110 may allow for a user to curate or configure content they would like to be included with readouts along with additional settings (such as time period and content), which notifications are considered priority, device connection management, and accessing settings in the operating system or another application.
In block 1230 the content manager application 1110 may determine the state configuration of the electronic wearable device. In one example, this may be performed by receiving information from the sensors and analyzing the provided information to determine the current state (e.g., necklace mode, single in-ear, dual in-ear, not worn, etc.). The wearable device may provide already analyzed state information to the content manager application 1110. A change of device state may be an indication to initiate a task or perform a command. For example, if the electronic wearable device is detected changing from necklace mode to dual in-ear mode, a music application may be launched to begin playing a song, etc. In another example, changing state from in-ear to necklace may pause a task, and if the state is changed back to in-ear within a certain time frame, the task may resume.
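The state-change handling described above (launch on entering dual in-ear, pause on returning to necklace mode, resume if in-ear returns within a time frame) may be sketched as follows; the resume window and class/method names are illustrative assumptions:

```python
class PlaybackController:
    """Non-limiting sketch of state-change handling for a music task."""

    RESUME_WINDOW = 30.0  # seconds within which a paused task resumes; illustrative

    def __init__(self):
        self.status = "stopped"
        self.paused_at = None

    def on_state_change(self, new_state, now):
        """Handle a detected device-state change; returns the action taken."""
        if new_state == "necklace" and self.status == "playing":
            self.status, self.paused_at = "paused", now
            return "paused"
        if new_state == "dual in-ear":
            resumed = (self.status == "paused"
                       and now - self.paused_at <= self.RESUME_WINDOW)
            self.status, self.paused_at = "playing", None
            # outside the window, a fresh task (e.g., launching a music
            # application to begin playing a song) is started instead
            return "resumed" if resumed else "launched"
        return "none"
```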
In block 1240 the content manager application 1110 may determine the task to be performed. Such determination may be made based on context, such as the time of day, the input indication to perform a task (e.g., command, button press, incoming call/text, etc.), the device state configuration, etc. Examples of such tasks may include readouts, notifications, voice commands, etc.
In block 1250 the content manager application 1110 may retrieve additional information to perform the determined task. For example, the content manager application 1110 may request information from third parties to provide news, sports, weather, etc. If no additional information is necessary, the task may be carried out immediately, as in the case of notifying about an incoming call.
In block 1260 the content manager application 1110 may provide data or audio to the electronic wearable device to execute a task. The content manager application 1110 may process the gathered data and provide information or instructions to the wearable device to carry out the task, such as perform an audio playback. The content manager application 1110 may provide prompts (e.g., audio tone or command prompts), receive voice commands, etc. In block 1270 the process 1200 ends and waits to start again at block 1210.
In one embodiment, the content manager application 1110 (
In one embodiment, the content may be requested or pulled from the various sources. This content may have been curated by a user to select specific categories. Such curations may be received by the content manager application 1110 through a configuration menu. Examples of content categories that may be curated may include news, calendar (appointments/schedule), weather, traffic, sports, entertainment, etc.
In one example, calendar readouts may provide a playback of a user's upcoming schedule, which may be aggregated from the host device, user's cloud, or other user devices. In one embodiment, the calendar readout may respond differently in various instances based on the aggregated information (e.g., remaining events in the day, no remaining events, no scheduled events, etc.). For example, if there are remaining events, the readout may include the number of events for the day or the number of remaining events, and then provide further additional details such as the time or name of the events. In an example where no events remain, the readout may inform the user there is nothing left on the calendar and provide a preview of tomorrow's scheduled events (e.g., first scheduled item for the next day, or the next item scheduled if the next day is free). In an example where there are no events scheduled for the day, the user may be informed of such, and similarly provide a preview of the next scheduled event on an upcoming day.
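The calendar-readout branching described above may be sketched, by way of non-limiting example, as follows; the phrasing and data shapes are illustrative assumptions:

```python
def calendar_readout(remaining_today, first_upcoming=None):
    """Compose a calendar readout that responds differently depending on
    whether events remain today. Event entries are (time, name) tuples;
    the exact phrasing is illustrative only.
    """
    if remaining_today:
        parts = [f"You have {len(remaining_today)} remaining event(s) today."]
        parts += [f"{time}: {name}" for time, name in remaining_today]
        return " ".join(parts)
    # no events remain: inform the user and preview the next scheduled item
    readout = "Nothing left on your calendar today."
    if first_upcoming:
        time, name = first_upcoming
        readout += f" Next up: {name} at {time}."
    return readout
```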
In one example, weather readouts may provide varying indications of the weather at a location depending on the time of day. For instance, from 12 AM to 12 PM, the readout may include the forecast for the day and the current temperature. As the day progresses (e.g., from 12 PM to 7 PM) the readout may only include the current temperature. Even later in the day (e.g., 7 PM to 12 AM) the readout may provide the current temperature along with the weather forecast for the upcoming day. In one example, if there are upcoming weather alerts or warnings, they may be included for the duration of the warning.
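The time-of-day windows described above may be sketched, without limitation, as follows; the wording and parameter names are illustrative assumptions:

```python
def weather_readout(hour, current_temp, today_forecast, tomorrow_forecast, alerts=()):
    """Vary the weather readout with the time of day: mornings (12 AM-12 PM)
    include today's forecast, afternoons (12 PM-7 PM) only the current
    temperature, and evenings (7 PM-12 AM) add tomorrow's forecast.
    Active alerts are always appended for their duration.
    """
    if hour < 12:
        readout = f"Today: {today_forecast}. Currently {current_temp} degrees."
    elif hour < 19:
        readout = f"Currently {current_temp} degrees."
    else:
        readout = f"Currently {current_temp} degrees. Tomorrow: {tomorrow_forecast}."
    for alert in alerts:
        readout += f" Alert: {alert}."
    return readout
```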
In another example, news readouts may provide an overview of the news category followed by headlines. The content manager application 1110 may keep track of headlines to ensure there is no repeating of a previously read headline. The number of headlines may be capped to prevent an overflow of information. In one example, the information may be limited solely to the headline and not include additional information such as the author, source, etc. In a situation where no new headlines are available, a readout may indicate such to a user. In one example, important updates may be refreshed or re-presented, indicating there is a change to the story.
In another example, sports readouts may provide different information based on the time in relation to the specific game (e.g., pre-game, during the game, post-game, etc.). The pre-game information may include the dates, times, and teams/competitors competing. There may be a limit on how far in advance schedules may be provided (e.g., a time window of 48 hours, etc.). In one embodiment, the pre-game information may read out multiple scheduled games within a window. During the game, the readout may include information such as the score and the current time of the game (e.g., inning, quarter, half, period, etc.). After the game, the sports readout may indicate which team/competitor won and the final score. In one example, in a situation where there is a mixture of in-progress, completed, and future games, the sports readouts may prioritize games that are currently in progress over completed games or future games.
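The in-progress-first prioritization may be sketched, by way of non-limiting example, as a simple ordering; the status values and dictionary shape are illustrative assumptions:

```python
def order_sports_readouts(games):
    """Order games for readout so in-progress games come first, then
    completed games, then future games within the scheduling window.
    Each game is a dict with a 'status' key; key names are illustrative.
    """
    priority = {"in-progress": 0, "completed": 1, "future": 2}
    return sorted(games, key=lambda g: priority[g["status"]])
```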
In another example, traffic readouts may provide different information for general traffic versus reported accidents/incidents. For example, for current traffic conditions the readout may indicate the degree of traffic on a set route. Multiple routes may be read sequentially or prioritized based on location. In a situation where there is an accident (or multiple accidents) or incident (e.g., construction, debris, cleaning vehicles, etc.), the readout may indicate the accident or incident prior to the degree of traffic on a route. In one example, additional information may be provided, such as how far a backup reaches (e.g., an estimated distance (one mile backup), or to a specific exit, etc.).
One embodiment provides for selection of readout setup configuration for different readouts at different times of the day. For example, a profile may be created for all weekdays from the specific times of 6 AM to 8 AM and include selected content of calendar and weather. The content may have further settings for what the user would like provided from the content category. For example, the calendar category may include holidays, reminders, appointments, etc. In one example, the weather category may include cities or locations and additional details such as the temperature scale or the granularity of temperature information (e.g., only the current temperature, including the high, or including both the high and low). Other embodiments may involve additional contextual aspects such as location. In one example, multiple profiles may be configured to address various times, days, locations, or other aspects, which may result in a user preferring a different readout.
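A readout profile of the kind described above can be sketched as a small data structure with a day/time-window match. The class name, fields, and matching helper are assumptions for illustration; the weekday 6 AM-8 AM calendar-and-weather profile follows the example.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class ReadoutProfile:
    """Hypothetical readout profile: active days, time window, content categories."""
    name: str
    days: set          # e.g., {"Mon", "Tue", ...}
    start: time
    end: time
    categories: list   # e.g., ["calendar", "weather"]

    def matches(self, day, at):
        # Active if the request falls on a configured day inside the window.
        return day in self.days and self.start <= at < self.end

def active_profiles(profiles, day, at):
    """Return every profile whose day/time window covers the request."""
    return [p for p in profiles if p.matches(day, at)]
```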
In one embodiment, if there are two readouts which overlap in time, day, location, or other contextual aspects, the manager may intelligently determine (through a process or algorithm) which readout is preferred and play that readout. Such a determination may analyze various aspects such as a user's calendar, current location, readout history (e.g., preferring news over traffic), information from other devices in an ecosystem, or other similar aspects, and may utilize a score, weighting, factors, or other similar determinations to predict the preferred profile. For example, there may be two profiles for Monday which overlap at 9 AM, where one has traffic and the other has news. The content manager application 1110 may utilize the GPS location, and if the location shows the device (and user) is commuting, the profile with local traffic may be used over news. In one example, other ways to determine profiles may include a user-set priority or preference for a profile. Additionally, a command or selection may be received to select a specific readout profile.
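The scoring/weighting idea can be sketched as below. This is only one possible realization under stated assumptions: the signal names (commuting flag, per-category history counts, explicit user priority) and weights are hypothetical, not taken from the embodiment.

```python
def choose_profile(candidates, context):
    """Score overlapping readout profiles against contextual signals
    and return the highest-scoring one (hypothetical weighting sketch)."""
    def score(profile):
        s = profile.get("priority", 0)                     # explicit user preference
        if context.get("commuting") and "traffic" in profile["categories"]:
            s += 5                                         # GPS suggests a commute
        for cat in profile["categories"]:
            s += context.get("history", {}).get(cat, 0)    # past listening habits
        return s
    return max(candidates, key=score)
```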
In one example, the content manager application 1110 may recognize readouts with overlapping time and provide a prompt to make the appropriate corrections. In another example, the content manager application 1110 may provide the contents from both the readouts for the overlapping period but remove the duplicative categories. For example, if a first readout from 9 AM to 11 AM includes calendar and weather, while a second readout from 10 AM to 12 PM has weather and news, a request for readout between 10 AM to 11 AM plays calendar, weather and news.
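The duplicate-removal behavior for an overlapping period can be sketched as an order-preserving union. The helper name is an assumption; the test mirrors the calendar/weather plus weather/news example above.

```python
def merge_overlap(first_categories, second_categories):
    """Combine two overlapping readouts, dropping duplicate categories
    while preserving the order in which they were configured."""
    merged = list(first_categories)
    merged += [c for c in second_categories if c not in merged]
    return merged
```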
In block 1540 the content manager application 1110 may retrieve specific information or content as needed for the chosen profile (e.g., calendar information, news or sports categories, weather, traffic, etc.). In block 1550 the specific information may be processed and arranged in a format suitable for a readout, allowing for human sounding information. The processing may also reduce the available information to easily digestible segments (e.g., choosing a subset that is most interesting to or preferred by the user). In block 1560 the processed data may be provided to an electronic wearable device for readout. In block 1570 the process 1500 ends or waits to start again at block 1510.
In one example, missed notifications may be available as audio for a limited time window/frame. Such a window/frame may be user-configured or a preset time (e.g., 60 minutes). Missed notifications beyond the time window/frame may still be accessible in other forms on other devices. In one example, the content situations 1610 may be regular notifications, priority notifications, and incoming calls. Other content situations may be included such as emergency alerts, etc.
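The limited replay window can be sketched as an age check. This helper and its signature are assumptions for illustration; the 60-minute default mirrors the preset example above.

```python
from datetime import datetime, timedelta

def is_replayable(missed_at, now, window_minutes=60):
    """Return True if a missed notification is still within the
    configurable audio-replay window (default 60 minutes)."""
    age_minutes = (now - missed_at).total_seconds() / 60
    return age_minutes <= window_minutes
```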
In one example, in a situation with regular notifications, these notifications may include incoming information that a user has opted to receive as an audio notification (e.g., third party notifications from applications, SMS, email, etc.). The state of configuration of the electronic wearable device may determine the subsequent action taken on the information. For example, if the electronic wearable device is in a necklace configuration, no additional action is taken. The notifications may still be accessible and unread on other devices (e.g., a smartphone, another wearable device, etc.). In one example, if a user changes configuration within a limited time window, the audio notification may be available and triggered via a prompt. If the state of configuration is determined to be an in-ear mode, a prompt for action may occur prior to playing the notification.
In an example where the content is a priority notification, different actions may be performed based on the state of configuration. In one example, if the configuration is determined to be in-ear, the priority notification may automatically begin playing without receiving any responses from a user. If the state is determined to be a necklace configuration, an indication to provide a haptic response (e.g., vibration notification) may be given. Depending on whether a state change is detected within a preset time window/frame (e.g., within 5-10 seconds), the priority notification may automatically play or may require further confirmation to play (e.g., after 5-10 seconds, receiving a hardware button press), or an audio indication (e.g., a tone) may be audible only to the user when in necklace mode.
In one example, for an incoming call, depending on the electronic wearable device state of configuration, different actions may be performed. If the detected state is in-ear, the caller information may be provided and await a response (button press, voice command, etc.) before the call is answered. If the detected state is in a necklace mode, a haptic notification may be provided (optionally a ringtone may sound). If the device state is registered as changing from necklace to in-ear while the haptic notification or ringing is occurring, the call may be answered. If the state change is detected after, further received input may be required to play a missed call notification.
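The state-dependent handling across the three content situations above can be summarized as a dispatch table. The state and action labels are hypothetical names for the described modes and behaviors, not identifiers from the embodiment.

```python
def notification_action(content, state):
    """Map a (content type, wear state) pair to the action described above.

    content: "regular", "priority", or "call"
    state:   "in_ear" or "necklace" (hypothetical labels)
    """
    if content == "regular":
        # Regular notifications: prompt before playing in-ear; otherwise
        # leave unread and accessible on other devices.
        return "prompt_then_play" if state == "in_ear" else "leave_unread"
    if content == "priority":
        # Priority notifications: auto-play in-ear; haptic indication and
        # wait for a state change in necklace mode.
        return "auto_play" if state == "in_ear" else "haptic_then_wait"
    if content == "call":
        # Incoming calls: announce caller and await input in-ear;
        # haptic notification (optionally ringtone) in necklace mode.
        return "announce_caller_and_wait" if state == "in_ear" else "haptic_ring"
    raise ValueError(f"unknown content type: {content}")
```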
In one example, the content manager application 1110 (
In block 1840 the content manager application 1110 may receive information allowing it to monitor device state and detect any state changes. In block 1850, the content manager application 1110 may optionally determine whether various actions such as playing the notification is to be performed. In block 1860, the content manager application 1110 may optionally coordinate among one or more connected devices. For example, an incoming notification may also provide an indication on a screen of a connected device (e.g., smartphone or another wearable device). The notifications may be sent to all connected devices.
In one example, the routing may be performed based on screen detection or other received sensor information of the device determined to be the most appropriate (as described further below). The screen notification may be performed on the most appropriate device (e.g., the device of current user focus or activity). In one example, combinations of readout and audio notifications may occur with priority being placed on one feature over another (e.g., notifications played before a readout, etc.). In one example, in block 1870 the process 1800 ends and waits to start again at block 1810.
The following
In an exemplary embodiment, in the morning a user may don the electronic wearable device 105 and insert one ear bud (e.g., ear bud 111 or 113,
The retrieved information may be provided (e.g., sent at 3530) to the wearable device controller, which may comprise an audio manager. In block 3540 the audio manager may determine how to organize and render the content 3541 into a morning briefing. The morning briefing, such as the example shown in Table 1 above, may be played to a user at 3545. Optionally, certain physical button presses at the companion app 3550 may be used to skip messages (e.g., double press) or cancel the briefing (e.g., long press).
In one exemplary embodiment, a user may be wearing both ear buds (e.g., ear buds 111, 113,
In an optional embodiment, another electronic wearable device, such as a smart wrist or watch electronic device, may be incorporated into the process in block 3622. As part of the request from the companion application or information platform 3515 the third party may provide information for display on the other electronic wearable device. In another optional embodiment, the user may choose to “star” a song (e.g., mark, mark as a favorite, etc.). This information may be provided to the third party application 3630 through the information platform 3515 at 3631 or through the companion application at 3640, and the audio confirmation may be provided at 3640 to the electronic wearable device 105 with audio confirmation 3612.
In an exemplary embodiment, the electronic wearable device 105 may be in a necklace state as the user enters an automobile/vehicle 780. Once in the vehicle 780, the electronic wearable device 105 may interface with the vehicle's infotainment system, either directly or through a host device 120. The user may activate a physical button (e.g., on the electronic wearable device 105, or in the vehicle 780) at 3701 to trigger a function. At 3706 the companion application 3550 on the host device 120 may determine relevant context at 3710 (e.g., wearable device state, user is driving, car stereo is on, time of day, etc.). The companion application 3550 may gather relevant contextual information such as news headline 3730 or podcast content 3731 locally from the host device or through requests 3711 to the cloud information platform 3515.
The information platform 3515 or the companion application 3550 may compile the information at 3720 and provide it at 3740 to the audio manager 3540, which determines how to organize and present the content at 3741 and 3742. The resulting information choices may be played through the vehicle 780 speakers at 3743. User choices may be received by the electronic wearable device 105 or the vehicle 780 microphones and a request 3750 may be made to the appropriate third party application, such as the news headlines. For example, the companion application 3550 understands button presses or voice commands at 3744. In one example, the voice recognition application builds grammar based on content and stores the information on the host device or the information platform 3515. The information platform 3515 requests the content 3721 from the third party application and may push information (e.g., graphics, displays, text, etc.) to the companion application 3550. The companion application 3550 then plays the headlines at 3760 on the host device 120.
Optionally, additional choices may be provided for the user to choose from, such as selecting the news story to listen to, etc. In an optional embodiment, the user's location may cause a traffic alert 3770 to be sent to the information platform 3515 or the companion application 3550. In one example, the alert may indicate a traffic issue 3722 (based on a received traffic card 815 published from the information platform 3515) in the vicinity and recommend a detour. The alert may interrupt 3780 the currently playing information.
In an exemplary embodiment, the electronic wearable device 105 may be in a necklace configuration and the user may activate the voice command function using a physical button at 3801. On an audio prompt, a command or request 3802 may be provided (e.g., what time a specific game is, directions, etc.). At block 3620 the voice command function of the wearable device may direct the request to the companion application on a host device 120 or, optionally, directly to the information platform 3515. At 3806 the companion application may determine the context (e.g., wearable device state 3810, current date, etc.) and at 3820 send a request to the information platform 3515 or check if the information is found locally on a host device 120 at 3821. In one example, at 3822 the information platform 3515 determines the best method to display results, for example to another wearable device, such as a smart wearable wrist device. At 3823 the retrieved information may be played through the electronic wearable device 105 ear buds 111 and 113 (
In one exemplary embodiment, the companion application of the host device 120 may create an automatic proactive alert 3901, which may be based in part on calendar information, geolocation, to-do lists, traffic, or other similar factors. The companion application may determine the context or state at 3906 of the wearable device 105 (e.g., one ear bud in, necklace configuration, etc.). At 3911 the alert information 3910 may be provided to the information platform 3515, which may use the information to determine when to provide an alert to the user, for example, at 3921, by calculating the user's location, traffic to the location of the tasks, subsequent meetings or appointments in the user's schedule, etc. At 3922 the information may be published as a card or other notification to the companion application of the host device 120, which in turn provides the information to be played on the wearable device 105 at 3930.
In an exemplary embodiment for augmented audio, a user of an electronic wearable device 105 may trigger the function using a physical button at 4001 on the electronic wearable device 105, allowing the input of a command. In block 4020 the voice command function on the electronic wearable device may recognize the request 4002 and pass the command at 4021 to a companion application on a host device 120 or to an information platform 3515. The companion application may determine context (e.g., state of the wearable device, geolocation, user's characteristics/past information, etc.). The companion application may provide the information to the information platform, which may query a third party application 4023 for results 4022 (e.g., using location and the command to calculate distance and provide the user's average distance in a sport, for example golf). At 4030 the results may be provided back to the companion application which, in turn, provides the information 4040 to the audio manager 3540 for playback at 4041 on the electronic wearable device 105.
In an exemplary embodiment, the electronic wearable device 105 may be utilized for controlling devices, appliances, etc. In one example, at 4101 the voice command may be triggered using a physical button on the electronic wearable device 105 to allow input of a voice command 4102. In block 4120 the voice command 4102 may be passed to a companion application on a host device 120 or directly to an information platform 3515. The companion application may gather contextual information (e.g., wearable device state, geolocation, etc.) and provide the additional information to the information platform 3515 along with the understood command at 4122. The information platform 3515 may interface with a third party application 4123 to carry out the command (e.g., turning up the temperature) and provide confirmation 4122 back to the companion application of the host device 120 at 4130. The confirmation may be played back at 4140 on the electronic wearable device 105 as shown in the example 4141.
In an exemplary embodiment for device integration with an ecosystem, the user may trigger a physical button at 4201 while the electronic wearable device 105 is in an appropriate configuration (e.g., one ear bud in, necklace state, etc.). The request may be in the form of a voice query 4202 (e.g., requesting a sporting event score, etc.). The voice command function of the electronic wearable device 105 may pass the request to a companion application or to an information platform 3515 at block 4220 to receive the answer. The companion application may include additional contextual information at block 4221 (e.g., geolocation, other known devices in the vicinity, etc.). The information platform 3515 obtains the information from a third party application 4224 and publishes, for example, a card 4230 and TV action 4222 (e.g., shows content, offers actions, etc.).
The resulting information may be passed back to the audio manager 3540, which determines how to organize and render the content and accesses a text-to-speech (TTS) function at 4240 to audibly provide the response to a user, along with a query asking whether the user would like to perform an activity along the lines of the initial query (e.g., watch the specific game for which the score was requested). At 4241 the response is played on the electronic wearable device 105. At 4242 the companion application waits for button presses (or voice commands 4243 or other input) during audio playback. If an affirmative response is received at block 4250, the host device 120 or information platform 3515 may cause the appropriate device to be activated and tuned appropriately (e.g., turning on the TV and selecting the channel for the appropriate game) at 4223.
In an embodiment, how the physical buttons are activated on the device may trigger different functions. For example, a long press or a press and hold of a physical button may trigger the suggestion function which may result in flows for the embodiments shown in tables 1, 2, 3 and 6, as illustrated in
In another example, a single press may trigger a voice command function (e.g., from a voice recognition interface or process) which may result in flows for the embodiments shown in tables 4, 7, and 8, as illustrated in
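The gesture-to-function routing described in the two examples above can be sketched as a lookup. The gesture and function labels are hypothetical names for the long-press suggestion function and single-press voice command function.

```python
def handle_button(gesture):
    """Route a physical button gesture to a device function
    (hypothetical mapping following the behavior described above)."""
    mapping = {
        "long_press": "suggestion",       # suggestion/readout flows
        "single_press": "voice_command",  # voice recognition interface
    }
    try:
        return mapping[gesture]
    except KeyError:
        raise ValueError(f"unmapped gesture: {gesture}")
```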
In an embodiment, certain flows may be available based on context or a time of day. For example, if the electronic wearable device 105 is triggered using the physical button for the first time on a given day and the time is in the morning, the morning readout, as exemplified in Table 1, may result. Subsequent triggers may perform other functions shown in Tables 2-8.
In an embodiment, the electronic wearable device 105 may perform context detection, either by itself or in conjunction with other devices in the ecosystem shown in
In one embodiment, the electronic wearable device may perform context detection 4610, either by itself or in conjunction with other devices in the ecosystem (e.g.,
In one embodiment, a companion application on a host device or the electronic wearable device may be configured to dynamically render the text to speech (TTS) 4820 by stitching the content together in order. This may result in a morning readout for the first activation of the electronic wearable device. In one example, the compilation of information may take place earlier, when a user's device may be idle or charging, to preload the morning readout.
Information transferred via communications interface 5107 may be in the form of signals such as electronic, electromagnetic, optical, or other signals capable of being received by communications interface 5107, via a communication link 5109 that carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, a radio frequency (RF) link, and/or other communication channels. Computer program instructions representing the block diagram and/or flowcharts herein may be loaded onto a computer, programmable data processing apparatus, or processing devices to cause a series of operations performed thereon to produce a computer implemented process.
Embodiments have been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments. Each block of such illustrations/diagrams, or combinations thereof, can be implemented by computer program instructions. The computer program instructions when provided to a processor produce a machine, such that the instructions, which execute via the processor, create means for implementing the functions/operations specified in the flowchart and/or block diagram. Each block in the flowchart/block diagrams may represent a hardware and/or software module or logic, implementing embodiments. In alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures, concurrently, etc.
Computer programs (i.e., computer control logic) are stored in main memory and/or secondary memory. Computer programs may also be received via a communications interface. Such computer programs, when executed, enable the computer system to perform the features of the embodiments as discussed herein. In particular, the computer programs, when executed, enable the processor and/or multi-core processor to perform the features of the computer system. Such computer programs represent controllers of the computer system.
Embodiments have been described with reference to certain versions thereof; however, other versions are possible. Therefore, the spirit and scope of the embodiments should not be limited to the description of the preferred versions contained herein.
This application claims the priority benefit of U.S. Provisional Patent Application Ser. No. 61/937,389, filed Feb. 7, 2014 and U.S. Provisional Patent Application Ser. No. 62/027,127, filed Jul. 21, 2014, both incorporated herein by reference.