The present application relates generally to digital ecosystems that are configured for use when engaging in physical activity and/or fitness exercises.
Society is becoming increasingly health-conscious. A wide variety of exercises and workouts are now offered to encourage people to stay fit. As understood herein, while stationary exercise equipment often comes equipped with data displays for the information of the exerciser, the information is not tailored to the individual and is frequently repetitive and monotonous. As further understood herein, people enjoy listening to music as a workout aid, but the music typically is whatever is broadcast within a gymnasium or provided on a recording device the user may wear, and is thus potentially monotonous and unchanging in pattern and beat in a way that is uncoupled from the actual exercise being engaged in.
Thus, while present principles recognize that consumer electronics (CE) devices may be used while engaged in physical activity to enhance the activity, most audio and/or visual aids are static in terms of not being tied to the actual exercise.
Present principles recognize that portable aids can be provided to improve exercise performance, provide inspiration, enable the sharing of exercise performance for social reasons, help fulfill a person's exercise goals, analyze and track exercise results, and provide virtual coaching to exercise participants in an easy, intuitive manner.
Accordingly, in a first aspect a device includes at least one computer readable storage medium bearing instructions executable by a processor and at least one processor configured for accessing the computer readable storage medium to execute the instructions. The instructions configure the processor for receiving signals from at least one biometric sensor of an exerciser, and based at least in part on the signals, outputting an audio cue on a speaker indicating to the exerciser to speed up or slow down.
If desired, the biometric sensor may be a heart rate sensor. In such embodiments, the processor when executing the instructions may be configured for determining whether a heart rate of the exerciser as indicated by signals from the heart rate sensor exceeds a threshold. Responsive to a determination that the heart rate exceeds the threshold, the processor may output an audio cue on the speaker to slow down, and responsive to a determination that the heart rate does not exceed the threshold, the processor may not output an audio cue on the speaker to slow down.
Also in some embodiments, the processor when executing the instructions may be configured for determining whether a heart rate of the exerciser as indicated by signals from the heart rate sensor is below a threshold. Responsive to a determination that the heart rate is below the threshold, the processor may output an audio cue on the speaker to speed up, and responsive to a determination that the heart rate exceeds the threshold, the processor may not output an audio cue on the speaker to speed up.
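By way of illustration only, the following minimal Python sketch shows the kind of threshold comparison described above; the threshold values and the cue strings are assumptions for the example and do not represent the actual device firmware.

```python
# Minimal sketch of the heart-rate threshold logic described above.
# The sensor samples and the print-based "speaker" output are stand-ins
# (hypothetical helpers), not an actual device API.

SLOW_DOWN_BPM = 160   # example upper threshold (user- or device-configured)
SPEED_UP_BPM = 110    # example lower threshold

def heart_rate_cue(heart_rate_bpm: int) -> str | None:
    """Return the audio cue to present, if any, for a heart-rate sample."""
    if heart_rate_bpm > SLOW_DOWN_BPM:
        return "slow down"          # e.g. a verbal cue or slower-tempo music
    if heart_rate_bpm < SPEED_UP_BPM:
        return "speed up"           # e.g. a verbal cue or faster-tempo music
    return None                     # within range: no cue is output

# Example: a stream of sensor samples
for sample in (105, 130, 172):
    cue = heart_rate_cue(sample)
    if cue is not None:
        print(f"{sample} bpm -> audio cue: {cue}")
```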
Furthermore, if desired the audio cue may be verbal. Also in some embodiments, the audio cue may include music having a tempo that is increased or decreased, respectively, to indicate to the exerciser to speed up or slow down. Even further, in some embodiments the audio cue may include changing from playing a first music piece to playing a second music piece. Also if desired, in some embodiments the biometric sensor may include an exerciser breath sensor and/or an exerciser stride sensor.
In another aspect, a method includes receiving signals from at least one biometric sensor representing a biometric parameter of an exerciser, and automatically transmitting signals to a speaker to present audible cues to the exerciser based on the signals from the biometric sensor.
In still another aspect, a computer readable storage medium that is not a carrier wave bears instructions which when executed by a processor configure the processor to execute logic including accessing planned physical activity information for a person associated with a CE device including the processor, receiving at least one signal from at least one biometric sensor representing at least one biometric parameter of the person, and determining whether the biometric parameter conforms to at least a portion of the planned physical activity information. The instructions then configure the processor for, responsive to a determination that the biometric parameter does not conform to at least a portion of the planned physical activity information, automatically presenting on the CE device an indication that the person is not in conformance with the planned physical activity information.
The details of the present invention, both as to its structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
This disclosure relates generally to consumer electronics (CE) device based user information. With respect to any computer systems discussed herein, a system herein may include server and client components, connected over a network such that data may be exchanged between the client and server components. The client components may include one or more computing devices including portable televisions (e.g. smart TVs, Internet-enabled TVs), portable computers such as laptops and tablet computers, and other mobile devices including smart phones and additional examples discussed below. These client devices may employ, as non-limiting examples, operating systems from Apple, Google, or Microsoft. A Unix operating system may be used. These operating systems can execute one or more browsers such as a browser made by Microsoft or Google or Mozilla or other browser program that can access web applications hosted by the Internet servers over a network such as the Internet, a local intranet, or a virtual private network.
As used herein, instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware; hence, illustrative components, blocks, modules, circuits, and steps are set forth in terms of their functionality.
A processor may be any conventional general purpose single- or multi-chip processor that can execute logic by means of various lines such as address lines, data lines, and control lines and registers and shift registers. Moreover, any logical blocks, modules, and circuits described herein can be implemented or performed, in addition to a general purpose processor, in or by a digital signal processor (DSP), a field programmable gate array (FPGA) or other programmable logic device such as an application specific integrated circuit (ASIC), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be implemented by a controller or state machine or a combination of computing devices.
Any software modules described by way of flow charts and/or user interfaces herein can include various sub-routines, procedures, etc. It is to be understood that logic divulged as being executed by a module can be redistributed to other software modules and/or combined together in a single module and/or made available in a shareable library.
Logic, when implemented in software, can be written in an appropriate language such as but not limited to C# or C++, and can be stored on or transmitted through a computer-readable storage medium such as a random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disk read-only memory (CD-ROM) or other optical disk storage such as digital versatile disc (DVD), magnetic disk storage or other magnetic storage devices including removable thumb drives, etc. A connection may establish a computer-readable medium. Such connections can include, as examples, hard-wired cables including fiber optics and coaxial wires and digital subscriber line (DSL) and twisted pair wires. Such connections may include wireless communication connections including infrared and radio.
In an example, a processor can access information over its input lines from data storage, such as the computer readable storage medium, and/or the processor accesses information wirelessly from an Internet server by activating a wireless transceiver to send and receive data. Data typically is converted from analog signals to digital and then to binary by circuitry between the antenna and the registers of the processor when being received and from binary to digital to analog when being transmitted. The processor then processes the data through its shift registers to output calculated data on output lines, for presentation of the calculated data on the CE device.
Components included in one embodiment can be used in other embodiments in any appropriate combination. For example, any of the various components described herein and/or depicted in the Figures may be combined, interchanged or excluded from other embodiments.
“A system having at least one of A, B, and C” (likewise “a system having at least one of A, B, or C” and “a system having at least one of A, B, C”) includes systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.
Before describing
Two general types of computer ecosystems exist: vertical and horizontal computer ecosystems. In the vertical approach, virtually all aspects of the ecosystem are associated with the same company (e.g. produced by the same manufacturer), and are specifically designed to seamlessly interact with one another. Horizontal ecosystems, on the other hand, integrate aspects such as hardware and software that are created by differing entities into one unified ecosystem. The horizontal approach allows for a greater variety of input from consumers and manufacturers, increasing the capacity for novel innovations and adaptations to changing demands. But regardless, it is to be understood that some digital ecosystems, including those referenced herein, may embody characteristics of both the horizontal and vertical ecosystems described above.
Accordingly, it is to be further understood that these ecosystems may be used while engaged in physical activity to e.g. provide inspiration, goal fulfillment and/or achievement, automated coaching/training, health and exercise analysis, convenient access to data, group sharing (e.g. of fitness data), and increased accuracy of health monitoring, all while doing so in a stylish and entertaining manner. Further still, the devices disclosed herein are understood to be capable of making diagnostic determinations based on data from various sensors (such as those described below in reference to
Thus, it is to be understood that the CE devices described herein may allow for easy and simplified user interaction with the device so as to not be unduly bothersome or encumbering e.g. before, during, and after an exercise.
It is to also be understood that the CE device processors described herein can access information over their input lines from data storage, such as the computer readable storage medium, and/or the processor(s) may access information wirelessly from an Internet server by activating a wireless transceiver to send and receive data. Data typically is converted from analog signals to digital and then to binary by circuitry between the antenna and the registers of the processor when being received, and from binary to digital to analog when being transmitted. The processor then processes the data through its shift registers according to algorithms such as those described herein to output calculated data on output lines, for presentation of the calculated data on the CE device.
Now specifically referring to
Accordingly, to undertake such principles the CE device 12 can include some or all of the components shown in
In addition to the foregoing, the CE device 12 may also include one or more input ports 26 such as, e.g., a USB port to physically connect (e.g. using a wired connection) to another CE device and/or a headphone port to connect headphones to the CE device 12 for presentation of audio from the CE device 12 to a user through the headphones. The CE device 12 may further include one or more tangible computer readable storage medium 28 such as disk-based or solid state storage, it being understood that the computer readable storage medium 28 may not be a carrier wave. Also in some embodiments, the CE device 12 can include a position or location receiver such as but not limited to a GPS receiver and/or altimeter 30 that is configured to e.g. receive geographic position information from at least one satellite and provide the information to the processor 24 and/or determine an altitude at which the CE device 12 is disposed in conjunction with the processor 24. However, it is to be understood that another suitable position receiver other than a GPS receiver and/or altimeter may be used in accordance with present principles to e.g. determine the location of the CE device 12 in e.g. all three dimensions.
Continuing the description of the CE device 12, in some embodiments the CE device 12 may include one or more cameras 32 that may be, e.g., a thermal imaging camera, a digital camera such as a webcam, and/or a camera integrated into the CE device 12 and controllable by the processor 24 to gather pictures/images and/or video in accordance with present principles (e.g. to share aspects of a physical activity such as hiking with social networking friends). Also included on the CE device 12 may be a Bluetooth transceiver 34 and other Near Field Communication (NFC) element 36 for communication with other devices using Bluetooth and/or NFC technology, respectively. An example NFC element can be a radio frequency identification (RFID) element.
Further still, the CE device 12 may include one or more motion sensors 37 (e.g., an accelerometer, gyroscope, cyclometer, magnetic sensor, infrared (IR) motion sensors such as passive IR sensors, an optical sensor, a speed and/or cadence sensor, a gesture sensor (e.g. for sensing gesture command), etc.) providing input to the processor 24. The CE device 12 may include still other sensors such as e.g. one or more climate sensors 38 (e.g. barometers, humidity sensors, wind sensors, light sensors, temperature sensors, etc.) and/or one or more biometric sensors 40 (e.g. heart rate sensors and/or heart monitors, calorie counters, blood pressure sensors, perspiration sensors, odor and/or scent detectors, fingerprint sensors, facial recognition sensors, iris and/or retina detectors, DNA sensors, oxygen sensors (e.g. blood oxygen sensors and/or VO2 max sensors), glucose and/or blood sugar sensors, sleep sensors (e.g. a sleep tracker), pedometers and/or speed sensors, body temperature sensors, nutrient and metabolic rate sensors, voice sensors, lung input/output and other cardiovascular sensors, etc.) also providing input to the processor 24. In addition to the foregoing, it is noted that in some embodiments the CE device 12 may also include a kinetic energy harvester 42 to e.g. charge a battery (not shown) powering the CE device 12.
Still referring to
Thus, for instance, the headphones/ear buds 46 may include a heart rate sensor configured to sense a person's heart rate when a person is wearing the headphones, the clothing 48 may include sensors such as perspiration sensors, climate sensors, and heart sensors for measuring the intensity of a person's workout, the exercise machine 50 may include a camera mounted on a portion thereof for gathering facial images of a user so that the machine 50 may thereby determine whether a particular facial expression is indicative of a user struggling to keep the pace set by the exercise machine 50 and/or an NFC element to e.g. pair the machine 50 with the CE device 12 and hence access a database of preset workout routines, and the kiosk 52 may include an NFC element permitting entry to a person authenticated as being authorized for entry based on input received from a complementary NFC element (such as e.g. the NFC element 36 on the device 12). Also note that all of the devices described in reference to
Now in reference to the afore-mentioned at least one server 54, it includes at least one processor 56, at least one tangible computer readable storage medium 58 that may not be a carrier wave such as disk-based or solid state storage, and at least one network interface 60 that, under control of the processor 56, allows for communication with the other CE devices of
Accordingly, in some embodiments the server 54 may be an Internet server, may facilitate fitness coordination and/or data exchange between CE devices in accordance with present principles, and may include and perform “cloud” functions such that the CE devices of the system 10 may access a “cloud” environment via the server 54 in example embodiments to e.g. stream music to listen to while exercising and/or pair two or more devices (e.g. to “throw” music from one device to another).
Turning now to
In any case, after block 70 the logic proceeds to block 72 where the logic determines music (e.g. one or more music files stored on and/or accessible to the CE device) to match at least the (e.g. estimated or user-indicated/desired) tempo and/or cadence of at least the first segment of the user's exercise routine/information (e.g. using the example above, at least selects music matching a tempo for the user to bicycle at a moderately fast pace to begin the routine). Note that the tempo to music matching may be e.g. initially based on an estimate by the CE device of a tempo/cadence the user should maintain to comport with the exercise information (e.g., a certain tempo for pedaling the exercise bicycle to maintain the desired speed). As another example, the tempo to music matching may be estimated at first and then later adjusted to match the actual cadence of the user after the beginning of the workout. As such, e.g. the first song before a user takes his or her first step on a jog may contain a tempo that is estimated to be the pace the user will set and/or should maintain, and thereafter the next song's tempo may be matched to the actual pace of the user. For instance and in terms of matching music to a user's actual pace, if the user is exercising at one hundred fifty strides per minute, a piece of music may be presented that includes one hundred fifty beats per minute for the user to thereby set his or her pace by moving one stride for every musical beat.
In addition, note that the tempo of the music itself may be determined by accessing metadata associated with the respective music file that contains tempo information (e.g., in beats per minute). As another example, the CE device may parse or otherwise access the music file to identify a tempo (e.g. identify a beat based on a repeated snare drum sound, inflections in a singer's voice, the changing of guitar chords, etc.), and then use the identified music tempo if it matches the user's pace/cadence (e.g. as closely as possible, accounting for minor variances in the user's cadence as may naturally occur from step to step on a jog, or revolution to revolution on an exercise bicycle). Thus, it may be appreciated that e.g. at a time prior to receiving exercise information at block 70, the CE device may access all music files that are accessible to it (or e.g. a subset of files based on genre, artist, song length, etc.) to determine the beats per minute of each one, and then create a data table and/or metadata for later access by the CE device for efficiently identifying music with a tempo that matches the user's cadence at a given moment during an exercise routine without e.g. having to at that time parse the user's entire music library for music matching the user's cadence.
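As a non-limiting illustration of the pre-indexing described above, the following Python sketch builds a track-to-tempo table once and then picks the track whose beats per minute most closely matches the exerciser's cadence; the track names, BPM values, and the bpm_of() helper are hypothetical stand-ins for reading tempo metadata or analyzing the audio.

```python
# Minimal sketch of pre-indexing a music library by tempo and then selecting
# the track whose beats per minute most closely matches the user's cadence.

def bpm_of(track: str) -> int:
    # Placeholder: in practice this would read tempo metadata (beats per
    # minute) or estimate a beat from the audio itself.
    catalog = {"warmup_song.mp3": 120, "steady_song.mp3": 150, "sprint_song.mp3": 175}
    return catalog[track]

def build_tempo_index(library: list[str]) -> dict[str, int]:
    """Build the data table of track -> BPM once, before the workout."""
    return {track: bpm_of(track) for track in library}

def pick_track(index: dict[str, int], cadence_spm: int) -> str:
    """Pick the track whose tempo is closest to the cadence in strides/minute."""
    return min(index, key=lambda t: abs(index[t] - cadence_spm))

index = build_tempo_index(["warmup_song.mp3", "steady_song.mp3", "sprint_song.mp3"])
print(pick_track(index, 150))   # -> steady_song.mp3 (150 bpm matches 150 strides/min)
```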
Still in reference to
Regardless, if the logic determines that a turn is not upcoming (e.g. not within a predefined threshold distance for turns set by a user prior to embarking on the exercise), the logic proceeds to block 77 where the logic continues monitoring the user's exercise and continues presenting music matched to the user's cadence in accordance with present principles. If, however, the logic determines that a turn is upcoming, the logic instead proceeds to block 78 where the logic notifies and/or cues the user of how to proceed using at least one non-verbal audio cue.
For instance, a single beeping sound may be associated with a left turn (e.g. the user has preset the single beep to be associated with a left turn) while a double beeping sound may be associated with a right turn (e.g., the user having preset the double beep as well). In addition to or in lieu of the foregoing, should the user be wearing headphones such as the ones described above, the non-verbal cue may be presented in the left ear piece (only, or more prominently/loudly) to indicate a left turn should be made, and the right ear piece (only, or more prominently/loudly) to indicate a right turn should be made. In addition to or in lieu of the foregoing, other non-verbal cues that may be presented to a user e.g. in ear pieces in accordance with present principles are haptic non-verbal cues and/or vibrations such that e.g. a non-verbal vibration cue (e.g. the ear piece(s) vibrates based on a vibrator located in each respective ear piece that is in communication with the CE device's processor) may be presented on the left ear piece (only, or more prominently) to indicate a left turn should be made, and the right ear piece (only, or more prominently) to indicate a right turn should be made.
Also in addition to or in lieu of the foregoing, if desired the non-verbal audio cue may be accompanied (e.g. immediately before or after the non-verbal audio cue) by a verbal cue such as an instruction to “turn left at the next street.” Also note that the non-verbal audio cue need not be a single or double beep and that other non-verbal audio cues may be used that themselves indicate detailed information such as e.g. using an audible representation of Morse code to provide turn information to a user.
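For illustration, the following Python sketch maps an upcoming turn to a user-configured beep count routed to the earpiece on the side of the turn, one possible realization of the non-verbal cues described above; the beep counts, threshold distance, and play_beeps() helper are assumptions, not an actual audio API.

```python
# Minimal sketch of mapping an upcoming turn to a non-verbal cue: a
# user-configured beep count, routed to the earpiece on the side of the turn.

TURN_BEEPS = {"left": 1, "right": 2}   # e.g. single beep = left, double beep = right

def play_beeps(count: int, earpiece: str) -> None:
    print(f"{count} beep(s) in {earpiece} earpiece")   # stand-in for audio output

def cue_turn(direction: str, distance_m: float, threshold_m: float = 50.0) -> None:
    """Present the non-verbal turn cue once the turn is within the threshold."""
    if distance_m <= threshold_m:
        play_beeps(TURN_BEEPS[direction], earpiece=direction)

cue_turn("left", distance_m=40.0)    # -> 1 beep(s) in left earpiece
cue_turn("right", distance_m=120.0)  # too far away; no cue yet
```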
After block 78, the logic proceeds to block 80 where the logic determines that another segment of the planned exercise/route has begun, and accordingly presents music matching the tempo/cadence of the user as he or she embarks on the next segment (e.g. actual cadence, or desired cadence based on exercise information determined by the user prior to embarking on the run). As an example, the logic may determine at block 80 that the user has transitioned from running on flat ground to running up a hill, and accordingly presents music with a slower tempo relative to the music presented while the user was on flat ground (e.g. and also based upon segment settings set by a user where the user indicated that a slower pace up the hill was desired relative to the user's pace on flat ground). Conversely, if the user wished to “push it” up the hill, music may be presented with a faster tempo than that presented when the user was on flat ground, thereby assisting the user with matching a running cadence to the music tempo to thus proceed up the hill at a pace desired by the user (e.g. also based on predefined settings by the user).
In any case, after block 80 the logic proceeds to decision diamond 82, at which the logic determines whether a virtual opponent, if the user manipulated the CE device to present a representation of one while proceeding on the exercise, is approaching or moving away from the user. For instance, the user may set settings for a virtual opponent that represents the user's minimum preferred average pace or speed at which to exercise, and thus can determine based on the virtual opponent representation whether the user's actual pace has slowed below the minimum average pace based on a non-verbal audio cue including an up Doppler effect (e.g. sound frequency increasing) thereby indicating that the virtual opponent is approaching. Accordingly, the user can also determine that the virtual opponent is receding (e.g. that the “virtual” distance separating the user and the virtual opponent is becoming larger) based on a non-verbal audio cue including a down Doppler effect (e.g. sound frequency decreasing). Furthermore, as the effect changes from increasing to decreasing and vice versa, the Doppler effect sound may move from one earpiece of a headphone set to another (e.g. be presented more prominently in one ear piece, then fade in that ear piece and be increasingly more prominently presented in the other ear piece) to further signify the position of the virtual opponent. Also note that present principles recognize that such non-verbal Doppler cues need not be presented constantly during the exercise to indicate to the user where the virtual opponent is relative to the user, and may e.g. only be presented to the user responsive to a determination that the virtual opponent is within a threshold distance of the user (e.g. as set by the user prior to embarking on the exercise routine).
Still in reference to decision diamond 82, if the logic determines that a virtual opponent is not approaching or moving away from the user (e.g., the pace of the user and the “virtual” pace of the virtual opponent are identical or nearly identical, and/or the virtual opponent is not within a threshold distance to present any indication to the user of the location of the virtual opponent), the logic may revert back to decision diamond 76 and continue from there. If, however, the logic determines that a virtual opponent is approaching or moving away from the user in accordance with present principles, the logic moves to block 84 where at least one non-verbal audio cue that the virtual opponent is approaching or moving away from the user is presented on the CE device. Thereafter, the logic may revert from block 84 to decision diamond 76 and proceed from there.
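As one possible illustration of the Doppler-style cue described above, the following Python sketch shifts a base tone up when the virtual opponent is approaching and down when it is receding, and suppresses the cue beyond a threshold distance; the base frequency, shift factors, and threshold are illustrative assumptions.

```python
# Minimal sketch of the virtual-opponent cue: when the opponent is within a
# threshold distance, shift the cue tone up (opponent approaching) or down
# (opponent receding); otherwise present no cue at all.

BASE_FREQ_HZ = 440.0

def opponent_cue(gap_m: float, closing_rate_mps: float, threshold_m: float = 100.0):
    """Return the cue tone frequency in Hz, or None if no cue should play."""
    if abs(gap_m) > threshold_m or closing_rate_mps == 0:
        return None                      # too far away, or paces identical: no cue
    if closing_rate_mps > 0:             # gap shrinking: opponent approaching
        return BASE_FREQ_HZ * 1.1        # shift the tone up ("up Doppler")
    return BASE_FREQ_HZ * 0.9            # gap growing: opponent receding ("down Doppler")

print(opponent_cue(gap_m=30.0, closing_rate_mps=0.5))    # 484.0 (approaching)
print(opponent_cue(gap_m=250.0, closing_rate_mps=-0.2))  # None (out of range)
```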
Before moving on to
Also before moving on to
Continuing the detailed description in reference to
After block 96, the logic proceeds to decision diamond 98 where the logic determines whether the user's cadence has changed (e.g. actual cadence, and/or estimated based on the transition from one exercise segment to another based on time and/or location such as beginning to proceed up a hill). If the logic determines at diamond 98 that the user's cadence has not changed, the logic proceeds to block 100 where the logic continues presenting music from the playlist of music of the same tempo or substantially similar tempo. If, however, the logic determines at diamond 98 that the user's cadence has changed, the logic instead proceeds to decision diamond 102 where the logic determines whether a biometric parameter of a user has exceeded a threshold, or is below a threshold, depending on the particular parameter, acceptable health ranges, user settings, etc. For instance, if the user's heart rate exceeds a heart rate threshold, that could be detrimental to the user's heart and the user may thus wish to be provided with a notification in such a case. As another example where a notification may be appropriate, if the user's core body temperature exceeds a temperature threshold (e.g. the user is too hot) or even falls beneath a threshold (e.g. the user is too cold), that could be detrimental to the user's brain and thus a notification of the user's temperature would be beneficial.
In any case, should the logic determine that at least one biometric parameter does not exceed a threshold or is not below another threshold (e.g. the biometric parameter is within an acceptable range, healthy range, and/or user-desired range as input to the CE device by the user), the logic proceeds to block 100 and may subsequently proceed from there. If, however, the logic determines that a threshold has been breached, the logic instead moves to block 104 where the logic instructs the user to speed up the user's cadence/pace and/or slow down as may be appropriate depending on the biometric parameter to be brought within an acceptable range. Also note that at block 104, should the biometric parameter be dangerous to the user's health (e.g. based on a data table correlating as much), the logic may instead instruct the user to stop exercising completely and/or automatically without user input provide a notification to an emergency service along with location coordinates from a GPS receiver on the CE device.
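The following minimal Python sketch illustrates one way the block 104 handling described above might look: a breached threshold yields a speed-up or slow-down instruction, and a reading deemed dangerous yields a stop instruction plus an emergency notification with location coordinates. All limits and the notify_emergency() helper are illustrative assumptions, not medical guidance or an actual emergency-service interface.

```python
# Minimal sketch of the block 104 handling: instruct the user based on where a
# heart-rate reading falls, and escalate when the reading is deemed dangerous.

SAFE_RANGE_BPM = (110, 160)     # acceptable heart-rate range for the workout
DANGER_BPM = 190                # illustrative "dangerous" ceiling

def notify_emergency(coords: tuple[float, float]) -> None:
    print(f"emergency notification sent with location {coords}")   # stand-in only

def handle_heart_rate(bpm: int, coords: tuple[float, float]) -> str:
    """Return the instruction to present for one heart-rate sample."""
    if bpm >= DANGER_BPM:
        notify_emergency(coords)
        return "stop exercising"
    low, high = SAFE_RANGE_BPM
    if bpm > high:
        return "slow down"
    if bpm < low:
        return "speed up"
    return "keep going"

print(handle_heart_rate(150, coords=(35.68, 139.69)))   # keep going
print(handle_heart_rate(195, coords=(35.68, 139.69)))   # stop exercising
```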
Regardless, after block 104 the logic proceeds to block 106 where the logic changes or alters the playlist (and may even entirely replace the previous playlist) to include music with a tempo selected to bring the user's biometric parameter within an acceptable range. For example, if the logic determines that a biometric parameter exceeds a threshold, and thus that a user needs to slow down, the playlist may be altered to present (e.g., from that point on) music with a slower tempo than was previously presented. Then after block 106 the logic may revert back to decision diamond 98 and proceed again from there. For completeness before moving on to
Now in reference to
In any case, if the logic determines at diamond 114 that the first time has not expired, the logic proceeds to block 116 where the logic continues presenting music at the same tempo as prior to the determination. If, however, the logic determines at diamond 114 that the first time has expired, the logic instead proceeds to block 118 where the logic presents music with a second tempo (e.g. a second beat speed different than the first) for a second time to match a user's actual and/or desired cadence for the second time in accordance with present principles. The logic then proceeds to decision diamond 120 where the logic determines whether the second time, during which the user was to exercise at the second tempo, has expired. If the logic determines at diamond 120 that the second time has not expired, the logic may proceed to block 116. If, however, the logic determines at diamond 120 that the second time has expired, the logic instead proceeds to block 122 where the logic presents music with a third tempo (e.g. a third beat speed different than the first and second beat speeds, or just different than the second beat speed) for a third time to match a user's actual and/or desired cadence for the third time in accordance with present principles.
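For illustration, the following Python sketch encodes the timed intervals described above as a schedule of (duration, tempo) pairs and returns the target tempo for the elapsed workout time; the durations and tempos are illustrative assumptions.

```python
# Minimal sketch of the timed-interval flow: play music at a first tempo until
# the first time expires, then at a second tempo, then at a third.

interval_schedule = [
    (10 * 60, 130),   # first time: 10 minutes at 130 bpm
    (20 * 60, 160),   # second time: 20 minutes at 160 bpm
    (5 * 60, 120),    # third time: 5 minutes at 120 bpm (cool-down)
]

def tempo_for_elapsed(elapsed_s: int) -> int | None:
    """Return the target music tempo for the elapsed workout time, if any."""
    boundary = 0
    for duration_s, tempo in interval_schedule:
        boundary += duration_s
        if elapsed_s < boundary:
            return tempo
    return None            # all intervals expired; the workout is finished

print(tempo_for_elapsed(5 * 60))    # 130 (still in the first interval)
print(tempo_for_elapsed(15 * 60))   # 160 (second interval)
```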
Continuing the detailed description in reference to
In any case, after block 134 the logic proceeds to block 136 where the logic provides (e.g., streams) the music to the CE device, along with providing any corresponding purchase information for music files being provided that e.g. the user does not already own and/or that are not in the user's cloud storage (e.g. based on determinations that the user does not own the music, such as by searching the user's storage areas for the piece of music), such as music provided using an Internet radio service. The logic then proceeds to decision diamond 138 where the logic determines whether input has been received that was input at the CE device and transmitted to the server that indicates one or more music files have been designated (e.g., “bookmarked” by manipulating a user interface on the CE device and/or providing an audible command thereto) for purchase by the user. For instance, the user may want to designate a song for later purchasing so the user does not forget the details of the song he or she wished to purchase and hence cannot locate it later, but at the same time does not wish to complete all necessary purchase steps while still exercising, such as e.g. providing credit card information.
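By way of illustration of the “bookmark” designation described above, the following Python sketch records a track flagged mid-workout so it can be retrieved and purchased afterwards; the in-memory dictionary stands in for the server's storage and is purely hypothetical.

```python
# Minimal sketch of storing a "bookmark" designation received from the CE
# device so the user can complete the purchase after the workout.

purchase_bookmarks: dict[str, list[str]] = {}   # user -> tracks flagged to buy

def bookmark_for_purchase(user: str, track: str) -> None:
    """Record a track the user designated mid-workout for later purchase."""
    purchase_bookmarks.setdefault(user, []).append(track)

def bookmarks_for(user: str) -> list[str]:
    """Return the tracks the user can review and purchase after exercising."""
    return purchase_bookmarks.get(user, [])

bookmark_for_purchase("runner01", "steady_song.mp3")
print(bookmarks_for("runner01"))   # ['steady_song.mp3']
```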
If the logic determines at decision diamond 138 that no input has been received to designate one or more music files for later purchasing, the logic proceeds to block 140 where the logic stores data indicating the music files provided to the CE device so that the same music files may be presented again at a later time should the user elect to do so by manipulating the user's CE device. Also at block 140 the logic may store any and/or all biometric information it has received from the CE device (e.g. for access by the user's physician to determine the user's health status or simply to maintain biometric records in the user's cloud storage). Referencing decision diamond 138 again, if the logic determines thereat that input has been received to designate one or more music files for later purchasing, the logic moves to block 142 where it stores data indicating as much for later access by the user to use for purchasing the music (e.g. creates a “bookmark” file indicating the music files designated for purchase). Concluding the description of
Continuing the detailed description in reference to
In addition to the foregoing, the UI 150 may include a non-verbal cue section 160. The section 160 may include left and right turn settings 162, 164, with respective input fields 166, 168 for inputting a user-specified number of beeps (e.g. relatively high-pitched sounds separated by periods of no sound) that are to be provided to the user while proceeding on an exercise route to instruct the user where to turn in accordance with present principles. Also note that the settings 162, 164 include respective selector elements 170, 172 that are selectable to cause another UI and/or a window overlay to be presented for selecting from other available sounds other than the “beeps” that may be used to indicate turns, and indeed it is to be understood that different sounds may be used to indicate turns in addition to or in lieu of differing sound sequences.
The UI 150 also includes a setting 174 for a user to provide input using the yes or no selectors 176 regarding whether e.g. non-verbal turn cues should be presented in only the ear piece corresponding to the direction of the turn. For instance, a right turn non-verbal cue would only be presented in the right earpiece, whereas a left turn non-verbal cue would only be presented in the left earpiece of headphones. A race virtual opponent setting 178 may also be included in the UI 150 and includes yes and no selector elements 180 for a user to provide input on whether the user wishes to have virtual opponent data (e.g. indications of the location of the virtual opponent represented as non-verbal audio Doppler cues) presented on the CE device in accordance with present principles. Last, note that a submit selector 182 may be presented for selection by a user for causing the CE device to be configured according to the user's selections as input using the UI 150.
Turning now to
In addition to the foregoing, the UI 190 may also include an exercise machine configuration setting 204 for providing input to the CE device for whether the CE device is to change exercise machine configurations for an exercise machine (e.g. increasing or decreasing resistance, speed, incline or decline, etc.) being used by the user and in communication with the CE device (e.g., using NFC, Bluetooth, a wireless network, etc.) based on the user's biometrics and even e.g. user-defined settings for targeted and/or desired biometrics for particular exercises and/or user-defined settings for safe ranges of biometrics. For example, if the user indicated that he or she wished their heart rate to average a particular number of beats per minute, the CE device may configure the exercise machine to increase or decrease its e.g. speed or resistance to bring the user's actual heart rate into conformance with the desired heart rate input by the user to the CE device. Thus, the setting 204 includes yes and no selector elements 206 for providing input to the CE device to command the CE device to change exercise machine configurations accordingly or not, respectively. Also note that the UI 190 also includes a select machine selector element 208 for selecting an exercise machine to be communicatively connected to and configured by the CE device (e.g. by presenting another UI or overlay window for machine selection) and also a pair using NFC selector element 210 that is selectable to configure the CE device to communicate with the exercise machine automatically upon close juxtaposition of the two (e.g. juxtaposition of respective NFC elements) to exchange information for the CE device to command and/or configure the exercise machine in accordance with present principles.
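As a non-limiting illustration of the setting 204 behavior described above, the following Python sketch nudges a connected exercise machine's resistance until the user's heart rate approaches the target average; the target, tolerance, and step values are illustrative assumptions, and a real device would send the resulting setting to the machine over NFC, Bluetooth, or a network.

```python
# Minimal sketch of one control cycle for bringing the user's heart rate into
# conformance with a target by raising or lowering machine resistance.

def adjust_machine(resistance: int, heart_rate: int, target: int,
                   tolerance: int = 5, step: int = 1) -> int:
    """Return the new resistance level for one control cycle."""
    if heart_rate < target - tolerance:
        return resistance + step           # work the user harder to raise heart rate
    if heart_rate > target + tolerance:
        return max(0, resistance - step)   # back off to lower heart rate
    return resistance                      # within tolerance: leave the machine alone

level = 5
for hr in (118, 122, 141):                 # example heart-rate samples, target 135 bpm
    level = adjust_machine(level, hr, target=135)
print(level)                               # resistance after three control cycles
```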
Moving from
It is to be understood that still other settings may be configured using the UI 220, such as a setting 242 for matching music using the likes and/or preferences of social networking friends, and accordingly includes respective yes and no selector elements 244 for providing input to the CE device for whether to match music to be presented with one or more biometric parameters based on likes from the user's social networking friends. Note that e.g. the CE device may be configured to access one or more of the user's social networking services (e.g. based on username and password information provided by the user), to parse data in the social networking service, and make correlations between social networking posts and e.g. track names (e.g. from a database of track names) for musical tracks to thereby identify music that is “trending” or otherwise “liked” by the user's friends. Still another setting 246 may be presented for matching music in accordance with present principles by using music that is currently popular based on e.g. Billboard ratings, total sales on an online music providing service, currently trending even if on a social networking site of which the user is not a member, etc., and accordingly includes yes and no selectors 248 for providing input to the CE device for whether to match music in accordance with present principles using currently popular music. The UI 220 may also include a cloud storage setting 250 with a cloud selector element 252 and a local storage selector element 254 that are both selectable by the user to provide input to the CE device for different storage locations from which the CE device may gather and/or stream music to be presented in accordance with present principles. Thus, selecting the selector element 252 configures the CE device to gather music from the user's cloud storage account, and selecting the selector element 254 configures the CE device to gather music from the CE device's local storage area, and indeed either or both of the selector elements 252, 254 may be selected. The UI 220 may include still another setting 256 with yes and no selectors 258 for providing input to the CE device on whether to instruct a server to insert recommended music into a playlist and/or sequence of music to be presented during the exercise routine, including e.g. Internet radio music, sponsored music, music determined by the processor as being potentially likeable by the user (e.g. based on genre indications input by the user, similar music already owned by the user, etc.), music not owned by the user but nonetheless comporting with one or more other settings of the UI 220 (such as being from a genre from which the user desires music to be presented), etc.
Still in reference to
Still in reference to the UI 220, a skipping music setting 268 is shown for skipping a piece of music the user does not like (e.g. if recommended to the user during an exercise routine). Thus, a gesture selector element 270 and an audible selector element 272 are both selectable for configuring the CE device to skip a piece of music being presented responsive to receiving a (e.g. predefined) gesture or audible command, respectively, indicating as much. Note further that each of the selector elements 270, 272 may be selectable to configure the CE device to present another UI and/or an overlaid UI for a user to specify one or more particular gestures and/or audible commands that are to be associated by the CE device as being a command(s) to skip a piece of music in accordance with present principles.
Concluding the description of
Now in reference to
In addition to the foregoing, the UI 280 also includes a biometric parameter section 284 for presenting one or more pieces of information related to the user's biometric parameters as detected by one or more biometric sensors such as those described above in reference to
Furthermore, the UI 280 may include a prompt 286 for a user to provide input using yes and no selectors 288 while a piece of music is being currently presented during the exercise routine to easily bookmark the piece of music for later purchasing (e.g., one touch bookmarking). The UI 280 includes a second prompt 290 for a user to provide input using yes and no selectors 292 while a piece of music is being currently presented during the exercise routine to automatically without further user input store the particular piece of music in the user's cloud storage once purchased or if purchasing is not necessary. Last, an option 294 is presented on the UI 280 for whether to change exercise machine configurations manually using yes and no selectors 296, and thus e.g. selection of the yes selector from the selectors 296 may cause another UI to be presented and/or overlaid that includes exercise machine settings configurable by a user to configure the exercise machine. This may be desirable when e.g. the CE device automatically configures the exercise machine according to one or more biometric parameters in accordance with present principles but the user nonetheless wishes to manually override the automatic configuration.
Moving on in the detailed description with reference to
With no particular reference to any figure, it is to be understood that in accordance with present principles, the CE devices disclosed herein may be configured in still other ways to match music with one or more biometric parameters. For instance, when determining whether a biometric parameter conforms to at least a portion of planned physical activity information, such determining may be executed e.g. periodically at a predefined periodic interval, where responsive to the determination that the biometric parameter does not conform to at least a portion of planned physical activity information, the CE device may automatically present an audio indication in accordance with present principles by altering the time scale of a music file being presented on the CE device. E.g., rather than presenting an entirely different piece of music to the user, the CE device may digitally stretch or compress the currently presented music file to thereby adjust the beats per minute as presented to the user in real time. Thus, time stretching of the music file may be undertaken by the CE device, as may resampling of the music file to change the duration and hence beats per minute.
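For illustration, the following Python sketch adjusts a track's effective tempo by resampling its samples with linear interpolation, one simple form of the time scaling described above; note that plain resampling also shifts pitch, whereas a true time-stretch algorithm would preserve it. The sample rate, test tone, and BPM values are illustrative assumptions.

```python
# Minimal sketch of changing a track's effective beats per minute by
# resampling its samples with linear interpolation.

import numpy as np

def resample_to_tempo(samples: np.ndarray, track_bpm: float, target_bpm: float) -> np.ndarray:
    """Return samples resampled so the track plays at target_bpm."""
    new_len = int(len(samples) * track_bpm / target_bpm)   # fewer samples = faster playback
    old_idx = np.linspace(0, len(samples) - 1, num=new_len)
    return np.interp(old_idx, np.arange(len(samples)), samples)

one_second = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 44100))   # 220 Hz test tone at 44.1 kHz
faster = resample_to_tempo(one_second, track_bpm=140, target_bpm=150)
print(len(one_second), len(faster))   # 44100 -> 41160 samples (shorter, hence faster)
```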
In reference to the automated and/or virtual coaching discussed herein, it is to be understood that the CE device may present such information when the user configures settings for it to do so (e.g. using a UI such as the ones described above). Virtual coaching may include notifying a user when the user is transitioning from one exercise segment to another (e.g. based on GPS data accessible to the CE device while on an exercise route). For instance, the virtual coach may indicate, “You are starting to proceed up a hill, which is segment three of your exercise.” Other instructions that may be provided by a virtual coach include, e.g., at the beginning of an exercise routine, “Starting your ride now,” and “At the fork in the road ahead, turn right.” Also at the beginning of the workout and assuming the user has not already provided input to the CE device instructing the CE device to present a virtual opponent in accordance with present principles, the CE device may provide an audio prompt at the beginning of the exercise routine asking whether the user wishes to race a virtual opponent (e.g., “Would you like to race against a virtual opponent?”), to which e.g. the user may audibly respond in the affirmative as recognized by the CE device processor using natural language voice recognition principles.
As other examples of indications that may be made by a “virtual coach” using the CE device, the CE device may indicate after conclusion of an exercise routine how much time, distance, and/or speed by which the user beat the virtual opponent. Also after conclusion of the routine, the CE device may e.g. audibly (and/or visually) provide statistics to the user such as the user's biometric readings, the total time to completion of the exercise routine, the distance traveled, etc. Even further, the CE device may just before conclusion of the exercise routine provide an audible indication that the routine is almost at conclusion by indicating a temporal countdown until finish such as, “Four, three, two, one . . . finished!”
Referring specifically to gestures in free space that are recognizable by the CE device as commands to the CE device in accordance with present principles, note that not only may a user e.g. skip a song or request a song with a faster or slower pace based on gestures in free space detected by a motion/gesture detector communicating with the CE device, but the user may also e.g. pause a song if the user temporarily stops an exercise. For instance, if while proceeding on an exercise route the user happens upon a friend also walking therealong, the user may provide a gesture in free space predefined at the CE device as being a command to stop presenting music (and/or tracking biometric data) until another gesture command is received to resume presentation of the music.
Now in reference to the music, music files, songs, etc. described herein, present principles recognize that although much of the present specification has been directed specifically to music-related files, present principles may apply equally to any type of audio file and even e.g. audio video files as well (e.g., presenting just the audio from an audio video file or presenting both audio and video). Furthermore and in the context of a music file, the metadata for music files described herein may include not only beats per minute and music genre but still other information as well such as e.g., the lyrics to the song.
Present principles also recognize that although much of the specification has been directed specifically to exercise routines, present principles may apply not only to exercising but also e.g. to sitting down at a desk, where the CE device can detect, e.g. using a brain activity monitor and blood pressure monitor, that a user is stressed and thus suggest and/or automatically present calming music to the user.
Notwithstanding, present principles as applied to exercising recognize that the following are exemplary audible and/or visual outputs by the CE device in accordance with present principles:
“Different song to get going?”, which may be presented responsive to a determination that the user is not keeping up a pace input by the user as being the desired pace.
“You are slowing down, want a different song?”, which may be presented responsive to a determination that the user is beginning to slow down his or her pace (e.g. gradually but falling outside the predefined desired pace).
“Run until end of song,” which may be presented responsive to a determination that the user is about to come to the end of an exercise segment or the exercise routine in totality, and hence the end of the current song signifies the end of the segment and/or routine.
“Increase activity for next minute,” which may be presented responsive to a determination that the user needs to exercise faster for the next minute to comport with e.g. a predefined exercise goal. Such CE device feedback may also be provided e.g. for the user to gradually increase their tempo/cadence as a workout progresses from a lower intensity segment to a higher intensity segment.
“Your heart rate is one hundred two beats per minute,” which may be presented responsive to a determination that a user has input a command during an exercise routine requesting biometric information for heart rate.
Present principles also recognize that more than one CE device may provide e.g. non-verbal audio cues to set a pace/cadence for respective users exercising together. For example, two or more people may wish to exercise together but do not wish to listen to the same music. The users' CE devices may communicate with each other and, e.g. based on predefined cadence/tempo metadata that is exchanged therebetween (e.g. based on a desired cadence indicated by a user prior to the workout routine), different songs with the same beats per minute matching the predefined cadence may be presented on each respective CE device so that the users may establish the same pace albeit with different music.
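As a non-limiting illustration, the following Python sketch shows two paired CE devices agreeing on one cadence and each selecting its own track at that tempo, so both users keep the same pace while hearing different music; the libraries and BPM values are hypothetical.

```python
# Minimal sketch of two paired devices sharing one cadence but each picking
# its own music whose tempo matches that cadence.

def pick_at_tempo(library: dict[str, int], cadence_spm: int) -> str:
    """Pick this user's track whose BPM is closest to the shared cadence."""
    return min(library, key=lambda t: abs(library[t] - cadence_spm))

shared_cadence = 150   # exchanged between the devices before the workout

alice_library = {"a_rock.mp3": 148, "a_pop.mp3": 170}
bob_library = {"b_jazz.mp3": 152, "b_edm.mp3": 128}

print(pick_at_tempo(alice_library, shared_cadence))  # a_rock.mp3 (148 bpm)
print(pick_at_tempo(bob_library, shared_cadence))    # b_jazz.mp3 (152 bpm)
```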
Moving on, it is to be understood that e.g. after conclusion of an exercise routine, the user may not only share the user's exercise routine over a social networking service but may also e.g. provide the exercise data to a personal trainer's CE device (e.g. using a commonly-used fitness application) so that the personal trainer may evaluate the user and view exercise results, biometric information, etc.
Describing changes in cadence/tempo of a user, it is to be understood that should the user break stride, the CE device, although detecting as much, may not automatically change songs to match the new cadence but in some implementations may e.g. wait for the expiration of a threshold time during which the user runs at the new cadence, thereby not changing songs every time the user accidentally breaks pace and instead changing songs once the user has intentionally established a new pace.
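The following Python sketch illustrates one way such a threshold might be applied: a new cadence is only treated as intentional once it has been held for a hold time, so an accidental broken stride does not trigger a song change. The hold time and tolerance are illustrative assumptions.

```python
# Minimal sketch of debouncing cadence changes so songs are only switched once
# the user has intentionally held a new pace.

class CadenceDebouncer:
    def __init__(self, hold_s: float = 20.0, tolerance_spm: int = 5):
        self.hold_s = hold_s
        self.tolerance = tolerance_spm
        self.candidate = None       # (cadence, time it was first observed)

    def update(self, cadence_spm: int, now_s: float, current_spm: int) -> bool:
        """Return True when the song tempo should be switched to cadence_spm."""
        if abs(cadence_spm - current_spm) <= self.tolerance:
            self.candidate = None          # back at the old pace: ignore the blip
            return False
        if self.candidate is None or abs(cadence_spm - self.candidate[0]) > self.tolerance:
            self.candidate = (cadence_spm, now_s)   # a new pace was first observed
            return False
        return now_s - self.candidate[1] >= self.hold_s   # held long enough to switch?

deb = CadenceDebouncer()
print(deb.update(130, now_s=0, current_spm=150))    # False: the user just broke stride
print(deb.update(131, now_s=25, current_spm=150))   # True: new pace held for 25 s
```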
Describing the non-verbal cues with more specificity, note that e.g. the CE devices described herein may be configured to dynamically and without user input change from providing verbal cues to only providing non-verbal cues in some instances, e.g. when, after a threshold number of times making the same turn or otherwise exercising on the same route, the CE device determines that only non-verbal cues should be presented. This may be advantageous to a user who is already familiar with a neighborhood in which the user is exercising and hence does not necessarily need verbal cues but may nonetheless wish to have non-verbal ones presented that do not audibly interfere with the user's music as much as the verbal cues. Such determinations may be made e.g. at least in part by storing GPS data as the user proceeds along the route each time it is traveled, which at a later time may be analyzed to determine whether the threshold number of times has been met.
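For illustration, the following Python sketch counts how many times a stored route has been traveled and switches from verbal to non-verbal-only turn cues once a familiarity threshold is met; the route identifier and threshold are assumptions, and a real device would derive traversal counts from stored GPS traces as described above.

```python
# Minimal sketch of switching cue modes once a route has become familiar.

route_traversals: dict[str, int] = {}
FAMILIARITY_THRESHOLD = 3      # times a route must be traveled before switching

def record_traversal(route_id: str) -> None:
    """Record one completed traversal of the given route."""
    route_traversals[route_id] = route_traversals.get(route_id, 0) + 1

def cue_mode(route_id: str) -> str:
    """Return 'non-verbal' once the user is familiar with the route."""
    if route_traversals.get(route_id, 0) >= FAMILIARITY_THRESHOLD:
        return "non-verbal"
    return "verbal + non-verbal"

for _ in range(3):
    record_traversal("neighborhood_loop")
print(cue_mode("neighborhood_loop"))   # non-verbal
```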
Present principles further recognize that although some of the specification describes CE device features in reference to e.g. running or cycling, present principles may apply equally to other instances as well such as e.g. swimming or any other exercises establishing repetitive/rhythmic exercise motions.
Last, note that the headphones described herein may be configured to e.g. undertake active noise reduction on ambient noise present while exercising, while still allowing “transient” sounds like the sound generated by passing cars or someone talking to the exerciser to be heard by the exerciser. This headphone configuration thus promotes safety but still allows for clearly listening to music without unwanted ambient noises interfering with the user's listening enjoyment.
While the particular PRESENTING AUDIO BASED ON BIOMETRIC PARAMETERS is herein shown and described in detail, it is to be understood that the subject matter which is encompassed by the present invention is limited only by the claims.
This application claims priority to U.S. provisional patent application Ser. No. 61/878,835, filed Sep. 17, 2013.