The subject matter disclosed herein relates generally to configuring a mobile device based on tracked user features.
When a user is participating in a call on his or her mobile telephone, there are numerous circumstances that draw the user's attention away from the ongoing call. For example, the user may participate in a real-world conversation with another person at the same time as the ongoing call. If the microphone on the mobile telephone remains open while the user converses with the other person, the microphone may pick up content that the user does not want to transmit in the ongoing call.
Users therefore often mute a telephone call in response to real-world communication with another person. To manually mute the ongoing call, the user must remove the mobile telephone from their ear, access a screen, potentially sort through call options until mute is found, and manually select to mute the phone. During this time, the user may miss a portion of incoming audio data for the ongoing call. Similarly, when the user has muted the call and needs to speak again (e.g., to answer a question on the call), it can be difficult to quickly unmute the mobile telephone in order to talk. Again, the user may miss a portion of incoming audio when they remove the phone from their ear to unmute the call.
Methods and systems are disclosed herein for automatically configuring a mobile device based on user feature data. In one embodiment, the mobile device may be a mobile telephone, a smartphone, or any other mobile device. For ease of discussion, the remaining description will utilize the terms mobile device and mobile telephone interchangeably, and not by way of limitation.
In one embodiment, in response to a telephone call being initiated or received on a mobile telephone, the mobile telephone attempts to extract an image of the ear of the user participating in the mobile telephone call. As discussed below, the relative position of the ear with respect to the screen of the mobile telephone is determined to detect when a microphone of the mobile telephone should be muted or unmuted. In one embodiment, the relative position of the ear enables inferences about a user, such as a relative position of a mouth of the user with respect to the phone, to be made. For example, when the mobile phone is shifted away from the user's mouth as determined by the change in relative ear position, the microphone is automatically muted. Similarly, when the mobile phone is shifted back to the user's mouth as determined by another change in relative ear position, the microphone is automatically unmuted.
In one embodiment, a user typically places the mobile telephone to their ear in order to participate in an incoming or outgoing telephone call. In one embodiment, a multi-touch screen of the mobile telephone, such as a capacitive touch sensitive screen, resistive touch sensitive screen, etc. of the mobile device, captures an initial image of the user and extracts user feature data from the initial image. In one embodiment, the initial image is an image of the user's ear captured from the multi-touch screen, and the user feature data may include one or more of an ear profile shape, angle of the anti-helix relative to the phone, curvature of the anti-helix, location of the tragus relative to the anti-helix, upper ear profile, or lobe profile, or any combination thereof. In one embodiment, the initial image, and position of the user feature data relative to the mobile telephone's multi-touch screen, is stored as a reference image. In one embodiment, data indicative of the user image and extracted user feature data may be stored as a biometric signature, feature vector, or other user identifier.
In one embodiment, the mobile telephone periodically captures additional images of the ear of the user with the multi-touch screen during the call. The new images of the ear, and relative positioning of user feature data extracted from the new images, are compared against the initial image and relative position of feature data, to determine if a shift has occurred. In one embodiment, the determined shift includes determining an amount of rotation that has occurred with respect to user feature data. That is, if the user feature data has rotated at least θ degrees, which is indicative of a movement of a microphone away from a user's mouth, an audio input of the mobile telephone is muted. In one embodiment, the image utilized to determine the shift is stored as a shifted image by the mobile telephone.
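By way of a hypothetical, non-limiting sketch (in Python, rather than the claimed implementation), the rotation check described above might be expressed as follows. The two-point feature representation, the threshold value, and the function names are illustrative assumptions, not the disclosed method.

```python
import math

# Hypothetical feature representation: each ear image is reduced to two feature
# points (x, y) in touchscreen coordinates, e.g. the tragus and a point on the
# anti-helix, so that an orientation angle relative to the screen can be computed.
def feature_angle(p1, p2):
    """Angle (degrees) of the line through two feature points, relative to the screen."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def rotation_since_reference(ref_points, new_points):
    """Signed rotation of the feature pair between the reference and new images."""
    return feature_angle(*new_points) - feature_angle(*ref_points)

MUTE_THRESHOLD_DEG = 25.0  # placeholder value for the angle theta

def should_mute(ref_points, new_points, threshold=MUTE_THRESHOLD_DEG):
    return abs(rotation_since_reference(ref_points, new_points)) >= threshold

# Example: tragus and anti-helix points from the reference image and a later image.
reference = ((10.0, 40.0), (18.0, 70.0))
shifted = ((12.0, 42.0), (40.0, 58.0))
if should_mute(reference, shifted):
    print("rotation exceeds theta; mute the audio input")
```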
Images of the user's ear are continuously or periodically sampled, and features extracted from the images are compared with the features extracted from the shifted image. In one embodiment, the comparison is utilized to determine when a change in relative position of the user's extracted feature data indicates that a shift has occurred back towards the user's mouth. In one embodiment, when it is determined that the ear has rotated back at least φ degrees, the device is unmuted and a reference image is again stored.
Although mute and unmute are discussed above, in one embodiment, the sensitivity of the mobile telephone's microphone can be periodically adjusted as a function of the shift away from, or towards, the user's mouth. For example, the greater the shift away from the user's mouth up to the angle θ, the more the microphone sensitivity is reduced. Similarly, the greater the shift back to the user's mouth up to angle φ, the more the microphone sensitivity is increased. However, when the shift reaches the appropriate threshold, such as a rotation of θ or φ degrees, the audio input of the mobile telephone is muted or un-muted. As another example, the sensitivity of the microphone may be increased the greater the shift away from the user's mouth, and decreased as the microphone shifts back to the user's mouth. In one embodiment, the determination as to how the microphone sensitivity is adjusted based on detected shift may be selected by a user, or pre-configured by a telephone manufacturer.
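A minimal sketch of the continuous sensitivity adjustment described above, assuming a simple linear mapping between the detected rotation and microphone gain; the mapping, the default angle, and the function name are illustrative assumptions.

```python
def microphone_gain(rotation_deg, mute_angle_deg=25.0, full_gain=1.0):
    """Scale microphone sensitivity linearly with the rotation away from the
    talking position: full gain at 0 degrees, fully muted once the mute angle
    theta is reached. The linear mapping and the default angle are assumptions."""
    fraction = min(abs(rotation_deg) / mute_angle_deg, 1.0)
    return full_gain * (1.0 - fraction)

# Example: a 10-degree shift away from the mouth reduces sensitivity to 60%,
# and a shift at or beyond theta reduces it to zero (effectively muted).
print(microphone_gain(10.0))  # 0.6
print(microphone_gain(30.0))  # 0.0
```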
In one embodiment, a user may enroll for auto mute and un-mute on a mobile telephone prior to placing or receiving a call. The mobile telephone captures one or more ear-print images associated with the user in a position that simulates the user talking during a call. The captured ear-print image(s) are then associated with a user identifier, and user preferences associated with the user identifier. Similarly, one or more shifted ear print images, such as when the microphone is shifted away from the user's mouth, could also be captured by the mobile telephone and associated with the user identifier. In one embodiment, and as discussed in greater detail below, the ear print images captured during user enrollment enable a mobile device to determine a current user of the mobile device and associate the current user with the appropriate user identifier.
In one embodiment, configuration options may be selected by a user and associated with the user identifier generated from the user's enrolled ear-print images. For example, a maximum and/or minimum angle of shift for auto muting and unmuting a microphone during a call could be selected by a user and associated with the template and/or shifted template. As another example, speaker volume could be associated with a user's ear-print template. In one embodiment, when the initial image discussed above is matched against an enrolled user template, the mobile telephone can automatically apply the user preferences and/or options to the mobile device during a call.
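As a non-limiting sketch, the association between an enrolled ear-print template, a user identifier, and the configuration options discussed above might be represented as follows; all field names and default values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EnrolledUser:
    """Hypothetical enrollment record tying an ear-print template to preferences."""
    user_id: str
    talking_template: list            # feature vector from the talking-position ear print
    shifted_template: list = None     # optional feature vector from the shifted position
    mute_angle_deg: float = 25.0      # user-selected mute threshold (theta)
    unmute_angle_deg: float = 20.0    # user-selected un-mute threshold (phi)
    speaker_volume: int = 7           # preferred call volume

enrollments = {}

def enroll(user_id, talking_template, **prefs):
    """Store an enrollment record keyed by the user identifier."""
    enrollments[user_id] = EnrolledUser(user_id, talking_template, **prefs)

enroll("user-1", [0.12, 0.87, 0.33], mute_angle_deg=30.0, speaker_volume=5)
```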
Embodiments discussed herein may include configuring a mobile device based on captured user feature data and detected shifts in the user feature data during a telephone call. However, the techniques for configuring the mobile device, as discussed herein, need not be limited to the context of mobile telephone calls. In embodiments, the mobile device need not be a mobile telephone, and the mobile device may be configured during dictation operations, when audible commands are given to a mobile device, when receiving commands or information from the mobile device, as well as other user-mobile device interactions. For example, a personal assistant device's microphone may be muted based on captured and/or shifted feature data to avoid confusion with audible command entry. As another example, a mobile device's microphone may be muted based on captured and/or shifted feature data to keep the mobile device from entering a comment during dictation. The remaining description will illustrate the techniques for configuring a mobile device during a telephone call. However, the techniques discussed herein are not to be limited to telephone calls, as any user-mobile device interaction may utilize the techniques discussed herein.
Referring to
In one embodiment, from the periodically captured touchscreen images, processing logic is able to detect when the mobile device changes to a second orientation different from the first orientation. Processing logic applies a first configuration to the mobile device when the mobile device changes orientation (processing block 104). For example, when the mobile device changes orientation, processing logic can infer that the mobile device has shifted from an orientation associated with participation in the ongoing telephone conversation to a different orientation associated with non-participation in the ongoing telephone conversation. In this example, the mobile device may be rotated, translated, or otherwise shifted causing a corresponding shift of the user features in the captured touchscreen images. From this detected shift, processing logic can infer that the mobile device has changed orientation relative to an ongoing telephone conversation, and apply a different configuration (for example, a first configuration that is different from an initial configuration) to the mobile device, such as muting an audio input of the mobile device relative to the ongoing telephone conversation. As will be discussed in greater detail below, different orientations can be associated with different mobile device configurations, enabling processing logic to switch between the configurations in response to detected shifts between the different orientations.
Referring to
Furthermore, in one embodiment, as discussed in greater detail herein, the mobile device includes a multi-touch screen, such as a capacitive touch sensitive screen, resistive touch sensitive screen, etc. that enables a user to interact with the mobile device through touch. The user touches may include touches by a user's finger, face, ear, etc. Typically, in response to a telephone call event, a user would place the mobile device to his or her ear to participate in the telephone call. In one embodiment, the first image captured by processing logic captures an image of the user's ear and/or face. As illustrated in
Processing logic then extracts user feature data depicted in the image (processing block 154). In one embodiment, the features detected in the touchscreen image are user features 608 extracted from the first touchscreen image 606 of the user's ear 602. In one embodiment, the orientation and positioning of the extracted user features are detected relative to a position of the mobile device.
A second image is captured with the touchscreen of the mobile device (processing block 156), and processing logic extracts user feature data as depicted in the second captured image (processing block 158). In one embodiment, the second image is a second touchscreen image, and the orientation and positioning determined using the extracted user features in the second touchscreen image are utilized by processing logic to detect a shift of the user feature between the first image and the second image (processing block 160). In one embodiment, processing logic determines a movement of user features as depicted in the touchscreen images relative to the touchscreen of the mobile device based on a comparison of the user feature data extracted from the first image with the user feature data extracted from the second image. In one embodiment, processing logic samples touchscreen images at processing block 106 on a periodic basis, such as every 0.1 seconds, every 0.5 seconds, every 1 second, etc., to enable processing blocks 108 and 110 to track the movement of the user's feature in real time during an ongoing telephone call. In one embodiment, the tracking of the user's feature enables processing logic to determine and track a rotation of the user's feature, translation of the user's feature, or both, as well as other forms of movement of the user's feature relative to the mobile phone.
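A hypothetical sketch of the periodic sampling and shift-tracking loop described above; the capture, feature-extraction, comparison, and call-state callables stand in for the touchscreen driver and processing logic and are assumptions, not an actual device API.

```python
import time

def track_feature_shift(capture_image, extract_features, compare,
                        call_is_active, period_s=0.5):
    """Periodically sample touchscreen images during a call and yield how the
    extracted features have moved relative to the first (reference) image.
    All callables are hypothetical stand-ins for the touchscreen driver,
    feature analyzer, comparison logic, and call-state check."""
    reference = extract_features(capture_image())
    while call_is_active():
        time.sleep(period_s)
        current = extract_features(capture_image())
        # compare() is assumed to return (rotation in degrees, translation in pixels).
        yield compare(reference, current)
```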
In one embodiment, as discussed in greater detail below, the tracked movement is a rotational movement of at least one user feature relative to the touchscreen of the mobile device. As illustrated in
A configuration is then applied to the mobile device when the detected shift exceeds a threshold (processing block 162). In one embodiment, where the shift is a rotational movement relative to the touchscreen of the mobile device, processing logic determines when the rotational movement exceeds a first rotational movement threshold, such as rotation beyond N degrees. In another embodiment, where the movement is a translational movement relative to the touchscreen of the mobile device, processing logic determines when the movement exceeds a translational movement threshold. In either embodiment, the threshold may be a default threshold or set through user selection. Furthermore, the threshold enables processing logic to infer that the location of the audio input has shifted away from the user's mouth by a sufficient amount such that the mobile device should be configured. In one embodiment, processing logic configures the mobile device by muting the audio input of the mobile device when the movement threshold is exceeded. In one embodiment, the audio output of the mobile device remains unchanged, to enable a user to continue listening to the ongoing call.
Processing logic returns to processing block 156 to continue to sample touchscreen images, detect user features, and determine movement of those features relative to the touchscreen of the mobile device. In one embodiment, the continued monitoring enables processing logic to capture additional images, such as a third touchscreen image used to detect additional movement from extracted user feature data. In one embodiment, rotation of the user feature may be detected in a second direction by comparison of the second touchscreen image and the third touchscreen image. In one embodiment, processing logic determines, from the movement of the user features in the additional touchscreen images, to un-mute the mobile device when the audio input moves back to the user's mouth, such as when a second rotational movement of the user feature data determined from the second and third touchscreen images exceeds a second rotational threshold. For example, when the rotational movement indicates that the user feature has rotated back to a position of the user feature as depicted in the first touchscreen image, such as a talking position, processing logic can return the mobile device to an original configuration or a different configuration associated with the second rotational threshold. For example, the mobile device may transition back to the talking position illustrated in
In one embodiment, processing logic configures the mobile device by automatically muting and un-muting an audio input of the mobile device during an ongoing call. However, other components of the mobile device may be automatically configured in a manner consistent with the discussion herein. For example, audio output volume, call status, touchscreen brightness, as well as other components of the mobile device may be automatically configured based on the tracked movement of user features.
Furthermore, the mobile device may be configured to receive and/or execute commands based on the tracked movement of user features and detected shifts in user features. For example, a mobile device shifting away from a user's mouth during a telephone call may indicate that captured audio data should not be transferred during an ongoing call, but instead that a command should be entered and/or processed by the mobile device. For example, a mobile device may capture the audio "yes, dinner sounds like fun" while the audio input of the mobile device (for example, the microphone) is detected or inferred to be near a user's mouth. This captured audio data would be transferred as call audio data based on the tracked user features. However, when a shift away from the user's mouth is detected, the mobile device could be configured to process any received audio as a user command, such as "set meeting for Saturday at 8 PM, dinner with neighbors." The mobile device would configure one or more applications, such as a calendar application, mail application, etc., based on this command. Then, when a shift of the mobile device back to a talking position is detected, captured audio could again be transferred to the caller.
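As an illustrative sketch of the routing behavior described in this example, captured audio could be forwarded either to the ongoing call or to a command interpreter depending on the inferred orientation; the callback names are hypothetical.

```python
def route_audio(audio_chunk, in_talking_position, send_to_call, run_command):
    """Route captured audio based on the orientation inferred from the tracked
    ear features. send_to_call and run_command are hypothetical callbacks, and
    in_talking_position is the result of the shift detection described above."""
    if in_talking_position:
        send_to_call(audio_chunk)   # e.g. "yes, dinner sounds like fun"
    else:
        run_command(audio_chunk)    # e.g. "set meeting for Saturday at 8 PM"
```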
In one embodiment, memory 205 may be coupled to processor 212 to store instructions for execution by the processor 212. In some embodiments, memory 205 is non-transitory. Memory 205 may store user feature tracker 230 to implement embodiments described herein. It should be appreciated that embodiments of the invention as will be hereinafter described may be implemented through the execution of instructions, for example as stored in memory or other element, by processor 212 of mobile device 210, and/or other circuitry of mobile device 210. Particularly, circuitry of mobile device 210, including but not limited to processor 212, may operate under the control of a program, routine, or the execution of instructions to execute methods or processes in accordance with embodiments of the invention. For example, such a program may be implemented in firmware or software (e.g. stored in memory 205) and may be implemented by processors, such as processor 212, and/or other circuitry. Further, it should be appreciated that the terms processor, microprocessor, circuitry, controller, etc., may refer to any type of logic or circuitry capable of executing logic, commands, instructions, software, firmware, functionality and the like.
In one embodiment, enrollment engine 232 of user feature tracker 230 is responsible for causing image collector 234 to capture one or more touchscreen images of a user prior to receiving or placing a telephone call. In one embodiment, the images are captured during the enrollment process discussed below in
In one embodiment, user feature tracker 230 is responsible for determining when a call occurs on mobile device 210. As discussed herein, the call may be an incoming or an outgoing call. In response to detection of a call, image collector 234 is triggered to capture an initial touchscreen image of a user participating in the call. As discussed herein, user features may be extracted by feature analyzer 236 from the initial image, and used to determine if an enrolled user is participating in the call by matching the extracted features against features extracted during an enrollment process. After feature analyzer 236 determines that a match has been found, configuration processor 238 applies any call preferences associated with an identified user to the call. In one embodiment, a user need not be enrolled to utilize the automatic configuration discussed herein. However, enrollment is a precondition to application of call-specific preferences, such as applying a pre-set call volume, applying user-selected mute and un-mute rotation thresholds, selection of hard mute and un-mute of an audio input of mobile device 210, selection of continuous incremental adjustment of the audio input of mobile device 210, as well as other device configuration options.
During a call, image collector 234 is responsible for periodically sampling touchscreen images of a user participating in the call. In one embodiment, image collector 234 causes touchscreen 220 to capture an image of the user's ear and/or face from simultaneous raw touch sensor data. The captured image is then provided to feature analyzer 236, which extracts user feature data from the captured image. For example, the user feature data may correspond to ear feature data, such as ear profile shape, angle of a user's anti-helix relative to the touchscreen 220 of mobile device 210, curvature of the anti-helix, location of the user's tragus relative to the anti-helix and/or relative to the touchscreen 220 of mobile device 210, the user's upper ear detail, ear canal shape, lobe detail, face data relative to one or more ear features, etc. In one embodiment, the user feature data may correspond to a capacitive image profile of the user's ear and/or face. In one embodiment, the capacitive image profile is a distribution of relative capacitance levels across the touched area measured from capacitive touch sensors, which is unique to the facial and/or ear features of each user.
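A hypothetical sketch of reducing a raw capacitance frame to a simple profile (centroid, principal-axis orientation, and capacitance distribution); the frame format, threshold, and returned fields are illustrative assumptions rather than an actual touch-controller interface.

```python
import numpy as np

def capacitive_profile(raw_frame, touch_threshold=0.2):
    """Reduce a raw capacitance frame (a 2D array of relative capacitance levels)
    to a simple profile: the touched region's centroid, its principal-axis angle,
    and a normalized capacitance histogram. All details here are assumptions."""
    frame = np.asarray(raw_frame, dtype=float)
    touched = frame > touch_threshold
    ys, xs = np.nonzero(touched)
    if xs.size == 0:
        return None  # nothing is touching the screen
    centroid = (xs.mean(), ys.mean())
    # The principal axis of the touched region approximates the ear's orientation.
    coords = np.stack([xs - centroid[0], ys - centroid[1]])
    cov = coords @ coords.T / xs.size
    eigvals, eigvecs = np.linalg.eigh(cov)
    major_axis = eigvecs[:, np.argmax(eigvals)]
    angle_deg = float(np.degrees(np.arctan2(major_axis[1], major_axis[0])))
    hist, _ = np.histogram(frame[touched], bins=8, range=(0.0, 1.0))
    return centroid, angle_deg, hist / max(hist.sum(), 1)
```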
Configuration processor 238 is responsible for tracking the user feature data extracted by feature analyzer 236 during an ongoing call. In one embodiment, configuration processor 238 analyzes movement of one or more of the tracked features, such as rotational movement, translational movement, etc., relative to the touchscreen 220 of mobile device 210. In one embodiment, the tracked relative movement of the user's features enables configuration processor 238 to infer a location of the user's mouth relative to an audio input of the mobile device 210. For example, when the user features have rotated a threshold number of degrees θ, configuration processor 238 can infer that the position of the audio input is no longer close to the user's mouth and that the mobile device has shifted from a talking to a non-talking position. Similarly, when the user features rotate back a threshold number of degrees φ, configuration processor 238 can infer that the position of the audio input has moved back to the user's mouth and that the mobile device has shifted back to a talking position. Configuration processor 238 can apply similar thresholding to other types of movements, such as linear translation of user features relative to touchscreen 220 beyond a certain distance. Configuration processor 238 can also apply thresholding for multiple types of movements, for example, both rotation and translation.
In one embodiment, when configuration processor 238 detects a specific type of movement and/or determines that the threshold amount of movement has been met, configuration processor 238 performs one or more configuration operations, such as applying different configurations to the mobile device 210. In one embodiment, hard mute and un-mute thresholds can be used as different mobile device configurations, such that the sensitivity of a microphone is unchanged until the mute and un-mute thresholds are satisfied and the corresponding configurations applied by configuration processor 238. In another embodiment, continuous mute and un-mute thresholds can be used as additional mobile device configuration options, such that the sensitivity of the microphone is continuously and incrementally lowered as the user's features are determined to be rotating towards the mute threshold θ. Similarly, the sensitivity of the microphone is continuously and incrementally increased as the user's features are determined to be rotating towards the un-mute threshold φ. In yet another embodiment, the sensitivity of the microphone may be increased the greater the shift away from the user's mouth, and decreased as the microphone shifts back to the user's mouth. In any of these embodiments, configuration processor 238 can provide notice to a user, such as by causing a sound tone to be played, causing mobile device 210 to vibrate, causing a visual notification to be displayed, etc., when the mobile device is muted or un-muted.
Referring to
Features are extracted from the sampled touchscreen image(s) (processing block 306). In one embodiment, a user identifier, such as a template, biometric signature, feature vector, etc., is created from the user features extracted from the touchscreen image(s) for the set of talking position images and optional non-talking position images. In one embodiment, multiple users may be enrolled for automatic configuration on a single mobile device. Thus, as discussed below, the template, biometric signature, feature vector, etc. may be utilized as unique user identifiers to distinguish between different users from, for example, ear features, relative positioning of different ear features, positioning of ear features relative to facial features, etc., of the different users. Furthermore, when both talking position and non-talking position images are captured, user-specific mute and un-mute thresholds may be determined from the difference in shift, rotation, translation, etc. between the extracted user features in the two sets of images.
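As a non-limiting sketch, distinguishing between multiple enrolled users could be performed by comparing an extracted feature vector against stored templates; the Euclidean distance metric and the matching threshold are illustrative assumptions.

```python
import math

def match_enrollee(feature_vector, enrolled_templates, max_distance=0.15):
    """Return the enrolled user whose template is closest to the extracted
    feature vector, or None if no template is close enough. The Euclidean
    distance metric and the 0.15 threshold are illustrative assumptions."""
    best_id, best_dist = None, float("inf")
    for user_id, template in enrolled_templates.items():
        dist = math.dist(feature_vector, template)
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id if best_dist <= max_distance else None

templates = {"user-1": [0.12, 0.87, 0.33], "user-2": [0.55, 0.21, 0.90]}
print(match_enrollee([0.11, 0.85, 0.35], templates))  # -> "user-1"
```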
Processing logic then receives one or more user preference settings to be associated with the enrolled user (processing block 308). In one embodiment, additional configuration settings may optionally be specified by a user during the enrollment process. For example, minimum and/or maximum angles of rotation for automatic configuration, a default device volume, whether or not to play an audio tone when automatically configuring the mobile device, etc., may be specified by the user.
Referring to
Processing logic determines whether an ear is detected in the extracted feature data (processing block 408). In one embodiment, processing logic analyzes the extracted features to determine the presence, location, and/or relationship between an earlobe, tragus, anti-tragus, helix, anti-helix, or other ear features. In one embodiment, the process should not be limited to the use of ear features, as other user features may be extracted from the touchscreen images and utilized in accordance with the discussion herein.
When an ear is not detected in the touchscreen image, the process returns to processing block 404 to capture additional images. However, when an ear is detected, processing logic stores the captured image as a reference image (processing block 410). Alternatively, processing logic may generate a feature vector, biometric template, or other representation of the ear feature data extracted from the touchscreen image. In embodiments, the feature vector, biometric template, etc. may be stored along with the reference image, or stored in place of the reference image.
From the stored reference image and/or feature vector, biometric signature, template, etc., processing logic determines if there is a match with an enrollee (processing block 412). When there is a match, processing logic configures the mobile device for the enrollee (processing block 414). In one embodiment, the configuration may include selecting a continuous audio input adjustment mode, selecting user-selected mute and un-mute thresholds, setting user notification options, setting a selected mobile device volume, etc.
When the user is not matched, or after the mobile device is configured for an enrolled user, processing logic proceeds to perform automatic muting and un-muting based on the tracked movement of user features in touchscreen images. In one embodiment, when an ear is detected but no user match is found, a default set of muting and unmuting configurations, such as default shift angles θd and φd, may be utilized by processing logic to configure the mobile device.
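A brief sketch of the threshold selection described above, assuming the hypothetical enrollment record sketched earlier; the default angle values are placeholders for θd and φd.

```python
DEFAULT_MUTE_ANGLE_DEG = 25.0    # placeholder default for theta_d
DEFAULT_UNMUTE_ANGLE_DEG = 20.0  # placeholder default for phi_d

def select_thresholds(matched_user):
    """Use the enrollee's stored thresholds when a match was found; otherwise
    fall back to the default shift angles. matched_user is assumed to be the
    hypothetical enrollment record sketched earlier, or None when unmatched."""
    if matched_user is not None:
        return matched_user.mute_angle_deg, matched_user.unmute_angle_deg
    return DEFAULT_MUTE_ANGLE_DEG, DEFAULT_UNMUTE_ANGLE_DEG
```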
Returning to
However, when the threshold shift is not reached, processing logic returns to processing block 416 to capture a new touchscreen image. In one embodiment, until the mobile device is muted at processing block 424, processing logic captures and analyzes new touchscreen images on a periodic basis, such as every half second.
In response to the muting of a call at processing block 424, processing logic stores the new image as a shifted image (processing block 426). In one embodiment, the shifted image is utilized by processing logic as a reference image, as discussed above. Processing logic then captures a new touchscreen image (processing block 428), extracts user feature(s) from the new image (processing block 430), and compares the extracted user feature(s) to the features extracted from the shifted image (processing block 432).
When the shift in the extracted features, such as a rotational movement, translational movement, etc. relative to the touchscreen of the mobile device, meets or exceeds threshold φ, processing logic un-mutes the audio input of the mobile device (processing block 436). In one embodiment, the user may again be notified that the mobile device has been un-muted by playing a sound, causing the mobile device to vibrate, activating a user interface element, etc. In one embodiment, the un-mute notifications may be different from the mute notifications. For example, the mobile device may play a first tone accompanied by a short vibration when muted, but play a second tone accompanied by two short vibrations when un-muted. Furthermore, and similar to the discussion above, when the feature shift does not exceed φ, new touchscreen images are periodically captured and analyzed. In one embodiment, the movement tracked by processing logic and analyzed with respect to threshold φ represents a shift back to an initial talking position. That is, in response to detecting rotational movement, translational movement, or both, back to the original talking position, processing logic infers that the audio input of the mobile device has moved back to the user's mouth, and the new image is stored as a reference image representing the mobile device in a talking position (processing block 438). The process then returns to processing block 416.
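As an illustrative sketch of the un-mute decision and the distinct notifications described above; the device hooks for muting and notification are hypothetical.

```python
def maybe_unmute(rotation_back_deg, unmute_angle_deg, set_mic_muted, notify):
    """Un-mute when the rotation back toward the talking position reaches phi.
    set_mic_muted and notify are hypothetical device hooks; the notification
    mirrors the example in the text (a second tone with two short vibrations)."""
    if abs(rotation_back_deg) >= unmute_angle_deg:
        set_mic_muted(False)
        notify(tone="unmute_tone", vibration_count=2)
        return True
    return False
```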
In one embodiment, processing blocks 416-438 continue to be performed by processing logic for the duration of a call. The process, however, may terminate at any processing block when an ongoing call is terminated. In one embodiment, when a user is not matched to an enrolled user at processing block 412, processing logic may trigger the enrollment process of
Furthermore, although not illustrated in
It should be appreciated that when the devices discussed herein are mobile or wireless devices, they may communicate, through a wireless network, via one or more wireless communication links that are based on or otherwise support any suitable wireless communication technology. For example, in some aspects a computing device or server may associate with a network including a wireless network. In some aspects the network may comprise a body area network or a personal area network (e.g., an ultra-wideband network). In some aspects the network may comprise a local area network or a wide area network. A wireless device may support or otherwise use one or more of a variety of wireless communication technologies, protocols, or standards such as, for example, CDMA, TDMA, OFDM, OFDMA, WiMAX, and Wi-Fi. Similarly, a wireless device may support or otherwise use one or more of a variety of corresponding modulation or multiplexing schemes. A mobile wireless device may wirelessly communicate with other mobile devices, cell phones, other wired and wireless computers, Internet web-sites, etc.
The teachings herein may be incorporated into (e.g., implemented within or performed by) a variety of apparatuses (e.g., devices). For example, one or more aspects taught herein may be incorporated into a phone (e.g., a cellular phone), a personal data assistant (PDA), a tablet, a mobile computer, a laptop computer, an entertainment device (e.g., a music or video device), a headset (e.g., headphones, an earpiece, etc.), or any other suitable device.
In some aspects a wireless device may comprise an access device (e.g., a Wi-Fi access point) for a communication system. Such an access device may provide, for example, connectivity to another network (e.g., a wide area network such as the Internet or a cellular network) via a wired or wireless communication link. Accordingly, the access device may enable another device (e.g., a Wi-Fi station) to access the other network or some other functionality. In addition, it should be appreciated that one or both of the devices may be portable or, in some cases, relatively non-portable.
Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random-access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electronically erasable programmable read-only memory (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a non-transitory computer-readable medium. Computer-readable media can include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such non-transitory computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of non-transitory computer-readable media.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.