Aspects of the present disclosure relate generally to wireless communications and, more particularly, to a method and apparatus for a heads-down display.
Personal wireless devices (e.g., smartphones, tablets, and the like) include applications that render compelling user experiences. Users immerse themselves in these applications, sometimes at their own peril. These users may also create danger to others, because they continue to interact, mostly unconsciously, with their physical environment while absorbed in the use of the device.
As such, improved techniques to ensure users of wireless devices are aware of their surroundings may be desired.
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
In an aspect, a method for providing a heads-down display on a wireless device is described. The method may include receiving an environmental signal representing actual images from one or more cameras. The one or more cameras may be associated with the wireless device. The actual images may be of a physical environment in proximity to a current location of the wireless device. The method may include receiving an application signal representing application renderings associated with an application currently executing at the wireless device. The method may include simultaneously rendering the actual images and the application renderings on a screen associated with the wireless device. The actual images and the application renderings may be rendered as ordered layers on the screen.
In an aspect, a computer program product for providing a heads-down display on a wireless device comprising a non-transitory computer-readable medium including code is described. The code may cause a computer to receive an environmental signal representing actual images from one or more cameras. The one or more cameras may be associated with the wireless device. The actual images may be of a physical environment in proximity to a current location of the wireless device. The code may cause a computer to receive an application signal representing application renderings associated with an application currently executing at the wireless device. The code may cause a computer to simultaneously render the actual images and the application renderings on a screen associated with the wireless device. The actual images and the application renderings may be rendered as ordered layers on the screen.
In an aspect, a wireless device apparatus for providing a heads-down display is described. The wireless device apparatus may include one or more cameras associated with the wireless device and configured to receive an environmental signal representing actual images. The actual images may be of a physical environment in proximity to a current location of the wireless device. The wireless device apparatus may include an application component configured to receive an application signal representing application renderings associated with an application currently executing at the wireless device. The wireless device apparatus may include a rendering component configured to simultaneously render the actual images and the application renderings on a screen associated with the wireless device. The actual images and the application renderings may be rendered as ordered layers on the screen.
To the accomplishment of the foregoing and related ends, the one or more aspects comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the annexed drawings set forth in detail certain illustrative features of the one or more aspects. These features are indicative, however, of but a few of the various ways in which the principles of various aspects may be employed, and this description is intended to include all such aspects and their equivalents.
The disclosed aspects will hereinafter be described in conjunction with the appended drawings, provided to illustrate and not to limit the disclosed aspects, wherein like designations denote like elements, and in which:
Various aspects are now described with reference to the drawings. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that such aspect(s) may be practiced without these specific details.
Users of wireless devices (e.g., smartphones, tablets, and the like) may interact with these increasingly capable devices visually, tactilely, or audibly, and in various combinations of the three. Wireless device capabilities, and the applications that exploit them, demand increasing user attention and focus, especially visually and tactilely. While operating such devices, users often choose, or are required, to simultaneously interact with their immediate, “real world” physical environment. If the user operates her device while she is relatively static but is located in a highly volatile environment, or while she is physically moving through some complex environment, she may put herself and others in danger. In both cases, the user typically interacts with her device using her primary sensory focus while simultaneously interacting with the physical environment using her secondary sensory focus. By using only secondary focus to interact with the surrounding physical environment, the user loses information that can be highly valuable and/or critical to the user's safety, either in real time or in future considerations or actions.
As an example, and referring to
If user 105 is more aware of the surrounding physical environment 100, she may be safer (and less likely to cause injury to others); however, she also may have a poor application experience because she is constantly taking her focus off of the screen of device 110 (and the user experience) to visually scan her physical surroundings.
According to the present aspects, and still referring to
Such a heads-down display may improve the application experience for user 105 when she is interacting with wireless device 110, while simultaneously allowing user 105 to safely navigate through her physical environment 100 by allowing user 105 to keep her visual focus in one place.
Referring to
In the example of
It will be understood that the present aspects are not limited to the specific examples of
As described herein, actual image 220 may be rendered on screen 120 of wireless device 110 with varying levels of opacity, such that even though actual image 220 is rendered as an ordered layer on top of application rendering 210, user 105 of wireless device 110 can still perceive application rendering 210 to some extent or degree. At one end of the spectrum is a complete takeover of screen 120 by actual image 220, causing obfuscation of application rendering 210, such that the heads-down experience becomes a heads-up experience. At the other end of the spectrum, application rendering 210 may be the only image rendered on screen 120; that is, no information about physical environment 100 is displayed. In the middle of the spectrum, and most usefully, actual image 220 may be rendered on screen 120 as a translucent ordered layer overlaid on top of application rendering 210, such that user 105 can easily perceive both application rendering 210 and actual image 220.
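As a non-limiting illustration of the ordered-layer rendering described above, the following sketch blends a translucent actual image over an application rendering using standard alpha compositing; the array shapes, the opacity value, and the NumPy-based representation of the two layers are illustrative assumptions rather than a required implementation.

```python
import numpy as np

def composite_layers(application_rendering: np.ndarray,
                     actual_image: np.ndarray,
                     opacity: float) -> np.ndarray:
    """Blend the actual image (top ordered layer) over the application
    rendering (bottom ordered layer) at the given level of opacity.

    opacity == 0.0 -> only the application rendering is visible.
    opacity == 1.0 -> the actual image completely takes over the screen
                      (the heads-down experience becomes heads-up).
    Intermediate values give the translucent middle of the spectrum in
    which both layers remain perceivable.
    """
    opacity = float(np.clip(opacity, 0.0, 1.0))
    blended = (opacity * actual_image.astype(np.float32)
               + (1.0 - opacity) * application_rendering.astype(np.float32))
    return blended.astype(np.uint8)

# Example: a 480x320 RGB frame with a mid-spectrum (translucent) overlay.
app_frame = np.full((320, 480, 3), 200, dtype=np.uint8)    # application rendering 210
camera_frame = np.full((320, 480, 3), 60, dtype=np.uint8)  # actual image 220
screen_frame = composite_layers(app_frame, camera_frame, opacity=0.4)
```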
In an aspect, the level of opacity of actual image 220 when rendered on screen 120 can be a default setting, e.g., a setting by a manufacturer of wireless device 110, installer of an operating system of wireless device 110, network services operator, and/or the like. In an aspect, the level of opacity may have been previously-defined by user 105, such that user 105 selects a setting related to a preferred level of opacity for actual image 220.
In an aspect, the level of opacity may be dynamically determined by wireless device 110 based on a determination as to whether a triggering event, such as, for example, a dangerous situation or potential collision, exists in physical environment 100. In one non-limiting example, a triggering event may be a situation in which user 105 is about to step off a curb into a crosswalk and, potentially, oncoming traffic. Another non-limiting example is a situation in which wireless device 110 determines, based on, e.g., GPS data, that wireless device 110 is located in a dense city and, as such, user 105 should be more aware of physical environment 100. In such a case, the level of opacity may increase when a situation calls for increased awareness by user 105 of physical environment 100, and the level of opacity of actual image 220 may decrease when such a scenario is not present.
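The precedence among a default setting, a previously defined user preference, and a dynamically detected triggering event could be expressed as in the following non-limiting sketch; the event names, numeric opacity levels, and precedence order are illustrative assumptions.

```python
from enum import Enum
from typing import Optional

class TriggeringEvent(Enum):
    NONE = 0
    DENSE_URBAN_AREA = 1      # e.g., inferred from GPS data
    POTENTIAL_COLLISION = 2   # e.g., user about to step off a curb into traffic

# Hypothetical opacity levels (0.0 = overlay invisible, 1.0 = full takeover).
EVENT_OPACITY = {
    TriggeringEvent.NONE: 0.0,
    TriggeringEvent.DENSE_URBAN_AREA: 0.35,
    TriggeringEvent.POTENTIAL_COLLISION: 0.85,
}
DEFAULT_OPACITY = 0.25  # e.g., a manufacturer or operating-system default

def resolve_opacity(event: TriggeringEvent,
                    user_preference: Optional[float] = None) -> float:
    """A dynamically determined level wins during a triggering event;
    otherwise a previously defined user preference, then the default, applies."""
    if event is not TriggeringEvent.NONE:
        return EVENT_OPACITY[event]
    if user_preference is not None:
        return max(0.0, min(1.0, user_preference))
    return DEFAULT_OPACITY
```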
In an aspect, the level of opacity may be set by user 105 at the time that actual image 220 is rendered on screen 120. For example, a virtual or hardware-based switch, such as a volume rocker on a device, may be employed by user 105 to vary the opacity of actual image 220 and thereby control the level of intrusion over application rendering 210. In another aspect, a device may include a multi- or dual-mode volume rocker switch, which allows user 105 to indicate that she is using the rocker switch to control the opacity level of actual image 220 rather than the audio volume. For example, a middle portion of the switch (which may be identified tactilely by a hump, depression, or striation) may be activated to indicate additional functionality requests. Once the middle portion is clicked, user 105 may receive an indication that the function of the rocker has been reset from volume control to opacity level control.
Similarly, and in another aspect, camera 115 of wireless device 110 may not capture environmental signals representing actual images of physical environment 100 unless, and until, a triggering event occurs. For example, under normal operation (e.g., a default setting), camera 115 may not capture environmental signals representing actual images of physical environment 100; however, when a triggering event is detected by wireless device 110, camera 115 may be directed to begin capturing the environmental signals representing actual images of physical environment 100 and rendering actual image 220 to screen 120 of wireless device 110. In such an aspect, the capturing of environmental signals that represent actual images by camera 115 and the displaying of actual image 220 with a particular (e.g., high) level of opacity may be based on different thresholds of the particular triggering event. In one non-limiting example, detecting that wireless device 110 is located in a dense city may cause camera 115 to begin capturing environmental signals that represent actual images, while detecting that a dangerous situation is occurring (or potentially occurring) in physical environment 100 may trigger rendering actual image 220 on screen 120 with a high level of opacity.
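The two thresholds described in this aspect (a lower threshold that starts capture of environmental signals and a higher threshold that also renders actual image 220 with a high level of opacity) might be sketched as follows; the boolean inputs and the 0.8 opacity value are illustrative assumptions.

```python
def camera_policy(in_dense_city: bool, danger_detected: bool) -> dict:
    """Lower threshold (e.g., dense city detected via GPS data) starts capture;
    higher threshold (e.g., dangerous situation detected) also renders the
    actual image with a high level of opacity."""
    capture = in_dense_city or danger_detected
    return {
        "capture_environmental_signals": capture,
        "render_actual_image": danger_detected,
        "opacity": 0.8 if danger_detected else 0.0,
    }
```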
In another aspect and non-limiting example, wireless device 110 may be configured to specifically help user 105 avoid tripping hazards or potential collisions, e.g., hazards at the feet of user 105. For example, when activated (e.g., triggered by a dangerous situation or the like as described herein), an angle of camera 115 and/or a camera lens of camera 115, may be adjusted (dynamically, automatically, and/or manually) to “look for” ground obstacles (e.g., cracks in the sidewalk, glass on the ground, small dogs) in the area at the feet of user 105. In an aspect, object identification software may be used to determine whether such a tripping hazard exists in the actual images represented by environmental signals captured by camera 115 and, if so, the actual image 220 may be rendered on screen 120 with a level of opacity that will help user 105 avoid the tripping hazard.
In an aspect, rather than rendering actual image 220 on screen 120 and/or providing other multi-modal alerts to user 105 to alert her to physical environment 100, wireless device 110 may be configured to render representations of physical objects that are present in physical environment 100 on screen 120. Object identification software may be used to process actual images determined from environmental signals captured by camera 115 and determine if the actual images include an object that may pose a potential danger to user 105 or an object that user 105 may be interested in knowing is in her path. Furthermore, a proximity sensor may be included within wireless device 110 to determine how close (or far) user 105 is from the detected objects. In such an aspect, rather than rendering actual image 220 on screen 120 as an overlay on top of application rendering 210, wireless device 110 may be configured to render representations of objects found in physical environment 100, which may be an icon that represents an object, clip art of the object, and/or text identifying the object, as an overlay on top of application rendering 210. An indication as to the proximity of the object to user 105 also may be provided via the rendering of the representations of the objects (e.g., a number of feet or meters may appear with the representation). Similar to the rendering of actual image 220, representations of objects may be rendered with varying levels of opacity determined as described herein.
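A minimal sketch of rendering object representations (rather than actual image 220) with a proximity indication follows; the DetectedObject structure, the text format, and the distance cutoff are hypothetical, with object labels assumed to come from object identification software and distances from a proximity sensor as described above.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str         # e.g., produced by object identification software
    distance_m: float  # e.g., reported by a proximity sensor

def build_overlay_items(objects: list[DetectedObject],
                        max_distance_m: float = 10.0) -> list[str]:
    """Turn detected objects into textual representations to be rendered as
    an overlay on top of application rendering 210, nearest objects first."""
    items = []
    for obj in sorted(objects, key=lambda o: o.distance_m):
        if obj.distance_m <= max_distance_m:
            items.append(f"{obj.label}: {obj.distance_m:.1f} m ahead")
    return items

# Example usage:
# build_overlay_items([DetectedObject("curb", 1.2), DetectedObject("bicycle", 6.0)])
```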
In an aspect, wireless device 110 may be configured to provide information to user 105 about physical environment 100 using a broader set of modalities instead of, or in addition to, providing a visual indication (e.g., actual image 220). For example, wireless device 110 may be configured to sound an alert in order to inform user 105 to pay attention to her environment. In another example, wireless device 110 may be configured to provide a haptic or tactile alert, such as a vibration of wireless device 110, a “kick back” of wireless device 110, and/or the like. In yet another example, a combination of visual, audio, and haptic (or tactile) alerts may be used simultaneously. These multi-modal alerts may be provided by wireless device 110 at varying levels of intensity—more intense alerts (e.g., stronger vibrations, louder sounds) may be used when a triggering event is detected, while less intense alerts (e.g., a single vibration, a low tone) may be used when such a scenario is not present. Such an alert or alerts may, in a non-limiting example, indicate that actual image 220 is about to be rendered on screen 120, that physical environment 100 is particularly dangerous and user 105 should look up, or that a (user-configurable or default) situation exists in physical environment 100 (e.g., the coffee shop favored by user 105 is up ahead).
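Selection of multi-modal alert intensity could follow a pattern like the one below; the modality names and intensity values are illustrative assumptions only.

```python
def select_alerts(triggering_event_detected: bool) -> dict:
    """More intense alerts (stronger vibration, louder sound, higher overlay
    opacity) when a triggering event is detected; less intense alerts otherwise."""
    if triggering_event_detected:
        return {"visual_opacity": 0.85,
                "haptic": "strong_repeated_vibration",
                "audio": "loud_alert_tone"}
    return {"visual_opacity": 0.25,
            "haptic": "single_vibration",
            "audio": "low_tone"}
```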
Referring to
In an aspect, one possible form factor, shown in
In an aspect, one possible form factor, shown in
In an aspect, one possible form factor, shown in
In an aspect, one possible form factor, shown in
In an optional aspect, any one (or more) of cameras 510, 520, 530, 540, 610, 620, and 630 may include a gyroscopic lens such that the lens (and/or camera) may be configured to rotate, as shown by the dotted lines in
For example, a camera and/or lens may be adjustable in order to compensate for various angles at which user 105 may hold the device while moving through (or being present in) physical environment 100. In a non-limiting example, if user 105 is holding wireless device 110 at a 30 degree angle relative to the ground (as shown in
In an aspect, a camera and/or lens may be configured to dynamically adjust its angle using, for example, gyroscope and/or accelerometer sensor feeds (which may be part of the device). In an aspect, and for example, the optics of a movable camera and/or lens may be recessed into the back pane of wireless device 110 to allow for independent angle movement such that a form factor of wireless device 110 can maintain a flat back surface. In an aspect where wireless device 110 includes a non-movable camera and/or lens, wireless device 110 may prompt user 105 to adjust the angle at which she is holding wireless device 110 when the actual images (corresponding to the captured environmental signals) are determined to not accurately represent a normal view of the physical environment 100 as would be seen by user 105 if she were not focused on wireless device 110. For example, user 105 may be prompted when the actual images (represented by the captured environmental signals) are determined to include an image of only the sky or only the sidewalk.
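A simplified sketch of the angle compensation and prompting logic described above, assuming the device pitch is available from gyroscope/accelerometer feeds; the target forward-looking pitch, the sign convention, and the lens travel limit are assumptions for illustration.

```python
def camera_tilt_compensation(device_pitch_deg: float,
                             target_view_pitch_deg: float = 0.0) -> float:
    """Return the lens adjustment (in degrees, relative to the device) needed
    so the camera keeps a normal, forward view of the physical environment
    despite the angle at which the device is held."""
    return target_view_pitch_deg - device_pitch_deg

def should_prompt_user(device_pitch_deg: float,
                       movable_lens: bool,
                       max_lens_travel_deg: float = 25.0) -> bool:
    """For a non-movable lens, or a movable lens past its travel limit, the
    device would instead prompt the user to adjust how she holds it."""
    needed = abs(camera_tilt_compensation(device_pitch_deg))
    return (not movable_lens) or needed > max_lens_travel_deg

# Example: device held at 30 degrees relative to the ground.
# camera_tilt_compensation(30.0) -> -30.0 (tilt the lens back toward the horizon)
```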
In an aspect, any one of cameras 510, 520, 530, 540, 610, 620, and 630 may also use a reflective surface (e.g., mirror) that may be attached (temporarily, semi-permanently, or permanently) to wireless device 110 to adjust an angle of actual images represented by the environmental signals captured by the one or more cameras without adjusting the physical angle of the camera or its lens.
In an aspect, any one of cameras 510, 520, 530, 540, 610, 620, and 630 may capture environmental signals representing actual images such that the actual images have a lenticular effect. A lenticular effect is one whereby different images are magnified and/or shown when an image is viewed from slightly different angles. Common, non-limiting examples of images with a lenticular effect include images that convey an illusion of depth and images that appear to change or move as the image is viewed from different angles. In an aspect, any one of cameras 510, 520, 530, 540, 610, 620, and 630 may be configured to provide a lenticular or similar visual effect using a lenticular lens, a non-lenticular lens configured to rotate and/or adjust, a gyroscopic lens, and/or the like. In an aspect, a lenticular effect may be a feature of wireless device 110 that is configurable and/or adjustable based on a user preconfiguration or user-provided input. In another aspect, a lenticular effect may be a default, or manufacturer-set, feature of wireless device 110. In another aspect, instead of, or in addition to, cameras 510, 520, 530, 540, 610, 620, and/or 630 providing a lenticular or similar visual effect, a lenticular effect may be provided by screen 120. In such an aspect, a virtual visual effect may be produced by the angle of vision (e.g., the angle at which user 105 views screen 120) relative to the angle of wireless device 110.
In a non-limiting example, if user 105 is holding wireless device 110 at a particular angle (e.g., 30 degrees) relative to the ground, and a lenticular feature of wireless device 110 is activated, an opacity level of images (e.g., application rendering 210 and/or actual image 220) displayed on screen 120 may change as the angle at which user 105 holds wireless device 110 changes. For example, user 105 can slightly adjust the angle at which she is holding wireless device 110 (e.g., relative to the ground or the horizontal) from, for instance, 30 degrees to 32 degrees or to 28 degrees, in order to view different aspects of application rendering 210 and actual image 220 and/or to adjust the opacity level of application rendering 210 and/or actual image 220 as displayed on screen 120.
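One way to realize the virtual lenticular effect is to map small changes in the angle at which the device is held to changes in the opacity of the actual-image layer, as in the following sketch; the reference angle, sensitivity, and base opacity are illustrative assumptions.

```python
def opacity_from_tilt(device_angle_deg: float,
                      reference_angle_deg: float = 30.0,
                      sensitivity_per_deg: float = 0.05,
                      base_opacity: float = 0.4) -> float:
    """Small adjustments of the holding angle (e.g., 28, 30, 32 degrees)
    shift the opacity of actual image 220 relative to application rendering 210."""
    opacity = base_opacity + sensitivity_per_deg * (device_angle_deg - reference_angle_deg)
    return max(0.0, min(1.0, opacity))

# Example: opacity_from_tilt at 28, 30, and 32 degrees yields
# approximately 0.30, 0.40, and 0.50, respectively.
```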
In an aspect, wireless device 110 may be configured to include an image stabilization (IS) feature that is tuned to the cadence of user 105 as she walks. For example, if user 105 is walking while using wireless device 110, wireless device 110 may experience slight movement (e.g., bouncing up and down, side to side, and/or the like). More particularly, and for example, by using wireless device 110 while walking, user 105 may introduce instability to wireless device 110 and, as such, may cause any environmental signals captured by camera 115 to be received in an unstable manner, leading to blurry or otherwise less than useful actual images. In an aspect, an image stabilization feature may be configured to use the cadence and/or gait of user 105 to compensate for, and/or correct, any instability in capturing environmental signals such that the actual images may not be blurry or otherwise problematic. In an aspect, the image stabilization feature may be configured with a particular cadence and/or gait of user 105 and/or may learn the cadence and/or gait of user 105. In another aspect, the image stabilization feature may be configured with, or may learn, the cadences and/or gaits of multiple users of wireless device 110.
In an aspect, and for example, in order to compensate for any instability at wireless device 110, the image stabilization feature may be configured to move wireless device 110 and/or camera 115, in a horizontal, vertical, and/or lateral direction that is opposite from any destabilizing movement induced by the movement (e.g., walking) of user 105. In an aspect, the destabilizing movement may be compensated by the image stabilization feature in one, two, or three dimensions. In an aspect, the image stabilization feature may be performed on camera 115 (e.g., adjust an angle or rotation of camera 115 to compensate for the movement of user 105), a lens of camera 115 (e.g., adjust an angle or rotation of the lens to compensate for the movement of user 105), and/or during the rendering of any actual images represented by environmental signals captured by camera 115.
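The cadence-tuned image stabilization might, under a simple sinusoidal model of the walking bounce, apply a rendering-time offset that opposes the destabilizing movement; the cadence value, amplitude, and one-dimensional sinusoidal model below are assumptions (in practice the cadence and/or gait could be preconfigured or learned, and compensation could span up to three dimensions).

```python
import math

def stabilization_offset_px(t_seconds: float,
                            cadence_steps_per_min: float = 110.0,
                            bounce_amplitude_px: float = 6.0,
                            phase_rad: float = 0.0) -> float:
    """Vertical correction (in pixels) applied when rendering actual image 220
    to cancel the periodic bounce introduced by walking: the correction is the
    negative of the modeled bounce at time t."""
    step_hz = cadence_steps_per_min / 60.0
    bounce = bounce_amplitude_px * math.sin(2.0 * math.pi * step_hz * t_seconds + phase_rad)
    return -bounce
```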
Referring to
At 810, the method 800 includes receiving an environmental signal representing actual images from one or more cameras, wherein the one or more cameras are associated with a wireless device and the actual images are of a physical environment in proximity to a current location of the wireless device. For example, camera component 702 may be configured to receive environmental signals 721 representing actual images of physical environment 100, which is in proximity to a current location of wireless device 110, from camera 115. In an aspect, camera 115 may be associated with wireless device 110 (as described with respect to cameras 510, 520, 530, 540, 610, 620, and/or 630 of
In an aspect, camera component 702 may be configured to receive environmental signals 721 from one or more cameras mounted to wireless device 110 on a beveled edge (as shown, for example, in
In an aspect, camera component 702 may be configured to receive environmental signals 721 from one or more cameras mounted to wireless device 110 on one or more edges of the wireless device 110 that face forward relative to the screen 120 when wireless device 110 is in either a portrait or a landscape orientation (as shown, for example, in
At 820, the method 800 includes receiving an application signal representing application renderings associated with an application currently executing at the wireless device. For example, application component 704 may be configured to receive application signals 720 associated with an application currently executing at wireless device 110. The application may be executed by a processor (e.g., processor 904 of
At 830, the method 800 includes simultaneously rendering the actual images and the application renderings on a screen associated with the wireless device, wherein the actual images and the application renderings are rendered as ordered layers on the screen. For example, camera component 702 may be configured to provide actual image 220 (which may be generated by camera component 702 based on environmental signals 721) to rendering component 706. Similarly, application component 704 may be configured to provide application rendering 210 (which may be generated by application component 704 based on application signals 720) to rendering component 706. Rendering component 706 may be configured to receive actual image 220 and application rendering 210.
Rendering component 706 includes opacity level module 708 configured to determine a level of opacity to be used when rendering actual image 220 based on at least one of a default setting, a previously-set user input, a newly-provided user input, and information related to the environment. In an aspect, the information related to the environment may be a triggering event (e.g., a determination that a dangerous situation exists in physical environment 100) detected by triggering event detector 716, which provides an indication that an increased risk of danger exists if actual image 220 is not rendered on screen 120 with at least a certain level of opacity.
Rendering component 706 includes image mixing module 710 configured to mix actual image 220 and application rendering 210 in order to prepare for rendering one, or both, of actual image 220 and application rendering 210, to screen 120 of wireless device 110. Rendering component 706 may include display module 712 configured to receive information related to opacity level from opacity level module 708 and a mixed image from image mixing module 710 and, based thereon, simultaneously render, via user interface 714, the actual image 220 and the application rendering 210 on screen 120 associated with wireless device 110 such that actual image 220 and application rendering 210 are rendered as ordered layers on screen 120.
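The cooperation of opacity level module 708, image mixing module 710, and display module 712 could be sketched as a small pipeline; the class structure, method names, and precedence rules below are hypothetical and merely mirror the reference numerals used in this description.

```python
import numpy as np

class RenderingComponentSketch:
    """Hypothetical sketch of rendering component 706."""

    def __init__(self, default_opacity: float = 0.3):
        self.default_opacity = default_opacity

    def opacity_level(self, user_setting=None, triggering_event=False) -> float:
        # Opacity level module 708: a triggering event overrides a user
        # setting, which overrides the default (precedence is an assumption).
        if triggering_event:
            return 0.85
        if user_setting is not None:
            return float(user_setting)
        return self.default_opacity

    def mix(self, application_rendering: np.ndarray,
            actual_image: np.ndarray, opacity: float) -> np.ndarray:
        # Image mixing module 710: actual image 220 as the translucent top layer.
        mixed = (opacity * actual_image.astype(np.float32)
                 + (1.0 - opacity) * application_rendering.astype(np.float32))
        return mixed.astype(np.uint8)

    def render(self, application_rendering: np.ndarray, actual_image: np.ndarray,
               user_setting=None, triggering_event=False) -> np.ndarray:
        # Display module 712 would push this mixed frame to screen 120.
        opacity = self.opacity_level(user_setting, triggering_event)
        return self.mix(application_rendering, actual_image, opacity)

# Example:
# frame = RenderingComponentSketch().render(app_frame, camera_frame, triggering_event=True)
```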
In an aspect, rendering component 706 may be configured to render the actual image 220 as an ordered layer over a portion of the rendered application rendering 210. In an aspect, the portion is at least one of half of screen 120, less than half of screen 120, more than half but less than all of screen 120, a top portion of screen 120, a bottom portion of screen 120, a left portion of screen 120, a right portion of screen 120, and a center portion of screen 120. In an aspect, rendering component 706 may be configured to render actual image 220 with a level of opacity over the rendered application rendering 210 such that the ordered layers are ordered based on the level of opacity of the actual image 220. In an aspect, rendering component 706 may be configured to render actual image 220 and application rendering 210 as ordered layers on different parts of screen 120.
In an optional aspect (not shown), the method 800 may include providing multi-modal (e.g., visual, haptic, and/or audio) outputs to provide awareness of the physical environment to a user of the wireless device. For example, rendering component 706 may be configured to provide multi-modal outputs 725 to user interface 714 in order to provide an alternative, or additional, way for user 105 to be made aware of physical environment 100.
In an optional aspect (not shown), the method 800 may include determining that wireless device 110 is positioned at an angle that is within a range of angles related to providing the heads-down display and/or wireless device 110 is moving relative to a forward position of wireless device 110. For example, the heads-down display may be provided at wireless device 110 based on a determination that the wireless device 110 is being held by user 105 at an angle that may make it useful for the heads-down display feature to be enabled and/or that wireless device 110 is moving in a forward direction (e.g., user 105 is using wireless device 110 while walking down the street). In an aspect, triggering event detector 716 may be configured to make such a determination. In an aspect, and in response to the determination, rendering component 706 may be configured to simultaneously render the actual image 220 and the application renderings 210 based on the determination.
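The enablement determination in this optional aspect might reduce to a simple check of the holding angle and forward motion, as in the following non-limiting sketch; the angle range and speed threshold are illustrative assumptions.

```python
def heads_down_display_enabled(device_pitch_deg: float,
                               forward_speed_mps: float,
                               pitch_range_deg: tuple = (15.0, 60.0),
                               min_speed_mps: float = 0.3) -> bool:
    """Enable the heads-down display when the device is held within a range of
    angles suggesting the user is looking down at it and the device is moving
    forward (e.g., the user is walking while using the device)."""
    low, high = pitch_range_deg
    return (low <= device_pitch_deg <= high) and forward_speed_mps >= min_speed_mps
```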
In an aspect, camera component 702, camera adjustment module 703, application component 704, rendering component 706, opacity level module 708, image mixing module 710, display module 712, user interface 714, and/or triggering event detector 716 may be hardware components physically included within wireless device 110. In another aspect, camera component 702, camera adjustment module 703, application component 704, rendering component 706, opacity level module 708, image mixing module 710, display module 712, user interface 714, and/or triggering event detector 716 may be software components (e.g., software modules), such that the functionality described with respect to each of the components and modules may be performed by a specially-configured computer, processor (or group of processors), and/or a processing system (e.g., processor 904 of
Referring to
In this example, the processing system 914 may be implemented with a bus architecture, represented generally by the bus 902. The bus 902 may include any number of interconnecting buses and bridges depending on the specific application of the processing system 914 and the overall design constraints. The bus 902 links together various circuits including one or more processors, represented generally by the processor 904, and computer-readable media, represented generally by the computer-readable medium 906. The bus 902 may link camera component 702, camera adjustment module 703, application component 704, rendering component 706, opacity level module 708, image mixing module 710, display module 712, user interface 714, and triggering event detector 716. The bus 902 may also link various other circuits such as timing sources, peripherals, voltage regulators, and power management circuits, which are well known in the art and, therefore, will not be described any further. A bus interface 908 provides an interface between the bus 902 and a transceiver 910. The transceiver 910 provides a means for communicating with various other apparatus over a transmission medium. A user interface 912, which may be the same as or similar to user interface 714, may be a keypad, display, speaker, microphone, joystick, and/or the like.
The processor 904 is responsible for managing the bus 902 and general processing, including the execution of software stored on the computer-readable medium 906. The software, when executed by the processor 904, causes the processing system 914 to perform the various functions described herein for any particular apparatus. More particularly, and as described herein, camera component 702, camera adjustment module 703, application component 704, rendering component 706, opacity level module 708, image mixing module 710, display module 712, user interface 714, and triggering event detector 716 may be software components (e.g., software modules), such that the functionality described with respect to each of the components or modules may be performed by processor 904.
The computer-readable medium 906 may also be used for storing data that is manipulated by the processor 904 when executing software, such as, for example, software modules represented by camera component 702, camera adjustment module 703, application component 704, rendering component 706, opacity level module 708, image mixing module 710, display module 712, user interface 714, and triggering event detector 716. In one example, the software modules (e.g., any algorithms or functions that may be executed by processor 904 to perform the described functionality) and/or data used therewith (e.g., inputs, parameters, variables, and/or the like) may be retrieved from computer-readable medium 906.
More particularly, the processing system further includes at least one of camera component 702, camera adjustment module 703, application component 704, rendering component 706, opacity level module 708, image mixing module 710, display module 712, user interface 714, and triggering event detector 716. The components and modules may be software modules running in the processor 904, resident and/or stored in the computer-readable medium 906, one or more hardware modules coupled to the processor 904, or some combination thereof.
As used in this application, the terms “component,” “module,” “system” and the like are intended to include a computer-related entity, such as but not limited to hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets, such as data from one component interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems by way of the signal.
Furthermore, various aspects are described herein in connection with a terminal, which can be a wired terminal or a wireless terminal. A terminal can also be called a system, device, subscriber unit, subscriber station, mobile station, mobile, mobile device, remote station, remote terminal, access terminal, user terminal, terminal, communication device, user agent, user device, or user equipment (UE). A wireless terminal may be a cellular telephone, a satellite phone, a cordless telephone, a Session Initiation Protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device having wireless connection capability, a computing device, or other processing devices connected to a wireless modem. Moreover, various aspects are described herein in connection with a base station. A base station may be utilized for communicating with wireless terminal(s) and may also be referred to as an access point, a Node B, or some other terminology.
Moreover, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from the context, the phrase “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, the phrase “X employs A or B” is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from the context to be directed to a singular form.
The techniques described herein may be used for various wireless communication systems such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, TD-SCDMA, LTE, and other systems. The terms “system” and “network” are often used interchangeably. A CDMA system may implement a radio technology such as Universal Terrestrial Radio Access (UTRA), cdma2000, etc. UTRA includes Wideband-CDMA (W-CDMA) and other variants of CDMA. Further, cdma2000 covers IS-2000, IS-95 and IS-856 standards. A TDMA system may implement a radio technology such as Global System for Mobile Communications (GSM). An OFDMA system may implement a radio technology such as Evolved UTRA (E-UTRA), Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, etc. UTRA and E-UTRA are part of Universal Mobile Telecommunication System (UMTS). 3GPP Long Term Evolution (LTE) is a release of UMTS that uses E-UTRA, which employs OFDMA on the downlink and SC-FDMA on the uplink. UTRA, E-UTRA, UMTS, LTE and GSM are described in documents from an organization named “3rd Generation Partnership Project” (3GPP). Additionally, cdma2000 and UMB are described in documents from an organization named “3rd Generation Partnership Project 2” (3GPP2). Further, such wireless communication systems may additionally include peer-to-peer (e.g., mobile-to-mobile) ad hoc network systems often using unpaired unlicensed spectrums, 802.xx wireless LAN, BLUETOOTH and any other short- or long-range, wireless communication techniques.
Various aspects or features will be presented in terms of systems that may include a number of devices, components, modules, and the like. It is to be understood and appreciated that the various systems may include additional devices, components, modules, etc. and/or may not include all of the devices, components, modules etc. discussed in connection with the figures. A combination of these approaches may also be used.
The various illustrative logics, logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Additionally, at least one processor may comprise one or more modules operable to perform one or more of the steps and/or actions described above.
Further, the steps and/or actions of a method or algorithm described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A storage medium may be coupled to the processor, such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. Further, in some aspects, the processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal. Additionally, in some aspects, the steps and/or actions of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a machine readable medium and/or computer readable medium, which may be incorporated into a computer program product.
In one or more aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection may be termed a computer-readable medium. For example, if software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs usually reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
While the foregoing disclosure discusses illustrative aspects and/or embodiments, it should be noted that various changes and modifications could be made herein without departing from the scope of the described aspects and/or embodiments as defined by the appended claims. Furthermore, although elements of the described aspects and/or embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated. Additionally, all or a portion of any aspect and/or embodiment may be utilized with all or a portion of any other aspect and/or embodiment, unless stated otherwise.