With the rapid development of mobile devices equipped with high-speed network access, e.g., smartphones and tablets, mobile users enjoy an unprecedented, rich user experience with an increasing number of applications. Examples of such experiences include gaming, video creation, personal health management, audio capture and processing, etc. However, the mobile user experience is still limited, compared with higher-end desktops and laptops, due to the following factors, among others: hardware limitations in terms of CPU (central processing unit) computation power and memory capacity, limited battery life, and potentially high communication cost.
With the rapid development of cloud computing and high-speed wireless technologies, it has become feasible to offload computing to cloud infrastructure servers, e.g., remote cloud servers such as Amazon EC2® (Elastic Compute Cloud) or local cloud servers such as nearby desktops. Recent research has proposed implementation approaches to offload certain mobile applications to remote servers. For example, the offloading inference engine (OLIE) makes intelligent offloading decisions. OLIE proposes a dynamic offloading engine to overcome the memory resource constraints of local mobile devices. In “MAUI: Making Smartphones Last Longer with Code Offload”, Eduardo Cuervo, et al. (2010), code execution is offloaded using the Microsoft .NET Common Language Runtime (CLR) to remote servers to reduce energy consumption. However, progress is needed in the area of making optimal decisions based on comprehensive runtime dynamic information.
In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.
Smartphones or tablets with network access and multiple sensors that run various applications are becoming more and more popular. Many applications that provide rich user experiences demand high computing capabilities, e.g., fast processor speed and large memory size, and remain power-hungry, which negatively impacts battery life.
Embodiments provide optimized performance, energy, user experience and cost through cloud aware computing distribution by systematically evaluating the dynamic situations or conditions, e.g., device, server, and network conditions, and by making optimal decisions on the computing distribution between local devices and remote servers. By making decisions based on the systematic evaluations of the dynamic conditions, an optimal mobile user experience may be achieved by taking advantage of the rapidly developing and widely available cloud computing technologies.
The increase in cloud computing, high-end processors and platforms, and high-speed wireless technologies further drives efforts to offload processing to more powerful computing platforms. It is natural to take advantage of this trend and offload certain computing tasks from small mobile devices to backend cloud servers to improve performance and energy efficiency.
Referring again to
The network conditions monitor 230 identifies many decision impact factors 270. In
Regarding energy 272, the energy consumed by communication varies vastly with different network interfaces and channel conditions. Measurement and literature studies show that there is a potential 10× energy difference between Wi-Fi and 3G interfaces for data transmission. Even with the same interface, e.g., Wi-Fi, the energy for transmitting the same amount of data shows up to a 5× difference due to different channel conditions.
Thus, the total energy impact of offloading may be determined by comparing the local energy saved by offloading the computing task against the additional communication energy consumed by uploading the offloading-related data to, and downloading it from, the remote servers.
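For illustration only, the energy comparison described above may be sketched as follows. The per-byte and per-cycle energy figures below are hypothetical assumptions chosen to reflect the roughly 10× Wi-Fi/3G gap noted above; they are not measured values from this disclosure.

```python
# Illustrative energy cost per transmitted byte (joules per byte); the ~10x
# gap between Wi-Fi and 3G is an assumption reflecting the difference above.
ENERGY_PER_BYTE = {"wifi": 1e-7, "3g": 1e-6}

def offload_energy_gain(cycles_saved, bytes_up, bytes_down,
                        interface="wifi", energy_per_cycle=1e-9):
    """Return E = Ecompute - Ecomm: positive means offloading saves energy.

    cycles_saved     -- local CPU cycles avoided by offloading the task
    bytes_up/down    -- offloading-related data moved to/from the server
    energy_per_cycle -- assumed local compute energy per cycle (joules)
    """
    e_compute = cycles_saved * energy_per_cycle               # local energy saved
    e_comm = (bytes_up + bytes_down) * ENERGY_PER_BYTE[interface]  # comm energy spent
    return e_compute - e_comm
```

With these assumed constants, the same task that saves energy over Wi-Fi can cost energy over 3G, which is the tradeoff the comparison above captures.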
Referring to
User preference 276 may influence where the job is executed. Different users may want to execute the job locally or remotely. For example, a user may want a certain application to be always executed at the local mobile device 220, or at an in-country server, e.g., server 777, for security reasons.
Monetary costs 278 are also considered when making an offloading decision. For example, if only a 3G interface is available and the user is about to exceed the data plan limit, the cost of offloading will be much higher than when free Wi-Fi™ is available.
The dynamic profiler 210 collects the raw information, i.e., the decision impact factors 270, and converts them to corresponding parameters that may be used as input for the runtime offload decision making logic. The energy gain factor E is calculated as E=Ecompute−Ecomm, where Ecompute is the energy saved by offloading, and Ecomm is the extra energy consumed for communication, considering the network condition and the amount of data that needs to be moved. The performance gain factor P is calculated as P=Pcompute−Pcomm, where Pcompute is the performance speedup from running the application on a faster server, and Pcomm is the performance loss, e.g., the extra time used for communication. The user preference, U, is gathered from the user. The monetary cost, C, is calculated by considering the network interface and the server usage cost (if relevant). For example, if free Wi-Fi™ is available and the server usage is free, then C=0.
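The conversion performed by the dynamic profiler 210 may be sketched as follows. The `Profile` container and `build_profile` helper are hypothetical names introduced for illustration; the E, P, U and C definitions follow the formulas above.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Parameters produced by the dynamic profiler (symbols from the text)."""
    E: float  # energy gain factor, Ecompute - Ecomm
    P: float  # performance gain factor, Pcompute - Pcomm
    U: bool   # user preference: True if the user allows remote execution
    C: float  # monetary cost of offloading

def build_profile(e_compute, e_comm, p_compute, p_comm,
                  user_allows_remote, interface_cost, server_cost):
    # E = Ecompute - Ecomm and P = Pcompute - Pcomm, as defined above;
    # C sums the network interface and server usage costs (if relevant).
    return Profile(E=e_compute - e_comm,
                   P=p_compute - p_comm,
                   U=user_allows_remote,
                   C=interface_cost + server_cost)
```

In the free Wi-Fi™ case described above, both cost inputs are zero and the profile carries C=0 into the decision making logic.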
The runtime offload decision making logic 240 implements the policy engine that takes the runtime information and makes a final offloading decision. Details of the policies are described herein below. It is worth noting that although in
The client interface 260 and the server interface 262 are to provide processing of data communicated between the mobile device 220 and the server 222 to enable offloading the execution of tasks from the mobile device 220. Once a decision is made, if it is an offloading decision, the applications 250, 252 work with the client interface 260 and the server interface 262 to offload the computing to the cloud. Server 222 includes at least one application for supporting the offloading of computing from the mobile device. Many solutions are available for the implementation of the interface, e.g., client/server proxies.
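One possible client/server proxy arrangement may be sketched as follows, assuming a simple socket transport and a hypothetical task registry; this is a minimal illustration of the interface pair, not the implementation of interfaces 260 and 262.

```python
import pickle
import socket
import threading

# Hypothetical registry of offloadable tasks the server application supports.
TASKS = {"square": lambda x: x * x}

def serve_once(host="127.0.0.1", port=0):
    """Minimal server interface: accept one offloaded task, run it, reply."""
    srv = socket.socket()
    srv.bind((host, port))
    srv.listen(1)
    port = srv.getsockname()[1]  # actual port chosen by the OS

    def handle():
        conn, _ = srv.accept()
        with conn:
            name, arg = pickle.loads(conn.recv(4096))      # receive the task
            conn.sendall(pickle.dumps(TASKS[name](arg)))   # execute, return result
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return port

def offload(port, name, arg, host="127.0.0.1"):
    """Minimal client interface: ship a task to the server and await the result."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(pickle.dumps((name, arg)))
        return pickle.loads(conn.recv(4096))
```

A production proxy would add authentication, framing for large payloads, and a safer serialization format than pickle; the sketch only shows the division of labor between the two interfaces.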
However, those skilled in the art will recognize that there may be multiple policies that may be applied to determine the final offloading action. For example, a power saving policy considers only energy saving, or gives more weight to the energy saving aspect; in other words, it may give more weight to the energy factor E. A performance policy puts more emphasis on the performance improvement. A cost-effective policy puts more emphasis on the cost of offloading. With a balanced policy, the decision making logic tries to balance the energy, performance and cost factors.
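The policy engine described above may be sketched as a weighted combination of the profiler's E, P and C parameters. The weight values below are illustrative assumptions, not figures from this disclosure; any weighting consistent with the policy intent would serve.

```python
# Illustrative policy weights; values are assumptions for the sketch only.
POLICIES = {
    "power_saving": {"wE": 0.8, "wP": 0.1, "wC": 0.1},
    "performance":  {"wE": 0.1, "wP": 0.8, "wC": 0.1},
    "cost":         {"wE": 0.1, "wP": 0.1, "wC": 0.8},
    "balanced":     {"wE": 1/3, "wP": 1/3, "wC": 1/3},
}

def should_offload(E, P, C, user_allows_remote, policy="balanced"):
    """Offload when the weighted score of E, P and C is positive."""
    if not user_allows_remote:      # user preference U overrides the score
        return False
    w = POLICIES[policy]
    # Higher E and P favor offloading; monetary cost C argues against it.
    score = w["wE"] * E + w["wP"] * P - w["wC"] * C
    return score > 0
```

Under the power saving weights, a task with a negative energy gain is kept local even when it would run faster remotely, which is the intended bias of that policy.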
Accordingly, many applications may potentially benefit from the intelligent cloud aware computing distribution, including image processing, such as facial and object recognition; audio processing, including speech and audio content recognition; and security, including taint analysis and virus scans.
Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules are tangible entities (e.g., hardware) capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a machine readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
Accordingly, the term “module” is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
Machine (e.g., computer system) 600 may include a hardware processor 602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof), a main memory 604 and a static memory 606, some or all of which may communicate with each other via an interlink (e.g., bus) 608. The machine 600 may further include a display unit 610, an alphanumeric input device 612 (e.g., a keyboard), and a user interface (UI) navigation device 614 (e.g., a mouse). In an example, the display unit 610, input device 612 and UI navigation device 614 may be a touch screen display. The machine 600 may additionally include a storage device (e.g., drive unit) 616, a signal generation device 618 (e.g., a speaker), a network interface device 620, and one or more sensors 621, such as a global positioning system (GPS) sensor, compass, accelerometer, or other sensor. The machine 600 may include an output controller 628, such as a serial (e.g., universal serial bus (USB)) or other wired or wireless (e.g., infrared (IR)) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).
The storage device 616 may include at least one machine readable medium 622 on which is stored one or more sets of data structures or instructions 624 (e.g., software) embodying or utilized by any one or more of the techniques or functions described herein. The instructions 624 may also reside, completely or at least partially, within the main memory 604, within static memory 606, or within the hardware processor 602 during execution thereof by the machine 600. In an example, one or any combination of the hardware processor 602, the main memory 604, the static memory 606, or the storage device 616 may constitute machine readable media.
While the machine readable medium 622 is illustrated as a single medium, the term “machine readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that are configured to store the one or more instructions 624.
The term “machine readable medium” may include any medium that is capable of storing, encoding, or carrying instructions for execution by the machine 600 and that cause the machine 600 to perform any one or more of the techniques of the present disclosure, or that is capable of storing, encoding or carrying data structures used by or associated with such instructions. Non-limiting machine readable medium examples may include solid-state memories, and optical and magnetic media. In an example, a massed machine readable medium comprises a machine readable medium with a plurality of particles having resting mass. Specific examples of massed machine readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
The instructions 624 may further be transmitted or received over a communications network 626 using a transmission medium via the network interface device 620 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., channel access methods including Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), and Orthogonal Frequency Division Multiple Access (OFDMA), and cellular networks such as Global System for Mobile Communications (GSM), Universal Mobile Telecommunications System (UMTS), CDMA 2000 1x standards and Long Term Evolution (LTE)), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Institute of Electrical and Electronics Engineers (IEEE) 802 family of standards including IEEE 802.11 standards (Wi-Fi®), IEEE 802.16 standards (WiMax®) and others), peer-to-peer (P2P) networks, or other protocols now known or later developed.
For example, the network interface device 620 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the communications network 626. In an example, the network interface device 620 may include a plurality of antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine 600, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
The behavior of devices running certain computation-intensive workloads is improved. Execution is intelligently distributed based on runtime dynamics, such as network conditions, available server resources, etc. Mobile devices gather runtime information and user preferences to make intelligent decisions on the computing distribution. Multiple impacting factors are processed, and optimal decisions for performance, energy and cost are made collectively. Thus, energy, performance and user experience are also significantly improved.
Example 1 includes subject matter (such as a device, apparatus or architecture for providing cloud aware computing distribution), comprising a network conditions monitor for observing and for identifying decision impact factors of tasks in a runtime environment, a dynamic profiler, coupled to the network conditions monitor, for receiving runtime information regarding the decision impact factors identified by the network conditions monitor and for producing a profile based on the decision impact factors, runtime offload decision making logic, coupled to the dynamic profiler, for processing the profile produced by the dynamic profiler based on the received decision impact factors according to a predetermined policy and determining final offloading decisions based on the predetermined policy and the processed decision impact factors, wherein the runtime offload decision making logic is to provide the final offloading decisions to the applications on the device for executing the tasks locally or remotely based on the determined final offloading decision.
Example 2 may optionally include the subject matter of Example 1, wherein the dynamic profiler is to convert the received decision impact factors to parameters used as input to runtime offload decision making logic.
Example 3 may optionally include the subject matter of any one or more of Examples 1 and 2, wherein the dynamic profiler is to continuously monitor and collect comprehensive runtime information to produce a profile and the runtime offload decision making logic is to make optimal offloading decision based on multiple considerations associated with the profile.
Example 4 may optionally include the subject matter of any one or more of Examples 1-3, wherein the network conditions monitor is to observe network availability and channel conditions and identify energy impact factors, performance impact factors, user preference impact factors and cost impact factors.
Example 5 may optionally include the subject matter of any one or more of Examples 1-4, wherein the runtime offload decision making logic is to consider a subset of the decision impact factors provided in the profile according to the predetermined policy.
Example 6 may optionally include the subject matter of any one or more of Examples 1-5, wherein the decision impact factors are associated with network availability and channel conditions.
Example 7 may optionally include the subject matter of any one or more of Examples 1-6, wherein the architecture further includes a client interface for communicating with a server interface at the remote cloud server to offload a task by moving the execution of the task from the local device to the remote server.
Example 8 may optionally include the subject matter of any one or more of Examples 1-7, wherein the runtime offload decision making logic is disposed at the mobile device.
Example 9 may optionally include the subject matter of any one or more of Examples 1-8, wherein the runtime offload decision making logic is disposed at the remote cloud server.
Example 10 may optionally include the subject matter of any one or more of Examples 1-9, wherein the dynamic profiler is to process the runtime information by determining a cost and a benefit of executing tasks locally and at a remote cloud server.
Example 11 may include, or may optionally be combined with the subject matter of any one or more of Examples 1-10 to include, subject matter (such as a method or means for performing acts) including starting an application, obtaining an action for the application preferred by a user, determining whether the user prefers local execution, gathering runtime information for a task when the user is determined to prefer remote execution, obtaining the preferred policy and a decided weight on the runtime information based on the preferred policy, calculating a final combination of weights for the runtime information and executing the offloading of the task based on the calculated final combination of weights for the runtime information.
Example 12 may optionally be combined with the subject matter of any one or more of Examples 1-11 to include, wherein the runtime information comprises energy impact factors, performance impact factors, user preference impact factors and cost impact factors.
Example 13 may optionally be combined with the subject matter of any one or more of Examples 1-12 to include, executing the process locally when the user is determined to prefer local execution.
Example 14 may optionally be combined with the subject matter of any one or more of Examples 1-13 to include, continuously monitoring and collecting comprehensive runtime information to produce a profile and making an optimal offloading decision based on multiple considerations associated with the profile.
Example 15 may optionally be combined with the subject matter of any one or more of Examples 1-14 to include, wherein the gathering runtime information comprises observing network availability and channel conditions.
Example 16 may optionally be combined with the subject matter of any one or more of Examples 1-15 to include, wherein the executing the offloading of the task further comprises considering only a subset of the runtime information according to the preferred policy.
Example 17 may optionally be combined with the subject matter of any one or more of Examples 1-16 to include, wherein the calculating a final combination of weights for the runtime information comprises determining a cost and a benefit of executing tasks locally and at a remote cloud server.
Example 18 may include, or may optionally be combined with the subject matter of any one or more of Examples 1-17 to include, subject matter (such as means for performing acts or machine readable medium including instructions that, when executed by the machine, cause the machine to perform acts) including starting an application, obtaining an action for the application preferred by a user, determining whether the user prefers local execution, gathering runtime information for a task when the user is determined to prefer remote execution, obtaining the preferred policy and a decided weight on the runtime information based on the preferred policy, calculating a final combination of weights for the runtime information and executing the offloading of the task based on the calculated final combination of weights for the runtime information.
Example 19 may optionally be combined with the subject matter of any one or more of Examples 1-18 to include, wherein the runtime information comprises energy impact factors, performance impact factors, user preference impact factors and cost impact factors.
Example 20 may optionally be combined with the subject matter of any one or more of Examples 1-19 to include, executing the process locally when the user is determined to prefer local execution.
Example 21 may optionally be combined with the subject matter of any one or more of Examples 1-20 to include, continuously monitoring and collecting comprehensive runtime information to produce a profile and making an optimal offloading decision based on multiple considerations associated with the profile.
Example 22 may optionally be combined with the subject matter of any one or more of Examples 1-21 to include, wherein the gathering runtime information comprises observing network availability and channel conditions.
Example 23 may optionally be combined with the subject matter of any one or more of Examples 1-22 to include, wherein the executing the offloading of the task further comprises considering only a subset of the runtime information according to the preferred policy.
Example 24 may optionally be combined with the subject matter of any one or more of Examples 1-23 to include, wherein the calculating a final combination of weights for the runtime information comprises determining a cost and a benefit of executing tasks locally and at a remote cloud server.
Example 25 may include, or may optionally be combined with the subject matter of any one or more of Examples 1-24 to include, subject matter (such as a system for providing cloud aware computing distribution) including a mobile device coupled to a server through a network, wherein the mobile device comprises a network conditions monitor for observing and for identifying decision impact factors of tasks in a runtime environment, a dynamic profiler, coupled to the network conditions monitor, for receiving runtime information regarding the decision impact factors identified by the network conditions monitor and for producing a profile based on the decision impact factors, runtime offload decision making logic, coupled to the dynamic profiler, for processing the profile produced by the dynamic profiler based on the received decision impact factors according to a predetermined policy and determining final offloading decisions based on the predetermined policy and the processed decision impact factors, wherein the runtime offload decision making logic is to provide the final offloading decisions to the applications on the device for executing the tasks locally at the mobile device or remotely at the server based on the determined final offloading decision, and wherein the server comprises at least one application for executing the at least one task offloaded from the mobile device and a server interface for processing data associated with the at least one task communicated between the mobile device and the server.
Example 26 may optionally be combined with the subject matter of any one or more of Examples 1-25 to include, wherein the dynamic profiler is further to continuously monitor and collect comprehensive runtime information to produce a profile and to convert the received decision impact factors to parameters used as input to the runtime offload decision making logic, and the dynamic profiler is to further process the runtime information by determining a cost and a benefit of executing tasks locally and at a remote cloud server.
Example 27 may optionally be combined with the subject matter of any one or more of Examples 1-26 to include, wherein the runtime offload decision making logic is to further make optimal offloading decision based on multiple considerations associated with the profile including considering a subset of the decision impact factors provided in the profile according to the predetermined policy.
Example 28 may optionally be combined with the subject matter of any one or more of Examples 1-27 to include, wherein the network conditions monitor is to observe network availability and channel conditions and to identify energy impact factors, performance impact factors, user preference impact factors and cost impact factors.
Example 29 may optionally be combined with the subject matter of any one or more of Examples 1-28 to include, wherein the decision impact factors are associated with network availability and channel conditions.
Example 30 may optionally be combined with the subject matter of any one or more of Examples 1-29 to include, wherein the architecture further includes a client interface for communicating with a server interface at the remote cloud server to offload a task by moving the execution of the task from the local device to the remote server.
The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the subject matter may be practiced. These embodiments are also referred to herein as “examples.” Such examples may include elements in addition to those shown or described. However, the present inventors also contemplate examples in which only those elements shown or described are provided. Moreover, the present inventors also contemplate examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended, that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.
The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is to allow the reader to quickly ascertain the nature of the technical disclosure, for example, to comply with 37 C.F.R. §1.72(b) in the United States of America. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment. The scope of the embodiments may be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.