The innovation is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject innovation. It may be evident, however, that the innovation can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the innovation.
As used in this application, the terms “component” and “system” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.
As used herein, the terms “infer” and “inference” refer generally to the process of reasoning about or inferring states of the system, environment, user, and/or intent from a set of observations as captured via events and/or data. Captured data and events can include user data, device data, environment data, sensor data, application data, implicit and explicit data, etc. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic, that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
Referring initially to the drawings,
The novel activity-centric features of the subject innovation can make interaction with computers more natural and effective than conventional computing systems. In other words, the subject innovation can build the notion of activity into aspects of the computing experience, thereby providing support for translating “real world” activities into computing mechanisms. The system 100 can automatically and/or dynamically identify steps, resources and application functionality associated with a particular “real world” activity. The novel features of the innovation can alleviate the need for a user to pre-assemble activities manually using existing mechanisms. Effectively, the subject system 100 can make the “activity” a focal point to drastically enhance the computing experience.
As mentioned above, the activity-centric concepts of the subject system 100 are directed to new techniques of interaction with computers. Generally, the activity-centric functionality of system 100 refers to a set of infrastructure that initially allows a user to tell the computer (or the computer to determine or infer) what activity the user is working on—in response, the computer can keep track of, monitor, and make available resources based upon the activity. Additionally, as the resources are utilized, the system 100 can monitor the particular resources accessed, people interacted with, websites visited, web-services interacted with, etc. This information can be employed in an ongoing manner, thus adding value through tracking these resources. It is to be understood that resources can include, but are not limited to, documents, data files, contacts, emails, web-pages, web-links, applications, web-services, databases, images, help content, etc.
At 202, an activity of a user can be determined. As will be described in greater detail below, the activity can be explicitly determined by a user. Similarly, the user can schedule activities for future use. Still further, the system can determine and/or infer the activity based upon user actions and other information.
By way of example, the system can monitor a user's current actions thereafter comparing the actions to historical data to determine or assess the current user activity. As well, the system can employ a user's context (e.g., state, location, etc.) and other information (e.g., calendar, personal information management (PIM) data) thereafter inferring a current and/or future activity.
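By way of non-limiting illustration, the following minimal Python sketch shows one possible way current actions could be compared against historical data to assess a current activity; the activity names, resources, and overlap heuristic are hypothetical and do not form part of the claimed subject matter.

```python
# Hypothetical historical profiles: activity name -> resources typically touched.
historical_profiles = {
    "write_status_report": {"report_template.docx", "timesheet.xlsx", "email:manager"},
    "plan_budget": {"budget.xlsx", "finance_portal", "email:accounting"},
}

def infer_activity(recent_actions, profiles=historical_profiles):
    """Score each known activity by its overlap with recently touched resources."""
    recent = set(recent_actions)
    scores = {name: len(recent & resources) for name, resources in profiles.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# Example: the user has just opened a timesheet and emailed a manager.
print(infer_activity(["timesheet.xlsx", "email:manager"]))  # -> "write_status_report"
```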
Once the activity is determined, at 204, the system can identify components associated with the particular activity. For example, the components can include, but are not limited to, application functionalities and the like. Correspondingly, additional component-associated resources can be identified at 206. For instance, the system can identify files that can be used with a particular activity component.
At 208, the UI can be adapted and rendered to a user. Effectively, the innovation can evaluate the gathered activity components and resources thereafter determining and adapting the UI accordingly. It is a novel feature of the innovation to dynamically adapt the UI based upon the activity as well as surrounding information. In another example, the system can consider the devices being used together with the activity being conducted in order to dynamically adapt the UI.
Context factors can be determined at 304. By way of example, a user's physical/mental state, location, state within an application or activity, etc. can be determined at 304. As well, a device context can be determined at 304. For example, context factors related to currently employed user devices and/or available devices can be determined.
In accordance with the gathered contextual data, UI components can be retrieved at 306 and rendered at 308. In accordance with the novel aspects of the innovation, the UI can be modified and/or tailored with respect to a particular activity. As well, the UI can be tailored to other factors, e.g., device type, user location, user state, etc. in addition to the particular activity.
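The following minimal Python sketch illustrates, under hypothetical data, one possible way UI components could be retrieved (e.g., at 306) based upon activity and device context and then rendered (e.g., at 308); the registry and component names are illustrative only.

```python
# Hypothetical registry mapping (activity, device type) to UI components.
ui_components = {
    ("letter_writing", "desktop"): ["editor", "spell_check", "print_preview"],
    ("letter_writing", "phone"): ["editor"],
    ("budget_review", "desktop"): ["spreadsheet", "chart_panel"],
}

def retrieve_and_render(activity, device_type):
    """Retrieve UI components for the gathered context, then render each one."""
    components = ui_components.get((activity, device_type), ["editor"])
    for component in components:
        print(f"render {component}")  # stand-in for an actual rendering call
    return components

retrieve_and_render("letter_writing", "phone")
```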
With reference to
Turning now to
The novel activity-centric system 500 can enable users to define and organize their work, operations and/or actions into units called “activities.” Accordingly, the system 500 offers a user experience centered on those activities, rather than pivoted based upon the applications and files of traditional systems. The activity-centric system 500 can also include a logging capability, which logs the user's actions for later use.
In accordance with the innovation, an activity typically includes or links to all the resources needed to perform the activity, including tasks, files, applications, web pages, people, email, and appointments. Some of the benefits of the activity-centric system 500 include easier navigation and management of resources within an activity, easier switching between activities, procedure knowledge capture and reuse, improved management of activities and people, and improved coordination among team members and between teams.
As described herein and illustrated in
The “activity logging” component 502 can log the user's actions on a device to a local (or remote) data store. By way of example, these actions can include, but are not limited to, user interactions (for example, keyboard, mouse, and touch input), resources opened, files changed, application actions, etc. As well, the activity logging component 502 can also log the current activity and other related information such as additional context data (e.g., user emotional/mental state, date, activity priority (e.g., high, medium, low), deadlines, etc.). This data can be transferred to a server that holds the user's aggregated log information from all devices used. The logged data can later be used by the activity system in a variety of ways.
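As a non-limiting illustration of one possible form such logged data could take, the following Python sketch defines a hypothetical log entry and appends it to a local store; the field names and file format are assumptions and not part of the claimed subject matter.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class LogEntry:
    """One logged user action together with related activity context."""
    timestamp: float
    device: str
    activity: str
    action: str            # e.g., "file_changed", "resource_opened"
    resource: str
    priority: str = "medium"
    user_state: str = "neutral"

def log_action(entry: LogEntry, path: str = "activity_log.jsonl") -> None:
    """Append the entry to a local store; a server could later aggregate these files."""
    with open(path, "a", encoding="utf-8") as store:
        store.write(json.dumps(asdict(entry)) + "\n")

log_action(LogEntry(time.time(), "laptop-01", "write_status_report",
                    "resource_opened", "report_template.docx", priority="high"))
```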
The “activity roaming” component 504 is responsible for storing each of the user's activities, including related resources and the “state” of open applications, on a server and making them available to the device(s) that the user is currently using. As well, the resources can be made available for use on devices that the user will use in the future or has used in the past. The activity roaming component 504 can accept activity data updates from devices and synchronize and/or reconcile them with the server data.
The “activity boot-strapping” component 506 can define the schema of an activity. In other words, the activity boot-strapping component 506 can define the types of items an activity can contain. As well, the component 506 can define how activity templates can be manually designed and authored. Further, the component 506 can support the automatic generation and tuning of templates and allow users to start new activities using templates. Moreover, the component 506 is also responsible for template subscriptions, where changes to a template are replicated among all activities using that template.
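By way of non-limiting illustration, the following Python sketch suggests one possible shape for an activity template schema and for starting a new activity from a template; the types and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActivityTemplate:
    """Hypothetical template schema: the item types an activity may contain."""
    name: str
    item_types: List[str] = field(default_factory=list)   # e.g., "task", "file", "contact"
    default_items: List[str] = field(default_factory=list)

@dataclass
class Activity:
    name: str
    template: ActivityTemplate
    items: List[str] = field(default_factory=list)

def start_activity_from_template(name: str, template: ActivityTemplate) -> Activity:
    """Start a new activity pre-populated from the template's default items."""
    return Activity(name=name, template=template, items=list(template.default_items))

letter = ActivityTemplate("letter_writing",
                          item_types=["task", "file", "contact"],
                          default_items=["letter_template.docx"])
print(start_activity_from_template("letter to landlord", letter))
```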
The “user feedback” component 508 can use information from the activity log to provide the user with feedback on his activity progress. The feedback can be based upon comparing the user's current progress to a variety of sources, including previous performances of this or similar activities (using past activity log data) as well as to “standard” performance data published within related activity templates.
The “monitoring group activities” component 510 can use the log data and user profiles from one or more groups of users for a variety of benefits, including, but not limited to, finding experts in specific knowledge areas or activities, finding users that are having problems completing their activities, identifying activity dependencies and associated problems, and enhanced coordination of work among users through increased peer activity awareness.
The “environment management” component 512 can be responsible for knowing where the user is, the devices that are physically close to the user (and their capabilities), user state (e.g., driving a car, alone versus in the company of another), and helping the user select the devices used for the current activity. The component 512 is also responsible for knowing which remote devices might be appropriate to use with the current activity (e.g., for processing needs or printing).
The “workflow management” component 514 can be responsible for management, transfer and collaboration of work items that involve other users, devices and/or asynchronous services. The assignment/transfer/collaboration of work items can be ad-hoc, for example, when a user decides to mail a document to another user for review. Alternatively, the assignment/transfer of work items can be structured, for example, where the transfer of work is governed by a set of pre-authored rules. In addition, the workflow manager 514 can maintain an “activity state” for workflow-capable activities. This state can describe the status of each item in the activity, for example, who or what it is assigned to, where the latest version of the item is, etc.
The “UI adaptation” component 516 can support changing the “shape” of the user's desktop and applications according to the current activity, the available devices, and the user's skills, knowledge, preferences, policies, and various other factors. The contents and appearance of the user's desktop, for example, the applications, resources, windows, and gadgets that are shown, can be controlled by associated information within the current activity. Additionally, applications can query the current activity, the current “step” within the activity, and other user and environment factors, to change their shape and expose or hide specific controls, editors, menus, and other interface elements that comprise the application's user experience.
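The following minimal Python sketch illustrates, with hypothetical activities and control names, one possible way an application could query the current activity and step in order to expose or hide interface elements; it is illustrative only and not a definitive implementation of component 516.

```python
# Hypothetical description of which interface elements each activity step exposes.
visible_controls = {
    ("presentation", "review"): ["comments_pane", "next_slide", "previous_slide"],
    ("presentation", "create"): ["slide_editor", "insert_media", "layout_gallery"],
}

def shape_application(activity: str, step: str):
    """Expose only the controls associated with the current activity and step."""
    controls = visible_controls.get((activity, step), [])
    hidden = {c for pair in visible_controls.values() for c in pair} - set(controls)
    return {"show": controls, "hide": sorted(hidden)}

print(shape_application("presentation", "review"))
```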
The “activity-centric recognition” component or “activity-centric natural language processing (NLP)” component 518 can expose information about the current activity, as well as user profile and environment information in order to supply context in a standardized format that can help improve the recognition performance of various technologies, including speech recognition, natural language recognition, desktop search, and web search.
Finally, the “application atomization” component 520 represents tools and runtime to support the designing of new applications that consist of services and gadgets. This enables more fine-grained UI adaptation, in terms of template-defined desktops, as well as adapting applications. The services and gadgets designed by these tools can include optional rich behaviors, which allow them to be accessed by users on thin clients, but deliver richer experiences for users on devices with additional capabilities.
In accordance with the activity-centric environment 500, once the computer understands the activity, it can adapt to that activity in order to assist the user in performing it. For example, if the activity is the review of a multi-media presentation, the application can display the information differently than it would for the activity of creating a multi-media presentation. Although some existing applications attempt to hard code a limited number of fixed activities within themselves, the activity-centric environment 500 provides a platform for creating activities within and across any applications, websites, gadgets, and services. All in all, the computer can react and tailor functionality and the UI characteristics based upon a current state and/or activity. The system 500 can understand how to bundle up the work based upon a particular activity. Additionally, the system 500 can monitor actions and automatically bundle them up into an appropriate activity or group of activities. The computer will also be able to associate a particular user to a particular activity, thereby further personalizing the user experience.
All in all, the activity-centric concept of the subject system 500 is based upon the notion that users can leverage a computer to complete some real world activity. As described supra, historically, a user would mentally outline and prioritize the steps or actions necessary to complete a particular activity before starting to work on that activity on the computer. In other words, conventional systems do not enable the identification and decomposition of the actions necessary to complete an activity.
The novel activity-centric systems enable automating knowledge capture and leveraging the knowledge with respect to previously completed activities. In other words, in one aspect, once an activity is completed, the subject innovation can infer and remember what steps were necessary when completing the activity. Thus, when a similar or related activity is commenced, the activity-centric system can leverage this knowledge by automating some or all of the steps necessary to complete the activity. Similarly, the system could identify the individuals related to an activity, the steps necessary to complete an activity, the documents necessary to complete the activity, etc. Thus, a context can be established that can help to complete the activity the next time it is performed. As well, the knowledge of the activity that has been captured can be shared with other users that require that knowledge to complete the same or a similar activity.
Historically, the computer has used the desktop metaphor, where there was effectively only one desktop. Moreover, conventional systems stored documents in a filing cabinet, where there was only one filing cabinet. As the complexity of activities rises, and as activities become more diverse, it can be useful to have many virtual desktops available that leverage identified similarities among activities in order to streamline those activities. Each individual desktop can be designed to achieve a particular activity. It is a novel feature of the innovation to build this activity-centric infrastructure into the operating system such that every activity developer and user can benefit from the overall infrastructure.
The activity-centric system proposed herein is made up of a number of components as illustrated in
Referring now to
In operation, the system 600 can facilitate adjusting a UI in accordance with the activity of a user. As described supra, in one aspect, the activity detection component 102 can detect an activity based upon user action, current behavior, past behavior or any combination thereof. As illustrated, the activity detection component 102 can include an adaptive UI rules engine 602 that enables developers to build adaptive UI components 606 using a declarative model that describes the capabilities of the adaptive UI 104, without requiring a static definition of UI component 606 layout, inclusion or flow. In other words, the adaptive UI rules engine 602 can interact with a model (not shown) to define the UI components 606, thereafter determining the adaptive UI 104 layout, inclusion or flow with respect to an activity.
As well, the adaptive UI rules engine component 602 can enable developers and activity authors to build and define adaptive UI experiences using a declarative model that describes the user experience, the activities and tasks supported by the experience and the UI components 606 that should be consolidated with respect to an experience.
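By way of non-limiting illustration, the following Python sketch suggests one possible form of such a declarative model and its resolution into inclusion and flow; the experiences, tasks, and components shown are hypothetical.

```python
# Hypothetical declarative model: the developer states capabilities, not a static layout.
declarative_model = {
    "experiences": {
        "letter_writing": {"tasks": ["compose", "address", "print"],
                           "components": ["editor", "address_book", "print_preview"]},
        "grant_writing": {"tasks": ["outline", "compose", "budget"],
                          "components": ["editor", "outline_view", "spreadsheet"]},
    }
}

def resolve_ui(model, experience):
    """Derive inclusion and flow for an experience instead of relying on a static layout."""
    entry = model["experiences"][experience]
    return {"include": entry["components"], "flow": entry["tasks"]}

print(resolve_ui(declarative_model, "letter_writing"))
```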
Referring now to
As shown, the adaptive UI rules engine 602 can evaluate activity-centric context data and an application UI model. In other words, the adaptive UI rules engine 602 can evaluate the activity-centric context data and the application UI model in view of pre-defined (or inferred) rules. Accordingly, an adapted UI model 702 can be established.
The UI generator component 604 can employ the adapted UI model 702 and device profile(s) 704 to establish the adapted UI 104. In other words, the UI generator component 604 can identify the adapted UI 104 layout, inclusion and flow.
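The following minimal Python sketch illustrates, under hypothetical data, one possible way an adapted UI model and a device profile could be combined to identify layout, inclusion and flow; the profile fields and limits are assumptions.

```python
# Hypothetical adapted UI model (cf. 702) and device profiles (cf. 704).
adapted_ui_model = {"components": ["editor", "outline_view", "spreadsheet"],
                    "flow": ["outline", "compose", "budget"]}
device_profiles = {"phone": {"max_components": 1},
                   "desktop": {"max_components": 10}}

def generate_ui(model, device):
    """Identify inclusion and flow for the adapted UI on a given device."""
    limit = device_profiles[device]["max_components"]
    return {"device": device,
            "include": model["components"][:limit],
            "flow": model["flow"]}

print(generate_ui(adapted_ui_model, "phone"))
```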
The system 700 can provide a dynamically changing UI based upon an activity or group of activities. In doing so, the system 700 can consider both the context of the activity the user is currently working on, as well as environmental factors, resources, applications, user preferences, device types, etc. In one example, the system can adjust the UI of an application based upon what the user is trying to accomplish with a particular application. As compared to existing applications, which generally have only a fixed or statically changeable UI, this innovation allows a UI to dynamically adapt based upon what the user is trying to do, as well as other factors as outlined above.
In one aspect, the system 700 can adapt in response to user feedback and/or based upon a user state of mind. Using different input modalities, the user can express intent and feelings. For example, if a user is happy or angry, the system can adapt to express error messages in a different manner and can change the way options are revealed, etc. The system can analyze pressure on a mouse, verbal statements, physiological signals, etc. to determine state of mind. As well, user statements can enable the system to infer that there has not been a clear understanding of an error message; accordingly, the error message can be modified and displayed in a different manner in an attempt to rectify the misunderstanding.
In another aspect, the system can suggest a different device (e.g., cell phone) based upon an activity or state within the activity. The system can also triage multiple devices based upon the activity. By way of further example, the system can move the interface for a phone-based application onto a nearby desktop display thereby permitting interaction via a keyboard and mouse when it senses that the user is trying to do something more complex than is convenient or possible via the phone.
In yet another aspect, the system 700 can further adapt the UI based upon context. For example, the ringer can be modified consistent with the current activity or other contextual information such as location. Still further, system 700 can determine from looking at the user's calendar that they are in a meeting. In response, if appropriate, the ringer can be set to vibrate (or silent) mode automatically, thus eliminating the possibility for interruption. Similarly, greetings and notification management can be modified based upon a particular context. The UI can also color code files within a file system based upon a current activity in order to highlight interesting and relevant files in view of a past, current or future activity.
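As a small non-limiting illustration, the following Python sketch shows one possible contextual rule for ringer adaptation; the rule itself is hypothetical.

```python
def select_ringer_mode(calendar_status: str, activity: str) -> str:
    """Choose a ringer mode from contextual information; the rules here are illustrative."""
    if calendar_status == "in_meeting" or activity == "presentation_review":
        return "vibrate"
    return "ring"

print(select_ringer_mode("in_meeting", "letter_writing"))  # -> "vibrate"
```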
As discussed above, today, the overall look and feel of an application graphical UI (GUI) is essentially always the same. For example, when a word processor is launched, the application experience is essentially the same with the exception of some personalized features (e.g., toolbars, size of view, colors, etc.). In the end, even if a user personalizes an application, it is still essentially the same UI, optimized for the same activity, regardless of what the user is actually doing. However, conventional UIs cannot dynamically adapt to the particulars of an activity.
One novel feature of the subject innovation is the activity-centric adaptive UI which can understand various types of activities and behave differently when a user is working on those activities. At a very high level, an application developer can design and the system 700 can dynamically present (e.g., render) rich user experiences customized to a particular activity. By way of example, a word processing application can behave differently in letter writing activity, a school paper writing activity and in a business plan writing activity.
In accordance with the innovation, computer systems and applications can adapt the UI in accordance with an activity being executed at a much more granular level. By way of particular example, and continuing with the previous word processing example, when a user is writing a letter, there is a large amount of word processing functionality that is not needed to accomplish the activity. For instance, a table of contents or footnotes is not likely to be inserted into the letter. Therefore, this functionality is not needed and can be filtered. In addition to not being required, these functionalities can often confuse the simple activity of writing a letter. Moreover, these unused functionalities can sometimes occupy valuable memory and often slow processing speed of some devices.
Thus, in accordance with the novel functionality of the adaptive UI component 104, the user can inform the system of the activity or types of activities they are working on. Alternatively, the system can infer from a user's actions (as well as from other context factors) what activity or group of activities is being executed. In response thereto, the UI can be dynamically modified in order to optimize for that activity.
In addition to the infrastructure that enables supplying information to the system thereafter adapting the UI, the innovation can provide a framework that allows an application developer to retrieve this information from the system in order to effectuate the adaptive user experience. For example, the system can be queried in order to determine an experience level of a user, what activity they are working on, what are the capabilities of the device being used, etc.
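By way of non-limiting illustration, the following Python sketch suggests one possible query surface a developer could call to retrieve such information; the keys and values are hypothetical and do not represent an actual API.

```python
# Hypothetical query surface an application developer might call into.
system_state = {
    "user_experience_level": "novice",
    "current_activity": "letter_writing",
    "device_capabilities": {"screen": "phone", "keyboard": False},
}

def query_system(key: str):
    """Retrieve activity, user, or device information to drive an adaptive experience."""
    return system_state.get(key)

if query_system("user_experience_level") == "novice":
    print("show guided, simplified controls")
```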
Effectively, in accordance with aspects of the novel innovation, the application developer can establish a framework that builds two types of capabilities into their applications. The first is a toolbox of capabilities or resources and the other is a toolbox of experiences (e.g., reading, editing, reviewing, letter writing, grant writing, etc.). Today, there is no infrastructure available for the developer defining the experiences to identify which tools are necessary for each experience. As well, conventional systems are not able to selectively render tools based upon an activity.
Another way to look at the subject innovation is that the activity-centric adaptive UI can be viewed as a tool match-up for applications. For example, a financial planning application can include a toolbox having charting, graphing, etc. where the experiences could be managing finances for seniors, college funding, etc. As such, in accordance with the subject innovation, these experiences can pool the resources (e.g., tools) in order to adapt to a particular activity.
With respect to adaptation, the innovation can address adaptation at multiple levels, including the system level (e.g., the desktop, favorites and “my documents”), the cross-application level and the application level. In accordance therewith, adaptation can work in real time. In other words, as the system detects a particular pattern of actions, it can make tools more readily available, thereby adapting in real time. Thus, in one aspect, part of the adaptation can involve machine learning. Not only can developers take into account a user action, but the system can also learn from actions and predict or infer actions, thereby adapting in real time. These machine learning aspects will be described in greater detail infra.
Further, the system can adapt to the individual as well as the aggregate. For instance, the system can determine that everyone (or a group of users) is having a problem in a particular area, thus adaptation is proper. As well, the system can determine that an individual user is having a problem in an area, thus adaptation is proper.
Moreover, in order to determine who the user is as well as an experience level of a user, the system can ask the user or, alternatively, can predict via machine learning based upon some activity or pattern of activity. In one aspect, identity can be established by requiring a user login and/or password. However, machine learning algorithms can be employed to infer or predict factors to drive automatic UI adaptation. It is to be understood and appreciated that there can also be an approach that includes a mixed initiative system, for example, where machine learning algorithms are further improved and refined with some explicit input/feedback from one or more users.
Referring now to
The rules can be employed by the adaptive UI rules engine 602 to decide how to adapt the UI. It is to be understood that the rules can include user rules, group rules, device rules or the like. Optionally, disparate activities and applications can also participate in the decision of how to adapt the interface. This system enables “total system” experiences for activities and activity-specialized experiences for applications and gadgets, allowing them all to align more closely with the user, his work, and his goals.
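The following minimal Python sketch illustrates one possible, hypothetical precedence for combining user, group and device rules; the actual rules engine 602 could combine rules in other ways.

```python
def merge_rules(user_rules, group_rules, device_rules):
    """Combine rule sets; here user rules override group rules, which override device rules."""
    merged = dict(device_rules)
    merged.update(group_rules)
    merged.update(user_rules)
    return merged

adapted = merge_rules(
    user_rules={"font_size": "large"},
    group_rules={"theme": "corporate", "font_size": "medium"},
    device_rules={"layout": "single_column", "theme": "default"},
)
print(adapted)  # the user preference for font size wins; the device layout is retained
```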
With continued reference to
As shown in
By way of example, and not limitation, the activity context data 902 includes the current activity the user is performing. It is to be understood that this activity information can be explicitly determined and/or inferred. Additionally, the activity context data 902 can include the current step (if any) within the activity. In other words, the current step can be described as the current state of the activity. Moreover, the activity context data 902 can include a current resource (e.g., file, application, gadget, email, etc.) that the user is interacting with in accordance with the activity.
In an aspect, the user context data 904 can include topics of knowledge that the user knows about with respect to the activity and/or application. As well, the user context data 904 can include an estimate of the user's state of mind (e.g., happy, frustrated, confused, angry, etc.). The user context can also include information about when the user most recently used the current activity, step, resource, etc.
It will be understood and appreciated that the user's state of mind can be estimated using different input modalities, for example, the user can express intent and feelings, the system can analyze pressure and movement on a mouse, verbal statements, physiological signals, etc. to determine state of mind. In another example, content and/or tone of a user statement can enable the system to infer that there has not been a clear understanding of the error message; accordingly, the error message can be modified and rendered in order to rectify the misunderstanding.
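By way of non-limiting illustration, the following Python sketch suggests one possible shape for the activity context data 902 and user context data 904; the field names and values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActivityContext:               # cf. activity context data 902
    activity: str
    step: Optional[str] = None       # current state of the activity
    resource: Optional[str] = None   # resource currently being interacted with

@dataclass
class UserContext:                   # cf. user context data 904
    known_topics: tuple = ()
    state_of_mind: str = "neutral"   # e.g., "happy", "frustrated", "confused"
    last_used: Optional[float] = None

ctx = (ActivityContext("grant_writing", step="budget", resource="budget.xlsx"),
       UserContext(known_topics=("spreadsheets",), state_of_mind="frustrated"))
print(ctx)
```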
With continued reference to
As shown, the user rules 802 can reside in a user profile data store 1002 and can include an assessment of the user's skills, user policies for how UI adaptation should work, and user preferences. The group rules can reside in a group profile data store 1004 and can represent rules applied to all members of a group. In one aspect, the group rules can include group skills, group policies and group preferences.
The device rules can reside in a device profile data store 1006 and can define the device (or group of devices) or device types that can be used in accordance with the determined activity. For example, the identified device(s) can be chosen from known local and/or remote devices. Additionally, the devices can be chosen based upon the capabilities of each device, the policies for using each device, and the optimum and/or preferred ways to use each device.
The learned rules 1008 shown in
In the application's gadgets inventory 1104, data describing each gadget in the application is specified. This data can include a functional description of the gadget and data describing the composite and component UI for each of the gadgets.
In the activity templates inventory 1106, each activity template (or “experience”) that the application contains can be listed. Like gadgets, this data can include a description of the composite and component UI for the activity.
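As a non-limiting illustration, the following Python sketch suggests one possible manifest combining a gadgets inventory (cf. 1104) and an activity templates inventory (cf. 1106); the structure and names are hypothetical.

```python
# Hypothetical application manifest combining the gadgets and activity template inventories.
application_manifest = {
    "gadgets": {
        "chart_gadget": {"function": "render charts",
                         "ui": {"composite": "chart_panel", "components": ["axis", "legend"]}},
        "table_gadget": {"function": "edit tabular data",
                         "ui": {"composite": "grid", "components": ["cell_editor"]}},
    },
    "activity_templates": {
        "college_funding": {"ui": {"composite": "planning_view",
                                   "components": ["chart_gadget", "table_gadget"]}},
    },
}

def gadgets_for_template(manifest, template):
    """Look up which gadgets a given activity template (experience) composes."""
    return manifest["activity_templates"][template]["ui"]["components"]

print(gadgets_for_template(application_manifest, "college_funding"))
```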
In addition to establishing the learned rules 1204, the MLR component 1202 can facilitate automating one or more novel features in accordance with the subject innovation. The following description is included to add perspective to the innovation and is not intended to limit the innovation to any particular MLR mechanism. The subject innovation (e.g., in connection with establishing learned rules 1204) can employ various MLR-based schemes for carrying out various aspects thereof. For example, a process for determining implicit feedback can be facilitated via an automatic classifier system and process.
A classifier is a function that maps an input attribute vector, x=(x1, x2, x3, x4, . . . , xn), to a confidence that the input belongs to a class, that is, f(x)=confidence(class). Such classification can employ a probabilistic, statistical and/or decision theoretic-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed.
A support vector machine (SVM) is an example of a classifier that can be employed. The SVM operates by finding a hypersurface in the space of possible inputs, where the hypersurface attempts to split the triggering criteria from the non-triggering events. Intuitively, this makes the classification correct for testing data that is near, but not identical to, training data. By defining and applying a kernel function to the input data, the SVM can learn a non-linear hypersurface. Other directed and undirected model classification approaches that can be employed include, e.g., decision trees, neural networks, fuzzy logic models, naïve Bayes classifiers, Bayesian networks, and other probabilistic classification models providing different patterns of independence.
As will be readily appreciated from the subject specification, the innovation can employ classifiers that are explicitly trained (e.g., via generic training data) as well as implicitly trained (e.g., via observing user behavior, receiving extrinsic information). For example, the parameters of an SVM are estimated via a learning or training phase. Thus, the classifier(s) can be used to automatically learn and perform a number of functions, including but not limited to determining, according to predetermined criteria, how/if implicit feedback should be employed in the way of a rule.
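By way of non-limiting illustration only, and assuming the availability of the scikit-learn and NumPy libraries, the following Python sketch trains an SVM classifier on fabricated feature vectors; the features, labels, and decision shown are hypothetical and merely illustrate the classification described above.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical feature vectors x = (x1, ..., xn), e.g., counts of recent edits,
# emails sent, and files opened; labels mark whether implicit feedback should
# be turned into an adaptation rule (1) or not (0).
X = np.array([[5, 0, 2], [6, 1, 3], [4, 0, 2], [5, 1, 1],
              [1, 4, 0], [0, 5, 1], [1, 3, 1], [0, 4, 0]])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

clf = SVC(kernel="rbf")                        # learns a non-linear hypersurface via a kernel
clf.fit(X, y)                                  # parameters estimated during the training phase

new_observation = np.array([[4, 1, 2]])
print(clf.predict(new_observation))            # class decision
print(clf.decision_function(new_observation))  # signed distance, a confidence-like score
```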
Referring now to
Generally, program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the inventive methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices.
The illustrated aspects of the innovation may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.
A computer typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media can comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
With reference again to
The system bus 1308 can be any of several types of bus structure that may further interconnect to a memory bus (with or without a memory controller), a peripheral bus, and a local bus using any of a variety of commercially available bus architectures. The system memory 1306 includes read-only memory (ROM) 1310 and random access memory (RAM) 1312. A basic input/output system (BIOS) is stored in a non-volatile memory 1310 such as ROM, EPROM, EEPROM, which BIOS contains the basic routines that help to transfer information between elements within the computer 1302, such as during start-up. The RAM 1312 can also include a high-speed RAM such as static RAM for caching data.
The computer 1302 further includes an internal hard disk drive (HDD) 1314 (e.g., EIDE, SATA), which internal hard disk drive 1314 may also be configured for external use in a suitable chassis (not shown), a magnetic floppy disk drive (FDD) 1316 (e.g., to read from or write to a removable diskette 1318) and an optical disk drive 1320 (e.g., to read a CD-ROM disk 1322 or to read from or write to other high-capacity optical media such as a DVD). The hard disk drive 1314, magnetic disk drive 1316 and optical disk drive 1320 can be connected to the system bus 1308 by a hard disk drive interface 1324, a magnetic disk drive interface 1326 and an optical drive interface 1328, respectively. The interface 1324 for external drive implementations includes at least one or both of Universal Serial Bus (USB) and IEEE 1394 interface technologies. Other external drive connection technologies are within contemplation of the subject innovation.
The drives and their associated computer-readable media provide nonvolatile storage of data, data structures, computer-executable instructions, and so forth. For the computer 1302, the drives and media accommodate the storage of any data in a suitable digital format. Although the description of computer-readable media above refers to a HDD, a removable magnetic diskette, and a removable optical media such as a CD or DVD, it should be appreciated by those skilled in the art that other types of media which are readable by a computer, such as zip drives, magnetic cassettes, flash memory cards, cartridges, and the like, may also be used in the exemplary operating environment, and further, that any such media may contain computer-executable instructions for performing the methods of the innovation.
A number of program modules can be stored in the drives and RAM 1312, including an operating system 1330, one or more application programs 1332, other program modules 1334 and program data 1336. All or portions of the operating system, applications, modules, and/or data can also be cached in the RAM 1312. It is appreciated that the innovation can be implemented with various commercially available operating systems or combinations of operating systems.
A user can enter commands and information into the computer 1302 through one or more wired/wireless input devices, e.g., a keyboard 1338 and a pointing device, such as a mouse 1340. Other input devices (not shown) may include a microphone, an IR remote control, a joystick, a game pad, a stylus pen, touch screen, or the like. These and other input devices are often connected to the processing unit 1304 through an input device interface 1342 that is coupled to the system bus 1308, but can be connected by other interfaces, such as a parallel port, an IEEE 1394 serial port, a game port, a USB port, an IR interface, etc.
A monitor 1344 or other type of display device is also connected to the system bus 1308 via an interface, such as a video adapter 1346. In addition to the monitor 1344, a computer typically includes other peripheral output devices (not shown), such as speakers, printers, etc.
The computer 1302 may operate in a networked environment using logical connections via wired and/or wireless communications to one or more remote computers, such as a remote computer(s) 1348. The remote computer(s) 1348 can be a workstation, a server computer, a router, a personal computer, portable computer, microprocessor-based entertainment appliance, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 1302, although, for purposes of brevity, only a memory/storage device 1350 is illustrated. The logical connections depicted include wired/wireless connectivity to a local area network (LAN) 1352 and/or larger networks, e.g., a wide area network (WAN) 1354. Such LAN and WAN networking environments are commonplace in offices and companies, and facilitate enterprise-wide computer networks, such as intranets, all of which may connect to a global communications network, e.g., the Internet.
When used in a LAN networking environment, the computer 1302 is connected to the local network 1352 through a wired and/or wireless communication network interface or adapter 1356. The adapter 1356 may facilitate wired or wireless communication to the LAN 1352, which may also include a wireless access point disposed thereon for communicating with the wireless adapter 1356.
When used in a WAN networking environment, the computer 1302 can include a modem 1358, or is connected to a communications server on the WAN 1354, or has other means for establishing communications over the WAN 1354, such as by way of the Internet. The modem 1358, which can be internal or external and a wired or wireless device, is connected to the system bus 1308 via the serial port interface 1342. In a networked environment, program modules depicted relative to the computer 1302, or portions thereof, can be stored in the remote memory/storage device 1350. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers can be used.
The computer 1302 is operable to communicate with any wireless devices or entities operatively disposed in wireless communication, e.g., a printer, scanner, desktop and/or portable computer, portable data assistant, communications satellite, any piece of equipment or location associated with a wirelessly detectable tag (e.g., a kiosk, news stand, restroom), and telephone. This includes at least Wi-Fi and Bluetooth™ wireless technologies. Thus, the communication can be a predefined structure as with a conventional network or simply an ad hoc communication between at least two devices.
Wi-Fi, or Wireless Fidelity, allows connection to the Internet from a couch at home, a bed in a hotel room, or a conference room at work, without wires. Wi-Fi is a wireless technology similar to that used in a cell phone that enables such devices, e.g., computers, to send and receive data indoors and out, anywhere within the range of a base station. Wi-Fi networks use radio technologies called IEEE 802.11 (a, b, g, etc.) to provide secure, reliable, fast wireless connectivity. A Wi-Fi network can be used to connect computers to each other, to the Internet, and to wired networks (which use IEEE 802.3 or Ethernet). Wi-Fi networks operate in the unlicensed 2.4 and 5 GHz radio bands, at an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, for example, or with products that contain both bands (dual band), so the networks can provide real-world performance similar to the basic 10BaseT wired Ethernet networks used in many offices.
Referring now to
The system 1400 also includes one or more server(s) 1404. The server(s) 1404 can also be hardware and/or software (e.g., threads, processes, computing devices). The servers 1404 can house threads to perform transformations by employing the innovation, for example. One possible communication between a client 1402 and a server 1404 can be in the form of a data packet adapted to be transmitted between two or more computer processes. The data packet may include a cookie and/or associated contextual information, for example. The system 1400 includes a communication framework 1406 (e.g., a global communication network such as the Internet) that can be employed to facilitate communications between the client(s) 1402 and the server(s) 1404.
Communications can be facilitated via a wired (including optical fiber) and/or wireless technology. The client(s) 1402 are operatively connected to one or more client data store(s) 1408 that can be employed to store information local to the client(s) 1402 (e.g., cookie(s) and/or associated contextual information). Similarly, the server(s) 1404 are operatively connected to one or more server data store(s) 1410 that can be employed to store information local to the servers 1404.
What has been described above includes examples of the innovation. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the subject innovation, but one of ordinary skill in the art may recognize that many further combinations and permutations of the innovation are possible. Accordingly, the innovation is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.
This application is related to U.S. patent application Ser. No. ______ (Attorney Docket Number MS315859.01/MSFTP1290US) filed on Jun. 27, 2006, entitled “LOGGING USER ACTIONS WITHIN ACTIVITY CONTEXT”, ______ (Attorney Docket Number MS315860.01/MSFTP1291US) filed on Jun. 27, 2006, entitled “RESOURCE AVAILABILITY FOR USER ACTIVITIES ACROSS DEVICES”, ______ (Attorney Docket Number MS315861.01/MSFTP1292US) filed on Jun. 27, 2006, entitled “CAPTURE OF PROCESS KNOWLEDGE FOR USER ACTIVITIES”, ______ (Attorney Docket Number MS315862.01/MSFTP1293US) filed on Jun. 27, 2006, entitled “PROVIDING USER INFORMATION TO INTROSPECTION”, ______ (Attorney Docket Number MS315863.01/MSFTP1294US) filed on Jun. 27, 2006, entitled “MONITORING GROUP ACTIVITIES”, ______ (Attorney Docket Number MS315864.01/MSFTP1295US) filed on Jun. 27, 2006, entitled “MANAGING ACTIVITY-CENTRIC ENVIRONMENTS VIA USER PROFILES”, ______ (Attorney Docket Number MS315865.01/MSFTP1296US) filed on Jun. 27, 2006, entitled “CREATING AND MANAGING ACTIVITY-CENTRIC WORKFLOW”, ______ (Attorney Docket Number MS315867.01/MSFTP1298US) filed on Jun. 27, 2006, entitled “ACTIVITY-CENTRIC DOMAIN SCOPING”, and ______ (Attorney Docket Number MS315868.01/MSFTP1299US) filed on Jun. 27, 2006, entitled “ACTIVITY-CENTRIC GRANULAR APPLICATION FUNCTIONALITY”. The entirety of each of the above applications is incorporated herein by reference.