The present invention pertains to computing systems and particularly to user interfaces of such systems. More particularly, it pertains to user interface improvements.
The invention is an extended concur task tree providing contextual information for a user interface during run-time of a computer.
There may be a number of approaches to building context aware UIs (user interfaces). One approach may generate multiple interfaces for different contexts of use starting from one task model. In contrast with that approach, the present invention does not focus as much on the design aspect; rather, it may emphasize the run-time framework necessary for accomplishing this. Another approach may provide designers and developers with automatic support for the development of nomadic applications on a variety of devices by means of abstractions and transformations. But those transformations must be dealt with manually at design time. Still another approach may define a plastic user interface with the capability of adapting to different contexts of use while preserving usability. To process context at runtime, it may introduce an adaptive process, which allows creating user interfaces for the running systems according to different contextual information. At several stages in the user interface design process (task specification, abstract user interface, concrete user interface, runtime environment), a translation may take place between two systems. The designer may have to change the task specification manually in the process if the context has an influence on the tasks that can be performed. Another approach may define a specification language and communication protocol to automatically generate user interfaces for remotely controlled appliances. Such a language may describe the functionalities of the target appliance and contain enough information to render the user interface. In this case, the context may be secured by the target appliance represented by its definition.
Due to the rapid development of information technology, one may be increasingly surrounded by various devices with the capability of computation and communication in the present living or working environment. Computing seems to have become rather pervasive in such an environment. This trend may engender new requirements for various computing entities such as the ability of user interfaces to adapt to different contexts of use. A context of use may be defined as a set of values of variables that characterizes a computational device used for interacting with a system as well as the physical and social environment where the interaction takes place.
A model based approach may be used to automate the rendering of a user interface for different contexts. Of the relevant models, task models may play a particularly important role because they indicate the logical activities that an application should support. A task may be an activity that should be performed in order to reach a goal. A goal is either a desired modification of state or an inquiry to obtain information on the current state.
Although a task model approach may work well at design time, it does not take contextual information into consideration at runtime. To overcome this limitation, one may introduce an approach to apply contextual information to the task at runtime. The approach may have two contributions. The first is to introduce task activation criteria based on contextual information, and the other is to apply contextual information to optimize interaction quality.
Several points that may be improved include task activation criteria based on contextual information introduced into the present extended concur task tree (ConcurTaskTree) to provide context awareness support in a dynamic situation, and use of contextual information to optimize interaction quality. These two points may address the issue that user roles and context are otherwise static in task models. Dynamics and context awareness may be introduced to enhance the task models for any adaptive user interface application.
A task model may become the glue between the functional core of an application and the actual user interface. LOTOS [ISO, IS8807] may be an example of a notation which may be used in formal specifications. The concur task tree (CTT), derived from this notation, may be a graphical and very intuitive notation for specifying a task model. Its main purpose is to be an easy-to-use notation, which may support the design of real industrial applications. The CTT task model may be based on several major points. First, the user action oriented approach may be based on a hierarchical structure of tasks represented by a tree-like structure. Second, it may require identification of temporal relations, by LOTOS, among tasks at the same level. Third, it may allow the identification of the objects associated with each task and of the actions which allow them to communicate with each other.
In addition, CTT may include several categories of tasks depending on the allocation of their performance, which include user, application, interaction and abstract tasks. User tasks may be performed entirely by the user. They require cognitive or physical activities without interacting with the system. An instance may be deciding what to do, or reading the information presented on a display.
Application tasks may be completely executed by the system. The tasks can receive information from the system and supply information to the user. An example may be processing information from a previously executed interaction task, and presenting the results to the user. Interaction tasks may be performed by user interactions within the system. An example of these may be filling in a form.
Abstract tasks may be complex tasks on an abstract level so that they cannot be assigned to any of the three previous cases. An abstract task usually has descendants of different types of tasks; for example, a task that requires user interaction as well as application feedback.
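The four task categories above can be sketched as a simple tree structure. The class names, fields, and example tasks below are illustrative assumptions, not part of the CTT notation itself.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class TaskCategory(Enum):
    USER = "user"                # cognitive/physical, no system interaction
    APPLICATION = "application"  # executed entirely by the system
    INTERACTION = "interaction"  # user interacting with the system
    ABSTRACT = "abstract"        # decomposed into subtasks of mixed types

@dataclass
class Task:
    name: str
    category: TaskCategory
    children: List["Task"] = field(default_factory=list)
    # temporal operator relating this task to its right sibling, e.g. ">>"
    operator_to_next: Optional[str] = None

# An abstract task usually has descendants of different types:
login = Task("Login", TaskCategory.ABSTRACT, children=[
    Task("Enter credentials", TaskCategory.INTERACTION, operator_to_next="[]>>"),
    Task("Check password", TaskCategory.APPLICATION),
])
```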
Although one may adopt CTT to represent an application's behavior and adopt a TERESA (Transformation Environment for inteRactivE Systems representations) tool to transform a UI for different platforms, this “one model, many interfaces” mechanism may be only suitable for design time. It does not necessarily take contextual information into consideration at runtime. However, contextual information may play an important role in improving a user's interaction with the computer, especially in a ubiquitous computing environment, where many devices and applications, automatically adapted to changes in their surrounding physical and electronic environment, can lead to an enhancement of the user experience. To overcome this limitation, one may introduce an approach to apply contextual information to the task at runtime.
The invention relates to an approach to improve a user's interaction with a computer system which may include building a context aware user interface by extending a concur task tree (CTT).
Because the “user tasks” may include a user's cognitive or physical activities without interacting with the system and the “abstract tasks” may be refined to a concrete task, just two types of tasks should be considered in the present approach. In
If the decision of symbol 26 is yes, then a block 29 shows “set t activated”. The output of block 29 may go to a decision symbol 31 which asks whether “t is an interaction task and its next temporal relationship is enabling without information (EnablingWithInfo)”. If the answer is no, then the answer may lead to block 32 which indicates “continue ETS computing for t” and after block 32, the approach is terminated at end place 28. If the answer at symbol 31 is yes, the answer may lead to block 32, which indicates “assert the content of context buffer 14 into t's IQO RB 15”. An output from block 32 may go to a decision symbol 33 which asks the question whether there is any rule activated in IQO RB 15. If the answer is no, then the answer may lead to block 32 which indicates “continue ETS computing for t”. After block 32 the approach may be terminated at end place 28. If the answer is yes, then the output may go to block 34, which indicates “set t disabled”, and be terminated at end place 28.
One contribution of the invention may be to introduce task activation criteria based on contextual information, and another contribution may be to apply contextual information to optimize interaction quality.
A unified interaction task may be noted. In CTT, interaction tasks may be performed just by user interactions. Different from that definition, the concept may be extended to consider environment interaction (such as wired or wireless network information, location, time, and so forth). From a data flow perspective, both the end user and the environment could input data/information into the system. But the environment could do this automatically, and it may do so without any user interface. In the present extended task engine, one does not necessarily distinguish the sender of an interaction, for the result of both user interaction and environment interaction may update the context buffer. Before activating an interaction task, one may check the context buffer. If one finds a related context object, then one may conclude that the interaction task has been done by the environment. Then one may skip the activation for the interaction task. One may also skip the UI generation for the interaction task.
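The context-buffer check described above can be sketched as follows. The buffer layout and the context key names are assumptions made for illustration only.

```python
# Context buffer: maps context object names to values supplied either by
# the user or by the environment (network, location sensors, clock, ...).
context_buffer = {"Context.User.Location": (12.0, 3.5, 0.0)}

def interaction_done_by_environment(required_context_key):
    """If a related context object already exists in the buffer, conclude the
    interaction task was performed by the environment and may be skipped."""
    if required_context_key in context_buffer:
        return True   # skip activation and UI generation for this task
    return False      # present the task to the user as usual

# The location was supplied by a sensor, so the "select location"
# interaction task need not be presented to the user:
skip = interaction_done_by_environment("Context.User.Location")
```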
Task activation criteria may be based on contextual information. Different tasks have different criteria for activation. It may be difficult to define a uniform activation criterion. When a task is defined, the developer should give some activation rules. At runtime, the contextual information related to those rules may be queried or notified. According to those rules and the contextual information, a task may be evaluated to be activated or inactivated. For example, there may be an “application task” which may only display a secret document for the manager or an employee authorized by the manager. When the developer defines the task, the developer may add rules to the TAC (task activation criteria) rule base (RB) 16 like the following.
Condition: Context.User.Role=“Manager”
Condition: Context.User.Authorized=True
When the task is executed, it may query the information of the current user instance. Only when its conditions are satisfied may the task be activated.
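A minimal sketch of such a TAC rule check follows. The encoding of the two conditions listed above as predicates is a hypothetical illustration, not the rule base format itself.

```python
def task_activated(context, rules):
    """A task is activated when a TAC rule is satisfied by the current
    context (here: manager role, or an employee authorized by a manager)."""
    return any(rule(context) for rule in rules)

# Hypothetical encoding of the two conditions above:
tac_rules = [
    lambda ctx: ctx.get("Context.User.Role") == "Manager",
    lambda ctx: ctx.get("Context.User.Authorized") is True,
]

# Query the current user instance at execution time:
manager_ok = task_activated({"Context.User.Role": "Manager"}, tac_rules)
employee_no = task_activated({"Context.User.Role": "Employee"}, tac_rules)
```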
An interaction quality optimizer 18 may be based on contextual information. According to CTT's definition, contextual information cannot be adopted by the task. However, contextual information may be adopted to improve the UI's quality. The present approach may redefine CTT, so that when a user interacts with the system, context may also be an important element. When a task is defined, the developer needs to provide some rules about the UI's generation.
Combining context with the task may lead to an enhancement of the user's experience. An example may be to display the states of damaged devices on site. If the CTT approach is adopted, the user needs to manually select his current location and the damaged devices. Worse, it may be difficult for the user to determine which device is damaged. However, if one adopts the present approach, then the developer may define the following rules.
Condition1: Context.User.Location=Context.Site
Condition2: Context.Device.Containedby=Context.Site
Condition3: Context.Device.Damaged=True
When a task is executed, the task may query the states of the current user, site and device to judge whether the conditions are satisfied. For the above condition 1, the present approach may be processed like the following.
Define a site: site.location=<x, y, z>
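The three conditions might be evaluated roughly as sketched below. Treating the location equality of condition 1 as proximity to the site's <x, y, z> location within a tolerance, and the field names in the context dictionary, are assumptions for illustration.

```python
import math

def location_matches(user_loc, site_loc, tolerance=1.0):
    """Condition 1: Context.User.Location = Context.Site, treated here as the
    user being within a small distance of the site's <x, y, z> location."""
    return math.dist(user_loc, site_loc) <= tolerance

def device_rule_satisfied(ctx):
    """All three conditions must hold before the optimizer applies."""
    return (location_matches(ctx["user_location"], ctx["site_location"])
            and ctx["device_site"] == ctx["site_id"]     # Condition 2
            and ctx["device_damaged"] is True)           # Condition 3

ctx = {"user_location": (10.0, 4.0, 0.0),
       "site_location": (10.2, 4.1, 0.0),
       "site_id": "boiler-room",
       "device_site": "boiler-room",
       "device_damaged": True}
```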
One may note context acquisition. Contextual information may play an important role in a context aware application. Context acquisition may be the premise of the use of context in the application. Context-aware applications need to take advantage of the sensors and sensing techniques available to acquire the contextual information. One may look at two situations in which context is handled, that is, connecting sensor drivers directly into applications and using servers to hide sensor details.
In the first situation, application designers may be forced to write code that deals with the sensor details, using whatever protocol the sensors dictate. There may be several issues with this approach. The first issue may be that the approach makes the task of building a context-aware application very burdensome by requiring application builders to deal with the potentially complex acquisition of context. The second issue is that the approach does not support good software engineering practices. The approach does not necessarily enforce a separation between application semantics and the low-level details of context acquisition from individual sensors. This may lead to a loss of generality, making the sensors difficult to reuse in other applications and difficult to use simultaneously in multiple applications.
Ideally, one would like to handle context in the same manner as one handles user input. By separating how context is acquired from how it is used, applications may now use contextual information without concern about the details of a sensor and how to acquire context from it. One may designate a server to support context event management either through the use of a querying mechanism, a notification mechanism, or both to acquire context from sensors. A querying mechanism may be appropriate for one-time context needs. Once an application receives the context, it needs to then determine whether the context has changed and whether resultant changes are interesting or useful to it. The notification or publish/subscribe mechanism may be appropriate for repetitive context needs, where an application may want to set conditions on when the application wants to be notified.
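Such a context server, offering both a querying mechanism and a publish/subscribe notification mechanism, can be sketched as follows. The class and method names are assumptions for illustration.

```python
from collections import defaultdict

class ContextServer:
    """Hides sensor details behind a query mechanism (one-time context needs)
    and a notification mechanism (repetitive needs with subscription
    conditions), so applications never deal with sensor protocols."""

    def __init__(self):
        self._values = {}
        self._subscribers = defaultdict(list)  # key -> [(condition, callback)]

    def query(self, key):
        """One-time query of the current context value."""
        return self._values.get(key)

    def subscribe(self, key, condition, callback):
        """Notify callback whenever key changes and the condition holds."""
        self._subscribers[key].append((condition, callback))

    def sensor_update(self, key, value):
        """Called by sensor drivers; applications never see this detail."""
        self._values[key] = value
        for condition, callback in self._subscribers[key]:
            if condition(value):
                callback(key, value)

server = ContextServer()
events = []
server.subscribe("temperature", lambda v: v > 30,
                 lambda k, v: events.append(v))
server.sensor_update("temperature", 25)   # condition not met, no notification
server.sensor_update("temperature", 35)   # condition met, subscriber notified
```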
Because the focus is on the use of context in applications instead of on context acquisition, how to acquire the contextual information appears beyond the range of the present invention. But developers can refer to the following context infrastructures: Dey's Context Toolkit at Georgia Institute of Technology, Hong's Context Fabric at UC Berkeley, Jonsson's Context Shadow at KTH, Judd's CIS (Context Information Server) at CMU, and so on. The developer also may refer to the Me-Centric Domain Server at HP, or CoBrA (Context Broker Architecture) at UMBC.
Implementation may be noted. One may now demonstrate how the present approach is used to build applications. How the location aware guide and role aware access can be built may be described. One scenario may be a location aware guide. For instance, when one visits a large royal park, e.g., the Summer Palace, one may be given a hand held device, such as a PDA. So whichever scenic spot one visits, one may get its introductory information by just clicking the “Play” button on the PDA.
For an instance relating to
Another scenario may be role aware access. Different task access permissions could be assigned to each task. According to this information, some tasks may be disabled for some roles. As a result, a different user may get a different user interface.
Then one may go to the “Permissions OnSitMaintenaner Surveillant” represented by symbol 51 for access. One may get a device at symbol 52, select a room at symbol 53 and select a device at symbol 54. The temporal operator [ ]>> between symbols 53 and 54 may indicate enabling with information exchange. Then one may go to use the device at symbol 55. The temporal operator between symbols 52 and 55 may be the same as that between symbols 53 and 54. If one is to configure the device at symbol 56, then “Permissions OnSitMaintenaner” would be needed. If one is to go to view the device at symbol 57, then “Permissions Surveillant” would be needed. The temporal operator [ ] between symbols 56 and 57 may indicate choice.
By involving context information in the task computation, one may optimize the current enabled task set according to the current interaction context. This may provide a foundation for highly efficient user interface generation. Introducing a rule-based approach for the task activation criterion may bring a flexible mechanism for context adaptation configuration.
The following is about a model-based solution for a system. The system may have goals, a model-based solution, a modeling approach, a task model, UI generation, and issues and future directions.
Goals may include a situation-aware, adaptive UI, platform-independence, system and component updates at run-time, FR application deployment, and a wireless handheld device used in wireless LAN and WSN environments. There may be a context-aware UI. The context may include those of a user, environment and platform. The context-aware UI (situation-aware and platform-independent) may involve an intelligent UI which is adaptive for a user and environment, and a pervasive UI which is adaptive for the platform.
A modeling language may involve a “model” and the modeling approach. OWL-DL may be used as a model exchange representation. The OWL (web ontology language) is designed for use by applications that need to process the content of information instead of just presenting information to humans. OWL appears to facilitate greater machine interpretability of web content than that supported by XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics. OWL has three increasingly expressive sublanguages—OWL Lite, OWL DL, and OWL Full. The web ontology language has been an official World Wide Web Consortium (W3C) standard since February 2004. It is based on predecessors such as DAML+OIL. OWL-DL is the subset of OWL-Full that is optimized for reasoning and knowledge modeling. OWL DL is an ontology language based on logic (viz., description logic). Description logic may be considered the most important knowledge representation formalism unifying and giving a logical basis to the well known traditions of frame-based systems, semantic networks and KL-ONE-like languages, object-oriented representations, semantic data models, and type systems.
OWL DL is a platform-independent extensible language based on RDF(S) (resource description framework-schema). The RDF may integrate a variety of applications from library catalogs and world-wide directories to syndication and aggregation of news, software, and content to personal collections of music, photos, and events using XML as an interchange syntax. The RDF specifications may provide a lightweight ontology system to support the exchange of knowledge on the web.
SWRL FOL may be used as a “knowledge base” exchange representation. It may be a semantic web rule language (SWRL) first order logic (FOL) language. SWRL may be a semantic web rule language combining OWL and RuleML, and it has been a member submission to the W3C since May 2004. A rule markup initiative has taken steps towards defining a shared rule markup language (RuleML), permitting both forward (bottom-up) and backward (top-down) rules in XML for deduction, rewriting, and further inferential-transformational tasks. SWRL-FOL may be based on RDF/XML. The extensible markup language (XML) may be a simple, very flexible text format derived from SGML (ISO 8879). Originally designed to meet the challenges of large-scale electronic publishing, XML may also play an increasingly important role in the exchange of a wide variety of data on the web and elsewhere. SWRL-FOL may be a rule language based on first order logic.
The modeling approach may use modeling and reasoning tools (i.e., a component). A Protégé OWL Plugin may be used as a modeling tool (for both OWL and SWRL). The Protégé-OWL editor is an extension of Protégé that supports the web ontology language (OWL). OWL is a very recent development in standard ontology languages, endorsed by the World Wide Web Consortium (W3C) to promote the semantic web vision.
A modeling tool should be built for the present system if necessary. Leverage may be made of an existing OWL Reasoner at modeling time, such as Racer, which is integrated with Protégé. RACER, or RacerPro as it is subsequently called, was an early OWL Reasoner on the market. These appeared in 2002 and have been continuously improved. While others have tried hard to achieve comparable speed, RacerPro appears to be one of the fastest OWL reasoning systems available. Many users have contributed to the stability that the Reasoner currently demonstrates in many application projects around the world.
With the exception of nominals, which appear difficult to optimize, RacerPro may support the full OWL standard (indeed, nominals may be supported with an approximation). Protégé may support an extended version of OWL (namely, OWL with qualified cardinality restrictions) that is already supported by RacerPro with certain algorithms and optimization techniques.
Protégé is generally a free, open source ontology editor and knowledge-base framework. The Protégé platform may support several main ways of modeling ontologies via the Protégé-Frames and Protégé-OWL editors. Protégé ontologies can be exported into a variety of formats including RDF(S), OWL, and XML Schema. Protégé may be based on Java, be extensible and provide a plug-and-play environment that makes it a flexible base for rapid prototyping and application development.
A Reasoner may be used at run-time and may be ported from the Rule Engine of a BRF (business rule framework). It may be fit for an embedded computation environment and support DL ABox reasoning. It may be built upon the reasoning results of the OWL Reasoner.
The modeling approach may also include a “model compiler” and KB compiling. The model compiler may leverage XML/XSLT technology. The model and KB compiling may convert from an exchange format to a platform-dependent format. The compiler may be optimized for an embedded platform.
One may note the syntax and semantics, as defined by the present specification, of XSLT (XSL transformations), which is a language for transforming XML documents into other XML documents. XSLT may be designed for use as part of XSL, which is a stylesheet language for XML. In addition to XSLT, XSL may include an XML vocabulary for specifying formatting. XSL may specify the styling of an XML document by using XSLT to describe how the document is transformed into another XML document that uses the formatting vocabulary. XSLT may also be designed to be used independently of XSL. However, XSLT is not necessarily intended as a completely general-purpose XML transformation language. Rather it may be designed primarily for the kinds of transformations that are needed when XSLT is used as part of XSL.
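As a hypothetical illustration of such a model compiler, a stylesheet like the following might convert a task element from the exchange format into a platform-dependent widget; the element and attribute names here are assumptions, not a format defined by the present specification.

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Convert each exchange-format <task> into a platform-dependent <widget> -->
  <xsl:template match="task[@category='interaction']">
    <widget type="input">
      <xsl:attribute name="label"><xsl:value-of select="@name"/></xsl:attribute>
    </widget>
  </xsl:template>
</xsl:stylesheet>
```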
ConcurTaskTree may be used as a task model. ConcurTaskTree may be a notation for task model specifications (apparently developed at least in part by another) to overcome limitations of notations previously used to design interactive applications. Its main purpose is to be an easy-to-use notation that can support the design of real industrial applications, which usually means applications with medium-large dimensions.
ConcurTaskTree used as a task model may have features including a hierarchical structure, a graphical syntax, concurrent notation, an expressive and flexible notation, a compact, understandable representation, and wide use in related works.
The features of a concur task tree (ConcurTaskTree) may be more specifically noted. A hierarchical structure may be something very intuitive. In fact, when people have to solve a problem, they often tend to decompose it into smaller problems, while still maintaining the relationships among the smaller parts of the solution. The hierarchical structure of this specification may have two advantages. It may provide a large range of granularity allowing large and small task structures to be reused, and it may enable reusable task structures to be defined at both a low and a high semantic level.
A graphical syntax often (not always) may be easier to interpret. In this case, it should reflect a logical structure and so it should have a tree-like form.
Concurrent notation may include operators for temporal ordering which are used to link subtasks at the same abstraction level. This sort of aspect is usually implicit, but expressed informally in the outputs of a task analysis. Having the analyst use these operators is a substantial change to normal practice. The reason for this innovation is that after an informal task analysis, one may want designers to express clearly the logical temporal relationships. A reason for this is because such ordering should be taken into account in the user interface implementation to allow the user to perform at any time the tasks that should be active from a semantic point of view.
A focus on activities may allow designers to concentrate on the most relevant aspects when designing interactive applications that encompass both user and system-related aspects, avoiding low level implementation details which at the design stage would only obscure the decisions to be made.
This notation may show two positive results. One is an expressive and flexible notation able to represent concurrent and interactive activities, and also have the possibility to support cooperation among multiple users and possible interruptions. The other is a compact, understandable representation. A key aspect in the success of a notation may be an ability to provide much information in an intuitive way without requiring excessive efforts from the users of the notation. The ConcurTaskTree may be able to support this as it has been demonstrated also by its use by people working in industries without a background in computer science.
The task model of the ConcurTaskTree category may include user tasks, application tasks, interaction tasks, and abstract tasks. User tasks may be performed by the user (cognitive activities), e.g., making a decision, answering the telephone, and so on. Application tasks may be completely performed by the application, e.g., checking a login/password, giving an overview of documents, and so on. Interaction tasks may be performed by the user interacting with the system by some interaction technique, e.g., editing a picture, filling in a form, and so on. Abstract tasks may require complex activities, e.g., a user session with the system, and the like.
A set of ConcurTaskTree temporal operators may be shown as the following.
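The standard ConcurTaskTree temporal operators may be sketched as the following mapping; the glosses follow the usual CTT readings, and the dictionary form is merely an illustrative encoding.

```python
# Standard ConcurTaskTree temporal operators with their usual readings.
CTT_OPERATORS = {
    "T1 >> T2":   "enabling: T2 starts only after T1 terminates",
    "T1 []>> T2": "enabling with information passing from T1 to T2",
    "T1 [] T2":   "choice: exactly one of T1, T2 is performed",
    "T1 ||| T2":  "interleaving: concurrent, no information exchange",
    "T1 |[]| T2": "concurrency with information exchange",
    "T1 [> T2":   "disabling: starting T2 deactivates T1",
    "T1 |> T2":   "suspend-resume: T2 interrupts T1, which later resumes",
    "T*":         "iteration: T is performed repeatedly",
    "[T]":        "optional: performance of T is not mandatory",
}
```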
The task model may involve a ConcurTaskTree enabled task set (ETS). A very important advantage of the CTT formalism may be a generation of enabled task sets (ETS) out of the specification. An ETS may be defined as a set of tasks that are logically enabled to start their performance during the same period of time. All tasks in an ETS may be presented together. The ETSs calculated from the model may include the following items.
ETS1 = {Select Read SMS, Select, Shut Down}
ETS2 = {Select SMS, Close, Shut Down}
ETS3 = {Show SMS, Close, Shut Down}
ETS4 = {Select, Close, Shut Down}
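Since all tasks in an ETS are presented together, the dialogue may be viewed as moving from one ETS to the next as tasks are performed. The transition map below is a hypothetical illustration over the four sets above, not a calculation performed by the CTT formalism itself.

```python
# Enabled task sets calculated from the model:
ETS = {
    1: {"Select Read SMS", "Select", "Shut Down"},
    2: {"Select SMS", "Close", "Shut Down"},
    3: {"Show SMS", "Close", "Shut Down"},
    4: {"Select", "Close", "Shut Down"},
}

# Hypothetical (current ETS, completed task) -> next ETS transitions:
transitions = {
    (1, "Select Read SMS"): 2,
    (2, "Select SMS"): 3,
    (3, "Show SMS"): 4,
}

def perform(current_ets, task):
    """All tasks in an ETS are presented together; performing a
    transition-triggering task enables the next set."""
    assert task in ETS[current_ets], f"{task} not enabled in ETS{current_ets}"
    return transitions.get((current_ets, task), current_ets)

state = perform(1, "Select Read SMS")   # dialogue moves from ETS1 to ETS2
```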
UI generation from a task model to AUI may involve using “canonical abstract components” to link the task model and the AUI. Interactive functions with examples are listed in a table of
From abstraction to realization, abstract prototypes may be based on canonical components. A powerful new form of abstract prototype is described that speeds and simplifies the transition from an abstract task model to a realistic paper prototype by using a standardized set of user interface abstractions. Such canonical components may specify function, size, and position in the emerging design but leave unspecified the appearance and detailed behavior. In this way, canonical prototypes may facilitate high-level design and architectural decision making without the burden of resolving or addressing numerous concrete details. Abstract prototypes based on canonical components may also promote design innovation and facilitate recognition of common user interface design patterns.
A “calculate current enabled task set (ETS)” block 211 of task engine 203 may have an output to a decision symbol 212. Symbol 212 may ask whether there is no next ETS element. If the answer is yes, then the next step may be at terminal 213. If the answer is no, then a next ETS element may be obtained according to block 214. The output according to block 214 may go to a decision symbol 209 which asks the question whether the ETS element task is assigned for the current user. If the answer is no, then that indication is provided to and processed by the decision symbol 212. If the answer to the question of symbol 209 is yes, then the output may be “check the precondition” according to block 215 of reasoner 202, which goes to a decision symbol 216 of task engine 203, which asks whether the result is true. If the answer is no, then that indication is provided to and processed by the decision symbol 212. If the answer is yes, then an output may be to get the temporal operator of the ETS element according to block 217. The step according to block 217 may go to a decision symbol 218 which asks whether the operator is enabling with information. If the answer is no, then a step “query objects by filter” according to block 219 of reasoner 202 may occur. The indication of block 219 may go to a generate AIO block 220 with an output abstract interaction object as indicated by block 221 of task engine 203. Also, an indication of block 220 may go to the decision symbol 212 for processing. If the answer of symbol 218 is yes, then the response may go to block 208 and on to decision symbol 210. It may be noted that after block or step 208, optimized tasks/ETS may be obtained. If an answer from symbol 210 is no, then block 219 may be effected. If the response is yes, then a response may go to decision symbol 212.
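The task-engine flow above can be sketched as a loop. The function names (check_precondition, query_objects_by_filter, generate_aio), the stub reasoner and generator, and the task dictionary layout are assumptions standing in for the components in the figure.

```python
def process_ets(ets_elements, current_user, reasoner, generator):
    """Walk the current enabled task set, filtering by user assignment and
    precondition, and emit abstract interaction objects (AIOs)."""
    aios = []
    for task in ets_elements:                        # symbols 212/214
        if current_user not in task["assigned_to"]:  # symbol 209
            continue
        if not reasoner.check_precondition(task):    # block 215 / symbol 216
            continue
        if task["operator"] == "[]>>":               # symbol 218
            # enabling-with-information path (blocks 208/210, optimization)
            # omitted in this sketch
            continue
        objects = reasoner.query_objects_by_filter(task)    # block 219
        aios.append(generator.generate_aio(task, objects))  # blocks 220/221
    return aios

class StubReasoner:
    def check_precondition(self, task):
        return task.get("precondition", True)
    def query_objects_by_filter(self, task):
        return []

class StubGenerator:
    def generate_aio(self, task, objects):
        return ("AIO", task["name"])

tasks = [
    {"name": "view", "assigned_to": {"alice"}, "operator": ">>"},
    {"name": "edit", "assigned_to": {"bob"}, "operator": ">>"},
]
result = process_ets(tasks, "alice", StubReasoner(), StubGenerator())
```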
UI generation may proceed from the AUI to the CUI (adaptive for a platform). CIOs may be extracted from the AIOs based on a domain object. The right widgets may be selected for the CIOs based on a target widget library. Layout information may be calculated for the current DialogCIO based on the current platform model. If the current DialogCIO cannot be laid out, then a new DialogCIO may be generated with a navigation CIO, and the approach may return to the layout calculation. Appropriate event handlers may be attached for certain CIOs. Then the first DialogCIO may be presented.
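These concretization steps can be sketched roughly as follows; the widget library, the layout-fit predicate, and the navigation-CIO placeholder are simplified stand-ins assumed for illustration.

```python
def generate_cui(aios, widget_library, fits):
    """AUI -> CUI: select a widget for each abstract interaction object and
    start a new DialogCIO (with a navigation CIO) when layout overflows."""
    dialogs, current = [], []
    for aio in aios:
        # Widget selection from the target widget library:
        widget = widget_library.get(aio["type"], "label")
        # Layout check against the current platform's capacity:
        if not fits(current, widget):
            current.append("navigation-cio")  # link to the next dialog
            dialogs.append(current)
            current = []
        current.append(widget)
    if current:
        dialogs.append(current)
    return dialogs   # the first DialogCIO is presented first

widgets = {"text-input": "textbox", "choice": "dropdown"}
aios = [{"type": "text-input"}, {"type": "choice"}, {"type": "text-input"}]
# Toy platform model: at most two widgets per dialog.
cui = generate_cui(aios, widgets, fits=lambda dlg, w: len(dlg) < 2)
```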
Other considerations may be noted. Models and the knowledge base may be enriched. Human factors may be integrated into some knowledge bases, such as UI patterns, design guidelines, and the like. Models and KBs may be improved to raise the quality of a generated UI. The interaction mode should be improved for better UI generation. An enabled task set (ETS) may be used to generate an AUI at the current stage. An integrated interaction design platform may be developed. Making plug-ins may be considered for some platform frameworks, such as Protégé, Eclipse, VS.Net, and so forth. An autonomic UI may involve user behavior modeling and user intent reasoning.
Two parts of the present description may be noted. One is task optimization based on context information. The other is a model-based UI generation solution. The relationship between these two parts may be emphasized. After getting the optimized tasks, one may generate the final UI from them by the model-based solution.
In the present specification, some of the matter may be of a hypothetical or prophetic nature although stated in another manner or tense.
Although the invention has been described with respect to at least one illustrative example, many variations and modifications will become apparent to those skilled in the art upon reading the present specification. It is therefore the intention that the appended claims be interpreted as broadly as possible in view of the prior art to include all such variations and modifications.