CONTEXT AWARE USER INTERFACE PARTS

Abstract
A model for displaying multiple user interface elements such that each control includes a container that includes standard functionality across at least a majority of the user interface elements. For instance, such standard functionality might include a part status indication, a title, a content status indication, a command invocation function, a part resizing function, and so forth. The model may also provide for standardization of resizing of user interface elements. For a given user interface element, there would be a predetermined number of possible sizes and shapes, each corresponding to a different projection of data. For instance, all of the user interface elements on a screen may fall within the predetermined number of possible sizes and shapes, thereby allowing a more functional layout of the user interface on the display.
Description
BACKGROUND

A current paradigm for navigating through various information contexts is windows-based. A classic example of this is the web browser experience. A user might begin with a home page that occupies the entire browser space. The user might then select a hyperlink, whereupon a new window appears. However, the previous window either disappears or, in the case of exercising an option to open the new page in a new window, is fully, or at least partially, hidden.


The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.


SUMMARY

At least some embodiments described herein relate to a model for displaying multiple user interface elements that have controls projecting data thereinto. The model displays the multiple user interface elements such that each control includes a container that includes standard functionality across at least a majority of the user interface elements. For instance, such standard functionality might include a part status indication, a title, a content status indication, a command invocation function, a part resizing function, and so forth.


At least some embodiments described herein relate to the standardization of resizing of user interface elements. In particular, for a given user interface element, there would be a predetermined number of possible sizes and shapes, each corresponding to a different projection of data. For instance, a smaller size user interface element may project data in a different way and perhaps have a smaller amount of data projected than a larger incarnation of the user interface element. In some embodiments, all of the user interface elements on a screen may fall within the predetermined number of possible sizes and shapes, thereby allowing a more functional layout of the user interface on the display.


This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.





BRIEF DESCRIPTION OF THE DRAWINGS

In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:



FIG. 1 abstractly illustrates an example computing system in which the principles described herein may be employed;



FIG. 2A illustrates several part states manifested by a standard functionality of a container, including a non-blocking part status, a blocking part status, and a processing indication status;



FIG. 2B illustrates an error part status expressed by a standard functionality of a container;



FIG. 3 illustrates a part displaying a title and subtitle, which appear consistent throughout all (or a majority of) parts as provided by a standard functionality of a container;



FIG. 4 illustrates a part with various content states provided by a container of a part, the content states including a success content status, a warning content status, and an error content status;


FIG. 5 illustrates two instances of an endpoint monitoring part, showing that different data is projected differently depending on the size of the part;



FIG. 6 illustrates a part having a context menu as part of a standard functionality of a container;



FIG. 7 illustrates a part that represents a web site;



FIG. 8 illustrates a command menu provided as part of a standard functionality provided by a container;



FIG. 9 illustrates a part in which the user may interact with elements in the part in a complex way;



FIG. 10 illustrates a part as it morphs into various states during the creation of a resource;



FIG. 11 illustrates an example management user interface for managing a website and database together, and in which up-to-date status of the deployments of the website may be evaluated; and



FIG. 12 shows two different contexts for a part and illustrates that the standard functionality remains the same.





DETAILED DESCRIPTION

At least some embodiments described herein relate to a model for displaying multiple user interface elements that have controls projecting data thereinto. The model displays the multiple user interface elements such that each control includes a container that includes standard functionality across at least a majority of the user interface elements. For instance, such standard functionality might include a part status indication, a title, a content status indication, a command invocation function, a part resizing function, and so forth.


At least some embodiments described herein relate to the standardization of resizing of user interface elements. In particular, for a given user interface element, there would be a predetermined number of possible sizes and shapes, each corresponding to a different projection of data. For instance, a smaller size user interface element may project data in a different way and perhaps have a smaller amount of data projected than a larger incarnation of the user interface element. In some embodiments, all of the user interface elements on a screen may fall within the predetermined number of possible sizes and shapes, thereby allowing a more functional layout of the user interface on the display.


The principles described herein may be implemented using a computing system. For instance, the users may be engaging with the system using a client computing system. The executable logic supporting the system and providing visualizations thereon may also be performed using a computing system. The computing system may even be distributed. Accordingly, a brief description of a computing system will now be provided.


Computing systems are now increasingly taking a wide variety of forms. Computing systems may, for example, be handheld devices, appliances, laptop computers, desktop computers, mainframes, distributed computing systems, or even devices that have not conventionally been considered a computing system. In this description and in the claims, the term “computing system” is defined broadly as including any device or system (or combination thereof) that includes at least one physical and tangible processor, and a physical and tangible memory capable of having thereon computer-executable instructions that may be executed by the processor. The memory may take any form and may depend on the nature and form of the computing system. A computing system may be distributed over a network environment and may include multiple constituent computing systems. An example computing system is illustrated in FIG. 1.


As illustrated in FIG. 1, in its most basic configuration, a computing system 100 typically includes at least one processing unit 102 and memory 104. The memory 104 may be physical system memory, which may be volatile, non-volatile, or some combination of the two. The term “memory” may also be used herein to refer to non-volatile mass storage such as physical storage media. If the computing system is distributed, the processing, memory and/or storage capability may be distributed as well. As used herein, the term “executable module” or “executable component” can refer to software objects, routines, or methods that may be executed on the computing system. The different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads).


In the description that follows, embodiments are described with reference to acts that are performed by one or more computing systems. If such acts are implemented in software, one or more processors of the associated computing system that performs the act direct the operation of the computing system in response to having executed computer-executable instructions. For example, such computer-executable instructions may be embodied on one or more computer-readable media that form a computer program product. An example of such an operation involves the manipulation of data. The computer-executable instructions (and the manipulated data) may be stored in the memory 104 of the computing system 100. Computing system 100 may also contain communication channels 108 that allow the computing system 100 to communicate with other message processors over, for example, network 110.


The computing system 100 also includes a display 112 on which a user interface, such as the user interfaces described herein, may be rendered. Such user interfaces may be generated in computer hardware or other computer-represented form prior to rendering. The presentation and/or rendering of such user interfaces may be performed by the computing system 100 by having the processing unit(s) 102 execute one or more computer-executable instructions that are embodied on one or more computer-readable media. Such computer-readable media may form all or a part of a computer program product.


Embodiments described herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments described herein also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are physical storage media. Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, embodiments of the invention can comprise at least two distinctly different kinds of computer-readable media: computer storage media and transmission media.


Computer storage media includes RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.


A “network” is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.


Further, upon reaching various computer system components, program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a “NIC”), and then eventually transferred to computer system RAM and/or to less volatile computer storage media at a computer system. Thus, it should be understood that computer storage media can be included in computer system components that also (or even primarily) utilize transmission media.


Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the described features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.


Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, pagers, routers, switches, and the like. The invention may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.


Embodiments described herein relate to a user interface element (also called herein a “part”) that has a number of benefits. A “part” is a computer-recognized artifact that projects controls into the screen (e.g., into the display 112 of the computing system 100). A control is an executable component for projecting a user interface element into the screen. A part is a composition of one or more controls that are configured to execute within a particular computing execution context called herein a “system”.
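By way of illustration only, the relationship between a part and its controls might be modeled as in the following TypeScript sketch. The interface and member names are merely hypothetical and are not mandated by the principles described herein.

```typescript
// Hypothetical model of a control and a part (names are illustrative only).
// A control is an executable component that projects a user interface
// element into the screen; a part composes one or more controls and is
// configured to execute within a particular computing execution context.
interface Control {
  // Projects the control's user interface element into the given target.
  render(target: HTMLElement): void;
}

interface Part {
  readonly controls: Control[];
  // Invoked by the system when the part is first projected onto the screen.
  initialize(): void;
}
```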


As will be described herein, a system presents a user interface (also called herein a “portal”) on a display (such as the display 112 of FIG. 1). The system may have one or more applications running therein. The system may be thought of as an operating system, in which case the one or more applications are simply applications running in the operating system. On the other hand, the system may also itself be just an application (such as a web browser), in which case the applications that run within the system may be thought of as “extensions” to the application. In the description and in the claims, such an “extension” will be termed an “application”.


Application developers can create their own parts (extrinsic parts) or use the parts (intrinsic parts) provided by the system. The intrinsic parts are included within a library of available parts. The extrinsic parts may also use the intrinsic system controls (such as displaying in multiple sizes, handling errors, showing progress, and so forth) and may combine multiple existing controls. The parts have standard rich functionality (such as display in multiple sizes, error handling, progress indication, and so forth).


The parts can be combined into “lenses,” which are groups of parts that can be treated as a unit. Lenses are defined by the extension developers and provide a way of grouping and moving sets of parts around as a single unit.


A part provides a wide set of capabilities to application developers to surface user experience in a consistent manner. These capabilities may include responsiveness (the ability to react and adapt to multiple display sizes), automatic snapping to layout, surfacing of different states (e.g. executing, loading, progress bars, etc.) in a consistent way, blocking so that no further interaction can be done, surfacing consistent error information and conditions, being associated with an asset in the system and reacting to life-cycle events for the asset (e.g. doing the right thing if the asset is deleted), status reporting, mutating to a different part, consistency of the user experience across the system, shared parts between extensions, sanitization of code, and so forth.


The parts provide a convenient model for creating units of user interface. Developers can provide a template and a view model that will take advantage of these available capabilities. In addition, all the code provided by the application developer will be consistent by construction as the code will be sanitized (e.g. will only allow safe script and styles).


The parts provide a programming model that application developers can use to project units of user interface into the portal. Every part follows certain coding patterns to be able to be part of that model, which ensures uniformity in the code, reuse of core functionality and consistent and predictable results in the user experience.


In one embodiment, a part may be authored as a combination of a template (e.g., regular markup, such as HTML5 and knockout.js), style (CSS3), and code (a view model in TypeScript). There are rules that ensure that authors provide a consistent and predictable user experience. For the markup, CSS, and knockout bindings, the system has a “sanitization” component, which is basically a compile-time processing step that removes all code or styles that could be potentially harmful. The rules for the sanitization are owned by the core portal (that is, the portal team). By doing this, application developers can use familiar tools to write the parts (e.g. HTML5, knockout, and CSS) and do not need to learn new languages.
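Purely as an illustrative sketch (the actual markup rules, binding syntax, and sanitization rules would be defined by the portal), a simple part template with knockout-style bindings might resemble the following. The element names and bindings are assumptions made for illustration only.

```typescript
// Hypothetical part template: sanitized HTML5 markup with knockout bindings,
// held here as a TypeScript string constant. The compile-time sanitization
// step would strip any script or potentially harmful style before the
// template is accepted into the portal.
export const websiteStatusTemplate = `
  <div class="part-content">
    <h3 data-bind="text: siteName"></h3>
    <span data-bind="text: status, css: statusCss"></span>
    <ul data-bind="foreach: recentRequests">
      <li data-bind="text: $data"></li>
    </ul>
  </div>
`;
```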


The other aspect of the part is the code (also called herein the “view model”) that can be executed to set up/construct the part or as the result of a user interaction. That code is executed using the portal isolation system (so it does not affect the overall portal). In that view model, application authors can write any valid TypeScript/JavaScript (e.g. compute, retrieve data, transform data, etc.). The template reacts to changes in the view model, enabling rich and reactive patterns (e.g. parts that are updated as new data is available). Even though part developers have a lot of freedom as to how to write their parts, there are certain patterns that they need to follow.
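A minimal view model corresponding to the template sketched above might resemble the following. The knockout observable API is standard, but the class shape and member names are assumptions made only for illustration.

```typescript
import * as ko from "knockout";

// Hypothetical view model. The template reacts to changes in these
// observables, so updating them causes the part to re-render.
export class WebsiteStatusViewModel {
  siteName = ko.observable("contoso");
  status = ko.observable("Running");
  recentRequests = ko.observableArray<string>([]);

  // Recomputed whenever status changes; drives the css binding in the template.
  statusCss = ko.computed(() => (this.status() === "Running" ? "ok" : "warn"));

  // Called when new data is available (e.g. after a data retrieval completes).
  refresh(latest: { status: string; requests: string[] }): void {
    this.status(latest.status);
    this.recentRequests(latest.requests);
  }
}
```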


Every part has a “container” (which is passed to the actual part implementation at the constructor) that is its tie to the shell, through which part authors can invoke standard behavior shared with any other existing part. For example, using the container the extension developers can move from one state of the part to another. This means that they can provide a consistent “loading state” (e.g. the part shows a progress bar and does not allow interaction) just by calling a method on that container and without the need of writing that code themselves (which is prone to errors and UX inconsistencies). Some of the capabilities include showing a loading state, moving to a failure state (e.g. errors), executing operations, blocking all UI interactions, displaying a title and subtitle, and showing overall state visually (to name some of the main capabilities).
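The following TypeScript sketch illustrates how a part might use its container. The container member names shown (title, setLoading, fail, and so forth) are hypothetical and merely stand in for whatever standard functionality the shell actually exposes.

```typescript
// Hypothetical container interface supplying standard functionality to parts.
interface PartContainer {
  title: string;
  subtitle?: string;
  setLoading(blocking: boolean): void; // show progress; optionally block interaction
  fail(message: string): void;         // move the part to the failed state
  clearStatus(): void;                 // return the part to the regular state
}

// A part implementation receives its container at the constructor and calls
// it rather than re-implementing loading and error behavior itself.
class EndpointMonitorPart {
  constructor(private container: PartContainer) {
    container.title = "Endpoint monitoring";
    container.subtitle = "contoso";
  }

  async load(fetchData: () => Promise<unknown>): Promise<void> {
    this.container.setLoading(true); // consistent blocking "loading" state
    try {
      await fetchData();
      this.container.clearStatus();
    } catch {
      this.container.fail("Could not load endpoint data"); // consistent error state
    }
  }
}
```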


For instance, FIG. 2A illustrates several part states manifested by the part in one example. The left part shows a part having a regular status. The middle part shows a blocked part (no user interaction) with a non-deterministic progress bar at the bottom. The right part shows a part with a deterministic progress bar at the bottom. FIG. 2B illustrates a part in a failed state. Such part status indication may be a standard function of the container, and thus be provided consistently across all (or at least a majority of) parts.



FIG. 3 illustrates a part displaying a title and subtitle, which appear consistent throughout all (or a majority of) parts. For instance, the title may have the same font, size, color, and position for all parts. Likewise, the subtitle may have the same font, size, color, and position for all parts. Accordingly, the title and subtitle presentation may be part of the container, and thus part of the standard functionality offered across all (or a majority of) parts.



FIG. 4 illustrates parts with various content statuses. Note that the content status often refers to the status of the part's underlying resource. For instance, an error content status does not mean that the part itself is in error (an error state for the part itself is shown in FIG. 2B). Instead, the error can mean that the underlying website represented by the part is in an error state.


In addition to the consistency mentioned above (surfacing status and information), the part infrastructure is designed to fit well with the portal layout model. Parts automatically display correctly in a grid. As mentioned above, parts also support multiple sizes. Application developers can decide what sizes they want to support and can provide different user experiences for each size. In other words, depending on the part, different data may be projected in different ways within the part.


For instance, FIG. 5 shows two instances of an endpoint monitoring part. In the left instance 500A, the part is small, and thus a smaller amount of data is projected into the part. In the right instance 500B, the part is larger, and thus a larger amount of data is projected into the part, and in a way that allows more detail to be viewed. Thus, resizing a user interface element is much more complex than simply making a user interface element larger or smaller.
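For illustration only, a part might declare which of the predetermined sizes it supports and project a different amount of data for each. The size names and the shape of the code below are assumptions, not part of any particular portal implementation.

```typescript
// Hypothetical set of predetermined sizes offered by the portal.
enum PartSize { Small, Wide, Large }

interface EndpointData {
  upCount: number;
  downCount: number;
  latencyByEndpointMs: Record<string, number>;
}

// A smaller size projects only a summary of the data; larger sizes project
// more data, and in a way that allows more detail to be viewed.
function projectForSize(size: PartSize, data: EndpointData) {
  if (size === PartSize.Small) {
    return { upCount: data.upCount, downCount: data.downCount };
  }
  return data;
}
```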


The supported parts are provided by the portal (so it is a consistent list designed to fit well with the layout engine). As a result of this, extension developers (the portal extension authors) can create their blades using parts in different sizes based on the available real estate and the importance of the information that they want to convey. Users can later decide if they want to resize the part, and if so, they can change the size of the part to any of the supported sizes (see FIG. 5).


In FIG. 6, the context menu allows the user to select one of the available sizes for the selected part. Once the size is selected, the part is rendered in that size. Note that each part has a predetermined number of sizes. For instance, all parts might be subject to a predetermined set of selectable sizes, thereby allowing the parts to fit well within a grid pattern.


Parts can be associated with assets in the portal. For instance, a part that represents a web site is illustrated in FIG. 7. Note how the name, type of resource, type of icon, and status are surfaced in the part.


This means that the part can surface live information about that resource (e.g. status) and offer the users the commands associated with the asset (e.g. stop, start, delete the website, as in FIG. 8). Accordingly, command invocation functions may also be part of the standard functionality offered by the container to each part.
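As an illustrative sketch only, the commands associated with an asset might be surfaced to the container as simple command descriptors, which the container's standard command invocation function would then render and execute. All names below are hypothetical.

```typescript
// Hypothetical descriptors for the commands associated with a website asset.
interface AssetCommand {
  label: string;
  execute(): Promise<void>;
}

// The part supplies the commands; the container surfaces them (e.g. in a
// command menu as in FIG. 8) and calls execute() when the user picks one.
function websiteCommands(
  api: {
    start(site: string): Promise<void>;
    stop(site: string): Promise<void>;
    remove(site: string): Promise<void>;
  },
  site: string
): AssetCommand[] {
  return [
    { label: "Start", execute: () => api.start(site) },
    { label: "Stop", execute: () => api.stop(site) },
    { label: "Delete", execute: () => api.remove(site) },
  ];
}
```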


In addition, if the resource has been deleted, the part can be removed or show a “resource deleted” user experience (this is all done by the portal without the need of part author intervention). Those parts can also initiate the main flow associated with the resource, so they are both an actual representation of that resource and a way of starting the journey associated with the resource.


Part authors can create complex interactions within their parts. For example, a part author may decide to have a chart that, when hovered over or clicked, updates the value of labels below the chart (see FIG. 9), or a chart that, when a single bar is clicked, navigates to another blade.


Part authors can define “hot spots” in their parts (which are interactive regions within the part) that can result in changes to the underlying view model, which in turn can change what is surfaced in the part (remember that the parts are reactive, so changes in the view model are observably pushed to the template). The portal also offers a rich selection model, which parts can leverage (resulting again in consistent behavior across the board and less code to write for the part authors).
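By way of example only, a hot spot handler might simply update observables in the view model, and the reactive template then surfaces the change. The member names below are assumptions made for illustration.

```typescript
import * as ko from "knockout";

// Hypothetical chart view model. A "hot spot" (an interactive region within
// the part) calls onBarActivated; the labels below the chart are bound to
// these observables, so they update automatically when the values change.
class RequestsChartViewModel {
  selectedDay = ko.observable<string | null>(null);
  selectedCount = ko.observable(0);

  // Invoked by the hot spot when a bar in the chart is hovered over or clicked.
  onBarActivated(day: string, count: number): void {
    this.selectedDay(day);
    this.selectedCount(count);
  }
}
```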


In some cases, parts are to change their visual representation to something else. For example, a part that represents a resource being provisioned will change to the actual resource once the provisioning is completed (see FIG. 10).


Another example is setting up continuous deployment in a website: if it is not set, the part to “set continuous deployment” is displayed, but once it is set, the actual continuous deployment is displayed. The set of available changes is scenario specific and is controlled and limited by the system.


As part of the portal, parts are also participants in the binding system. This means that they can offer/receive data from other parts or user interface elements. As a result of this, parts can also react to changes or events that happen in other parts of the portal. For example, we can have a blade with several charts that show requests over a period of time and a part at the top where a user can set the time period. When the time period is changed (e.g. from 1 day to 7 days) all the parts react to that change and their charts display the last seven days. As a result of this, parts can be part of a rich ecosystem and support/enable rich experiences that go beyond the confines of its actual widget.
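For illustration only, the binding described in the example above might be sketched as a shared observable that several chart parts subscribe to. The API shape below is an assumption and not a definitive implementation of the binding system.

```typescript
import * as ko from "knockout";

// Hypothetical sketch of parts reacting to a shared time-period binding.
// A picker part owns the observable; each chart part subscribes to it.
const timePeriodDays = ko.observable(1);

function createRequestsChartPart(name: string): void {
  timePeriodDays.subscribe((days) => {
    // In a real part this would re-query data and update the part's
    // observables; here we only log to show that every part reacts.
    console.log(`${name}: showing requests for the last ${days} day(s)`);
  });
}

createRequestsChartPart("Website requests");
createRequestsChartPart("Database requests");

// Changing the period (e.g. from 1 day to 7 days) updates all chart parts.
timePeriodDays(7);
```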


Applications can expose their parts to be used by other applications. In this way, a rich ecosystem may be provided where different applications can contribute to provide rich experiences. For example, we can create a user interface element with a set of website parts (provided by the Websites application), a database part that shows the database associated with the site (provided by the database application), and a continuous deployment part (provided by TFS). For instance, in the user interface element of FIG. 11, the website and database may be managed together, and up-to-date status of the deployments of the website may be evaluated.


Parts may behave and display uniformly in different contexts. For example, a part can be pinned to the startboard (one context) and preserve all the behaviors that it has in a blade (except for the ones that no longer make sense, like ‘pin to start’). FIG. 12 shows two different contexts 1201 and 1202. The part 1210 is shown with the same command menu (and other standard functionality) regardless of whether the part 1210 is within the context 1201 or the context 1202.


Accordingly, the principles described herein provide a model for presenting user interface elements with standard functionality, regardless of the context in which the parts are displayed. This allows consistent functionality even in composable systems in which multiple applications provide user interface parts to the system. Furthermore, resizing may be part of the standard functionality and may be limited to a predetermined number of sizes and shapes that allow for rational layout of the parts. In addition, resizing of the part affects the data that is projected into the part, and how that data is projected. Thus, the resizing is more intelligent than just making things larger and smaller.


The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims
  • 1. A computer program product comprising one or more computer-readable storage media having thereon computer-executable instructions that are structured such that, when executed by one or more processors of a computing system, cause the computing system to instantiate and/or operate the following: a plurality of user interface elements, each user element representing a visualized container on a screen, each of at least some of the user interface elements associated with one or more controls that project data into the user interface elements, each user interface element also including a container that includes standard functionality across at least a majority of the user interface elements.
  • 2. The computer program product in accordance with claim 1, the standard functionality comprising a part status indication.
  • 3. The computer program product in accordance with claim 2, the part status indication being a blocking or non-blocking part status.
  • 4. The computer program product in accordance with claim 2, the part status indication being a progress indication part status.
  • 5. The computer program product in accordance with claim 2, the part status indication being a failed part status indication.
  • 6. The computer program product in accordance with claim 1, the standard functionality comprising a title displayed in a predetermined place.
  • 7. The computer program product in accordance with claim 6, the predetermined place being a first predetermined place, the standard functionality comprising a subtitle displayed in a second predetermined place.
  • 8. The computer program product in accordance with claim 1, the standard functionality comprising content status indication.
  • 9. The computer program product in accordance with claim 8, the content status indication being a success content status indication.
  • 10. The computer program product in accordance with claim 8, the content status indication being a warning content status indication.
  • 11. The computer program product in accordance with claim 8, the content status indication being an error content status indication.
  • 12. The computer program product in accordance with claim 1, the standard functionality comprising a command invocation function.
  • 13. The computer program product in accordance with claim 1, the standard functionality comprising a part resizing function.
  • 14. The computer program product in accordance with claim 1, at least one of the user interface elements being selectable to generate another user interface element.
  • 15. The computer program product in accordance with claim 1, wherein each of the parts may be moved to different contexts within a user interface without changing the standard functionality.
  • 16. The computer program product in accordance with claim 1, wherein interaction with one portion of the user interface element affects data projected at another portion of the user interface element.
  • 17. A method for visualizing a user interface element on a screen, the method comprising: an act of placing the user interface element configured in a first size and shape on a screen, the user interface element having predetermined number of possible size and shapes, each corresponding to a different projection of data; an act of providing a first projection of data through the user interface element having the first size and shape while placed on the screen.
  • 18. The method in accordance with claim 17, further comprising: an act of detecting a user-issued resize command for the user interface element to change the user interface element to be configured in a second size and shape.
  • 19. The method in accordance with claim 18, further comprising the following in response to the act of detecting: an act of placing the user interface element configured in a second size and shape on the screen; and an act of providing a second projection of data through the user interface element having the second size and shape while placed on the screen, the second projection of data including at least some different data than the first projection of data.
  • 20. A computer program product comprising one or more computer-readable storage media having thereon computer-executable instructions that are structured such that, when executed by one or more processors of a computing system, cause the computing system to instantiate and/or operate the following: a plurality of user interface elements, each user element representing a visualized container on a screen, each of at least some of the user interface elements associated with one or more controls that project data into the user interface elements, each user interface element also including a container that includes standard functionality across at least a majority of the user interface elements, the standard functionality including at least one of the following: a part status indication, a content status indication, a command invocation function, a part resizing function, wherein each of the parts may be moved to different contexts within a user interface without changing the standard functionality.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of each of the following provisional patent applications, and each of the following provisional patent applications is incorporated herein by reference in its entirety: 1. U.S. Provisional Application Ser. No. 61/905,111, filed Nov. 15, 2013; 2. U.S. Provisional Application Ser. No. 61/884,743, filed Sep. 30, 2013; 3. U.S. Provisional Application Ser. No. 61/905,243, filed Nov. 17, 2013; 4. U.S. Provisional Application Ser. No. 61/905,114, filed Nov. 15, 2013; 5. U.S. Provisional Application Ser. No. 61/905,116, filed Nov. 15, 2013; 6. U.S. Provisional Application Ser. No. 61/905,129, filed Nov. 15, 2013; 7. U.S. Provisional Application Ser. No. 61/905,105, filed Nov. 15, 2013; 8. U.S. Provisional Application Ser. No. 61/905,247, filed Nov. 17, 2013; 9. U.S. Provisional Application Ser. No. 61/905,101, filed Nov. 15, 2013; 10. U.S. Provisional Application Ser. No. 61/905,128, filed Nov. 15, 2013; and 11. U.S. Provisional Application Ser. No. 61/905,119, filed Nov. 15, 2013.

Provisional Applications (11)
Number Date Country
61905128 Nov 2013 US
61884743 Sep 2013 US
61905111 Nov 2013 US
61905243 Nov 2013 US
61905114 Nov 2013 US
61905116 Nov 2013 US
61905129 Nov 2013 US
61905105 Nov 2013 US
61905247 Nov 2013 US
61905101 Nov 2013 US
61905119 Nov 2013 US