Computer systems are currently in wide use. Some computer systems run applications that are used for creating, editing and displaying content.
Examples of such applications include word processing applications that can be used to create, edit and display word processing documents. Spreadsheet applications can be used to create, edit and display spreadsheets. Presentation applications (such as slide presentation applications or other presentation applications) can be used to create, edit and display various types of presentations.
The content that is created, edited and displayed can include individually selectable objects. For instance, a word processing document may include selectable text portions, along with graphs, charts, tables, etc. Similarly, spreadsheets may include rows, columns, individual cells, groups of cells, etc. Presentations can include individual slides, each of which can include a wide variety of different types of objects that are displayed on a given slide. When a user wishes to edit an individual object, the applications provide edit functionality that enables the user to actuate user actuatable input mechanisms in order to perform editing functions on the objects.
Mobile devices are also currently in wide use. Mobile devices can include cell phones, smart phones, tablets, or other relatively small computing devices. Mobile devices often have display screens that are small relative to those on other computing devices, such as desktop computing devices, laptop computing devices, presentation display devices, etc. This can make it difficult for a user to interact with small objects that are displayed on the display screen of a mobile device. Not only is it difficult for a user to interact with an object, such as by editing the object, but it can also be difficult for the user to even view such objects. For instance, in a spreadsheet, it can be difficult to read or edit text inside individual spreadsheet cells. In a table in a word processing document, it can also be difficult to see or edit information displayed inside the table cells. Thus, it can be difficult for a user to view, edit, or otherwise interact with such objects on small screen devices, such as mobile devices.
The discussion above is merely provided for general background information and is not intended to be used as an aid in determining the scope of the claimed subject matter.
An item of content is displayed on a display screen. A user interaction with an object within the item of content is detected, and the object is expanded to full screen size and displayed on a destination display screen, in editable form.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the background.
In one example, user 116 interacts with user input mechanisms 112-114 on one or more of the visualizations 108-110, in order to control and manipulate computing system 102. Computing system 102 can illustratively be used to create, edit and display content.
Therefore, in the example shown in
Application component 118 illustratively runs one or more content creation applications, such as word processing applications, slide presentation applications, spreadsheet applications, etc. The applications illustratively include their own content creation and editing systems 120 so that a user interacting with the running application can create, edit and display content. For example, user 116 can use application component 118 to run a word processing application in order to create word processing documents 126. The user can also use it to run a slide presentation application in order to create slide presentations 128, and a spreadsheet application in order to create spreadsheets 130. Of course, the user can interact with other applications in order to create other content 132 as well.
The content may illustratively have individually selectable objects. For instance, word processing documents 126 may include not only selectable text portions, but also graphs, charts, clip art, and tables, among a wide variety of other objects. Each of those objects may be selectable by user 116, using the content creation and editing system 120 for the word processing application. The user can select the object in order to edit it, delete it, or modify it in other ways.
The same is true of slide presentations 128 and spreadsheets 130. For instance, slide presentations 128 may include a plurality of different slides. Each slide may have individually selectable objects displayed thereon. For example, a slide can have individual text objects, graphic objects, etc. Each of those may have selectable animations applied to them. User 116 may use the content creation and editing system 120 in the slide presentation application in order to select one or more of the individual objects on the slide so that the user can edit the object. Spreadsheets 130 may have individual cells or groups of cells (or other objects) that can be selected for editing as well.
Multiple monitor control system 148 can provide functionality so that user 116 can display various items of content using multiple monitors (or display devices) 104-106. For instance, one of the display devices 104 may be on a smart phone or other mobile device of user 116. The other display device may be a large screen display device for a presentation system. The two devices may be paired. In that scenario, display pairing and control component 150 illustratively generates user input mechanisms that can be actuated by user 116 in order to control which items are displayed on which display device. For instance, the user may launch a presentation application using his or her mobile device, but have the presentation displayed on the display device for the presentation system. Of course, this is only one example of how paired devices can be controlled.
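The pairing-and-routing behavior described above can be illustrated with a minimal sketch. The class and method names below are assumptions for illustration only; the specification does not define a programming interface for display pairing and control component 150.

```python
# Hypothetical sketch of routing content to a paired display device.
# If the requested destination is not paired, content falls back to
# the first (local) display.

class Display:
    def __init__(self, name, width, height):
        self.name = name
        self.width = width
        self.height = height
        self.content = None

class DisplayPairingController:
    """Tracks paired displays and routes content to a chosen destination."""

    def __init__(self):
        self.displays = []

    def pair(self, display):
        self.displays.append(display)

    def route(self, content, destination_name):
        # Show the content on the named display, or fall back to the
        # first (local) display if that destination is not paired.
        for d in self.displays:
            if d.name == destination_name:
                d.content = content
                return d
        self.displays[0].content = content
        return self.displays[0]

# Example: a phone paired with a presentation-system display.
phone = Display("phone", 412, 892)
projector = Display("projector", 1920, 1080)
controller = DisplayPairingController()
controller.pair(phone)
controller.pair(projector)
target = controller.route("slide deck", "projector")
```

In this sketch, launching a presentation from the mobile device while routing it to the projector mirrors the scenario described above, and routing to an unknown destination keeps the content on the local screen.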
In another example, the display devices are two separate monitors that are used by the same user in a dual monitor or multi-monitor display mode. In that case, other multi-monitor control components 152 illustratively provide functionality so that user 116 can move the display generated by various applications or components from one monitor to the other. There are also a variety of other multi-monitor control components 152 that can be deployed.
As mentioned in the background, user 116 may be viewing an item of content (such as a word processing document 126) on a mobile device. In that case, it may be difficult for the user to view, select, edit or otherwise modify an individually selectable object within the word processing document. Thus, object interaction detector 136 in visualization system 134 detects that the user is interacting with, or attempting to interact with, an individual object or a group of objects. Orientation calculator 138 illustratively calculates whether the object, once it is expanded, will better be displayed in the portrait or landscape orientation. Once that is calculated, object expansion component 140 automatically calculates the expanded dimensions of the selected object, and visualization generator 142 automatically generates a full screen, expanded view of the selected object, in editable form, without any further action by the user. The full screen view can be displayed on the same display device that the user was initially using, or it can be displayed on a separate display device. This is described in greater detail below.
In any case, the full screen, expanded view of the object is displayed in editable form. Therefore, user 116 can use the content creation and editing system 120 of the application that is used to create and display the content, in order to edit the full screen view of the object. Once user 116 is finished viewing the full screen view of the object, the user can provide a dismiss input, and the display reverts to its previous form, where the object is displayed in the word processing document in its non-expanded form.
In the example shown in
It may be that user 116 wishes to modify the text 160 or one of the objects 162-164. For instance, assume that object 164 is a table with a plurality of different table cells displayed therein. However, it may be difficult for user 116 to edit an individual table cell within the table that comprises object 164. Therefore, in one example, the user can select object 164 and it automatically expands to a full screen display on a destination display screen. For instance, display device 106 shows display screen 166 as the destination display screen with object 164 expanded to full screen size.
Again, it may be that the destination display screen 166 is the same as display screen 158 on a mobile device being used by the user. In that case, once object 164 is expanded to full screen size, it temporarily inhibits viewing of any of the other portions of content 126. Alternatively, destination display screen 166 may be a display screen on a separate device (e.g., a separate monitor), such as a large display screen of a presentation device, etc. In any case, the object 164 is expanded to full screen, in editable form, and displayed on the destination display screen 166.
The user can then illustratively provide some type of dismiss input (such as a touch gesture, a “back” input, a “cancel” input, etc.) to move backward in the progression of expanded views shown in
Visualization system 134 then displays the accessed content on the display screen. This is indicated by block 188. In one example, for instance, the content is a word processing document 126, and it is displayed full screen on the particular display device that the user is using. This is indicated by block 190. In one example, the full screen display shows the content along with user input mechanisms that can be actuated or invoked, such as mechanisms on command bars, ribbons, edit panes or other mechanisms. The content can include multiple selectable objects, as indicated by block 192. Those objects are illustratively displayed within the full screen display of the content. The content can be displayed on the display screen in other ways as well, and this is indicated by block 194.
Object interaction detector 136 then detects user interaction with, or the user's intention to interact with, one or more objects. This is indicated by block 196. This can be done in a wide variety of ways. For instance, when the content is displayed on a touch sensitive screen, the user can touch an individual object. In that case, object interaction detector 136 detects that the user wishes to interact with that object. Touching an individual object is indicated by block 198.
In another example, the user can group objects together. The user can do this by using a touch and drag gesture, or by selecting individual objects, independently of one another, or in other ways. Grouping objects is indicated by block 200. In that case, detector 136 detects that the user wishes to interact with the group of objects.
In another example, object interaction detector 136 is an ocular detector which detects the user's eye focus on an object on the display. This is indicated by block 202.
In another example, the user is using a point and click device, such as a mouse or track ball, and the user clicks on an object. In that case, object interaction detector 136 again detects that the user is interacting with the object. This is indicated by block 204.
In another example, the user simply hovers the cursor over the object for a sufficient period of time. Object interaction detector 136 can detect this as the user's intention to interact with the object as well, and this is indicated by block 206.
It will also be noted that the user can interact with, or indicate an intention to interact with, an object on the display in a wide variety of other ways. Object interaction detector 136 can illustratively detect these ways as well. This is indicated by block 208.
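The detection modes described above (touch, grouping, eye focus, click, and hover) can be sketched as a single dispatch function. The event structure, field names, and hover threshold below are illustrative assumptions, not part of the described system.

```python
# Illustrative sketch of mapping raw input events to an "intends to
# interact" signal, covering the detection modes described above.

HOVER_THRESHOLD_S = 1.5  # assumed dwell time before a hover counts as intent

def detect_interaction(event):
    """Return the target object(s) if the event signals intent, else None."""
    kind = event.get("kind")
    if kind == "touch":            # user touches an individual object
        return event["target"]
    if kind == "group_select":     # touch-and-drag or independent multi-select
        return event["targets"]
    if kind == "gaze":             # ocular detector reports eye focus
        return event["target"]
    if kind == "click":            # point-and-click device (mouse, track ball)
        return event["target"]
    if kind == "hover":            # cursor dwells over the object long enough
        if event["duration"] >= HOVER_THRESHOLD_S:
            return event["target"]
        return None
    return None                    # other interaction modes not modeled here
```

A brief hover returns no target, while a sufficiently long hover, a touch, or a grouped selection all signal an intent to interact.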
Visualization system 134 then determines whether the object expansion feature is enabled. In one example, the object expansion feature can be disabled by the user, by an administrator, or otherwise. In another example, it is always enabled. However, in the example illustrated in
However, if, at block 210, it is determined that the object expansion feature is enabled, then orientation calculator 138 identifies the destination display, where the expanded object (which has been identified as an object that the user is interacting with, or intends to interact with) is to be displayed, full screen. This is indicated by block 212. By way of example, it may be that the user is simply using a single device, and the destination display screen will therefore be the same display screen on which the user is interacting with the object. In another example, the user may be operating in multi-monitor mode, in which case the destination display screen may be on a different display device. Identifying the destination display screen based on operation in multi-monitor mode is indicated by block 214.
In another example, the device that the user is using may be paired with another device. For instance, if the user is using a smart phone and wishes to expand the object to work on it on the user's desktop computer, the smart phone may be paired with the desktop computer, so that the display screen used on the desktop computer is identified as the destination display screen. Having the destination display screen be on a paired device is indicated by block 216.
In another example, the user may be giving a presentation. In that case, the user may be using a smart phone, but the user wishes the expanded object to be displayed on the display screen of a presentation system. In that case, the destination display screen will be the presentation display screen. This is indicated by block 218. Of course, the destination display screen can be other display screens as well, and this is indicated by block 220.
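The destination-screen scenarios above (single device, multi-monitor, paired device, presentation system) can be summarized as a simple priority selection. The context fields and the precedence order shown here are assumptions for illustration; the specification does not prescribe a specific priority among the scenarios.

```python
# A hedged sketch of destination display screen selection. Precedence
# (presentation > paired > secondary monitor > source screen) is an
# assumed policy, not stated in the text.

def pick_destination(context):
    if context.get("presentation_screen"):   # giving a presentation
        return context["presentation_screen"]
    if context.get("paired_screen"):         # device paired, e.g. with a desktop
        return context["paired_screen"]
    if context.get("secondary_monitor"):     # multi-monitor mode
        return context["secondary_monitor"]
    return context["source_screen"]          # default: same display screen
```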
Orientation calculator 138 then calculates the orientation in which the object will be displayed, once expanded. Determining the orientation for the expanded view is indicated by block 222 in
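One plausible rule for the orientation calculation is to compare the object's aspect ratio, so that wide objects (such as tables with many columns) expand in landscape and tall objects in portrait. This specific rule is an assumption; the text only states that the orientation is calculated for the expanded view.

```python
# Assumed orientation rule for orientation calculator 138: wide objects
# expand in landscape, tall (or square) objects in portrait.

def calculate_orientation(object_width, object_height):
    return "landscape" if object_width > object_height else "portrait"
```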
Object expansion component 140 then calculates the expanded view for the identified object. For instance, it can calculate how big the outer periphery of the object will be, so that it can fit, full screen, on the destination display. Calculating the expanded view is indicated by block 224 in
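The expanded-view calculation can be sketched as a scale-to-fit computation: the object's outer periphery is scaled by the largest factor at which it still fits the destination screen. Uniform, aspect-preserving scaling is an assumption about how object expansion component 140 might perform this calculation.

```python
# A sketch of the expanded-view calculation: scale the object's outer
# periphery so it fills the destination screen without distortion.

def calculate_expanded_view(obj_w, obj_h, screen_w, screen_h):
    # Largest uniform scale at which the object still fits the screen.
    scale = min(screen_w / obj_w, screen_h / obj_h)
    return (round(obj_w * scale), round(obj_h * scale))
```

For example, a 100x50 table expanded onto a 1920x1080 destination screen fills the full screen width.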
Visualization generator 142 then generates the visualization of the expanded object in the determined orientation, for the destination display screen, and visualization system 134 then displays the visualization on the destination display screen. This is indicated by blocks 226 and 228 in
While the object is displayed in the expanded view, application component 118 (and specifically content creation and editing system 120) illustratively detects whether the user has provided any edit inputs to edit the object. This is indicated by block 236 in
The system also determines whether the user has provided any other interactions with the object. This is indicated by block 240 in
At some point, visualization system 134 receives a dismiss input. For instance, the user can actuate a “cancel” button on the user interface display. The user can also use a pinch gesture or another type of gesture on a touch sensitive display. The user can actuate a “back” user input mechanism, or the user can provide another type of input that is intended to dismiss the expanded view display. Determining whether such an input is received is indicated by block 250 in
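The dismiss behavior above can be sketched as a small amount of view state: expanding shows the editable full screen view, and any of the dismiss inputs (cancel, back, a pinch gesture, etc.) reverts to the prior display. Class, method, and input names are illustrative assumptions.

```python
# Minimal sketch of expanded-view state and dismiss handling. The set of
# recognized dismiss inputs is an assumption based on the examples above.

DISMISS_INPUTS = {"cancel", "back", "pinch"}

class ExpandedViewController:
    def __init__(self, normal_view):
        self.normal_view = normal_view
        self.current = normal_view
        self.expanded = False

    def expand(self, obj):
        # Show the object full screen, in editable form.
        self.current = {"object": obj, "full_screen": True, "editable": True}
        self.expanded = True

    def handle_input(self, user_input):
        # On a dismiss input, revert to the previous (non-expanded) display.
        if self.expanded and user_input in DISMISS_INPUTS:
            self.current = self.normal_view
            self.expanded = False
            return True
        return False
```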
It can thus be seen that, by simply interacting with, or reflecting an intention to interact with, an object on a display, the object is automatically expanded to full screen view, with editing functionality. The user can edit the expanded object, then dismiss the expanded view and return to the normal view, with the edits reflected in the object. This significantly improves performance, both of the user and of the computing system. The user is more efficient because objects can be edited even on a relatively small screen device, simply by interacting with them and having them automatically expanded to full screen view. The computing system is also improved. Because the object is enlarged to full screen, the user can edit it more quickly and accurately, avoiding the scenario in which the user attempts an edit on a small object, makes the wrong edit, erases it, and attempts another edit. Such erroneous interactions increase the processing overhead of the computing system and therefore affect its performance. By automatically expanding the desired object to full screen view, edits can be made more precisely using gestures or editing inputs, so the system need not repeatedly process erroneous inputs. This saves processing overhead and user time, rendering both the computer system and the user more efficient.
The present discussion has mentioned processors and servers. In one embodiment, the processors and servers include computer processors with associated memory and timing circuitry, not separately shown. They are functional parts of the systems or devices to which they belong and are activated by, and facilitate the functionality of the other components or items in those systems.
Also, a number of user interface displays have been discussed. They can take a wide variety of different forms and can have a wide variety of different user actuatable input mechanisms disposed thereon. For instance, the user actuatable input mechanisms can be text boxes, check boxes, icons, links, drop-down menus, search boxes, etc. They can also be actuated in a wide variety of different ways. For instance, they can be actuated using a point and click device (such as a track ball or mouse). They can be actuated using hardware buttons, switches, a joystick or keyboard, thumb switches or thumb pads, etc. They can also be actuated using a virtual keyboard or other virtual actuators. In addition, where the screen on which they are displayed is a touch sensitive screen, they can be actuated using touch gestures. Also, where the device that displays them has speech recognition components, they can be actuated using speech commands.
A number of data stores have also been discussed. It will be noted that they can each be broken into multiple data stores. All can be local to the systems accessing them, all can be remote, or some can be local while others are remote. All of these configurations are contemplated herein.
Also, the figures show a number of blocks with functionality ascribed to each block. It will be noted that fewer blocks can be used so the functionality is performed by fewer components. Also, more blocks can be used with the functionality distributed among more components.
The description is intended to include both public cloud computing and private cloud computing. Cloud computing (both public and private) provides substantially seamless pooling of resources, as well as a reduced need to manage and configure underlying hardware infrastructure.
A public cloud is managed by a vendor and typically supports multiple consumers using the same infrastructure. Also, a public cloud, as opposed to a private cloud, can free up the end users from managing the hardware. A private cloud may be managed by the organization itself and the infrastructure is typically not shared with other organizations. The organization still maintains the hardware to some extent, such as installations and repairs, etc.
In the example shown in
It will also be noted that architecture 100, or portions of it, can be disposed on a wide variety of different devices. Some of those devices include servers, desktop computers, laptop computers, tablet computers, or other mobile devices, such as palm top computers, cell phones, smart phones, multimedia players, personal digital assistants, etc.
Under other embodiments, applications or systems are received on a removable Secure Digital (SD) card that is connected to an SD card interface 15. SD card interface 15 and communication links 13 communicate with a processor 17 (which can also embody processor 122 from
I/O components 23, in one embodiment, are provided to facilitate input and output operations. I/O components 23 for various embodiments of the device 16 can include input components such as buttons, touch sensors, multi-touch sensors, optical or video sensors, voice sensors, touch screens, proximity sensors, microphones, tilt sensors, and gravity switches, as well as output components such as a display device, a speaker, and/or a printer port. Other I/O components 23 can be used as well.
Clock 25 illustratively comprises a real time clock component that outputs a time and date. It can also, illustratively, provide timing functions for processor 17.
Location system 27 illustratively includes a component that outputs a current geographical location of device 16. This can include, for instance, a global positioning system (GPS) receiver, a LORAN system, a dead reckoning system, a cellular triangulation system, or other positioning system. It can also include, for example, mapping software or navigation software that generates desired maps, navigation routes and other geographic functions.
Memory 21 stores operating system 29, network settings 31, applications 33, application configuration settings 35, data store 37, communication drivers 39, and communication configuration settings 41. Memory 21 can include all types of tangible volatile and non-volatile computer-readable memory devices. It can also include computer storage media (described below). Memory 21 stores computer readable instructions that, when executed by processor 17, cause the processor to perform computer-implemented steps or functions according to the instructions. Similarly, device 16 can have a client system 24 which can run various business applications or embody parts or all of system 102. Processor 17 can be activated by other components to facilitate their functionality as well.
Examples of the network settings 31 include things such as proxy information, Internet connection information, and mappings. Application configuration settings 35 include settings that tailor the application for a specific enterprise or user. Communication configuration settings 41 provide parameters for communicating with other computers and include items such as GPRS parameters, SMS parameters, connection user names and passwords.
Applications 33 can be applications that have previously been stored on the device 16 or applications that are installed during use, although these can be part of operating system 29, or hosted external to device 16, as well.
Additional examples of devices 16 can be used as well. Device 16 can be a feature phone, smart phone or mobile phone. The phone can include a set of keypads for dialing phone numbers, a display capable of displaying images including application images, icons, web pages, photographs, and video, and control buttons for selecting items shown on the display. The phone can include an antenna for receiving cellular phone signals such as General Packet Radio Service (GPRS) and 1xRTT, and Short Message Service (SMS) signals. In some examples, the phone also includes a Secure Digital (SD) card slot that accepts an SD card.
The mobile device can also be a personal digital assistant (PDA) or a multimedia player or a tablet computing device, etc. (hereinafter referred to as PDA). The PDA includes an inductive screen that senses the position of a stylus (or other pointers, such as a user's finger) when the stylus is positioned over the screen. This allows the user to select, highlight, and move items on the screen as well as draw and write. The PDA can also include a number of user input keys or buttons which allow the user to scroll through menu options or other display options which are displayed on the display, and allow the user to change applications or select user input functions, without contacting the display. Although not shown, the PDA can include an internal antenna and an infrared transmitter/receiver that allow for wireless communication with other computers as well as connection ports that allow for hardware connections to other computing devices. Such hardware connections are typically made through a cradle that connects to the other computer through a serial or USB port. As such, these connections are non-network connections.
Note that other forms of the devices 16 are possible.
Computer 810 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 810 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media is different from, and does not include, a modulated data signal or carrier wave. It includes hardware storage media including both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 810. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 830 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 831 and random access memory (RAM) 832. A basic input/output system 833 (BIOS), containing the basic routines that help to transfer information between elements within computer 810, such as during start-up, is typically stored in ROM 831. RAM 832 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 820. By way of example, and not limitation,
The computer 810 may also include other removable/non-removable volatile/nonvolatile computer storage media. By way of example only,
Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
The drives and their associated computer storage media discussed above and illustrated in
A user may enter commands and information into the computer 810 through input devices such as a keyboard 862, a microphone 863, and a pointing device 861, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 820 through a user input interface 860 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A visual display 891 or other type of display device is also connected to the system bus 821 via an interface, such as a video interface 890. In addition to the monitor, computers may also include other peripheral output devices such as speakers 897 and printer 896, which may be connected through an output peripheral interface 895.
The computer 810 is operated in a networked environment using logical connections to one or more remote computers, such as a remote computer 880. The remote computer 880 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 810. The logical connections depicted in
When used in a LAN networking environment, the computer 810 is connected to the LAN 871 through a network interface or adapter 870. When used in a WAN networking environment, the computer 810 typically includes a modem 872 or other means for establishing communications over the WAN 873, such as the Internet. The modem 872, which may be internal or external, may be connected to the system bus 821 via the user input interface 860, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 810, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
It should also be noted that the different embodiments described herein can be combined in different ways. That is, parts of one or more embodiments can be combined with parts of one or more other embodiments. All of this is contemplated herein.
Example 1 is a computing system, comprising:
an application component that runs an application to display an item of content on a computing system display screen, the item of content including a selectable object;
an object interaction detector that detects user activity corresponding to the object; and
a visualization generator that automatically generates an expanded view of the object in editable form, based on the detected user activity.
Example 2 is the computing system of any or all previous examples wherein the visualization generator generates the expanded view to show a full screen view of the object on a destination display screen.
Example 3 is the computing system of any or all previous examples wherein the destination display screen is the computing system display screen.
Example 4 is the computing system of any or all previous examples wherein the destination display screen comprises a display screen that is different from the computing system display screen.
Example 5 is the computing system of any or all previous examples wherein the computing system display screen comprises a display screen on a mobile device.
Example 6 is the computing system of any or all previous examples wherein the destination display screen comprises a display screen on a paired device that is paired with the mobile device.
Example 7 is the computing system of any or all previous examples wherein the computing system display screen comprises a display screen on a first monitor and wherein the destination display screen comprises a display screen on a second monitor, and further comprising:
a multi-monitor component that controls displays on the display screens of the first and second monitors.
Example 8 is the computing system of any or all previous examples and further comprising:
an object expansion component that calculates a size of the expanded view of the object for generation by the visualization generator.
Example 9 is the computing system of any or all previous examples and further comprising:
a view orientation calculator that determines in which orientation, of a plurality of different orientations, the expanded view of the object is generated, based on a geometry of the destination display screen and a geometry of the object.
Example 10 is the computing system of any or all previous examples wherein the application component comprises:
a content creation and editing system that displays edit user input mechanisms that are actuated to edit the object in the expanded view, and that receives user actuation of the edit user input mechanisms, performs corresponding edits on the object in the expanded view, and saves the edits to the object.
Example 11 is the computing system of any or all previous examples wherein the application component runs one of a word processing application, a spreadsheet application and a slide presentation application.
Example 12 is a method, comprising:
displaying an item of content on a display screen;
detecting user interaction with a displayed object in the item of content; and
automatically displaying a full screen display of the object, in editable form, based on the detected user interaction.
Example 13 is the method of any or all previous examples and further comprising:
receiving an edit user input through the full screen display;
performing an edit on the object based on the edit user input; and
upon exiting the full screen display, saving the edit to the object.
Example 14 is the method of any or all previous examples wherein automatically displaying a full screen display of the object comprises:
identifying a destination display screen; and
automatically displaying the full screen display of the object on the destination display screen.
Example 15 is the method of any or all previous examples wherein detecting user interaction comprises:
detecting user selection of the displayed object.
Example 16 is the method of any or all previous examples wherein detecting user interaction comprises detecting user selection of a group of objects, and wherein automatically displaying a full screen display comprises:
automatically displaying a full screen display of the group of objects.
Example 17 is a computing system, comprising:
a first display screen;
a content creation and editing system that generates user input mechanisms that are actuated to create an item of content with a selectable object;
a visualization generator that generates a first display, on the first display screen, of the item of content with the object; and
an object interaction detector that detects a user interaction input interacting with the object, the visualization generator automatically displaying a second display comprising a full screen, editable view of the object, based on the detected user interaction input.
Example 18 is the computing system of any or all previous examples and further comprising:
a second display screen, the visualization generator displaying the first display on the first display screen and the second display on the second display screen.
Example 19 is the computing system of any or all previous examples and further comprising:
an orientation calculator component that calculates an orientation in which the second display is to be displayed, based on a geometry of a display screen used to display the second display and based on a geometry of the object.
Example 20 is the computing system of any or all previous examples wherein the content creation and editing system comprises a part of at least one of a word processing application, a slide presentation application or a spreadsheet application.
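The size calculation of Example 8 and the geometry-based orientation selection of Example 9 can be sketched as follows. This Python fragment is a minimal, hypothetical illustration, not an implementation from the description; `Rect`, `pick_orientation`, and `expanded_size` are names invented for the example, and the rotate-to-fit policy is one assumed way of using both geometries.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    width: int
    height: int

def pick_orientation(screen: Rect, obj: Rect) -> str:
    """Sketch of Example 9: choose the expanded view's orientation from
    the geometries involved -- a wide object is shown in landscape, a
    tall one in portrait."""
    return "landscape" if obj.width >= obj.height else "portrait"

def expanded_size(screen: Rect, obj: Rect) -> Rect:
    """Sketch of Example 8: scale the object up to fill the destination
    display screen while preserving its aspect ratio. If the chosen
    orientation disagrees with the screen's, the drawing surface is
    treated as rotated (its dimensions are swapped) first."""
    sw, sh = screen.width, screen.height
    screen_landscape = sw >= sh
    if (pick_orientation(screen, obj) == "landscape") != screen_landscape:
        sw, sh = sh, sw  # rotate the drawing surface to match the view
    scale = min(sw / obj.width, sh / obj.height)
    return Rect(int(obj.width * scale), int(obj.height * scale))
```

For instance, a wide table cell of 100x50 on a portrait 1080x1920 phone screen would, under these assumptions, be shown rotated to landscape and scaled to fill the screen's long axis.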
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.