Generally described, computing devices and communication networks can be utilized to exchange information. In a common application, a computing device can request content from another computing device via the communication network. For example, a user at a personal computing device can utilize a software browser application to request a page from a server computing device via the Internet or another network. In such embodiments, the user computing device can be referred to as a client computing device and the server computing device can be referred to as a content provider.
From the perspective of a user utilizing a client computing device, a user experience can be defined in part in terms of the performance and latencies associated with obtaining network content over a communication network, such as obtaining a Web page, processing embedded resource identifiers, generating requests to obtain embedded resources, and rendering content on the client computing device. Latencies and performance limitations of a particular client computing device or network may diminish the user experience. Additionally, latencies and inefficiencies may be especially apparent on computing devices with limited resources, such as limited processing power, memory or network connectivity, which may occur on a mobile computing device like a tablet or smartphone. The user experience on certain mobile devices when viewing a given page may also be adversely affected by a limited screen size and/or limited input options (e.g., user interactions being limited to touches on a touchscreen instead of access to a keyboard and multi-button mouse).
For the above and other reasons, website operators or other network content providers will often design different versions of their pages for display on client mobile devices than for display on display monitors of traditional desktop or laptop computers. Pages designed specifically for mobile devices are often referred to as mobile friendly or mobile optimized pages. Relative to a standard version of a given page, a mobile optimized page may include, for example, a rearranged content layout, larger selectable options to account for imprecise touch gestures, and/or other changes. Developers and/or designers often spend substantial time designing mobile optimized pages and testing the usability and appearance of such pages on various mobile devices.
Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. The drawings are provided to illustrate embodiments of the inventive subject matter described herein and not to limit the scope thereof.
Generally described, aspects of the present disclosure relate to automated approaches for determining the portions of one or more user interfaces or pages, such as webpages, that are likely to be of interest to a user who is accessing a page from a mobile computing device, and providing for automated generation of an optimized page that includes only a relevant subset of the original page content. In one embodiment, an intermediary system is positioned in a network between client computing devices and an organization's content server. In one example, the organization may be a company, the page content available via the organization's content server may be corporate intranet pages available only to employees or other members of the organization, and the client computing devices may be used by the organization's employees or other members to view the pages. The intermediary system may receive a variety of user interaction data (including, for example, scroll requests, zoom requests, text highlighting, and others) beyond what would typically be captured and returned to a server in a traditional browsing environment that does not utilize an intermediary system as described herein. The intermediary system may analyze the interaction data and the series of actions followed by a potentially large number of users to identify the portions of one or more pages that are typically of interest to one or more groups of users (such as page portions that are most often accessed or most often interacted with) when accessing pages from their mobile computing devices, and may generate a mobile-optimized page template that is then used to generate graphical page representations for delivery to mobile client computing devices in response to subsequent page requests from users in the given user group.
In one embodiment, when a client computing device requests a page associated with the organization's content server, the page may first be generated in a graphical representation form with associated control information (discussed further below) by the intermediary system, then delivered from the intermediary system for display in the graphical form to the client computing device. The graphical representation of the page may be sent to the client computing device along with code that enables the user's browsing experience to mimic a traditional browsing experience (for example, interactions with the page may not seem different from the user's perspective than if the original page had been sent to the client computing device from the organization's content server rather than a graphical representation of the page having been sent from the intermediary system). In order for the client computing device to respond properly to user interactions with the graphical page representation, code and control information associated with the graphical page representation may instruct the client computing device to send various interaction data identifying user actions taken with respect to the page (such as scroll requests, zoom requests, text highlighting, and others) in order for the intermediary system to determine whether additional content should be sent to the client computing device in response to each interaction.
Aspects of the present disclosure include leveraging this rich interaction data to at least semi-automate the creation of mobile-optimized versions of pages that are available as part of a company's internal website or other set of pages or user interfaces that are available to the organization's employees or other members. This is particularly beneficial in the context of a company's internal pages that are not accessed by people outside of the organization's employees or members because organizations often do not invest as many resources in manually optimizing these pages for access from mobile computing devices as the same organizations may invest in optimizing their publicly-accessible pages that are typically accessed by a much larger number of people, including by customers.
As discussed above, one method for reducing page load times is to offload some of the processing (including rendering) to an intermediary system (e.g., a proxy system) that sits logically between the user's computing device and a system that hosts the network page or content page. For example, instead of the user's device accessing the host system that hosts the network page (e.g., webpage) to access or retrieve each content resource (e.g., images, text, audio, video, scripts, etc.) of the network page, the intermediary system can retrieve each of the content resources, render the page, and generate a graphical representation (e.g., digital photos, images, or snapshots) of all or a portion of the page. The intermediary system can then provide the graphical representation to the user's device for presentation to the user. This approach often reduces page load times experienced by end users. This reduction is due in part to the reduced need for the user device to handle computation-intensive page rendering tasks. Thus, presenting a graphical representation of the page to a user can result in noticeable performance improvements.
However, for content pages that include interactive elements (e.g., search fields, drop-down boxes, hyperlinks, etc.), presenting a graphical representation of the page without more may be unacceptable for some use-cases. One solution to this problem involves emulating one or more interactive elements, or controls, of a content page on a user device. The emulated interactive elements or emulated controls may be included as part of an overlay layer positioned above a graphical representation of the content page. Information or control metadata for each control included in the content page may be provided to a user device along with the graphical representation of the content page. The user device, or systems therein, may use the control metadata to identify types of controls to emulate. Further, the user device may identify where to position the emulated control with respect to the graphical representation of the content page based at least partially on the received control metadata. Systems and methods for providing such emulated controls are discussed in more detail in co-owned U.S. Pat. No. 9,037,975, titled “Zooming Interaction Tracking and Popularity Determination,” filed Feb. 10, 2012 and issued May 19, 2015.
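As a rough sketch of the approach described above, the control metadata accompanying a graphical page representation might take a form like the following. The field names, coordinate scheme, and values here are illustrative assumptions for discussion, not a format defined by the disclosure.

```python
# Illustrative control metadata for two controls on a content page.
# Field names ("control_id", "type", coordinates) are assumptions.
control_metadata = [
    {"control_id": "search_field_1", "type": "text_field",
     "x": 40, "y": 12, "width": 200, "height": 24},
    {"control_id": "submit_btn_1", "type": "button",
     "x": 250, "y": 12, "width": 60, "height": 24},
]

def plan_emulated_controls(metadata):
    """Derive what a controls emulator needs from the metadata: how
    many controls to emulate, which types, and where to position each
    one over the graphical page representation."""
    count = len(metadata)
    types = sorted({entry["type"] for entry in metadata})
    positions = {entry["control_id"]: (entry["x"], entry["y"])
                 for entry in metadata}
    return count, types, positions
```

A user device receiving this metadata could then layer one emulated control of each listed type at each listed position above the page image.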
Interactions with the emulated controls may be provided to the intermediary system for processing. Further, an identifier of the control being simulated by the emulated control that received the user input may be provided to the intermediary system. Using the identifier of the control to identify the control, the intermediary system may interact with the control of the content page at a host system based on the user input received by the emulated controls. In some cases, interactions with the content page may result in the content page being modified and/or a new content page being retrieved. In some such cases, the intermediary system may generate a new graphical representation associated with the updated content page and/or the new content page. This new graphical representation may be provided to the user device for display to the user. Thus, in certain embodiments, the processing of interactions with a content page may be offloaded from a user device to an intermediary system. Further, in some embodiments, by offloading the processing to the intermediary system, page load times perceived by end users are reduced without loss of interactivity with the page. Aspects of the present disclosure include further optimizing such content pages for display on mobile computing devices based on an analysis of aggregated user interactivity data for various user groups.
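The round trip described above can be sketched as follows: the user device packages the input together with the identifier of the control being simulated, and the intermediary system resolves that identifier back to the real control and applies the input. The message shape and control representation here are hypothetical, for illustration only.

```python
def report_interaction(control_id, user_input):
    """Client-side sketch: package an emulated-control interaction
    for the intermediary system (hypothetical message shape)."""
    return {"control_id": control_id, "input": user_input}

def replay_on_host(message, page_controls):
    """Intermediary-side sketch: resolve the identifier back to the
    real control of the content page, then apply the user input as
    if it had been entered into the original control."""
    control = page_controls[message["control_id"]]
    control["value"] = message["input"]  # interact with the real control
    return control

# Hypothetical state of the controls on the host's content page.
page_controls = {"search_field_1": {"type": "text_field", "value": ""}}
msg = report_interaction("search_field_1", "quarterly report")
updated = replay_on_host(msg, page_controls)
```

After replaying the interaction, the intermediary system would re-render the (possibly modified) page and return a fresh graphical representation to the user device.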
As will be appreciated by those of skill in the relevant art, a network environment may include any number of distinct user devices 102 and/or content sources 106, 108. In addition, multiple (e.g., two or more) intermediary systems 104 may be used. For example, separate intermediary systems 104 may be located so that they are close (in either a geographical or networking sense) to groups of current or potential user devices 102 or content sources 106, 108. In such a configuration, a user device 102 may request content via the intermediary system 104 to which it is closest, rather than all user devices 102 requesting content via a single intermediary system 104. In some embodiments, the intermediary system 104 may operate in association with only a single organization's content server(s) 106, while in other embodiments the intermediary system 104 may provide page rendering and optimization services to a number of different organizations that each operate different content servers.
The user devices 102 can include a wide variety of computing devices, including personal computing devices, terminal computing devices, laptop computing devices, tablet computing devices, electronic reader devices, mobile devices (e.g., mobile phones, media players, handheld gaming devices, etc.), wearable devices with network access and program execution capabilities (e.g., “smart watches” or “smart eyewear”), wireless devices, set-top boxes, gaming consoles, entertainment systems, televisions with network access and program execution capabilities (e.g., “smart TVs”), kiosks, and various other electronic devices and appliances. Individual user devices 102 may execute a browser application 120 to communicate via the network 110 with other computing systems, such as the intermediary system 104 or content sources 106 and 108, in order to request and display content.
Illustratively, a user may use a browser application 120, or other application capable of accessing a network site, to request network-accessible content (e.g., content pages, images, video, etc.) hosted or provided by a content source, such as an organization's content server 106 or a CDN server 108. The user device 102 or browser application 120 may be associated with the intermediary system 104 or otherwise configured to request the content through, and receive content display commands from, the intermediary system 104 rather than communicating directly with the content source. The browser application 120 may include a remote graphics module 122 that receives remotely-generated display commands, such as those generated by the intermediary system 104. The remote graphics module 122 (or some other module of the browser application 120 or user device 102) can execute the remotely-generated display commands to display a representation of the requested content on the user device 102. Advantageously, the remote graphics module 122 may facilitate the display of graphical representations of requested content on the user device 102 without requiring the user device 102 to receive content files (e.g., HTML files, JPEG images, etc.) from content sources 106 and 108.
In some embodiments, the browser 120 may be a conventional web browser or network-site browser that is not specifically designed or configured to execute remotely-generated graphics commands and other display commands. For example, the browser 120 may use or otherwise be associated with a remote graphics module 122 that may or may not be integrated with the browser 120, such as a browser add-in or extension. In some embodiments, applications other than a browser 120 may include or use a remote graphics module 122 (or some similar module) to execute graphics commands generated by an intermediary system 104. For example, content aggregators or other specialized content display applications for mobile devices may utilize a remote graphics module 122.
The browser 120 may include a controls emulator 124, which may be configured to emulate, or generate representations of, one or more controls of a content page. The controls emulator 124 may use control metadata received from the intermediary system 104 to determine the number of controls to emulate, the type of controls to emulate, and the location of the controls with respect to a content page. Using the control metadata, the controls emulator 124 can emulate one or more controls and position the emulated controls over a graphical representation of the content page on the user device. Advantageously, in certain embodiments, by positioning emulated controls over the graphical representation of the content page a user can interact with the content page despite being presented with the graphical representation of the content page in place of the content page. In other words, in some cases, although a user may be presented with an image or snapshot of the content page, the user may interact with the content page using the emulated controls that are positioned or layered over the image of the content page.
In certain embodiments, the emulated controls include the same or similar functionality as the controls they mimic. When a user interacts with the emulated controls, the interaction and/or input to the emulated controls may be provided to the intermediary system 104. Intermediary system 104 may replicate the interaction on a system that hosts the content page (e.g., the organization's content server 106) and/or may provide the input to the host system of the content page. In certain embodiments, intermediary system 104 may access and/or retrieve a modified version of the content page that is responsive to the interaction with and/or input provided to the host system of the content page. The intermediary system 104 may generate a graphical representation of the modified version of the content page and provide the graphical representation to the user device 102. Thus, in certain embodiments, the user may interact with a content page via emulated controls and through the intermediary system 104 despite being presented with an image or graphical representation of the content page at the user device 102 in place of the content page itself. In some embodiments, the functionality of the remote graphics module 122 and/or controls emulator 124 may be implemented by a typical browser application operating on the user device 102 as a result of the browser executing code received from the intermediary system 104 when receiving the graphical representation of a page, without necessarily requiring any specialized browser, software installation, or a browser plug-in on the user device.
The intermediary system 104 can be a computing system configured to retrieve content on behalf of user devices 102 and generate display commands for execution by the user devices 102. For example, the intermediary system 104 can be a physical server or group of physical servers that may be accessed via the network 110. In some embodiments, the intermediary system 104 may be a proxy server, a system operated by an internet service provider (ISP), and/or some other device or group of devices that retrieves content on behalf of user devices 102.
The intermediary system 104 may include various modules, components, data stores, and the like to provide the content retrieval and processing functionality described herein. For example, the intermediary system 104 may include a server-based browser application or some other content rendering application to process content retrieved from content sources. Such a content rendering application may be referred to as a “headless browser” 140. Generally described, a headless browser 140 does not (or is not required to) cause display of content by a graphical display device of the server on which the headless browser 140 is executing. Instead, the headless browser 140 provides display commands, graphical representations, images, or other data or commands to separate user devices 102 that can cause the presentation of content accessed by the headless browser 140 on one or more of the separate user devices 102. Illustratively, the headless browser 140 may obtain requested content from an organization's content server 106 and/or CDN server 108, obtain additional items (e.g., images and executable code files) referenced by the requested content, execute code (e.g., JavaScript) that may be included in or referenced by the content, generate graphics commands to display a graphical representation of the content, and transmit the graphics commands to the user device 102. Further, in some cases, the headless browser 140 may create graphical representations of a content page or a network page, or one or more content resources of the content page, and provide the graphical representations to the user device 102. By performing some or all of these operations at the intermediary system 104, the substantial computing resources and high-speed network connections typically available to network-based server systems may be leveraged to perform the operations much more quickly than would be possible on a user device 102 with comparatively limited processing capability.
The headless browser 140 may include various modules to provide the functionality described above and in greater detail below. For example, the headless browser 140 may include a content processing module 150, a graphics processing module 152, and an interaction processing module 154. The content processing module 150 may include any system that can parse content files and generate a document object model (“DOM”) or similar representation of the content. Further, in some cases, the content processing module 150 may include logic for determining how to divide a content page into a set of tiles to be provided to the browser 120 and/or the remote graphics module 122.
The graphics processing module 152 may include any system that can receive the DOM representation and generate display commands (e.g., SKIA commands) to render a graphical representation of the content on a user device 102. In some cases, the graphics processing module 152 may further receive definitions or metadata for each tile from the set of tiles determined by the content processing module 150. The graphics processing module 152 may use the tile definitions to generate the display commands to render the graphical representation of the content at the user device 102. For instance, each tile may be associated with its own display command or set of commands for displaying the tile on the user device 102. In some embodiments, the graphics processing module 152 instead of, or in addition to, the content processing module 150 may determine how to divide the content page into the set of tiles. The interaction processing module 154 may include any system that communicates with the browser 120 to receive information regarding interactions with the content on the user device 102 and to update the graphical representation of the content, if necessary. Further, the interaction processing module 154 may provide the tiles and/or display commands to the user device 102. In some embodiments, a headless browser 140 may include additional or fewer modules than those shown in
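The tiling described above can be sketched as follows. Each tile definition carries the bounds that a per-tile display command would need; the fixed tile size is an illustrative assumption, since the disclosure does not prescribe how a page is divided.

```python
def divide_into_tiles(page_width, page_height, tile_size=256):
    """Split a rendered page into fixed-size tile definitions.
    The 256-pixel tile size is an arbitrary illustrative choice.
    Edge tiles are clipped to the page bounds."""
    tiles = []
    for top in range(0, page_height, tile_size):
        for left in range(0, page_width, tile_size):
            tiles.append({
                "left": left,
                "top": top,
                "width": min(tile_size, page_width - left),
                "height": min(tile_size, page_height - top),
            })
    return tiles
```

A graphics processing module could then emit one display command (or set of commands) per tile definition, so the user device can render or update tiles independently.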
The intermediary system 104 may include additional modules, components, data stores, and the like to provide the features described above and in greater detail below. For example, the intermediary system 104 may include a cache 142 that stores content items retrieved from content sources 106 and/or 108, graphics commands generated by the headless browser 140, graphical representations of content resources or portions of the content page, and the like. The intermediary system 104 may also include a logged user behaviors data store 144 that stores information about user requests and interactions with content.
In some embodiments, the cache 142 may store graphical representations of content pages generated by the headless browser 140, together with any controls metadata for emulating one or more controls included in the content pages, for a predetermined period of time after the content page request or after connection between the user device and the intermediary system has terminated. In some embodiments, interactions stored in the logged user behaviors data store 144 can be used to deliver a graphical representation of the content page and controls metadata reflecting previous user interactions with the page. The logged user behaviors data store 144 may, in some embodiments, store an activity stream or set of actions performed by each user, including data associating the activity with a particular user and optionally with a particular browsing session, as will be described further below.
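A minimal sketch of the retention behavior described above follows: the cache keeps a page's graphical representation together with its controls metadata, and entries expire after a fixed period. The class name, entry shape, and time-to-live policy are illustrative assumptions.

```python
import time

class RepresentationCache:
    """Sketch of a cache (like cache 142) retaining a page's graphical
    representation and controls metadata for a predetermined period.
    The fixed TTL policy here is an illustrative assumption."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._entries = {}  # page_url -> (snapshot, metadata, stored_at)

    def put(self, page_url, snapshot, controls_metadata, now=None):
        now = time.time() if now is None else now
        self._entries[page_url] = (snapshot, controls_metadata, now)

    def get(self, page_url, now=None):
        """Return (snapshot, metadata), or None if absent or expired."""
        now = time.time() if now is None else now
        entry = self._entries.get(page_url)
        if entry is None or now - entry[2] > self.ttl:
            return None
        return entry[0], entry[1]
```

On a repeat request within the retention window, the intermediary system could serve the cached representation and metadata instead of re-rendering the page.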
As further illustrated in
Although in the examples described herein the intermediary system 104 is configured to communicate between the organization's content servers 106 and user devices 102 to execute the processes described herein, in some embodiments the organization's content servers 106 can be configured to generate graphical representations of content pages and to provide controls metadata to enable the user devices 102 to emulate controls in the content pages and to send the graphical representation of the content pages and the controls metadata directly to a user device. For example, the capability to perform the graphical representation generation processes and the controls metadata determination processes can be provided to organization's content servers 106 in the form of an add-in, plug-in, or extension. Accordingly, any of the graphical representation generation and controls metadata determination processes described herein as being performed by the intermediary system 104 can, in some embodiments, be performed additionally or exclusively by the organization's content servers 106, in which case the intermediary system may be omitted. In such cases, the organization's content server 106 may include a mobile optimization subsystem 141 that enables the organization's content server 106 to optimize its pages for display on mobile devices without the use of an intermediary system.
As illustrated in
The user identifiers illustrated in table 200 may represent, for example, identifiers of user accounts maintained by the organization's content server 106. For example, if the organization is a company, the various user identifiers may each be an account number or account name for a different employee of the company. While the user identifiers appear as numbers in the illustrated embodiment, the user identifiers for some organizations may instead be in the form of an alphanumeric name (such as “john.smith”), which may also be the user name that a given employee enters, along with a password, in order to login or otherwise authenticate himself as eligible to access a corporate website of his employer (e.g., the organization that operates content server 106).
The activity stream (which may also be referred to as user activity data) associated with each session in table 200 includes identification of each of a series of actions taken by the given user in the given session. For example, as illustrated in the first row of table 200, a user having the user identifier "8192" accessed three pages (identified as "Page1," "Page9," and "Page2") during browsing session number "1001." In the illustrated embodiment, each action is identified by both an action type and an object of the action, which are separated by an underscore (e.g., "Scroll_Section2" may indicate that the action was a scroll action, and that the user scrolled to a section or portion of the page identified as "Section2"). While interacting with the page identified as "Page1," the activity stream data in the first row of table 200 represents that the user performed four actions, identified in order as "Select_Widget2" (which may represent that the user clicked, tapped or otherwise selected at least a portion of content that was generated by a particular code module named "Widget2"), "Scroll_Section2," "Highlight_Text7" (which may indicate that the user used a cursor or touch gesture to highlight a portion of text on the page, where that text portion has been labeled "Text7"), and "SelectLink_Page9" (which may indicate that the user selected a hyperlink or other option on the page to request a uniform resource identifier ("URI") of a page identified as "Page9"). The format of actions illustrated in table 200 is meant for illustrative purposes according to one embodiment, and it will be appreciated that other formats may be used to identify an action type, action target, and/or other associated data in other embodiments.
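Given the underscore-delimited format illustrated in table 200, parsing an activity-stream entry into its action type and target can be sketched in a few lines. This assumes, as in the illustrated embodiment, that the first underscore separates the two fields.

```python
def parse_action(action):
    """Split an activity-stream entry like "Scroll_Section2" into its
    action type and target, splitting on the first underscore only."""
    action_type, _, target = action.partition("_")
    return action_type, target

# The four actions from the first row of table 200.
stream = ["Select_Widget2", "Scroll_Section2",
          "Highlight_Text7", "SelectLink_Page9"]
parsed = [parse_action(a) for a in stream]
```

The parsed (type, target) pairs are the form in which later analysis steps (path comparison, commonality counting) would consume the stream.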
In addition to the scrolling, selection and highlighting action types mentioned above, table 200 includes other types of actions, such as a user entering text in a text field presented on the page (e.g., "EnterText_Field2"), pressing or otherwise selecting a button presented on the page (e.g., "Select_button3"), and requesting to expand the amount of content presented by a given code module (e.g., "Expand_Widget9"). As will be appreciated by one of skill in the art, these actions include types of actions that would not typically be reported from a client device back to a server in a traditional browsing environment (e.g., an environment in which the client device receives HTML or similar pages from a server rather than receiving a graphical page representation with accompanying emulated control data). However, such rich interaction data is received by the intermediary system 104 in order to properly respond to emulated controls and otherwise properly respond to page interactions with respect to a graphical page representation that has been sent to a user device for display. As an example, in a typical client-server environment, a client device accessing a typical web page would not typically report user actions such as scrolling or text highlighting. Accordingly, the activity stream data received by the intermediary system 104 for each user session may go well beyond the typical link selection and similar limited actions that a typical server may have access to in its "clickstream" data of user actions, and may include many other types of actions beyond even those represented in table 200.
Example additional actions may include, for example, various types of zooming, scrolling, highlighting, checkbox or radio button selections prior to form submission, selecting to expand the options in a pull-down menu, hovering of a cursor or similar action for interacting with a tooltip object, other touch or click actions that cause additional content to be displayed, a long press gesture or right-click action (which may cause options to appear such as saving a file), taking a screenshot, etc.
The activity tree representation in
Links illustrated between any two actions in
The method 300 begins at block 302, where the intermediary system 104 may receive, from mobile devices operated by users who are members of an organization (such as from user devices 102), page interaction data with respect to visual representations of an organization's server-rendered pages. In one embodiment, as discussed above, each of the pages may originally be retrieved from the organization's content server 106 by the intermediary system 104 in response to a page request from a user device 102, then may have been rendered by the intermediary system 104 in a visual representation form (along with appropriate control data, as discussed above), for display on a particular user device 102 that requested the page. In one example, the organization may be a company that has selected to use the intermediary system 104 for page rendering in order to provide a more secure and/or more controlled browsing environment for sensitive corporate data that may be accessed by remotely located employees (e.g., employees accessing the company's internal website from personal mobile computing devices outside of the company's internal intranet or network).
Users utilizing user devices 102 may, for example, be required to complete a login or authentication process with the organization's content server 106 (e.g., via one or more login pages generated in visual representation form by the intermediary system, which is in communication with the organization's content server 106) in order to access the given user's internal company account, then may be able to browse and interact with a number of pages of the company's internal corporate website via the headless browser 140 of the intermediary system, as discussed above. As discussed in detail above, the intermediary system 104 receives a wide variety of user interaction data during browsing sessions that is used by the intermediary system to render and send additional responsive content in a graphical form (with associated emulated control data) to the respective user device.
At block 304, the intermediary system 104 may store, for each user and/or session, interaction data representing a series of actions taken by the user across one or more pages. The interaction data associated with the various users' browsing sessions may be stored by the intermediary system 104 in user behaviors data store 144. Various examples and forms of the stored interaction data are discussed above with reference to
Next, at block 306, the intermediary system 104 may determine common action paths in the interaction data for each of two or more subsets or groups of the users who have accessed the company's pages. In some embodiments, the intermediary system 104 may first filter the browsing sessions to only include those sessions in which a user was utilizing a mobile computing device or a particular class of mobile device (such as a mobile phone or a tablet), as opposed to a desktop computer or other device type. In some embodiments, the user groups may not be predefined, but may each be a dynamically determined subset of users based on a clustering or other grouping method applied to the interaction data by the intermediary system 104. For example, the intermediary system 104 may identify certain users that frequently follow a first action path, and a second set of users that frequently follow a second action path. In such a case, the intermediary system 104 may group the former set of users into a first group and the second set of users into a second group, even if there is no indication to the intermediary system 104 that the users within the first group have anything in common beyond their similar action stream data. In other embodiments, the intermediary system 104 may have access to other account data from the organization's content server 106 that may be used in whole or in part to group users, such as information regarding each user's role, title, job function or work group at the company. For example, the intermediary system 104 may determine common action paths followed by members of the company's information technology ("IT") department as a first group, and may determine common action paths followed by members of the company's accounting department as a second group.
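One simple way to form such dynamically determined groups is to assign each user to the action path they follow most frequently, then group users who share a dominant path. This sketch deliberately stands in for the more general clustering methods the disclosure contemplates; the session data shape is an assumption.

```python
from collections import Counter, defaultdict

def group_users_by_dominant_path(sessions):
    """Group users whose most frequent action path matches.
    `sessions` maps user_id -> list of action paths, each path being
    a tuple of actions taken in order during a session. A real
    implementation might instead cluster on path similarity."""
    groups = defaultdict(list)
    for user_id, paths in sessions.items():
        dominant = Counter(paths).most_common(1)[0][0]
        groups[dominant].append(user_id)
    return dict(groups)

# Hypothetical session data: two users repeatedly follow one path,
# a third user follows a different one.
sessions = {
    "8192": [("Select_Widget2", "Scroll_Section2")] * 3,
    "8193": [("Select_Widget2", "Scroll_Section2")] * 2,
    "4410": [("EnterText_Field2", "Select_button3")],
}
groups = group_users_by_dominant_path(sessions)
```

Users "8192" and "8193" land in one group despite the system knowing nothing else they have in common, mirroring the purely behavior-driven grouping described above.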
The common action paths for a given user group, in some embodiments, may include each path or path step that meets a minimum threshold for popularity or commonality among the group members. As one example, for a first group, the intermediary system 104 may identify that there have been a large number of different unique paths followed by users of the first group, but may exclude a portion of the actions along at least some of these paths as not being considered part of a common action path for the first group at block 306 if only a small subset of the users in the first group performed the actions (e.g., these actions may be considered outlier actions due to their frequency among the first group falling below a threshold level). Depending on the embodiment, the thresholds for determining common actions as opposed to outlier actions for a given user group may be set relative to a user count (e.g., the number of unique users who performed the action), relative to a percentage (e.g., the percentage of users in the group who performed the action), the total times the action was performed (e.g., including counting multiple performances of the action by a single user), or an average number of performances of the action per user in the group.
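One of the thresholding options described above (a percentage of users in the group) can be sketched as follows; the function name, data shape, and default threshold are illustrative assumptions, and the passage notes several alternative threshold bases (user count, total performances, average per user).

```python
from collections import Counter

def common_actions(actions_by_user, min_user_fraction=0.25):
    """Return actions performed by at least `min_user_fraction` of the
    group's users; anything rarer is treated as an outlier action.

    `actions_by_user` (hypothetical shape) maps each user identifier in
    the group to the collection of distinct actions that user performed.
    """
    total = len(actions_by_user)
    counts = Counter()
    for actions in actions_by_user.values():
        # Convert to a set so each user is counted at most once per
        # action, matching the "percentage of users" threshold basis.
        counts.update(set(actions))
    return {a for a, c in counts.items() if c / total >= min_user_fraction}
```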
At block 308, the intermediary system 104 may determine page portions that are commonly accessed from mobile devices used by members of each user group based on the common action paths determined for that user group. In some embodiments, some or all of the actions may be stored in a manner whereby the page portion associated with the action is directly evident from the stored action data. For example, the fact that a user interacted with a certain section of a page identified in the company's original page code as “CompanyNavigationBar” may be directly indicated in the stored action data (e.g., a stored action may be identified as “Selected_Item2_CompanyNavigationBar”).
In other instances according to some embodiments, the intermediary system 104 may refer back to a graphical representation of a page that was displayed to the user in combination with the action data to determine the portion(s) of the page with which the user interacted. For example, the action data for a given action may be stored with reference to the coordinates, tile or other location information within the graphical representation of the page that was presented to that particular user, in which case the intermediary system 104 may replay the graphical representation-relative action on a cached copy of the graphical page representation previously presented to the user in order to identify a portion identified in the organization's original underlying page (e.g., in an original HTML page) in which the action took place. For example, in some embodiments, the intermediary system 104 may determine a specific portion of text that was highlighted by a user (which the user may have highlighted in order to copy the text to an operating system clipboard) based at least in part on an indication of the coordinates or relative position of a cursor or touch gesture during one or more selection events (e.g., taps on a touchscreen).
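The coordinate-to-portion mapping described above can be illustrated with a simple bounding-box lookup. This is a sketch under assumed data shapes: the cached layout record, its innermost-first ordering, and the function name are all hypothetical, and a real replay against a graphical page representation would be considerably more involved.

```python
def portion_at(coords, layout):
    """Map a recorded tap or cursor coordinate back to a page portion.

    `layout` (hypothetical) is a cached record of the page as rendered
    for this user: a list of (portion_id, (x, y, width, height))
    bounding boxes, listed innermost-first so the most specific
    containing portion is returned.
    """
    x, y = coords
    for portion_id, (bx, by, bw, bh) in layout:
        if bx <= x < bx + bw and by <= y < by + bh:
            return portion_id
    return None  # coordinate fell outside every recorded portion
```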
In some instances, an individual page stored on the organization's content server 106 may include embedded code or references to external code that, when executed or interpreted by a browser, dynamically determines or selects content to display in a given portion of a page. For example, a given page may include six different code modules (or calls to code modules that operate on the organization's content server 106), where each code module executes to provide output that indicates displayable content to present in a different section of the page (e.g., six different sections of the page may each be associated with a different code module). Code modules may commonly be used in this manner for content that is likely to change frequently (such as company status updates, news, etc.) or content that is customized for the particular user (e.g., incoming messages from other users, user-specific recommendations, etc.). In some embodiments, determining the page portions commonly accessed at block 308 may include identifying the specific code module responsible for generating the content with which the user interacted (instead of or in addition to identification of the page section).
For example, the intermediary system 104 may identify that users frequently interacted with content displayed by a code module named “LiveNewsUpdates,” and may aggregate the interaction counts for this code module's content across different pages for which the code module selected content for display (e.g., on one page the LiveNewsUpdates module may have presented content in the first section of a page, but may have also presented content in a fifth section of a different page). Accordingly, a page “portion” as used herein is intended to be a broad term that may refer to a specific identified section of a page (e.g., “Section 5”), the content output by a specific code module for inclusion in the page, a specific identified control (e.g., a certain button), a portion of text (e.g., a sentence highlighted by a user), any other object within a DOM representation of the page, and/or other discrete or otherwise identifiable page portion, depending on the context.
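The cross-page aggregation by code module described above reduces to a simple count keyed on the module name rather than the page or section. The record shape below is an assumption made for illustration.

```python
from collections import Counter

def module_interaction_counts(interactions):
    """Aggregate interaction counts by the code module that produced
    the content, regardless of which page or section it appeared in.

    `interactions` (hypothetical shape) is an iterable of
    (page_id, section_id, module_name) records, one per observed
    user interaction.
    """
    counts = Counter()
    for _page, _section, module in interactions:
        counts[module] += 1  # page and section deliberately ignored
    return counts
```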
At block 310, the intermediary system 104 may generate, for at least one subset of users, a mobile-optimized template that includes identifiers of the commonly accessed page portions of one or more pages and that excludes other portions of the one or more pages. As discussed above, a given template generated for a given user group may include references to identifiers of portions of one or more pages that were frequently interacted with by members of the group, and may exclude (e.g., may not include any reference to) other page portions that were less frequently interacted with by members of the group, particularly when accessing the page(s) from mobile devices.
The intermediary system 104 may generate a separate template for each distinct user group for which at least a minimum threshold volume of interaction data has been received. As discussed above, the templates may be specifically intended for use in generating pages for display on mobile computing devices, such that the intermediary system 104 only considers user interactions from users utilizing mobile computing devices in the above-discussed blocks 306 and 308. In some embodiments, the templates may be further customized for a specific class of device. For example, in one embodiment, the intermediary system 104 may generate two mobile-optimized page templates for a first user group, where the first template is for use in generating a page representation for display on a mobile phone and the second template is for use in generating a page representation for display on a tablet computer.
In some embodiments, each page template may be specific to a given page accessible from the organization's content server 106. In other embodiments, a page template may not have a one-to-one correspondence to an original page authored by the organization, but may instead combine commonly accessed portions of two or more of the organization's pages. For example, if the intermediary system 104 determined that for a given group of users, users frequently interacted with a first section of a first page, then selected to view a certain second page, then interacted with a second section of the second page, a template may be generated that combines the first section of the first page and the second section of the second page into a single page. In other instances, the intermediary system 104 may separate out portions of a single original page into multiple templates. For example, in the case of a crowded and/or long original page that contains many popular sections or portions, the intermediary system 104 may determine that the page should be split into two or three shorter pages. In such a case, the intermediary system 104 may generate templates for each of these shorter pages, along with optionally generating a landing page or other new page that enables a user to select which of the shorter pages to view.
As will be appreciated, the templates may be generated and stored in a variety of file formats and may identify content to be included in a page in a variety of ways, depending on the embodiment. For example, a template file may utilize HTML, JavaScript, XML, CSS, and/or other markup content or code. A given template file may refer to specific portions or objects identified in the underlying original page content (e.g., may reference objects identified in the DOM representation of the underlying page), along with markup tags or code that instruct a browser or other page interpreter to assemble a page at least in part by inserting the referenced objects from the original page into the template. The templates may arrange the objects based on the objects' order or layout in the original page, the popularity of the objects (e.g., the most frequently interacted-with objects first), the order that users accessed each object (e.g., placing the objects in an order that they appeared in user action paths), and/or considering best practices in mobile webpage design.
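One of the arrangement options described above (popularity-first ordering) can be sketched as a small template generator. Everything here is illustrative: the function name, the `data-source-object` attribute that a page assembler would resolve against the original page's DOM, and the minimal HTML shell are assumptions, not a format the disclosure prescribes.

```python
def build_template(object_ids, popularity):
    """Emit a minimal HTML template referencing original-page objects
    in descending order of interaction popularity.

    `object_ids` are identifiers of objects in the original page's DOM;
    `popularity` maps an identifier to its interaction count (objects
    absent from the map are treated as never interacted with).
    """
    ordered = sorted(object_ids,
                     key=lambda obj: popularity.get(obj, 0),
                     reverse=True)
    rows = "\n".join(
        f'  <div data-source-object="{obj}"></div>' for obj in ordered
    )
    return f"<body>\n{rows}\n</body>"
```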
Each template may be stored by the intermediary system 104 in a data store along with an indication of the specific user group, the specific device class (which may be optional in some embodiments) and the specific page(s) of the organization's content server for which the template should be used. Depending on the embodiment and the extent of user account information to which the intermediary system 104 has access, the user group may be identified as a list of user identifiers, as a user attribute (e.g., users associated with a given value in an account field, such as a value of “accounting” for the field “work group”), as a named group assigned by the organization (e.g., “Paralegals”), and/or in another manner.
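The lookup implied by this storage scheme, including treating the device class as optional, can be sketched as follows. The key shape and the fallback order (device-class-specific entry first, then a generic mobile entry) are assumptions made for the example.

```python
def select_template(templates, user_group, device_class, page_id):
    """Look up the stored template for an incoming page request.

    `templates` (hypothetical shape) maps (user_group, device_class,
    page_id) keys to template identifiers; a device class of `None`
    marks a template not specialized for any device class, used as a
    fallback when no class-specific template exists.
    """
    specific = templates.get((user_group, device_class, page_id))
    if specific is not None:
        return specific
    return templates.get((user_group, None, page_id))
```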
After a mobile-optimized page template has been generated by the intermediary system 104, the intermediary system 104 may begin using this template when rendering graphical page representations in response to subsequent page requests from a mobile device used by a member of the relevant user group. For example, at block 312, the intermediary system 104 may, in response to a page request from a mobile device used by a user in a given group, render a graphical page representation based on a mobile-optimized template (including retrieving the objects referenced therein from an original page of the organization) and send the resulting graphical page representation to the mobile device for display.
In other embodiments, the intermediary system 104 may not automatically begin using the mobile-optimized templates after generation of the templates (e.g., may not perform block 312), but may instead provide one or more templates to the organization (e.g., by sending the templates to the content server 106, by email, or when an organization administrator accesses an account associated with the operator of the intermediary system 104) as a suggestion of content and format to include in given pages when designing a mobile-optimized page set. The intermediary system 104 may provide the organization with the list of user identifiers associated with a given template, such that an administrator at the organization can determine what those users have in common (e.g., the administrator may recognize that a given user group dynamically assembled by the intermediary system 104 based on identifying action similarities appears to include mostly customer service personnel of the organization).
As illustrated, the graphical representation in
As illustrated, the mobile-optimized page rendered by the intermediary system 104 for display on the mobile device in
It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein.
Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.
The various illustrative logical blocks, modules, and algorithm elements described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and elements have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, some or all of the signal processing algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
The elements of a method, process, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module stored in one or more memory devices and executed by one or more processors, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of non-transitory computer-readable storage medium, media, or physical computer storage known in the art. An example storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The storage medium can be volatile or nonvolatile. The processor and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor and the storage medium can reside as discrete components in a user terminal.
Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Further, the term “each,” as used herein, in addition to having its ordinary meaning, can mean any subset of a set of elements to which the term “each” is applied.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Unless otherwise explicitly stated, articles such as “a”, “an”, or “the” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B, and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, certain embodiments described herein can be implemented within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.