Computer systems are currently in wide use, and they are deployed in a wide variety of environments. One such environment is performing on-line research.
Research is the process of obtaining information to become informed on a subject. For instance, research can be done for a school project or a work project. It can also be done to compare products in order to buy a given brand of product. Similarly, one can do research when planning a vacation or an event (such as a wedding or a birthday) or when simply following a personal interest or hobby. Research can even be used when looking for a good article or book to read, or when trying to find a good restaurant. Performing on-line research in these and other areas can present some challenges.
Even after information is located through the research process, collecting, sifting through, and organizing the different sources of information can be quite time consuming. It is very unlikely that a single source will contain all the desired information. Instead, information from different sources often overlaps or forms an incomplete picture of the subject being researched. This can force the user to sort through many redundant sources. In addition, the sources are often presented in a way that offers no logical order in which to consume the information. Instead, content items are simply provided to the user as a string of independent items of content.
In addition, the time available for consuming the located content can be a factor as well. If the user wishes to become informed on a certain subject matter area in an hour, the content located and returned to the user might be quite different than if the user has a week, or even a year, within which to become informed on the subject matter area.
Some current systems allow a user to declare an area of interest. These systems then provide a stream of reading material that is hopefully related to the declared subject matter of interest. However, the problems discussed above with respect to research, organization, and consumption of the content are not addressed.
The discussion above is merely provided for general background information and is not intended to be used as an aid in determining the scope of the claimed subject matter.
A content collection system receives a natural language input and identifies a type of content to be collected based on the natural language input. Items of content from multiple different digital media types are collected from a plurality of different sources and organized in an order.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the background.
In the embodiment shown in
The user input mechanisms 124 can take a wide variety of different forms. For instance, they can be links, buttons, icons, tiles (which not only operate as links to content but dynamically display information about the underlying content as well), text boxes, dropdown menus, check boxes, or any of a wide variety of different user input mechanisms. In addition, the user input mechanisms 124 can be actuated in a wide variety of different ways. For instance, they can be actuated using a point and click device (such as a mouse or track ball), using a hard or soft keyboard or keypad, a thumbpad, buttons, joysticks, etc. Similarly, where the particular user device 106 on which the user input mechanisms 124 are displayed includes speech recognition components, the user input mechanisms 124 can be actuated using speech commands. Also, where the user interface displays 122 are displayed on a touch sensitive screen, the user input mechanisms 124 can be actuated using touch gestures, such as with the user's finger, a stylus, or another mechanism.
User device 106 illustratively includes a processor 130. Processor 130 is illustratively a computer processor with associated memory and timing circuitry (not separately shown). It is a functional part of user device 106 and is activated by various components on the user device to facilitate the functionality of the user device 106. Multiple processors can be used as well.
Content collection system 102 illustratively includes data store 132 that includes stacks (or collections of content) 134, user information 136, published forms of stacks 134 (designated by numeral 138), domain names 140, and other information 142. Data store 132 is shown as a single data store. It will be noted, however, that multiple different data stores can be used. They can be local to system 102 or remote from system 102, and accessible by system 102. Similarly, some can be local while others are remote.
Content collection system 102 is also shown having processor 144, automated stack generator 146, search engine 148, publishing component 150, display generator 152, sorting/grouping component 154, stack manager 156, prioritizing component 158 that accesses priority metrics 160, domain name manager 162 and natural language understanding (NLU) component 164. Of course, system 102 can have fewer or other components as well.
Processor 144 is illustratively a computer processor with associated memory and timing circuitry (not separately shown). It is illustratively a functional part of content collection system 102 and is activated by, and facilitates the functionality of, other items in content collection system 102. In addition, while only a single processor 144 is shown, multiple processors can be used as well, and they can be located within system 102 or external to system 102.
In addition,
Prior to providing a more detailed discussion of the operation of architecture 100, a brief overview will be provided for the sake of clarity. In the embodiment discussed, user 104 wishes to begin a collection (or stack) of content. User 104 can illustratively provide a natural language user input query through one of user input mechanisms 124 to begin data collection (of items of content) to fill the stack. Natural language understanding component 164 interprets the query and automated stack generator 146 uses search engine 148 to identify content related to the query, and accesses the content from a variety of different sources. Sorting/grouping component 154 organizes the items of content in the stack into a given order and the stack 134 is filled with the content items, so they can be presented to user 104, shared by user 104 or published in a variety of forms by user 104.
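The overview above describes a pipeline: an NLU step interprets the query, a search step gathers items from multiple sources, and a sorting step orders them into a stack. The following Python sketch illustrates that flow; all names, stand-in components, and logic here are assumptions for illustration only, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    title: str
    source: str
    relevance: float

@dataclass
class Stack:
    topic: str
    items: list = field(default_factory=list)

def generate_stack(query, interpret, search, order):
    """Interpret a natural language query, gather items, and order them."""
    topic = interpret(query)           # NLU step (cf. component 164)
    items = search(topic)              # search step (cf. component 148)
    return Stack(topic, order(items))  # sorting/grouping (cf. component 154)

# Trivial stand-ins for the three pluggable components:
interpret = lambda q: q.lower().rstrip("?")
search = lambda t: [ContentItem("Intro", "web", 0.9),
                    ContentItem("Deep dive", "library", 0.7)]
order = lambda items: sorted(items, key=lambda i: -i.relevance)

stack = generate_stack("Renewable energy?", interpret, search, order)
```

The three components are passed in as functions to emphasize that interpretation, search, and ordering are separate, replaceable stages of the flow.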
The stacks can be output simply as a list of links or tiles 143, each link or tile representing an item in the stack. They can be output according to a timeline 145 that shows when the items of content were authored, published, or another date. They can be formatted as a book 147 with a table of contents or as a magazine 149. They can also be formatted as a blog 151 or as another type of webpage 153, they can be shown along a graph 155, the contents can be clustered into clusters 157, or they can be output in other forms 159. These are described in greater detail below.
In order to begin generating a stack, automated stack generator 146 illustratively displays a user interface display with user input mechanisms that receive user inputs to define the stack to be created. This is indicated by block 170 in
A natural language query can also contain all of this information. The following are examples of four natural language queries that illustrate this.
“What's new in renewable energy in the last 3 years?”
“I'm going to Denver this weekend and I am looking for activities that my 6 year old son would love.”
“I have a week to become an expert in molecular biology and I haven't taken a biology class since 10th grade.”
“I want an overview of the different areas of biotech in 1 hour.”
These examples show that the query can have a timing component that requires some knowledge about the age of documents, what it means to be a 6 year old, how long it would take to consume content, etc. These are all processed by NLU component 164 to obtain an interpretation that can be used to identify relevant items of content.
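One part of interpreting queries such as those above is extracting the timing component. The sketch below shows a crude, regex-based extraction; a real NLU component would do far more, and these patterns are assumptions for demonstration only.

```python
import re

# Hours per named unit (an illustrative assumption).
UNIT_HOURS = {"hour": 1, "hours": 1, "week": 7 * 24, "weeks": 7 * 24,
              "year": 365 * 24, "years": 365 * 24}

def extract_time_budget_hours(query):
    """Return an estimated time budget in hours, or None if none is stated."""
    q = query.lower()
    m = re.search(r"(\d+)\s+(hours?|weeks?|years?)", q)
    if m:
        return int(m.group(1)) * UNIT_HOURS[m.group(2)]
    if "a week" in q:  # handle the unnumbered "a week" phrasing
        return 7 * 24
    return None
```

For the sample queries, "in 1 hour" yields a budget of 1 hour and "I have a week" yields 168 hours, while a query with no timing component yields None.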
System 102 then illustratively receives a user input that automated stack generator 146 is to generate the new stack. In the embodiment shown with respect to
Automated stack generator 146 then accesses user information 136 in data store 132 in order to enhance the natural language query. For instance, user information 136 may include profile information indicative of a user's interests. It may include lists of items the user has already reviewed, etc. In any case, accessing the user information is indicated by block 206 in
It should be noted that, in one embodiment, the content for the stack can be items of multiple different digital media types. For instance, they can be documents, videos, websites or website addresses, images, digital books, periodicals, free content, links to paid content, or overviews of paid content, etc.
Once the items of content have been located, automated stack generator 146 illustratively uses prioritizing component 158 in order to prioritize the items as indicated by block 210 in
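Prioritization against a set of priority metrics can be sketched as a weighted combination of per-item scores. The metric names and weights below are illustrative assumptions only; the actual priority metrics 160 are not specified here.

```python
# Illustrative weights over assumed metric names.
PRIORITY_WEIGHTS = {"relevance": 0.5, "popularity": 0.3, "recency": 0.2}

def priority_score(metrics):
    """Combine per-item metric values (each in [0, 1]) into one score."""
    return sum(PRIORITY_WEIGHTS[name] * metrics.get(name, 0.0)
               for name in PRIORITY_WEIGHTS)

items = [
    {"title": "Survey", "relevance": 1.0, "popularity": 0.4, "recency": 0.2},
    {"title": "News", "relevance": 0.6, "popularity": 0.9, "recency": 1.0},
]
# Highest combined score first.
ranked = sorted(items, key=priority_score, reverse=True)
```

With these weights, the recent and popular item outranks the merely relevant one, showing how changing the weights changes the ordering.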
Automated stack generator 146 can then calculate the consumption time corresponding to each of the items of content. This can be done if, for example, the user has specified a consumption time at block 174 above, indicating how much time the user has to consume the information. For instance, if the user only has an hour to consume the information prior to a meeting, then the consumption time of each item of content can be considered in identifying the particular content items that are to be included in the stack. Calculating the consumption time is indicated by block 212 in
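Selecting items that fit the user's available time can be sketched as a greedy pass over prioritized items. The greedy approach below is one plausible choice among several; the selection algorithm itself is an assumption, not stated in the description above.

```python
def fit_to_budget(items, budget_minutes):
    """items: (title, priority, minutes) tuples -> titles that fit the budget."""
    selected, used = [], 0
    # Consider higher-priority items first.
    for title, _priority, minutes in sorted(items, key=lambda it: -it[1]):
        if used + minutes <= budget_minutes:
            selected.append(title)
            used += minutes
    return selected

picks = fit_to_budget(
    [("Overview", 0.9, 15), ("Lecture", 0.8, 50), ("Paper", 0.7, 30)],
    budget_minutes=60)
```

Here the 50-minute lecture is skipped because, after the overview, it would exceed the one-hour budget, while the shorter paper still fits.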
Automated stack generator 146 then uses sorting/grouping component 154 to order the items of content (or organize them) in a given order. This is indicated by block 214 in
The content can also be arranged according to difficulty 220. For instance, again using natural language understanding component 164, the technical difficulty of an item of content can be identified so that the less difficult material is presented first, and the material is presented in order of increasing difficulty.
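Ordering by increasing difficulty can be sketched with a simple heuristic. Below, a crude readability proxy (average word length) stands in for the NLU-derived difficulty estimate; this heuristic is an assumption for illustration, not the system's actual method.

```python
def difficulty(text):
    """Crude difficulty proxy: average word length of a text sample."""
    words = text.split()
    return sum(len(w) for w in words) / len(words) if words else 0.0

def order_by_difficulty(items):
    """items: (title, sample_text) pairs -> titles, easiest first."""
    return [title for title, text in
            sorted(items, key=lambda it: difficulty(it[1]))]

ordered = order_by_difficulty([
    ("Advanced", "thermodynamic equilibria considerations"),
    ("Basics", "how heat moves"),
])
```

Any per-item difficulty score produced by an NLU component could be substituted for the proxy without changing the ordering logic.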
Of course, the content can be arranged in other ways as well. This is indicated by block 222 in
Automated stack generator 146 can then automatically generate a table of contents to the items of content in the stack 134. This is indicated by block 226 in
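Automatic table-of-contents generation over an already-ordered stack can be sketched as grouping numbered entries under section labels. The section labels and formatting below are illustrative assumptions only.

```python
def build_toc(items):
    """items: (section, title) pairs in stack order -> list of TOC lines."""
    toc, last_section, number = [], None, 0
    for section, title in items:
        if section != last_section:   # start a new section heading
            toc.append(section)
            last_section = section
        number += 1                   # entries are numbered across sections
        toc.append(f"  {number}. {title}")
    return toc

toc = build_toc([("Basics", "What is biotech?"),
                 ("Basics", "Key terms"),
                 ("Applications", "Gene therapy")])
```

Because the items arrive in stack order, adjacent items sharing a section label fall under one heading without any extra grouping pass.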
Automated stack generator 146 then illustratively uses display generator 152 to generate a display of the items of content in the stack. This is indicated by block 228.
Display 230 also illustratively includes a selectable tile section 234. Section 234 illustratively includes a set of tiles that not only display information corresponding to the underlying subject matter, but which also operate as links. Therefore, when user 104 actuates one of the tiles 234, the user is navigated to the underlying material which the tile represents. In the embodiment shown in display 230 in
In user interface display 250, the stack is displayed generally in a magazine format that includes an interactive table of contents 258, as well as scrollable section 260 that includes tiles 262, each of which includes a summary of a corresponding item of content. When the user actuates one of tiles 262 (such as by tapping on it on a touch sensitive screen), display generator 152 generates a view of the full version of the corresponding item of content.
Referring again to
Referring again to
Task bar 252 also illustratively includes an emphasis button 294. When the user actuates the emphasis button, the user can illustratively revise the subject matter emphasized in identifying content items for the stack. In order to do this, stack manager 156 illustratively generates a user interface display, such as display 296 shown in
Referring again to
User interface display 330 is similar to user interface display 322 shown in
Pop-up display 332 also illustratively includes a “publish to” section 342. This allows user 104 to identify a location where the book 334 is to be published. For instance, user 104 can illustratively publish the book to his/her personal library indicated in textbox 344, or to a marketplace 112 as indicated in block 346. The user can publish the stack or collection as a digital or physical book, magazine, website, document, application, video, or otherwise. The user can also publish it to another stack, as indicated by block 348, which allows the user to identify the particular stack using dropdown menu 348. The user can then actuate publish button 350 in order to actually publish the stack (e.g., the electronic book 334). The user can do this by placing a cursor 352 over the publish button and actuating the button, or by actuating the button using touch gestures, etc. Displaying the pop-up display 332 to receive the user input indicating a destination for the publication is indicated by block 354 in
In addition, instead of publishing the stack as an electronic book or in another form, the user can simply share the stack as well. For instance,
In the embodiment shown in
Display 366 allows the user to indicate the type of format that the stack is to be shared in, such as by selecting it from (or typing it into) box 368. The user can identify the particular stack in box 370, and choose a thumbnail representing the stack to be shared by using user input mechanism 372. The user can also choose a series of different thumbnails. Thumbnail previews can show different sizes and aspect ratios, or give the option to have a dynamic tile or slideshow thumbnail that rotates through multiple thumbnails. The user can scroll through different thumbnail representations using arrows 374, when they are displayed in thumbnail display section 376. When the user locates a desired thumbnail to represent the stack being shared, the user can illustratively actuate button 378 to select that thumbnail representation. Then, the user can share the stack by actuating button 380. When this occurs, publishing component 150 illustratively shares the stack out to the destination selected in pane 362.
Referring again to
Displaying the user interface to receive the user input indicating the particular format of the publication or stack to be shared is indicated by block 392. The user can illustratively choose the stack to be shared as a book 394, as a magazine 396, simply as a stack (such as a list of tiles) 398, as a webpage 400, or in another format 402.
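Dispatching the stack to one of the formats listed above can be sketched as a lookup from format name to renderer. The renderers below are stubs standing in for real book, magazine, and webpage formatters; all names are illustrative assumptions.

```python
def render_as_list(titles):     return "\n".join(titles)
def render_as_book(titles):     return "BOOK\n" + "\n".join(titles)
def render_as_magazine(titles): return "MAGAZINE\n" + "\n".join(titles)

# Map user-selected format names to renderer functions.
RENDERERS = {"stack": render_as_list, "book": render_as_book,
             "magazine": render_as_magazine}

def publish(titles, fmt):
    """Render the stack's item titles in the chosen format."""
    if fmt not in RENDERERS:
        raise ValueError(f"unsupported format: {fmt}")
    return RENDERERS[fmt](titles)
```

Adding a new output format, such as a webpage, then only requires registering one more renderer in the table.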
In one embodiment, content collection system 102 also allows user 104 to purchase and customize a domain name when the stack is to be published out as a webpage (or accessed through a webpage). By way of example, display generator 152 illustratively generates a display, such as display 404 shown in
User interface display 408 allows the user to make the webpage public or private by selecting from a dropdown menu actuator 410. It also illustrates the URL assigned to the stack in box 412. User interface display 408 also allows the user to associate a custom domain name with the URL by actuating user input mechanism 414. This can be done, again, by placing cursor 416 over button 414 and actuating it, by actuating it using a touch gesture or otherwise. When the user actuates custom domain actuator 414, domain name manager 162 (from
If the user does not like any of the options automatically selected by domain name manager 162, the user can illustratively type in a desired name and have domain name manager 162 search for its availability, or actuate search button 425. When that occurs, domain name manager 162 navigates the user to a domain name search interface where the user can find, select and purchase a desired domain name.
When the user actuates the confirm purchase button 422, the newly purchased domain name is associated with the URL shown in box 412.
When the stack is published as a webpage,
The description is intended to include both public cloud computing and private cloud computing. Cloud computing (both public and private) provides substantially seamless pooling of resources, as well as a reduced need to manage and configure underlying hardware infrastructure.
A public cloud is managed by a vendor and typically supports multiple consumers using the same infrastructure. Also, a public cloud, as opposed to a private cloud, can free up the end users from managing the hardware. A private cloud may be managed by the organization itself and the infrastructure is typically not shared with other organizations. The organization still maintains the hardware to some extent, such as installations and repairs, etc.
In the embodiment shown in
It will also be noted that architecture 100, or portions of it, can be disposed on a wide variety of different devices. Some of those devices include servers, desktop computers, laptop computers, tablet computers, or other mobile devices, such as palm top computers, cell phones, smart phones, multimedia players, personal digital assistants, etc.
Under other embodiments, applications or systems (like a client portion of system 100) are received on a removable Secure Digital (SD) card that is connected to an SD card interface 15. SD card interface 15 and communication links 13 communicate with a processor 17 (which can also embody processors 130 or 144 from
I/O components 23, in one embodiment, are provided to facilitate input and output operations. I/O components 23 for various embodiments of the device 16 can include input components such as buttons, touch sensors, multi-touch sensors, optical or video sensors, voice sensors, touch screens, proximity sensors, microphones, tilt sensors, and gravity switches, as well as output components such as a display device, a speaker, and/or a printer port. Other I/O components 23 can be used as well.
Clock 25 illustratively comprises a real time clock component that outputs a time and date. It can also, illustratively, provide timing functions for processor 17.
Location system 27 illustratively includes a component that outputs a current geographical location of device 16. This can include, for instance, a global positioning system (GPS) receiver, a LORAN system, a dead reckoning system, a cellular triangulation system, or other positioning system. It can also include, for example, mapping software or navigation software that generates desired maps, navigation routes and other geographic functions.
Memory 21 stores operating system 29, network settings 31, applications 33, application configuration settings 35, data store 37, communication drivers 39, and communication configuration settings 41. Memory 21 can include all types of tangible volatile and non-volatile computer-readable memory devices. It can also include computer storage media (described below). Memory 21 stores computer readable instructions that, when executed by processor 17, cause the processor to perform computer-implemented steps or functions according to the instructions. The items in data store 132, for example, can reside in memory 21. Similarly, device 16 can have a client business system 24 which can run various business applications or embody parts or all of user device 106. Processor 17 can be activated by other components to facilitate their functionality as well.
Examples of the network settings 31 include things such as proxy information, Internet connection information, and mappings. Application configuration settings 35 include settings that tailor the application for a specific enterprise or user. Communication configuration settings 41 provide parameters for communicating with other computers and include items such as GPRS parameters, SMS parameters, connection user names and passwords.
Applications 33 can be applications that have previously been stored on the device 16 or applications that are installed during use, although these can be part of operating system 29, or hosted external to device 16, as well.
The mobile device of
Note that other forms of the devices 16 are possible.
Computer 810 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 810 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media is different from, and does not include, a modulated data signal or carrier wave. It includes hardware storage media including both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 810. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 830 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 831 and random access memory (RAM) 832. A basic input/output system 833 (BIOS), containing the basic routines that help to transfer information between elements within computer 810, such as during start-up, is typically stored in ROM 831. RAM 832 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 820. By way of example, and not limitation,
The computer 810 may also include other removable/non-removable volatile/nonvolatile computer storage media. By way of example only,
Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
The drives and their associated computer storage media discussed above and illustrated in
A user may enter commands and information into the computer 810 through input devices such as a keyboard 862, a microphone 863, and a pointing device 861, such as a mouse, trackball or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 820 through a user input interface 860 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A visual display 891 or other type of display device is also connected to the system bus 821 via an interface, such as a video interface 890. In addition to the monitor, computers may also include other peripheral output devices such as speakers 897 and printer 896, which may be connected through an output peripheral interface 895.
The computer 810 is operated in a networked environment using logical connections to one or more remote computers, such as a remote computer 880. The remote computer 880 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 810. The logical connections depicted in
When used in a LAN networking environment, the computer 810 is connected to the LAN 871 through a network interface or adapter 870. When used in a WAN networking environment, the computer 810 typically includes a modem 872 or other means for establishing communications over the WAN 873, such as the Internet. The modem 872, which may be internal or external, may be connected to the system bus 821 via the user input interface 860, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 810, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
It should also be noted that the different embodiments described herein can be combined in different ways. That is, parts of one or more embodiments can be combined with parts of one or more other embodiments. All of this is contemplated herein.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Publication: US 2014/0324902 A1, Oct. 2014, United States.