This disclosure relates generally to the field of content item searching. More particularly, the disclosure relates to searching content items in response to a natural language query.
Many traditional tools exist to search for various types of content within different environments. For example, in an online environment, a search engine may employ various algorithms for ranking search results such as websites. However, when searches are initiated in a local (e.g. offline) environment, traditional tools often rely on search capabilities of a file management application (e.g. file explorer). For example, these file management applications often allow a user to search for files, but such searches are often limited to a particular set of file types. Moreover, such file management applications often search through the same information regardless of the type of file, which typically includes the file name, the type of file, the date of creation, and certain other basic parameters that are maintained for the file.
Current devices, however, often store vast amounts of content including content items not typically accessed by users from a file management application. For example, certain types of content items (although saved as a file in some cases) are typically accessed directly from one or more applications, and not organized in directories traditionally navigated by file management applications. Accordingly, current devices often have interfaces that include a system-wide search mechanism (e.g. a “finder” program) that users initiate as a primary source to access content items. These search mechanisms, however, often rely on searches based solely on a filename or explicit attributes that are defined by a creator of a content item (e.g. title, author, etc.). Accordingly, there is a continued need to improve mechanisms for searching for content items in an intuitive manner.
Described is a system (and method) for searching for content items in response to a query such as a voice-based natural language query. For example, the query may be provided as part of an interaction with a voice-based digital assistant.
In a first aspect, a user may perform a search for content items associated with various types of user actions performed with a content item such as sending or receiving a document, sharing a content item, printing a document, or other types of user actions. In order to provide such search capabilities, the system may store implicit attributes associated with a content item in response to detecting such user actions. For example, these attributes may store information characterizing a type of action performed with a content item. For example, these attributes may identify the application used to perform the action, a recipient or sender of a content item, the time the action was performed, and other characteristics. The system may store these attributes as metadata associated with a content item including metadata that may be stored as part of the content item itself and/or metadata that is stored as part of a searchable index. In order to protect information that may be derived from these stored attributes, the system may also secure portions (or all) of the metadata using various techniques including, for example, encryption.
When performing a search, the system may use natural language processing capabilities to identify search criteria for one or more of these implicit attributes. Thus, the system may provide a mechanism to answer various types of queries that may not necessarily be provided in a predefined search query format. For example, the system may provide search results in response to a natural language voice-based query such as “Show me my most recent documents.” In addition, the query may include actions that are associated with another user such as a recipient or a sender of a content item. For example, a user may provide a query such as “Show me the last spreadsheet I sent to Bill,” or “Find all emails from Bill in April.”
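As a rough illustration of how such query terms might map to stored implicit attributes, the sketch below uses a simple keyword lookup. The vocabulary and attribute names are hypothetical; an actual implementation would rely on the natural language processing described herein rather than a fixed table:

```python
import re

# Hypothetical vocabulary mapping utterance terms to implicit-attribute
# criteria (all names here are illustrative, not from the disclosure).
ACTION_TERMS = {
    "sent": {"user_action": "send", "role": "sender"},
    "received": {"user_action": "receive", "role": "recipient"},
    "opened": {"user_action": "open"},
    "printed": {"user_action": "print"},
}

def criteria_from_query(query: str) -> dict:
    """Derive implicit-attribute search criteria from a natural language query."""
    criteria = {}
    tokens = re.findall(r"[a-z']+", query.lower())
    for token in tokens:
        if token in ACTION_TERMS:
            criteria.update(ACTION_TERMS[token])
    # Recency terms map to a sort over a stored time attribute.
    if "recent" in tokens or "last" in tokens:
        criteria["sort"] = "time_desc"
    return criteria
```

For a query such as “Show me the last spreadsheet I sent to Bill,” this sketch would yield criteria for a send action by the current user, sorted by recency.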
In a second aspect, a user may perform a search for content items associated with a particular application. For example, a user may be interested in content originating from the application, or content that the user generally associates with a particular application. For example, the user may provide a search query including “Show me my ‘NewApp’ items” (where “NewApp” is the name of a particular application). To provide such a search capability, however, may require determining which content item types a user associates with a particular application because multiple applications may support a particular content item type. For example, an application may typically open various types of content items (e.g. various document formats), but those content items may not be of interest to a user. Accordingly, in one embodiment, the system may determine content items the application has authority over (or content items belonging to the particular application). For example, in one embodiment, “authority over” may include content item types the application may not only open (e.g. open, read, import, etc.), but also content item types the application may create (e.g. create, export, write, etc.). For example, the content item types an application may create may include types that are also usable by other applications.
In order to determine content items associated with a particular application, the system may reference one or more files associated with the particular application such as a manifest (or property list) type file. For example, the system may access such files to cross-reference a list of content item types that an application may read with a list of content item types the application may create.
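The cross-referencing described above can be sketched as a set intersection: an application has “authority over” the content item types it can both read and create. The manifest field names below are hypothetical placeholders for whatever keys a particular manifest format defines:

```python
# Sketch of the "authority over" determination (manifest keys are illustrative).
def authority_over(manifest: dict) -> set:
    """Return content item types the application can both read and create."""
    readable = set(manifest.get("readable_types", []))
    creatable = set(manifest.get("creatable_types", []))
    return readable & creatable

manifest = {
    "app_name": "NewApp",
    "readable_types": ["com.example.note", "public.plain-text", "public.pdf"],
    "creatable_types": ["com.example.note", "public.plain-text"],
}
```

Here “NewApp” can open PDFs but cannot create them, so PDF items would not be attributed to it; only the note and plain-text types would be.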
Accordingly, one or more aspects of the system as further described herein may provide an intuitive search mechanism for content items by allowing a user to provide natural language search queries.
Embodiments of the disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.
Various embodiments and aspects will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of various embodiments. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments. References to “one embodiment” or “an embodiment” or “some embodiments” means that a particular feature, structure, or characteristic described in conjunction with the embodiment can be included in at least one embodiment. The appearances of the phrase “embodiment” in various places in the specification do not necessarily refer to the same embodiment. The processes depicted in the figures that follow are performed by processing logic that comprises hardware (e.g. circuitry, dedicated logic, etc.), software, or a combination of both. Although the processes are described below in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order, or some operations may be performed in parallel rather than sequentially.
As described above, the disclosure relates to searching for content items, which may be performed within an operating environment.
The client device 110 may be any type of computing device such as a smartphone, tablet, laptop, desktop, wearable device (e.g. smartwatch), set-top-box, interactive speaker, etc., and the server 120 may be any kind of server (or computing device, or another client device 110), which may be a standalone device, or part of a cluster of servers, and may include a cloud-based server, application server, backend server, or a combination thereof.
As shown, the client device 110 may include various components or modules to perform various operations as described herein. The client device 110 may include a metadata processing module 140 that may process and collect various forms of metadata 160. Accordingly, the metadata processing module 140 may access metadata 160, as well as one or more indexes.
As referred to herein, metadata 160 may include any information that may include characteristics, attributes, and the like, that may be associated with particular content items (local content items 170 or public content items 175). For example, the metadata 160 may include attributes including implicit attributes as further described herein. The metadata 160 may be stored as part of a content item, and/or may be stored separately (e.g. within a database or file). The metadata 160 may include information stored in any suitable format (e.g. metadata entries, fields, objects, files, etc.). In one embodiment, metadata (e.g. a metadata object or file) may itself contain entries or fields. In addition, the metadata may include information from different applications and a specific type of metadata may be created for each of the applications. In one embodiment, metadata may include a persistent identifier that uniquely identifies its associated content item. For example, this identifier remains the same even if the name of the file is changed or the file is modified. This allows for the persistent association between the particular content item and its metadata.
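The persistent-identifier behavior can be illustrated with a minimal sketch, assuming a UUID-style identifier assigned when the metadata is created (the class and field names are hypothetical):

```python
import uuid

class ContentItemMetadata:
    """Metadata keyed by a persistent identifier that survives renames."""
    def __init__(self, filename: str):
        self.item_id = uuid.uuid4()   # persistent identifier (illustrative)
        self.filename = filename
        self.attributes = {}

    def rename(self, new_name: str):
        # Only the name changes; the persistent identifier is untouched,
        # so the item-to-metadata association survives the rename.
        self.filename = new_name

meta = ContentItemMetadata("report.txt")
original_id = meta.item_id
meta.rename("q3_report.txt")
```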
In addition, non-limiting examples of metadata may be found in commonly assigned U.S. Pat. No. 7,437,358, issued Oct. 14, 2008, the entirety of which is incorporated herein by reference.
The operating environment 100 may also include a local content items index 150 for local content items 170 and/or metadata 160, and a public content items index 155 for public content items 175. As referred to herein, content items may include content items stored on a particular device (e.g. local content items 170) such as documents, emails, messages, pictures, media, applications, contacts, calendar events, reminders, folders, browser history, bookmarks, posts, and the like, as well as content items from public sources (e.g. public content items 175) including websites, webpages, applications, map information, reviews, retail items (e.g. from a particular online retailer), streamed media (e.g. music, videos, eBooks, etc.), pictures, social media content (e.g. posts, pictures, messages, contacts, etc.), and the like. It should be noted that the local and public content items are not mutually exclusive, and accordingly, public content items 175 may also be local content items 170 (and vice versa).
The indexes (e.g. indexes 150 and 155) may include identifiers or representations of the content items and these indexes may be designed to allow a user to rapidly locate a wide variety of content items. These indexes may index metadata 160 associated with content items (e.g. local content items 170 and public content items 175), as well as index the contents of these content items. In some embodiments, the local content items index 150 may be updated continuously (e.g. as content items are shared, created, modified, printed, downloaded, etc.) using a background process (e.g. daemon) executing on a device. In addition, the public content items index 155 may be updated using a crawler 157 such as an internet crawler. For example, crawler 157 may retrieve (e.g. “crawl”) information from various websites, as well as from third-party providers. For example, crawler 157 may retrieve metadata relating to, for example, multimedia content items (e.g. music, movies, audio books, etc.) provided by third-party providers that may be accessed via a user account with the third-party provider. For instance, the system (e.g. crawler 157) may coordinate with a third-party provider (via an API) to provide searchable metadata for online content items to which the user may subscribe. In some embodiments, the public content items index 155 (or portions thereof) may be stored locally to allow the system to index interactions with public content items 175 (e.g. visited webpage history, media playlist history, etc.). Accordingly, in some embodiments, the system may maintain a history of user actions performed with content that may be stored publicly as further discussed herein.
The local content items index 150 and public content items index 155 may be stored on the same device, or on separate devices as shown. In one embodiment, the system may distinguish between private data (e.g. local content items 170) stored on the device, and the public content items 175 (e.g. non-local content items) that may be accessed from public sources such as the internet or third-parties. In one embodiment, the system may secure the local content items 170 and metadata 160, by incorporating a firewall or other features to maintain the privacy of user content. In one embodiment, components that are part of a server may also be part of the client device 110, and accordingly, the system may secure these components using various techniques such as “sandboxing.” For example, sandboxing may include compartmentalizing resources (e.g. data, memory addresses, processes, features, etc.) that one or more components access on the client device 110. In addition, various encryption mechanisms may also be employed to secure content items 170, metadata 160, or attributes as further described herein. It should be noted that although the content items are shown as a local (e.g. private) versus public dichotomy, in some embodiments, metadata (or content items) from public sources may be stored locally on the client device 110, and some local content items 170 may be accessed from a remote source (e.g. another client device 110, or server 120).
The system may include a digital assistant 132. The digital assistant 132 may reside on the client device 110, the server 120, or both, for example, as a client-server implementation. For example, in a client-server implementation, certain functionality of the digital assistant 132, such as user interface and searching components, may reside on the client device 110, while functionality such as query and natural language processing may occur on the server 120. For example, as further described herein, a query received from a user may be transmitted to a server 120 for processing, and the server 120 may instruct the client device 110 (e.g. provide search criteria) to perform a search for content items (e.g. local content items 170).
Referring to
The speech-to-text processing module 134 may process received speech input (e.g. a user utterance) using various acoustic and language models to recognize the speech input as a sequence of phonemes, and ultimately, a sequence of words or tokens written in one or more languages. The speech-to-text processing module 134 may use any suitable speech recognition techniques, acoustic models, and language models, such as Hidden Markov Models, Dynamic Time Warping (DTW)-based speech recognition, and other statistical and/or analytical techniques. In one embodiment, the speech-to-text processing can be performed by the server 120, client device 110, or both. Once the speech-to-text processing module 134 obtains the result of the speech-to-text processing (e.g. a sequence of words or tokens), it may provide the result to the natural language processing module 136 for intent deduction.
The natural language processing module 136 (or natural language processor) of the digital assistant 132 may take a sequence of words or tokens (e.g. a token sequence) generated by the speech-to-text processing module 134, and attempt to associate the token sequence with one or more intents recognized by the digital assistant 132. For example, the natural language processing module 136 may receive a token sequence (e.g. a text string) from the speech-to-text processing module 134, and determine what intents or attributes are implicated by the words in the token sequence, which may include referring to a vocabulary (or vocabulary index) 138. For example, an intent represents a task that can be performed by the digital assistant 132 or the system 100. For example, within the context of some embodiments described herein, the intent may include locating or searching for content items. In some embodiments, the natural language processing includes identifying one of the one or more terms as a pronoun and determining a noun to which the pronoun refers. For example, terms such as “me” and “my” may be associated with a particular user and user actions performed by the particular user. Accordingly, the digital assistant 132 may use natural language processing to disambiguate ambiguous terms. For example, disambiguating may include identifying that one or more terms have multiple candidate meanings; prompting a user for additional information about terms; receiving the additional information from the user in response to the prompt; and disambiguating the terms in accordance with the additional information. In some embodiments, prompting the user for additional information includes providing a voice-based prompt to the user.
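The first-person pronoun resolution described above can be sketched as a simple substitution pass that binds “me,” “my,” and “I” to the current user (a deliberately minimal stand-in for the fuller disambiguation flow, with illustrative names):

```python
# First-person terms resolve to the current user; other tokens pass through.
FIRST_PERSON = {"me", "my", "i"}

def resolve_pronouns(tokens: list, current_user: str) -> list:
    """Replace first-person pronouns in a token sequence with the user's identifier."""
    return [current_user if t.lower() in FIRST_PERSON else t for t in tokens]
```

A token sequence such as ["Show", "me", "my", "documents"] would thus be bound to a specific user identifier before search criteria are derived.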
Referring back to
In one embodiment, the system may primarily (or exclusively) search for local content items, but may also include a set of predefined (or user defined) public content items. For example, in one embodiment, the system may search a predefined set or type of public content items (e.g. maps, wiki pages, particular websites, etc.). Accordingly, in one embodiment, the system may search for content items that are stored on a local device, as well as for content items that are from public sources (e.g. internet). It should be noted that as described above, the search functionality described herein may be performed on the client device 110, or in conjunction with the server 120. For example, the search query may be transmitted to a server 120 for processing, and the server may instruct client device 110 to perform a local search.
As described, the client device 110 may also include an API 130, to allow components (e.g. third-party applications) to access, for example, the metadata processing module 140 and other components shown. For example, the API 130 may provide a method for transferring data and commands between the metadata processing module 140 and these components. The metadata processing module 140 may also receive data from an importer/exporter via the API 130 that may communicate with various components to provide metadata for certain types of content items. As referred to herein, a third-party refers to a manufacturer or company other than the manufacturer or company providing the operating system for the device or the device itself. For example, a developer may utilize the API 130 to import or provide an indication of extractable metadata for content items specific to a particular application (e.g. a third-party application).
The client device 110 may also store various applications 123, including applications that may be installed or provided from third-party providers. When an application is installed, it may store various application components 162 including various files, resources, etc. As further described herein, files stored as part of the application components 162 may include various manifest files that provide information regarding the characteristics, capabilities, and other information regarding a particular application that the system (e.g. operating system, API 130, various components) may access.
It should be noted that the configurations described herein are examples, and various other configurations may be used without departing from the embodiments described herein. For instance, components may be added, omitted, and may interact in various ways known to an ordinary person skilled in the art.
As described above, in one aspect, a system may allow a user to perform a search for content items associated with various types of user actions such as sending or receiving a document, sharing a content item, printing a document, or other types of user actions.
In 310, the system may detect a user action (or action) performed with a content item. For example, the system may include one or more processes (e.g. daemon), which may be part of an operating system, that detect various operations or actions performed by a user (e.g. user actions). As referred to herein, performing a user action with a content item may include various operations, commands, instructions, and the like that may be performed in conjunction with a content item. For example, the user action may include copying, sending, sharing, printing, moving, deleting, editing, modifying, creating, opening, downloading, saving, posting, archiving, playing, transferring, capturing (e.g. taking a picture), and like types of actions performed with a content item.
In one embodiment, the user action may include sharing (or sending) a content item. The sharing may be performed by transmitting the content item within a network or through a direct communication link. For example, the user action may include sending a content item (e.g. from a first user) to a second user. For instance, sending a content item may include attaching a content item to an email sent to the second user as a recipient. In another example, sharing may include transmitting the content item to one or more users via a messaging or chat application. In another example, sharing a content item may include storing the content item in a file repository (e.g. virtual drop box) or cloud account that allows the content to be accessed by authorized users. In yet another example, sharing a content item may include providing the content item to a collaborative work environment or platform.
In addition, sharing a content item may include using a third-party application that may provide various sharing mechanisms. For example, an environment may allow a user to select a particular content item (e.g. right click) to provide a sharing menu including various applications that may be selected to perform a particular sharing action. Accordingly, the system may detect sharing a content item using such a mechanism.
In 315, the system may create and store one or more attributes characterizing the user action detected in 310. As referred to herein, attributes characterizing the action performed may include any attributes to provide information related to the user action. These attributes may be stored as any type of suitable value (e.g. number, string, object, etc.), and may be selected from a predefined set, or may be provided as a new value derived from the user action. The system may store these attributes as part of the metadata (e.g. metadata 160). In one embodiment, the attributes may be stored as part of the content item itself. For example, if a user were to transfer content items to a second device, the attributes would transfer with the content items, and accordingly, the second device may determine particular user actions performed. In addition, or as an alternative, the attributes may be stored as part of an index (e.g. indexes 150 or 155) to provide an efficient retrieval method as described above. It should be noted that as described above, when content items are transferred to a second device, a new index may also be created on the second device.
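Steps 310 and 315 can be sketched as follows, with the detected action recorded as an attribute entry in a searchable index keyed by content item. The attribute names mirror the examples in this disclosure but the exact record layout is hypothetical:

```python
import time

# Hypothetical implicit-attribute record stored when a user action is detected.
def record_user_action(index: dict, item_id: str, action: str, **extra) -> dict:
    """Create and store attributes characterizing a detected user action (315)."""
    entry = {"user_action": action, "time": time.time(), **extra}
    # Attributes may be stored in the content item itself and/or in an index;
    # here they are appended to a per-item list in a searchable index.
    index.setdefault(item_id, []).append(entry)
    return entry

index = {}
# A detected "share" action (310) yields attributes identifying the action,
# the recipient, the application used, and the time it occurred.
record_user_action(index, "doc-1", "share",
                   recipient="bill@example.com", application="Mail")
```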
As shown in
The example of
It should be noted that the attributes shown in
In one embodiment, because information such as various user actions performed by a particular user may be considered private, the system may encrypt or secure one or more attributes. For example, in one embodiment, only the implicit attributes (e.g. attributes 400) may be encrypted.
Returning to
As mentioned, the system may identify particular types of references within the query. One type of reference may include a reference to a user action as described above. Accordingly, a reference to a user action may include an utterance corresponding to terms such as “shared, sent, copied, printed, moved, deleted, modified, created, edited, saved, opened, downloaded, posted, archived, played, transferred, captured,” and like type terms. For example, if a query is the phrase “show me the last document I opened,” the system may determine the utterance includes the term “opened” referencing a user action (e.g. user action attribute 408) of opening a document.
The system may also identify various other references within a query for natural language processing. For example, another type of reference may include a reference to a content item. For example, a reference to a content item may include an utterance corresponding to any type of content item as described above. For instance, a reference to a content item may include a term such as “content item, document, file, spreadsheet, presentation, reminder, note, appointment, paper, email, message, picture, image, song, movie, playlist, video,” etc. For instance, using the same example of “Show me the last document I opened,” the system may determine the query includes the term “document” referencing a content item (e.g. content item type attribute 405). In addition, the system may determine whether the query or reference to the content item includes an application specific type of content item such as a pdf, word or word document, pages or pages document, etc. Accordingly, the system may also detect a particular file type that may be associated with a content item (e.g. “word document,” or “pages document,” “pdf document,” etc.).
In 330, the system may identify search criteria associated with the one or more stored attributes (e.g. implicit attributes 400) characterizing the user action referenced. Using the example above, the system may identify search criteria characterizing opening a document based on identifying the term “opened” in the query as described in 320. When determining search criteria corresponding to attributes characterizing a user action, various other references may also be used. For example, the type of content item, a reference to a time period (e.g. day, month, etc.), recency (e.g. most recent), user identifier, folder name, etc. may also be used.
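Once search criteria have been identified, matching content items can be found by comparing the criteria against the stored attributes. The sketch below uses a simple attribute-equality match over index entries (entry and attribute names are illustrative):

```python
def search(index_entries: list, criteria: dict) -> list:
    """Return index entries whose stored attributes match every criterion (330)."""
    return [e for e in index_entries
            if all(e.get(k) == v for k, v in criteria.items())]

# Illustrative index entries with implicit attributes.
entries = [
    {"item": "a.pages", "content_type": "document", "user_action": "open"},
    {"item": "b.xlsx", "content_type": "spreadsheet", "user_action": "send",
     "recipient": "Bill"},
]
```

Criteria derived from “the last spreadsheet I sent to Bill” would match only the second entry; additional references (time period, recency, folder name, etc.) would simply add criteria or a sort order.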
As shown in the examples of
As shown in another example of
As shown in the example of
As shown in another example in
It should be noted that although the above examples use a term referencing a name as a user identifier (e.g. “Bill”), other suitable identifiers may also be used (e.g. account name, alias, email address, contact name, group name, family relation such as wife, father, etc.).
It should also be noted the reference terms shown in the examples above are provided as simplified examples. The system may use various other mechanisms for natural language processing to determine a particular set of search criteria within a search query. For example, the system may use various disambiguation techniques for disambiguating various terms including pronouns such as “me,” “my,” “I,” etc., in combination with terms associated with a sender or recipient such as “to/send” or “from/receive” to determine whether a particular user is a sender or recipient of a content item.
In addition, in some embodiments, the system may also provide multiple result sets based on the type of query. For example, particular queries may provide multiple interpretations and the system may account for such circumstances. For instance, the query “Show me all my emails read yesterday” may be interpreted in multiple ways, including emails that are read by the user and are received yesterday (e.g. isRead=Yes && date=yesterday), as well as emails that were read by the user yesterday (e.g. userRead=yesterday). In another example, the system may provide multiple results based on a requested type of content item. For instance, the query “Messages received from Bill” may include content item types such as an email, as well as content item types such as chat messages, and/or sms messages, and the like.
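The two readings of “emails read yesterday” can be illustrated by evaluating both interpretations against stored attributes and surfacing each as a separate result set (attribute names are illustrative):

```python
# Illustrative stored email attributes.
emails = [
    {"id": 1, "isRead": True, "received": "yesterday", "last_read": "today"},
    {"id": 2, "isRead": True, "received": "monday",    "last_read": "yesterday"},
]

def reading_a(mail):   # emails read by the user AND received yesterday
    return mail["isRead"] and mail["received"] == "yesterday"

def reading_b(mail):   # emails the user read yesterday
    return mail["last_read"] == "yesterday"

# Each interpretation yields its own result set for the user to choose from.
result_sets = [
    [m["id"] for m in emails if reading_a(m)],
    [m["id"] for m in emails if reading_b(m)],
]
```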
Returning once again to
As described above, some embodiments may work in conjunction with a digital assistant.
As another example,
In addition, once an initial set of search results is displayed, the user may provide additional filtering terms. For example, the filtering terms may include a further date specification (e.g. “Show me only the emails before April 15th”), subject matter (e.g. “Show me only the emails that include report summary in the subject line”), characteristics of the content item (e.g. “Show me only emails with attachments”), file location (“Show me documents saved in my documents folder”), and any other filtering terms that may correspond to one or more attributes stored as metadata for content items.
In some embodiments, the digital assistant may also be initiated within a particular application via an API. For example, the digital assistant may be initiated from within an application (including a third-party application) and provide contextual search results. For example, if the digital assistant is initiated from an email application, search results in response to a query that includes a request for “files” may include a set of emails (as a contextual response to the term file) that may be displayed within the email application. Similarly, particular types of documents may be provided depending on the application from which the digital assistant is initiated such as a particular document type based on the word processing application (e.g. pages, pdf, word, etc.).
As shown, in 701, the system may detect a user action performed with a content item. As described herein, the user action may include various actions that may be performed with a content item (e.g. sending, sharing, printing, downloading, modifying, etc.).
In 702, the system may store one or more attributes (e.g. implicit attributes 400) characterizing the user action performed. For example, the user action performed may include sharing (or sending) the content item between a first user and a second user, and accordingly, the one or more attributes may include an identifier for the first user (e.g. sender attribute 409) and an identifier for the second user (e.g. recipient attribute 411). In one embodiment, sending the content item may include sending the content item using an application, and the attributes characterizing the user action may further include an identifier for the application (e.g. application attribute 413), as well as a time or time stamp of when the content was sent (e.g. time attribute 414). In one embodiment, the attributes may be stored as metadata (e.g. metadata 160), which may be stored as part of the content item and/or part of an index.
In 703, the system may receive a search query (e.g. query 501) for one or more content items. In 704, the system may identify one or more references (e.g. references 503) within the search query including at least a reference to the user action. Accordingly, the system may use natural language processing to determine one or more words, phrases, clauses, etc. that may correspond to search criteria. For example, the reference to the user action may include an utterance of a term referencing sharing, sending, or emailing. The search query may further include a reference to a recipient such as an identifier (e.g. name, email address, user ID, etc.). In addition, the search query may further include a reference to the content item as described above (“file,” “document,” “email,” etc.).
In 705, the system may identify search criteria (e.g. search criteria 504) associated with the one or more references. In one embodiment, the search criteria may correspond to one or more attributes stored as metadata. In 706, the system may identify content items based on performing a search for content items associated with the attributes corresponding to the search criteria. In one embodiment, the system may perform the search by searching one or more indexes (e.g. indexes 150 and/or 155). Accordingly, the system may display the identified content items as search results in the response to the search query as described above.
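The full flow of 701-706 can be sketched end-to-end as follows. The query parsing here is a deliberately crude, illustrative heuristic (a capitalized word is treated as a recipient reference); an actual implementation would use the natural language processing described herein:

```python
# Minimal end-to-end sketch of steps 701-706 (all names illustrative).
index = []

def detect_and_store(item, action, sender, recipient, application):
    # 701-702: detect a user action and store attributes characterizing it.
    index.append({"item": item, "user_action": action, "sender": sender,
                  "recipient": recipient, "application": application})

def handle_query(query, current_user):
    # 703-704: receive a query and identify references within it.
    criteria = {}
    if "sent" in query.lower():  # reference to the user action of sending
        criteria.update({"user_action": "send", "sender": current_user})
    for word in query.replace("?", "").split():
        if word.istitle() and word.lower() not in ("show", "i"):
            criteria["recipient"] = word  # crude recipient reference
    # 705-706: criteria map to stored attributes; search the index.
    return [e["item"] for e in index
            if all(e.get(k) == v for k, v in criteria.items())]

detect_and_store("budget.xlsx", "send", "alice", "Bill", "Mail")
detect_and_store("notes.txt", "open", "alice", None, "Notes")
results = handle_query("Show me the spreadsheet I sent to Bill", "alice")
```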
As described above, in a second aspect, a system may allow a user to perform a search for content items that the user associates with a particular application.
In 810, the system may detect installation of a new application on a client device. For example, the system may detect a user has installed a new application (including a third-party application), for example, from an application store (e.g. app store). The application may be installed from an installation package, which may include various types of files. In one embodiment, an application may be developed as an application bundle that includes a structured format for an application. For example, the bundle may include executables, resource files, and other support files, along with one or more manifest type files. In one embodiment, a manifest file (e.g. information property list file or info.plist file) may be a structured file that includes configuration information for the application. For example, the system may rely on the presence of this file to identify relevant information about a particular application. In one embodiment, the system may access such a file to determine content item types the application supports.
In 815, the system may determine an identifying name for the application. In one embodiment, the system may be able to recognize the identifying name from a server or other source. For example, a server may work in conjunction with a digital assistant (e.g. digital assistant 132) to recognize identifying names of newly installed applications by referencing an application ID or other form of unique identifier for the application. As another example, the identifying name may be provided and managed by an application store or service. In yet another example, the system may determine the identifying name from local resources such as accessing the manifest type file or other form of indicator. In one embodiment, the system may determine the identifying name of an application in response to detecting the installation of the application. Accordingly, the system may now be aware of the application's existence on a device. As a result, the system may now recognize the identifying name of the particular application in utterances received by the digital assistant.
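Determining the identifying name from a local manifest can be sketched as follows. On some platforms the manifest would be a property list (e.g. an info.plist file); a JSON stand-in keeps the sketch self-contained, and the keys `bundle_id` and `display_name` are illustrative assumptions.

```python
import io
import json

# A hypothetical manifest for a newly installed application.
manifest = io.StringIO(json.dumps({
    "bundle_id": "com.example.newapp",
    "display_name": "NewApp",
}))

def identifying_name(manifest_file, server_name=None):
    """Determine an application's identifying name from its local manifest,
    falling back to a server-provided name, then to the unique identifier."""
    info = json.load(manifest_file)
    return info.get("display_name") or server_name or info["bundle_id"]

name = identifying_name(manifest)
print(name)  # -> NewApp
```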
In 820, the system may process a received query (or search query) for content items associated with a particular application. The query may be received from a user as an utterance (e.g. voice-based input), or as a typed entry. As described above, the query (and displayed results) may work in conjunction with a digital assistant (e.g. digital assistant 132). When processing the query, the system may determine whether the query includes one or more particular types of references. For example, the system may identify one or more references to the particular application (e.g. reference to the identifying name) and a reference to content items in the search query using natural language processing as described above. As referred to herein, a reference may include one or more words, phrases, clauses, etc. that may be associated with search criteria or attributes stored by the system. Accordingly, the system may identify references within a query to identify one or more attributes that may be used to perform a search for content items.
In 825, the system may determine one or more content item types associated with the application. In one embodiment, the system may determine content item types associated with the application in response to receiving the query in 820. For example, the system may identify a content item type only after a user requests such content in a query. Accordingly, the system may make such a determination at the time of the search. In such embodiments, the system may not necessarily be required to maintain a database of corresponding associations, and instead, may make the determination on an as-needed basis. Moreover, by making a determination on an as-needed basis, the system does not need to track which applications are currently installed on the device or which have been removed.
In one embodiment, content items associated with a particular application may include content items the application has authority over (or content items belonging to the particular application), or a particular content item type specific to a particular application (e.g. a content item type supported only by the particular application). For example, a user may associate particular content item types with a particular application. Accordingly, the system may respond to queries such as “Show me my ‘NewApp’ items,” where “NewApp” is the name (or identifying name) of the particular application. Typically, multiple applications may support a content item type in some manner (e.g. merely read or open), but those content item types may not be of interest to a user. Instead, a user may only be interested in those content items that the user associates with a particular application, such as those the application has authority over. Accordingly, in one embodiment, the system may determine content item types that are supported in multiple ways by the application to determine which content item types the application has authority over. In one embodiment, authority over may include content item types the application may create (e.g. not merely read or open). In addition, in one embodiment, authority over may include content item types the application may open (e.g. open, read, import, etc.) as well as content item types the application may create (e.g. create, export, write, etc.). For example, the content item types the application may create may be specified by a unique identifier for a content item type such as a Universal Type Identifier (UTI). In one embodiment, the identifier may identify a content item type specific to the application and usable by other applications.
In one embodiment, the system may access one or more files (e.g. a manifest file) to determine the content items a particular application may support. In one embodiment, the one or more files may include a first list indicating which content item types the application supports reading or opening, and a second list, different from the first list, indicating which content item types the application supports creating or exporting. In addition, in one embodiment, the system may perform such a determination by accessing only the one or more files stored locally on the device (e.g. client device 110) and without accessing a file on another device or server (e.g. server 120). For example, in one embodiment, when a new application (e.g. “NewApp”) is installed, it may be bundled with a manifest type file. Accordingly, the system may determine the content items the application has authority over without having to query a server, which may include a database that would need to be periodically updated to determine which files an application has authority over. Accordingly, in one embodiment, such information may be determined at the time of installation, without having to communicate with a server.
As shown, a file 90 (e.g. manifest type file, information property list file, etc.) indicates data (or content) created by the particular application NewApp. As shown, the file 90 may indicate a content item type 906, a corresponding UTI 921 (or identifier), along with other information such as a file extension 922 for a particular application 920. As shown, NewApp may be capable of creating various data (or content, or content item types), but not all of these may be of interest to a user. For example, a data file or a log file may be used internally by the application or system, and would not be the content typically accessed or used by a user. Accordingly, in order to determine which content item types would be of interest to a user, the system may cross-reference these content item types with content item types the application may support in various ways. For example, file 91 (e.g. manifest type file, information property list file, etc.) indicates capabilities supported for various content item types 906 by the NewApp application. In this example, the capabilities include the ability to open 908 (e.g. read, import, etc.), edit 909, and save 910, etc. In this example, the system may determine an application has authority over content item types it may create as well as either open 908 or edit 909, which in this example is a NewApp Sheet 92. In addition, as shown in this example, the NewApp application may support either opening or editing other content item types (e.g. pages and pdf documents), but does not create such content item types, and thus, is not deemed to have authority over these content items. It should be noted that when determining the content item types the application supports, various lists may be cross-referenced, including lists stored in multiple files as shown in this example, or in a single file.
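The cross-referencing of created types against supported capabilities can be sketched as follows. The UTI strings and capability lists below are hypothetical stand-ins for the contents of files 90 and 91; a real system would read them from the application's manifest files.

```python
# Content item types the application can create (cf. file 90), and
# per-type open/edit/save capabilities (cf. file 91). All values are
# illustrative assumptions.
creates = {"com.newapp.sheet", "com.newapp.data", "com.newapp.log"}
capabilities = {
    "com.newapp.sheet": {"open", "edit", "save"},
    "com.example.pages": {"open", "edit"},  # opened/edited but not created
    "com.adobe.pdf": {"open"},              # opened but not created
}

def authority_types(creates, capabilities):
    """An application has authority over a content item type it can create
    AND also open or edit; types used only internally (created but never
    opened or edited, e.g. data or log files) are excluded."""
    return {uti for uti in creates
            if capabilities.get(uti, set()) & {"open", "edit"}}

print(sorted(authority_types(creates, capabilities)))
# -> ['com.newapp.sheet']
```

Note that the internal data and log types drop out because no open or edit capability is declared for them, matching the NewApp Sheet 92 outcome in the example above.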
As described above, some embodiments may work in conjunction with a digital assistant.
In addition, once an initial set of search results is displayed, the user may provide additional filtering terms. For example, the filtering terms may include a further date specification (e.g. “Show me only the documents saved before April 15th”), subject matter (e.g. “Show me only the documents that include the title report summary”), characteristics of the content item (e.g. “Show me only content items modified recently”), file location (“Show me documents saved in my documents folder”), and any other filtering terms that may correspond to one or more attributes stored as metadata for content items.
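Applying such a filtering term to an initial result set can be sketched as follows. The `refine` helper and the attribute names are illustrative; dates are compared as ISO-formatted strings for brevity.

```python
# An initial set of search results with attributes stored as metadata.
results = [
    {"name": "q1.pdf", "saved": "2016-04-10", "folder": "documents"},
    {"name": "q2.pdf", "saved": "2016-04-20", "folder": "downloads"},
]

def refine(results, attribute, predicate):
    """Narrow an existing result set by one additional filtering term."""
    return [item for item in results if predicate(item.get(attribute))]

# "Show me only the documents saved before April 15th"
narrowed = refine(results, "saved",
                  lambda d: d is not None and d < "2016-04-15")
print([item["name"] for item in narrowed])  # -> ['q1.pdf']
```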
In 851, the system may recognize that a query (or search query) includes a request for content of the application. In one embodiment, the search query may be received by a search application. For example, the search query may be received as part of an interaction with a digital assistant. In one embodiment, recognizing that a query includes a request for content of the application may include determining that the received query includes at least a reference to an identifying name of the application. In addition, the system may also recognize that the query includes a reference to content items.
In 852, the system may identify content associated with an application. In one embodiment, the system may identify the associated content in response to recognizing that the query includes a request for content of the application. In one embodiment, content associated with an application may include a content item type the application has authority over. In one embodiment, determining that the application has authority over a content item type may include determining the content item type is included in one or more lists of content item type capabilities for the application. In one embodiment, the system may determine content item types the application is capable of creating. In another embodiment, the system may determine content item types the application is capable of both opening or editing, and creating or exporting. In one embodiment, the content item types the application supports creating or exporting may be identified with an identifier such as a Universal Type Identifier (UTI). In one embodiment, the identifier may identify a content item type specific to the application and usable by other applications. For example, the content item type specific to the application may be a newly added content item type the system may be capable of processing (e.g. opening, reading, editing, exporting, etc.) in response to the installation of the particular application.
In 853, the system may perform a search for content items having the identified content item type. Accordingly, the content items may be provided to the search application as search results for the query.
As shown, the computing system 1200 may include a bus 1205 which may be coupled to a processor 1210, ROM (Read Only Memory) 1220, RAM (or volatile memory) 1225, and storage (or non-volatile memory) 1230. The processor 1210 may retrieve stored instructions from one or more of the memories 1220, 1225, and 1230 and execute the instructions to perform processes, operations, or methods described herein. These memories represent examples of a non-transitory machine-readable medium or storage containing instructions which when executed by a computing system (or a processor), cause the computing system (or processor) to perform operations, processes, or methods described herein. The RAM 1225 may be implemented as, for example, dynamic RAM (DRAM), or other types of memory that require power continually in order to refresh or maintain the data in the memory. Storage 1230 may include, for example, magnetic, semiconductor, tape, optical, removable, non-removable, and other types of storage that maintain data even after power is removed from the system. It should be appreciated that storage 1230 may be remote from the system (e.g. accessible via a network).
A display controller 1250 may be coupled to the bus 1205 in order to receive display data to be displayed on a display device 1255, which can display any one of the user interface features or embodiments described herein and may be a local or a remote display device. The computing system 1200 may also include one or more input/output (I/O) components 1265 including mice, keyboards, touch screen, network interfaces, printers, speakers, and other devices. Typically, the input/output components 1265 are coupled to the system through an input/output controller 1260.
Modules 1270 (or components, units, or logic) may represent any of the modules described above, such as, for example, digital assistant 132, applications 122, search module 124, metadata processing module 140, and crawler 157 (and related modules, and sub-modules). Modules 1270 may reside, completely or at least partially, within the memories described above, or within a processor during execution thereof by the computing system. In addition, modules 1270 can be implemented as software, firmware, or functional circuitry within the computing system, or as combinations thereof.
The present disclosure recognizes that the use of personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to deliver targeted content that is of greater interest to the user. Accordingly, use of such personal information data enables calculated control of the delivered content. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure.
The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities would take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of advertisement delivery services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services. In another example, users can select not to provide location information for targeted content delivery services. In yet another example, users can select to not provide precise location information, but permit the transfer of location zone information.
In the foregoing specification, example embodiments of the disclosure have been described. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
This application claims the benefit of U.S. Provisional Patent Application No. 62/349,106, filed Jun. 12, 2016, the entirety of which is incorporated herein by reference.
Number | Date | Country
---|---|---
62349106 | Jun 2016 | US