Tagging of people in images is commonplace nowadays. On both social media sites and in personal image organization applications, users can tag themselves, their friends and family, etc. in photographs. In such applications, the user can then identify all of the photographs in which they appear, or in which a particular family member appears. However, this only provides a limited amount of information about a single tagged person at a time.
Some embodiments provide a method for displaying a graphical representation of relationships between entities that appear in a set of content items. Specifically, within a defined set of content items (e.g., a content library) some embodiments determine the number of co-appearances of selected entities in content items. In addition, for each particular non-selected entity of a set of non-selected entities, some embodiments determine the number of co-appearances of the selected entities along with the particular non-selected entity (i.e., the number of content items in which all of the selected entities and the particular non-selected entity appear). The method of some embodiments generates a visualization of these relationships that indicates, e.g., which of the entities are selected and the numbers of co-appearances of the different combinations of entities.
In some embodiments, the graphical representation is displayed for an image library by an image organization application. Images catalogued by the image organization application of some embodiments may include tags that indicate the presence of various entities in the images (e.g., people's faces, pets or other animals, non-living entities such as inanimate objects or locations, etc.). The image organization application of some embodiments includes a graphical user interface (GUI) that allows a user to view the various entities tagged in the images of the image library, and select one of the entities in order to be presented with the images in which the entity is tagged.
Some embodiments display the various entities in a hierarchical fashion, with the entities presented as different sizes based on their relative importance to the user of the image organization application. For instance, some embodiments present representations of several of the entities most important to the user as a largest size across the top of the GUI, several more representations of the entities as an intermediate size in the middle of the GUI, and additional representations of less important entities as a smallest size at the bottom of the GUI, although other arrangements are possible. The importance of the entities may be determined based on user input (e.g., a user may move the displayed representations of the entities between the different groups in the hierarchy) or automatically by the application (e.g., based on the number of images tagged with the different entities).
Within this GUI, the user may select one or more of the tagged entities in order to view information about the images that contain the selected entities. Specifically, when the user selects one of the entities, some embodiments identify, for each pairing of the selected entity with one of the other non-selected entities, the number of images that contain both the selected entity and the non-selected entity. For example, if a user selects the first of three entities, some embodiments determine (i) the number of images containing both the first entity and the second entity and (ii) the number of images containing both the first entity and the third entity. When the user selects two or more entities, some embodiments identify both (i) the number of images that contain all of the selected entities and (ii), for each grouping of the selected entities and one non-selected entity, the number of images that contain all of the selected entities and the non-selected entity. For example, if a user selects the first and second entities in a group of four entities, such embodiments determine (i) the number of images containing both the first and second entities, (ii) the number of images containing the first, second, and third entities, and (iii) the number of images containing the first, second, and fourth entities.
After determining the counts of the different groups of entities in the set of content items, some embodiments display a visualization of the relationships between the entities in the content items. In the case of the image organization application, some embodiments graphically display connections between the representations of the entities, with the connections indicating the determined numbers of images for each visualized relationship. For instance, when a user selects a representation of a particular entity, some embodiments draw lines connecting the particular entity representation to several other entity representations, with a selectable item for each line that indicates the number of images in which the particular entity and the other entity both appear. When the user selects a second representation of a second entity, some embodiments display a line connecting the representations of the two selected entities along with a selectable item that indicates the number of images in which both of the selected entities appear. Furthermore, some embodiments display additional lines off of the primary line that connect to one or more other entity representations, along with selectable items indicating the number of images in which both the selected entities and the other entity appear together. In some embodiments, the representations of the entities remain static as the application draws the connecting lines. In other embodiments, however, the application moves the representations within the GUI, such that the representations of the selected entities are displayed next to each other (e.g., at the top), while the representations of the other entities that appear in images along with the selected entities are displayed underneath (e.g., in a fanning graph display). The representations of entities that do not appear in any images with the selected entities are displayed away from the connected entity representations (e.g., along the bottom).
In some embodiments, the items within the visualization that indicate the number of co-appearances in the set of content items are selectable in order to bring up a display of the corresponding content items. In the case of the image organization application, selection of one of the items indicating the number of images in which a set of entities appear causes the application to display thumbnails of the images in which those entities appear. This enables the user to use additional features of the image organization application to generate a card, photobook, slideshow, etc. using the images, in some embodiments.
The preceding Summary is intended to serve as a brief introduction to some embodiments as described herein. It is not meant to be an introduction or overview of all subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Some embodiments provide a method for displaying a graphical representation of relationships between entities that appear in a set of content items. Specifically, within a defined set of content items (e.g., a content library) some embodiments determine the number of co-appearances of selected entities in content items. In addition, for each particular non-selected entity of a set of non-selected entities, some embodiments determine the number of co-appearances of the selected entities along with the particular non-selected entity (i.e., the number of content items in which all of the selected entities and the particular non-selected entity appear). The method of some embodiments generates a visualization of these relationships that indicates, e.g., which of the entities are selected and the numbers of co-appearances of the different combinations of entities.
The set of content items 120, in some embodiments, is a set of items of one or more types. For instance, the content items might be text documents, audio files, images (e.g., photographs), videos, etc. The entities for which tags 125 are stored may be any type of entity that can appear in a content item. Thus, the entities could be words in text; specific people, pets, objects, locations, etc. in images or video; sound snippets in audio; words or ideas in audio; images in video; etc. In different embodiments, the tags may be user-generated or automatically detected by the application that implements the graphical representations (or by a different application).
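The tag data described above can be sketched as a simple mapping from content item identifiers to sets of entity tags. This is only one possible representation, and the item and entity names below are purely illustrative:

```python
# Hypothetical tag store: each content item ID maps to the set of
# entities tagged in that item. All names are illustrative only.
library_tags = {
    "img_001": {"alice", "bob"},
    "img_002": {"alice", "bob", "carol"},
    "img_003": {"alice", "dave"},
    "img_004": {"bob", "carol"},
}

def items_tagged_with(tags, entity):
    """Return the IDs of the content items in which the entity is tagged."""
    return {item for item, entities in tags.items() if entity in entities}
```

With this store, `items_tagged_with(library_tags, "alice")` yields the three items in which the hypothetical entity "alice" is tagged.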
For example, in some embodiments the graphical representations of relationships are displayed for an image library by an image organization application. Images catalogued by the image organization application of some embodiments may include tags that indicate the presence of various entities in the images (e.g., people's faces, pets or other animals, non-living entities such as inanimate objects or locations, etc.). The image organization application of some embodiments includes a graphical user interface (GUI) that allows a user to view the various entities tagged in the images of the image library, and select one of the entities in order to be presented with the images in which the entity is tagged.
The entity representations 130 of some embodiments are graphical representations of the entities used to represent the entities in the GUI. For instance, when the entities are items (e.g., faces of people or pets, locations, tangible items, etc.) tagged within images, some embodiments select (e.g., automatically or via a user selection) one of the tagged image regions for each entity to represent the entity in the GUI. Some embodiments display the various entities in a hierarchical fashion, with the entities presented as different sizes based on their relative importance to the user of the image organization application. For instance, some embodiments present representations of several of the entities most important to the user as a largest size across the top of the GUI, several more representations of the entities as an intermediate size in the middle of the GUI, and additional representations of less important entities as a smallest size at the bottom of the GUI, although other arrangements are possible. The importance of the entities may be determined based on user input (e.g., a user may move the displayed representations of the entities between the different groups in the hierarchy) or automatically by the application (e.g., based on the number of images tagged with the different entities).
The filter 105 of some embodiments uses tag selections 135 from a user to identify a set of filtered content items 140. In some embodiments, the filtered content items 140 are the content items 120 whose tags 125 match the tag selections 135 according to a particular heuristic. For example, some embodiments identify all of the content items 120 that have all of the selected tags 135. In addition, some embodiments identify, for each particular unselected entity, the content items that are tagged with all of the selected entities and the particular unselected entity.
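One way to read the filtering heuristic described above is as a subset test over tag sets, followed by a per-unselected-entity count. This is a minimal sketch assuming a dict-of-sets tag store with illustrative names, not a definitive implementation of filter 105:

```python
def filter_items(library_tags, selected):
    """Content items whose tags include every selected entity
    (the 'all of the selected tags' heuristic)."""
    selected = set(selected)
    return {item for item, entities in library_tags.items()
            if selected <= entities}

def counts_with_unselected(library_tags, selected):
    """For each unselected entity, the number of content items tagged
    with all of the selected entities plus that unselected entity."""
    base = filter_items(library_tags, selected)
    all_entities = set().union(*library_tags.values())
    return {other: sum(1 for item in base if other in library_tags[item])
            for other in all_entities - set(selected)}

# Illustrative data only.
library_tags = {
    "img_001": {"alice", "bob"},
    "img_002": {"alice", "bob", "carol"},
    "img_003": {"alice", "dave"},
}
```

With "alice" selected, `filter_items` returns all three items, and `counts_with_unselected` reports two co-appearances with "bob" and one each with "carol" and "dave".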
Within the image organization application GUI, the user may select the representation of one or more of the tagged entities in order to view information about the images that contain the selected entities. Specifically, when the user selects one of the entities, some embodiments identify, for each pairing of the selected entity with one of the other non-selected entities, the number of images that contain both the selected entity and the non-selected entity. For example, if a user selects the first of three entities, some embodiments determine (i) the number of images containing both the first entity and the second entity and (ii) the number of images containing both the first entity and the third entity. When the user selects two or more entities, some embodiments identify both (i) the number of images that contain all of the selected entities and (ii), for each grouping of the selected entities and one non-selected entity, the number of images that contain all of the selected entities and the non-selected entity. For example, if a user selects the first and second entities in a group of four entities, such embodiments determine (i) the number of images containing both the first and second entities, (ii) the number of images containing the first, second, and third entities, and (iii) the number of images containing the first, second, and fourth entities.
The filtered content item data 140, along with the entity representations 130, is used by the visualization engine 110 to generate a graphical display 145 of relationships within the content items. This graphical display 145 is presented within the user interface 115. Different embodiments may provide different graphical displays. For instance, some embodiments highlight the representations of the selected entities and display connections between the selected entities as well as the non-selected entities that also appear in the content items with the selected entities. Within the visualization, some embodiments indicate the number of content items with the different sets of tags, or show the content items themselves (or representations of the content items).
In the case of the image organization application, some embodiments graphically display connections between the representations of the entities, with the connections indicating the determined numbers of images for each visualized relationship. For instance, when a user selects a representation of a particular entity, some embodiments draw lines connecting the particular entity representation to several other entity representations, with a selectable item for each line that indicates the number of images in which the particular entity and the other entity both appear. When the user selects a second representation of a second entity, some embodiments display a line connecting the representations of the two selected entities along with a selectable item that indicates the number of images in which both of the selected entities appear. Furthermore, some embodiments display additional lines off of the primary line that connect to one or more other entity representations, along with selectable items indicating the number of images in which both the selected entities and the other entity appear together. In some embodiments, the representations of the entities remain static as the application draws the connecting lines. In other embodiments, however, the application moves the representations within the GUI, such that the representations of the selected entities are displayed next to each other (e.g., at the top), while the representations of the other entities that appear in images along with the selected entities are displayed underneath (e.g., in a fanning graph display). The representations of entities that do not appear in any images with the selected entities are displayed away from the connected entity representations (e.g., along the bottom).
In some embodiments, the items within the visualization that indicate the number of co-appearances in the set of content items are selectable in order to bring up a display of the corresponding content items. In the case of the image organization application, selection of one of the items indicating the number of images in which a set of entities appear causes the application to display thumbnails of the images in which those entities appear. This enables the user to use additional features of the image organization application to generate a card, photobook, slideshow, etc. using the images, in some embodiments.
Many more details of embodiments of the visualization of relationships between entities in a set of content items will be described in the sections below. Section I introduces the image organization application GUI of some embodiments. Section II then describes in detail the generation and display of visualizations of relationships between entities within a set of content items, providing examples from the image organization application of some embodiments. Finally, Section III describes an electronic system with which some embodiments of the invention are implemented.
In the following sections, the visualization of relationships between entities will be described in the context of an image organization application. However, one of ordinary skill in the art will recognize that the invention is not limited to displaying visualizations of relationships between tagged items within images. For instance, the content items could include video, audio, text, etc. The content items could also include a user's communications (e.g., text messages, e-mails, audio and/or video calls, etc.), and the contacts of the user (i.e., the people with which the user has those communications) could be the entities. The visualization of some embodiments then indicates the number and/or type of communications between different groups of contacts.
The image organization application of some embodiments may provide image organization functions (e.g., the ability to tag images, group images into collections, etc.), image-editing capabilities, content creation functions (e.g., the ability to create new content using the images, such as cards, photobooks, photo journals, etc.), and other such functions. Thus, the image organization application is not limited to being merely a simple image viewer, but may provide other image-usage functions as well.
The faces tab 207 is currently selected in the GUI 200. As shown, the GUI 200 for the faces tab displays representations of entities (often faces of people) tagged in images stored by the image organization application. In some embodiments, the image organization application includes face detection capabilities. The application identifies the locations of faces within an image, and provides an interface for the user of the application to input names for the different faces. When a user inputs the same name for faces in different images, these images will be grouped together based on this tagged entity. In addition to face detection, users may tag other items (i.e., items other than detected faces) in the images. This allows users to tag any people that might not be detected as such (e.g., because the face is mostly covered up or not shown), as well as other entities such as pets, objects (e.g., food, jewelry items, etc.), and locations (e.g., a house, a room, a backyard, a park, a building, etc.).
In the GUI 200, the different faces are shown within circles. For entities that are tagged in multiple images, some embodiments select a particular instance of the entity and generate a representation of the entity. Some embodiments select the first tagged instance, allow the user to choose an instance for the representation, make a judgment on which is the clearest instance of the entity using a set of heuristics, etc. To generate the representation, some embodiments identify a portion of the image that is tagged and attempt to center the entity within the representation. Though not shown in
Some embodiments display the entity representations in a hierarchical manner, as shown in
In some embodiments, the representations are selectable (using a first type of selection) to cause the application to display the set of images in which the entity appears. In addition, as described in the following sections, the representations of some embodiments are selectable (using a second type of selection) in order to cause the application to display a visualization of the relationships between the selected entity and the other entities within the collection of images.
As shown, the process 300 begins by receiving (at 305) a selection of a set of entities that appear in a set of content items. The selection might be from a user selection (e.g., a selection with a cursor controller, a touch selection from a touch input device, etc.) in some embodiments. The user may select one entity or multiple entities in some embodiments. In the image organization application example, the user might select one or more entities (e.g., people, pets, objects, etc.) that are tagged within images organized by the application.
The process then determines (at 310) a count of the content items that include the selected entities. In some embodiments, this is a count of the items that include all of the selected entities. For instance, if two entities tagged in images are selected, the image organization application of some embodiments identifies the intersection of the set of images in which the first entity is tagged and the set of images in which the second entity is tagged. Thus, content items that include only one of the entities and not the other entity are excluded from this count.
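The intersection described for operation 310 is a direct set operation. A minimal sketch, with illustrative image IDs standing in for the sets of images in which two hypothetical selected entities are tagged:

```python
# Illustrative: the sets of image IDs in which each selected entity
# is tagged (these would come from the application's tag data).
tagged_with_first = {"img_001", "img_002", "img_003"}
tagged_with_second = {"img_002", "img_003", "img_004"}

# Operation 310: images containing both selected entities. Images with
# only one of the two entities are excluded from the count.
both = tagged_with_first & tagged_with_second
count = len(both)
```

Here the intersection contains two images, so the count determined at 310 would be two.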
In some embodiments, only the count of content items with all of the selected entities is determined when more than one entity is selected. In this case, the application or module performing the relationship visualization process does not perform operations 315-325 or similar operations. However, some embodiments also generate counts of content items that include all of the selected entities as well as different non-selected entities, in order to provide further information about the relationships of the appearance of the various entities in the set of content items.
Thus, the process next determines (at 315) whether any more non-selected entities remain for analysis. If there are no non-selected entities, then operations 320 and 325 are not performed. Furthermore, once all non-selected entities in the set of entities have been processed by these operations (to determine the relationships of the non-selected entities to the selected entities), the process advances to operation 330.
Otherwise, so long as at least one non-selected entity remains for processing, the process identifies (at 320) a current non-selected entity. Some embodiments perform the entity relationship process for all non-selected entities, while other embodiments perform this processing for only some of the non-selected entities. For example, in the case of the image organization application shown in
The process 300 then determines (at 325) a count of the content items that include (i) the selected entities and (ii) the current non-selected entity. For any non-selected entity, this count will always be less than or equal to the count of content items that include all of the selected entities, determined at 310. However, for all of the non-selected entities, the total counts may combine to be much greater than the count determined at 310. This is because content items that include two non-selected entities as well as all of the selected entities will be included in the counts for both of the non-selected entities. That is, the process of some embodiments identifies the set of content items that are tagged with at least the selected entities and the current non-selected entity, but may also have additional entities tagged. In some embodiments, the user can add one or more of the non-selected entities to the set of selected entities in order to be presented with more details regarding the relationships of the specific entities in the content items.
After all of the non-selected entities have been processed, the process 300 generates (at 330) a visualization of the selected entities and the counts of the content items determined at 310 and 325 indicating the relationships between the entities within the content items. The process then displays (at 335) the visualization (e.g., in the user interface of the application performing the process 300). In some embodiments, the application highlights the selected entities and displays connections between the selected entities and the non-selected entities (i.e., connections from one selected entity to another as well as connections from the selected entities to the separate non-selected entities).
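The flow of operations 305 through 330 can be sketched end to end. This is one possible reading of process 300 under the dict-of-sets tag model, with illustrative names; it is not a definitive implementation:

```python
def build_visualization_data(library_tags, selected):
    """Sketch of process 300: count the items containing all selected
    entities (operation 310), then for each non-selected entity count
    the items also containing that entity (operations 315-325), and
    return the data from which the visualization is drawn (operation 330)."""
    selected = set(selected)
    # Operation 310: items tagged with every selected entity.
    matching = [item for item, entities in library_tags.items()
                if selected <= entities]
    # Operations 315-325: per-non-selected-entity co-appearance counts.
    edges = {}
    for other in set().union(*library_tags.values()) - selected:
        count = sum(1 for item in matching if other in library_tags[item])
        if count:  # entities with no co-appearances get no connection
            edges[other] = count
    return {"selected": sorted(selected),
            "selected_count": len(matching),
            "edges": edges}
```

The returned dictionary corresponds to the inputs of operations 330 and 335: which entities are selected, how many items contain all of them, and the co-appearance count for each connected non-selected entity.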
In the case of the image organization application, some embodiments graphically display connections between the representations of the entities, with the connections indicating the determined numbers of images for each visualized relationship. For instance, when a user selects a representation of a particular entity, some embodiments draw lines connecting the particular entity representation to several other entity representations, with a selectable item for each line that indicates the number of images in which the particular entity and the other entity both appear. When the user selects a second representation of a second entity, some embodiments display a line connecting the representations of the two selected entities along with a selectable item that indicates the number of images in which both of the selected entities appear. Furthermore, some embodiments display additional lines off of the primary line that connect to one or more other entity representations, along with selectable items indicating the number of images in which both the selected entities and the other entity appear together.
In some embodiments, the representations of the entities remain static as the application draws the connecting lines. In other embodiments, however, the application moves the representations within the GUI, such that the representations of the selected entities are displayed next to each other (e.g., at the top), while the representations of the other entities that appear in images along with the selected entities are displayed underneath (e.g., in a fanning graph display). The representations of entities that do not appear in any images with the selected entities are displayed away from the connected entity representations (e.g., along the bottom).
In the second stage, the user selects an entity representation 425. In this example, a user positions a cursor controller over the entity representation 425 and provides a selection input (e.g., a single-click or double-click of a mouse, a single or double tap of a touchpad, a keystroke input, etc.). While the examples shown in this figure, as well as the subsequent figures, illustrate cursor controller input, one of ordinary skill in the art will recognize that the various types of input shown could be received through a touchscreen or near-touchscreen in other embodiments. For instance, a user could press-and-hold, tap, etc. a touchscreen at the location at which the entity representation 425 was displayed in order to provide similar input in some embodiments.
The third and fourth stages 415 and 420 illustrate the entity relationship visualization of some embodiments in which the entity representations are static. The third stage 415 illustrates that the application highlights the representation 425 for the selected entity, displays connections between this selected representation 425 and other non-selected entity representations, and displays various counts of the number of images in which both the selected entity and the various non-selected entities appear. For each non-selected entity that is tagged in an image along with the selected entity, the application draws a line through the GUI. Some embodiments draw these lines so that two lines do not cross. In this case, the lines emanate from either the top or the bottom of the selected entity representation, but may be distributed differently in different embodiments. For a line to a non-selected representation that crosses through other entity representations between the selected representation and the non-selected representation, some embodiments draw the line underneath the intervening representation. For example, the line between the selected representation 425 and the non-selected representation 430 is drawn underneath the non-selected representation 435. In addition, each of the lines to one of the representations for a non-selected entity ends at a GUI item that displays a number. This is the number of images in which both the selected entity and the non-selected entity are tagged. Thus, the number nine for the line connecting to the non-selected entity representation 440 indicates that the image organization application has nine images tagged with both the selected entity and the non-selected entity represented by the representation 440.
In the fourth stage 420, the lines fade away leaving only the highlight of the selected entity and the GUI items indicating the number of images in which the various non-selected entities appear with the selected entity. In some embodiments, the application animates the visualization. Upon selection of the entity representation, the application draws lines emanating from the selected representation to the non-selected representations, then fades these lines out, leaving only the GUI items indicating the co-appearance counts. Other embodiments use different animations, or leave the connection lines in the GUI rather than fading the lines out.
The third stage 515 illustrates that both of the representations 425 and 525 of the selected entities are now highlighted, and a line is drawn connecting these two representations. In the middle of this line is a GUI item 530 with a number (8) that indicates the count of images that include both of the selected entities. In addition, the application displays connections between this GUI item 530 and the other entities that appear in images including both of the selected entities. For example, the entity shown in representation 440, which appeared in nine images with the first selected entity, appears in four images that include both of the two selected entities. In some embodiments, as shown, the line connecting the selected entities is differentiated from the other connection lines (e.g., by drawing the line thicker, darker, a different color, etc.).
The fourth stage 520 illustrates that the lines to the non-selected entities again have been removed (e.g., faded out), leaving the GUI items that indicate the co-appearance count for the various non-selected entities. Furthermore, the connection between the representations of the two selected entities remains along with the GUI item 530, in order for the user to easily discern which entities are selected and the number of co-appearances of those selected entities.
In some embodiments, the GUI items that indicate the number of co-appearances in the image library for a set of entities are selectable to cause the application to display the set of images in which all of the entities in the group appear.
The second stage illustrates that the GUI 400 no longer displays the entity representations, and instead displays a set of four images. Each of these images includes at least the three entities corresponding to the entity representations 425, 525, and 440. In addition, each of the non-selected entities connected to the GUI item 620 in stage 610 of
Whereas
The second stage 810 illustrates the resulting visualization of the relationships between the selected entity corresponding to representation 825 and various non-selected entities. In this case, the application highlights the selected entity by moving the corresponding entity representation 825 to a prominent location in the GUI 800 (in this case, the top center of the display). In addition, the other entity representations are arranged in a hub-and-spoke arrangement, with representations for each of the entities that appear in at least one image along with the selected entity connected to the representation 825 for the selected entity. Along the lines from the selected entity to each of the non-selected entities, the application displays a GUI item indicating the number of images in which both the selected entity and the non-selected entity appear. Some embodiments animate this transition from the first stage 805 to the second stage 810 by, e.g., first rearranging the entity representations and then drawing the lines from the selected entity representation to the various non-selected entities.
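One way to compute a hub-and-spoke placement like the one described above is to distribute the connected entity representations at evenly spaced angles around the hub. The following sketch is a hypothetical geometric helper, under the assumption of a simple circular layout; the actual embodiments may position representations differently.

```python
import math

def spoke_positions(hub, n, radius):
    """Return (x, y) positions for n spoke endpoints evenly spaced on a
    circle of the given radius around the hub (the selected entity)."""
    hx, hy = hub
    return [
        (hx + radius * math.cos(2 * math.pi * i / n),
         hy + radius * math.sin(2 * math.pi * i / n))
        for i in range(n)
    ]

# Place three connected entities around a hub at (0, 0) with radius 100.
positions = spoke_positions((0.0, 0.0), 3, 100.0)
print(len(positions))  # 3
```

Each connection line then runs from the hub to one of these endpoints, with the co-appearance count item drawn at the line's midpoint.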
To determine the arrangement of the various representations in the second stage 810, different embodiments use different techniques. Some embodiments arrange the representations based on the different hierarchical groups, with the largest representations on one side (e.g., the left), then the representations decreasing in size from that side to the other. Other embodiments arrange the representations in order from the largest number of co-appearances to the fewest, while still other embodiments calculate the locations that will result in the smallest total movement of the entity representations. Some embodiments also arrange the representations for the non-selected entities with no co-appearances in the same way (e.g., from largest to smallest representation, or based on smallest overall movement distance).
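One of the ordering strategies mentioned above, arranging representations from the largest number of co-appearances to the fewest, can be sketched briefly. The data and names here are illustrative assumptions; a secondary alphabetical key is added only to make the ordering deterministic when counts tie.

```python
def arrange_by_co_appearances(counts):
    """Order entity names by descending co-appearance count with the
    current selection; entities with zero co-appearances sort last."""
    return sorted(counts, key=lambda e: (-counts[e], e))

# Hypothetical counts of co-appearances with the selected entity.
counts = {"dave": 9, "erin": 4, "frank": 0, "carol": 8}
print(arrange_by_co_appearances(counts))  # ['dave', 'carol', 'erin', 'frank']
```

The other strategies described (grouping by hierarchy size, or minimizing total movement of the representations) would replace only the sort key, not the overall structure.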
In the third stage 815, the user selects the representation 830 for one of the entities that appears in at least one image along with the previously selected entity. The fourth stage 820 illustrates the resulting visualization of the relationships between the two selected entities and various non-selected entities. In this case, the representations 825 and 830 of the two selected entities are moved into the prominent location in the GUI, and a line 840 is drawn connecting them. In the middle of this line is a GUI item 835 indicating the number of images in which both of the selected entities appear (8). In addition, fanning out of the line 840 are additional lines for each of the non-selected entities that appear in at least one of the eight images with the two selected entities. As with the previous transition, some embodiments animate the transition between the stages 815 and 820. In some embodiments, the application removes the lines indicating connections, rearranges the entity representations, and then redraws the lines to the newly arranged entities.
In all of the above examples, the second and third selections are of entities that are tagged in at least one image along with the previously-selected entities. In some embodiments, if a user selects one of the representations for an entity with no co-appearances, the image organization application nullifies the previous selections such that the newly selected entity is the only selected entity. Other embodiments, however, prevent the user from selecting entities that do not have any images in common with the currently selected entities.
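The nullification behavior described above amounts to a simple rule: extend the selection when the candidate set still shares at least one image, and otherwise restart from the newly selected entity. The sketch below illustrates that rule with hypothetical data and helper names; embodiments that instead reject the selection would simply return the old selection unchanged.

```python
def co_appearance_count(images, entities):
    """Count images in which every entity in `entities` is tagged."""
    required = set(entities)
    return sum(1 for tags in images.values() if required <= set(tags))

def select_entity(images, selection, new_entity):
    """Apply the selection rule: extend if there is a co-appearance,
    otherwise nullify the previous selections."""
    candidate = selection + [new_entity]
    if co_appearance_count(images, candidate) > 0:
        return candidate       # extend the current selection
    return [new_entity]        # nullify: the new entity is the only selection

library = {"img1": {"alice", "bob"}, "img2": {"carol"}}
print(select_entity(library, ["alice"], "bob"))    # ['alice', 'bob']
print(select_entity(library, ["alice"], "carol"))  # ['carol']
```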
The second stage 1110 illustrates the resulting visualization. As shown, the application draws lines from the representation 1125 of the single selected entity to the representations of several non-selected entities. In some embodiments, as in this case, the application limits the non-selected entities to which connections are shown to those in the same hierarchical group as the selected entity (in this example, the top group with the largest entity representations). This visualization is similar to that shown in the first set of
The third and fourth stages 1115 and 1120 illustrate an additional aspect of the visualizations of some embodiments. In the third stage, the user moves the location indicator over a GUI item 1130 that indicates the number of images in which the selected entity and the particular non-selected entity corresponding to the representation 1135 both appear. While selecting this GUI item would cause the application to present these seven images in the GUI, moving the location indicator over the item causes the application to highlight the connection 1140 between the entity representation 1125 for the selected entity and the entity representation 1135 for the particular non-selected entity. Similarly, in the fourth stage, the user moves the location indicator over a GUI item 1145 for a different non-selected entity, and the connection 1150 between that different entity's representation 1155 and the selected entity representation 1125 is highlighted.
Also in the second stage, the user selects a third entity representation 1220. The third stage 1215 illustrates that, as a result, the application draws a connection to a new GUI item 1225 that indicates the number of images in which all three of the selected entities appear. Some embodiments animate the transition between the second stage 1210 and the third stage 1215 by moving the GUI item along the connection 1150 while changing the number displayed by the GUI item, while also drawing the connection 1230. While not shown in
One of ordinary skill in the art will recognize that different embodiments that use various combinations of the above-described features may also be possible for visualizing the relationships of entities within a set of content items, whether those content items are images in a photo library of an image organization application, or other types of content. For instance, the application could rearrange the entities as shown in
When a user has not interacted with the image organization application except to select a GUI item that causes a display of the entity representations (e.g., faces tab 207 of
Once a user selects a first entity, however, the application GUI transitions to the state 1310 to generate a visualization showing the counts of images with the selected entity and each non-selected entity (or at least each of a set of the non-selected entities). The application, in some embodiments, performs the operations 310-330 of
While in this state, the user can provide various inputs to further affect the GUI. In some embodiments, if the user moves a location indicator over a particular count item such as the GUI items 1130 or 1145 (or provides a different input in other embodiments), the application transitions to state 1320 to highlight the connection between the entities (or the entity representations) corresponding to the particular count item. Examples of this operation are shown in stages 1115 and 1120 of
In addition, while the application is in the state 1315, the user may select an additional entity representation within the GUI, causing the application to transition to state 1325. At this state 1325, the application generates a new visualization that shows the counts of images with the selected entities and each non-selected entity (or at least each of a set of the non-selected entities), then transitions back to state 1315 to display the generated visualization. The application, in some embodiments, again performs the operations 310-330 of
In addition, from either state 1315 or 1320, the user can select one of the count items in order to cause the application to transition to state 1330. In state 1330, the application displays the images that contain the entities corresponding to the selected count item (i.e., images in which the entities are all tagged). Examples of this state of the GUI include stage 710 of
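The state flow described in the preceding paragraphs can be summarized as a transition table. This sketch is illustrative only: the state names loosely mirror the reference numerals in the text (e.g., displaying the visualization corresponds to state 1315, highlighting a connection to state 1320, displaying images to state 1330), while the event names are hypothetical.

```python
# (current state, event) -> next state
TRANSITIONS = {
    ("display_entities", "select_entity"): "generate_visualization",        # -> 1310
    ("generate_visualization", "done"): "display_visualization",            # 1310 -> 1315
    ("display_visualization", "hover_count_item"): "highlight_connection",  # 1315 -> 1320
    ("highlight_connection", "move_off_item"): "display_visualization",     # 1320 -> 1315
    ("display_visualization", "select_entity"): "generate_new_visualization",  # 1315 -> 1325
    ("generate_new_visualization", "done"): "display_visualization",        # 1325 -> 1315
    ("display_visualization", "select_count_item"): "display_images",       # 1315 -> 1330
    ("highlight_connection", "select_count_item"): "display_images",        # 1320 -> 1330
}

def step(state, event):
    """Advance the GUI state machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Walk through one possible user interaction sequence.
state = "display_entities"
for event in ["select_entity", "done", "hover_count_item", "select_count_item"]:
    state = step(state, event)
print(state)  # display_images
```

Modeling the GUI this way makes it easy to see that the image-display state 1330 is reachable from both the plain visualization state and the highlighted-connection state, as the text describes.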
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more computational or processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, random access memory (RAM) chips, hard drives, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), etc. The computer readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
The bus 1405 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1400. For instance, the bus 1405 communicatively connects the processing unit(s) 1410 with the read-only memory 1430, the GPU 1415, the system memory 1420, and the permanent storage device 1435.
From these various memory units, the processing unit(s) 1410 retrieves instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. Some instructions are passed to and executed by the GPU 1415. The GPU 1415 can offload various computations or complement the image processing provided by the processing unit(s) 1410. In some embodiments, such functionality can be provided using CoreImage's kernel shading language.
The read-only-memory (ROM) 1430 stores static data and instructions that are needed by the processing unit(s) 1410 and other modules of the electronic system. The permanent storage device 1435, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 1400 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive, integrated flash memory) as the permanent storage device 1435.
Other embodiments use a removable storage device (such as a floppy disk, flash memory device, etc., and its corresponding drive) as the permanent storage device. Like the permanent storage device 1435, the system memory 1420 is a read-and-write memory device. However, unlike storage device 1435, the system memory 1420 is a volatile read-and-write memory, such as a random access memory. The system memory 1420 stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1420, the permanent storage device 1435, and/or the read-only memory 1430. For example, the various memory units include instructions for processing multimedia clips in accordance with some embodiments. From these various memory units, the processing unit(s) 1410 retrieves instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 1405 also connects to the input and output devices 1440 and 1445. The input devices 1440 enable the user to communicate information and select commands to the electronic system. The input devices 1440 include alphanumeric keyboards and pointing devices (also called “cursor control devices”), cameras (e.g., webcams), microphones or similar devices for receiving voice commands, etc. The output devices 1445 display images generated by the electronic system or otherwise output data. The output devices 1445 include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD), as well as speakers or similar audio output devices. Some embodiments include devices such as a touchscreen that function as both input and output devices.
Finally, as shown in
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessors or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself. In addition, some embodiments execute software stored in programmable logic devices (PLDs), ROM, or RAM devices.
As used in this specification and any claims of this application, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification and any claims of this application, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures (including