Annotating a collection of media content items

Information

  • Patent Grant
  • Patent Number
    12,056,441
  • Date Filed
    Monday, September 20, 2021
  • Date Issued
    Tuesday, August 6, 2024
  • CPC
    • G06F40/169
    • G06F16/24578
    • G06F16/41
    • G06F16/487
    • G06F16/489
  • Field of Search
    • CPC
      • G06F16/24578
      • G06F16/41
      • G06F16/487
      • G06F16/489
      • G06F16/48
      • G06F40/169
  • International Classifications
    • G06F40/169
    • G06F16/2457
    • G06F16/41
    • G06F16/48
    • G06F16/487
  • Term Extension
    21
Abstract
Various embodiments provide systems, methods, and computer-readable storage media for annotating a collection of media content items, such as digital images. According to some embodiments, an annotation system automatically determines one or more annotations for a plurality of media content items, and generates a collection of media content items that associates the determined annotations with the plurality of media content items. Depending on the embodiment, annotations that may be determined for the plurality of media content items (and associated with the collection of media content items) can include, without limitation, a caption, a geographic location, a category, a novelty measurement, an event, and a highlight media content item representing the collection.
Description
TECHNICAL FIELD

Embodiments described herein relate to media content and, more particularly, but not by way of limitation, to systems, methods, devices, and instructions for annotating a collection of media content items.


Background

Mobile devices, such as smartphones, are often used to generate media content items that can include, without limitation, text messages, digital images (e.g., photographs), videos, and animations. A plurality of messages can be organized into a collection (e.g., gallery) of messages, which an individual can share with other individuals over a network, such as through a social network.





BRIEF DESCRIPTION OF THE DRAWINGS

Various ones of the appended drawings merely illustrate some embodiments of the present disclosure and should not be considered as limiting its scope. The drawings are not necessarily drawn to scale. To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced, and like numerals may describe similar components in different views.



FIG. 1 is a block diagram showing an example messaging system, which can include an annotation system, for exchanging data (e.g., messages and associated content) over a network, according to some embodiments.



FIG. 2 is a block diagram illustrating further details regarding a messaging system that includes an annotation system, according to some embodiments.



FIG. 3 is a schematic diagram illustrating data which may be stored in the database of the messaging system, according to some embodiments.



FIG. 4 is a schematic diagram illustrating a structure of a message, according to some embodiments, generated by a messaging client application for communication.



FIG. 5 is a schematic diagram illustrating an example access-limiting process, in terms of which access to a media content item (e.g., an ephemeral message and associated multimedia payload of data) or a media content item collection (e.g., an ephemeral message story) may be time-limited (e.g., made ephemeral).



FIG. 6 is a block diagram illustrating various modules of an annotation system, according to some embodiments.



FIGS. 7 and 8 are flowcharts illustrating methods for annotating a collection of media content items, according to certain embodiments.



FIG. 9 is a block diagram illustrating a representative software architecture, which may be used in conjunction with various hardware architectures herein described.



FIG. 10 is a block diagram illustrating components of a machine, according to some embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein.





DETAILED DESCRIPTION

Various embodiments provide systems, methods, devices, and instructions for annotating a collection of media content items (e.g., a story or gallery of media content items). According to some embodiments, an annotation system automatically determines one or more annotations for a plurality of media content items, and generates a collection of media content items that associates the determined annotations with the plurality of media content items. Depending on the embodiment, annotations that may be determined for the plurality of media content items (and associated with the collection of media content items) can include, without limitation, a caption (e.g., single word or phrase), a geographic location, a category, a novelty measurement, an event (e.g., periodic event, ongoing event, or concluded event), and a highlight media content item (e.g., for representing the collection). One or more other annotations may be determined by the annotation system. For some embodiments, the plurality of media content items being processed (e.g., implemented into an annotated collection of media content items) is automatically identified using algorithms or rules that group (e.g., cluster or curate) the plurality of media content items from a larger plurality of media content items (e.g., stored in a media content item database) based on various factors or concepts (e.g., topics, events, places, celebrities, space/time proximity, media sources, breaking news, etc.). By annotating collections of media content items, various embodiments can improve a computing device's ability to search for, organize, or present such collections based on one or more of their determined characteristics.


The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail.


Reference will now be made in detail to embodiments of the present disclosure, examples of which are illustrated in the appended drawings. The present disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein.



FIG. 1 is a block diagram showing an example messaging system 100, which can include an annotation system, for exchanging data (e.g., messages and associated content) over a network 106, according to some embodiments. The messaging system 100 includes multiple client devices 102, each of which hosts a number of applications including a messaging client application 104. Each messaging client application 104 is communicatively coupled to other instances of the messaging client application 104 and a messaging server system 108 via the network 106 (e.g., the Internet).


Accordingly, each messaging client application 104 can communicate and exchange data with another messaging client application 104 and with the messaging server system 108 via the network 106. The data exchanged between messaging client applications 104, and between a messaging client application 104 and the messaging server system 108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video or other multimedia data).


The messaging server system 108 provides server-side functionality via the network 106 to a particular messaging client application 104. While certain functions of the messaging system 100 are described herein as being performed by either a messaging client application 104 or by the messaging server system 108, it will be appreciated that the location of certain functionality either within the messaging client application 104 or the messaging server system 108 is a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system 108, but to later migrate this technology and functionality to the messaging client application 104 where a client device 102 has a sufficient processing capacity.


The messaging server system 108 supports various services and operations that are provided to the messaging client application 104. Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client application 104. This data may include message content, client device information, geolocation information, media annotation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the messaging system 100 are invoked and controlled through functions available via user interfaces (UIs) of the messaging client application 104.


Turning now specifically to the messaging server system 108, an Application Program Interface (API) server 110 is coupled to, and provides a programmatic interface to, an application server 112. The application server 112 is communicatively coupled to a database server 118, which facilitates access to a database 120 in which is stored data associated with messages (e.g., collections of messages) processed by the application server 112.


Dealing specifically with the Application Program Interface (API) server 110, this server receives and transmits message data (e.g., commands and message payloads) between the client device 102 and the application server 112. Specifically, the API server 110 provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client application 104 in order to invoke functionality of the application server 112. The API server 110 exposes various functions supported by the application server 112, including: account registration; login functionality; the sending of messages, via the application server 112, from a particular messaging client application 104 to another messaging client application 104; the sending of media files (e.g., digital images or video) from a messaging client application 104 to the messaging server application 114, for possible access by another messaging client application 104; the setting of a collection of media content items (e.g., a story); the retrieval of a list of friends of a user of a client device 102; the retrieval of such collections; the retrieval of messages and content; the adding and deletion of friends to a social graph; the location of friends within a social graph; and the opening of an application event (e.g., relating to the messaging client application 104).


The application server 112 hosts a number of applications, systems, and subsystems, including a messaging server application 114, an annotation system 116, and a social network system 122. The messaging server application 114 implements a number of message processing technologies and functions, particularly related to the aggregation and other processing of media content items (e.g., textual and multimedia content items) included in messages received from multiple instances of the messaging client application 104. As will be described herein, media content items from multiple sources may be aggregated into collections of media content items (e.g., stories or galleries), which may be automatically annotated by various embodiments described herein. For example, the collections of media content items can be annotated by associating the collections with captions, geographic locations, categories, novelty measurements, events, highlight media content items, and the like. The collections of media content items can be made available for access, by the messaging server application 114, to the messaging client application 104. Other processor- and memory-intensive processing of data may also be performed server-side by the messaging server application 114, in view of the hardware requirements for such processing.


The application server 112 also includes the annotation system 116 that is dedicated to performing various image processing operations, typically with respect to digital images or video received within the payload of a message at the messaging server application 114.


The social network system 122 supports various social networking functions and services, and makes these functions and services available to the messaging server application 114. To this end, the social network system 122 maintains and accesses an entity graph 304 (FIG. 3) within the database 120. Examples of functions and services supported by the social network system 122 include the identification of other users of the messaging system 100 with which a particular user has relationships or is “following”, and also the identification of other entities and interests of a particular user.


The application server 112 is communicatively coupled to a database server 118, which facilitates access to a database 120 in which is stored data associated with messages processed by the messaging server application 114.



FIG. 2 is a block diagram illustrating further details regarding the messaging system 100 that includes an annotation system 206, according to some embodiments. Specifically, the messaging system 100 is shown to comprise the messaging client application 104 and the application server 112, which in turn embody a number of subsystems, namely an ephemeral timer system 202, a collection management system 204, and the annotation system 206.


The ephemeral timer system 202 is responsible for enforcing the temporary access to content permitted by the messaging client application 104 and the messaging server application 114. To this end, the ephemeral timer system 202 incorporates a number of timers that, based on duration and display parameters associated with a message, or collection of messages (e.g., a story), selectively display and enable access to messages and associated content via the messaging client application 104. Further details regarding the operation of the ephemeral timer system 202 are provided below.


The collection management system 204 is responsible for managing collections of media content items (e.g., collections of text, image, video, and audio data), which may be initially user curated or automatically generated based on various factors or concepts (e.g., topics, events, places, celebrities, space/time proximity, media sources, breaking news, etc.) and then annotated as described herein. In some examples, a collection of media content items (e.g., messages, including digital images, video, text, and audio) may be organized into a “gallery,” such as an “event gallery” or an “event story.” Such a collection may be made available for a specified time period, such as the duration of an event to which the media content items relate. For example, media content items relating to a music concert may be made available as a “story” for the duration of that music concert. The collection management system 204 may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client application 104. According to some embodiments, the icon comprises one or more media content items from the collection that are identified as highlighted media content items for the collection as described herein.


The collection management system 204 furthermore includes a curation interface 208 that allows a collection manager to manage and curate a particular collection of media content items. For example, the curation interface 208 enables an event organizer to curate a collection of media content items relating to a specific event (e.g., delete inappropriate media content items or redundant messages). Additionally, the collection management system 204 employs machine vision (or image recognition technology) and media content item rules to automatically curate a media content item collection. In certain embodiments, compensation may be paid to a user for inclusion of user-generated media content items into a collection. In such cases, the curation interface 208 operates to automatically make payments to such users for the use of their media content items.


The annotation system 206 determines one or more annotations for a plurality of media content items, and generates a collection of media content items that associates the determined annotations with the plurality of media content items as described herein. Depending on the embodiment, annotations that may be determined for the plurality of media content items (and associated with the collection of media content items) can include, without limitation, a caption, a geographic location, a category, a novelty measurement, an event, and a highlight media content item (e.g., for representing the collection). For some embodiments, the annotation system 206 determines a particular caption for a plurality of media content items by selecting the particular caption from a set of captions, the set of captions being extracted from the plurality of media content items. The annotation system 206 determines a particular geographic location for the plurality of media content items. The annotation system 206 determines a particular category for the plurality of media content items based on at least one of analysis of a set of visual labels identified for the plurality of media content items, or analysis of at least one caption in the set of captions. The annotation system 206 generates a collection of media content items that comprises the plurality of media content items and collection annotation data that at least associates the collection with the particular caption, with the particular geographic location, and with the particular category. The annotation system 206 provides the collection of media content items to a client device for access by a user at the client device.



FIG. 3 is a schematic diagram illustrating data 300 which may be stored in the database 120 of the messaging server system 108, according to certain embodiments. While the content of the database 120 is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database).


The database 120 includes message data stored within a message table 314. The entity table 302 stores entity data, including an entity graph 304. Entities for which records are maintained within the entity table 302 may include individuals, corporate entities, organizations, objects, places, events, etc. Regardless of type, any entity regarding which the messaging server system 108 stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown).


The entity graph 304 furthermore stores information regarding relationships and associations between entities. Such relationships may be social, professional (e.g., working at a common corporation or organization), interest-based, or activity-based, merely for example.


In an annotation table 312, the database 120 also stores annotation data, such as annotations applied to a message or a collection of media content items. As described herein, annotations applied to a collection of media content items can include, without limitation, a caption (e.g., single word or phrase), a geographic location, a category, a novelty measurement, an event (e.g., periodic event, ongoing event, or concluded event), and a highlight media content item (e.g., for representing the collection). Annotations applied to a message may include, for example, filters, media overlays, texture fills, and sample digital images. Filters, media overlays, texture fills, and sample digital images for which data is stored within the annotation table 312 are associated with and applied to videos (for which data is stored in a video table 310) or digital images (for which data is stored in an image table 308). In one example, a media overlay can be displayed as overlaid on a digital image or video during presentation to a recipient user. For example, a user may append a media overlay to a selected portion of a digital image, resulting in presentation of an annotated digital image that includes the media overlay over the selected portion of the digital image. In this way, a media overlay can be used, for example, as a digital sticker or a texture fill that a user can use to annotate or otherwise enhance a digital image, which may be captured by the user (e.g., a photograph).


Filters may be of various types, including user-selected filters from a gallery of filters presented to a sending user by the messaging client application 104 when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters) which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client application 104, based on geolocation information determined by a GPS unit of the client device 102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client application 104, based on other inputs or information gathered by the client device 102 during the message creation process. Examples of data filters include current temperature at a specific location, a current speed at which a sending user is traveling, battery life for a client device 102, or the current time.


Other annotation data that may be stored within the image table 308 is so-called “lens” data. A “lens” may be a real-time special effect and sound that may be added to an image or a video.


As mentioned above, the video table 310 stores video data which, in one embodiment, is associated with messages for which records are maintained within the message table 314. Similarly, the image table 308 stores image data associated with messages for which message data is stored in the entity table 302. The entity table 302 may associate various annotations from the annotation table 312 with various images and videos stored in the image table 308 and the video table 310.


A story table 306 stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection of media content items (e.g., a story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for which a record is maintained in the entity table 302) or automatically generated based on various factors or concepts (e.g., topics, events, places, celebrities, space/time proximity, media sources, breaking news, etc.). A user may create a “personal story” in the form of a collection of media content items that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client application 104 may include an icon that is user-selectable to enable a sending user to add specific media content items to his or her personal story.


A collection may also constitute a “live story,” which is a collection of media content items from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a “live story” may constitute a curated stream of user-submitted media content items from various locations and events. Users whose client devices have location services enabled and are at a common location event at a particular time may, for example, be presented with an option, via a user interface of the messaging client application 104, to contribute media content items to a particular live story. The live story may be identified to the user by the messaging client application 104, based on his or her location. The end result is a “live story” told from a community perspective.


A further type of media content item collection is known as a “location story,” which enables a user whose client device 102 is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some embodiments, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus).



FIG. 4 is a schematic diagram illustrating a structure of a message 400, according to some embodiments, generated by a messaging client application 104 for communication to a further messaging client application 104 or the messaging server application 114. The content of a particular message 400 is used to populate the message table 314 stored within the database 120, accessible by the messaging server application 114. Similarly, the content of a message 400 is stored in memory as “in-transit” or “in-flight” data of the client device 102 or the application server 112. The message 400 is shown to include the following components:


A message identifier 402: a unique identifier that identifies the message 400.


A message text payload 404: text, to be generated by a user via a user interface of the client device 102 and that is included in the message 400.


A message image payload 406: image data, captured by a camera component of a client device 102 or retrieved from memory of a client device 102, and that is included in the message 400.


A message video payload 408: video data, captured by a camera component or retrieved from a memory component of the client device 102 and that is included in the message 400.


A message audio payload 410: audio data, captured by a microphone or retrieved from the memory component of the client device 102, and that is included in the message 400.


A message annotation 412: annotation data (e.g., filters, stickers, texture fills, or other enhancements) that represents annotations to be applied to message image payload 406, message video payload 408, or message audio payload 410 of the message 400.


A message duration parameter 414: parameter value indicating, in seconds, the amount of time for which content of the message (e.g., the message image payload 406, message video payload 408, message audio payload 410) is to be presented or made accessible to a user via the messaging client application 104.


A message geolocation parameter 416: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message. Multiple message geolocation parameter 416 values may be included in the payload, each of these parameter values being associated with a respective media content item included in the content (e.g., a specific image within the message image payload 406, or a specific video in the message video payload 408).


A message story identifier 418: identifier values identifying one or more media content item collections (e.g., “stories”) with which a particular media content item in the message image payload 406 of the message 400 is associated. For example, multiple images within the message image payload 406 may each be associated with multiple media content item collections using identifier values.


A message tag 420: each message 400 may be tagged with multiple tags, each of which is indicative of the subject matter of content included in the message payload. For example, where a particular image included in the message image payload 406 depicts an animal (e.g., a lion), a tag value may be included within the message tag 420 that is indicative of the relevant animal. Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition.


A message sender identifier 422: an identifier (e.g., a messaging system identifier, email address or device identifier) indicative of a user of the client device 102 on which the message 400 was generated and from which the message 400 was sent.


A message receiver identifier 424: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device 102 to which the message 400 is addressed.


The contents (e.g., values) of the various components of message 400 may be pointers to locations in tables within which content data values are stored. For example, an image value in the message image payload 406 may be a pointer to (or address of) a location within an image table 308. Similarly, values within the message video payload 408 may point to data stored within a video table 310, values stored within the message annotations 412 may point to data stored in an annotation table 312, values stored within the message story identifier 418 may point to data stored in a story table 306, and values stored within the message sender identifier 422 and the message receiver identifier 424 may point to user records stored within an entity table 302.
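
The pointer-based layout described above can be illustrated with a short sketch. This is a minimal illustration, assuming hypothetical field names and an in-memory dictionary standing in for the image table 308; the patent does not prescribe a concrete schema.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Message:
        message_id: str                    # message identifier 402
        text: Optional[str] = None         # message text payload 404
        image_key: Optional[str] = None    # pointer into the image table 308
        video_key: Optional[str] = None    # pointer into the video table 310
        audio_key: Optional[str] = None    # pointer for the audio payload 410
        annotation_keys: List[str] = field(default_factory=list)  # annotation table 312
        duration_s: int = 10               # message duration parameter 414
        story_ids: List[str] = field(default_factory=list)        # story table 306 keys
        tags: List[str] = field(default_factory=list)             # message tag 420
        sender_id: str = ""                # entity table 302 key (sender 422)
        receiver_id: str = ""              # entity table 302 key (receiver 424)

    # Dereferencing a payload pointer against an in-memory stand-in for the
    # image table 308:
    image_table = {"img-001": b"<jpeg bytes>"}
    msg = Message(message_id="m-1", image_key="img-001", sender_id="u-42")
    image_bytes = image_table[msg.image_key]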



FIG. 5 is a schematic diagram illustrating an access-limiting process 500, in terms of which access to a media content item (e.g., an ephemeral message 502, and associated multimedia payload of data) or a media content item collection (e.g., an ephemeral message story 504) may be time-limited (e.g., made ephemeral). Though the access-limiting process 500 is described below with respect to the ephemeral message 502 and the ephemeral message story 504, the access-limiting process 500 can be applied to another type of media content item or collection of media content items, such as a collection of media content items annotated by an embodiment described herein.


An ephemeral message 502 is shown to be associated with a message duration parameter 506, the value of which determines an amount of time that the ephemeral message 502 will be displayed to a receiving user of the ephemeral message 502 by the messaging client application 104. In one embodiment, an ephemeral message 502 is viewable by a receiving user for up to a maximum of 10 seconds, depending on the amount of time that the sending user specifies using the message duration parameter 506.


The message duration parameter 506 and the message receiver identifier 424 are shown to be inputs to a message timer 512, which is responsible for determining the amount of time that the ephemeral message 502 is shown to a particular receiving user identified by the message receiver identifier 424. In particular, the ephemeral message 502 will only be shown to the relevant receiving user for a time period determined by the value of the message duration parameter 506. The message timer 512 is shown to provide output to a more generalized ephemeral timer system 202, which is responsible for the overall timing of display of content (e.g., an ephemeral message 502) to a receiving user.


The ephemeral message 502 is shown in FIG. 5 to be included within an ephemeral message story 504 (e.g., a personal story, or an event story). The ephemeral message story 504 has an associated story duration parameter 508, a value of which determines a time duration for which the ephemeral message story 504 is presented and accessible to users of the messaging system 100. The story duration parameter 508, for example, may be the duration of a music concert, where the ephemeral message story 504 is a collection of media content items pertaining to that concert. Alternatively, a user (either the owning user or a curator user) may specify the value for the story duration parameter 508 when performing the setup and creation of the ephemeral message story 504.


Additionally, each ephemeral message 502 within the ephemeral message story 504 has an associated story participation parameter 510, a value of which determines the duration of time for which the ephemeral message 502 will be accessible within the context of the ephemeral message story 504. Accordingly, a particular ephemeral message 502 may “expire” and become inaccessible within the context of the ephemeral message story 504, prior to the ephemeral message story 504 itself expiring in terms of the story duration parameter 508. The story duration parameter 508, story participation parameter 510, and message receiver identifier 424 each provide input to a story timer 514, which operationally determines, firstly, whether a particular ephemeral message 502 of the ephemeral message story 504 will be displayed to a particular receiving user and, if so, for how long. Note that the ephemeral message story 504 is also aware of the identity of the particular receiving user as a result of the message receiver identifier 424.


Accordingly, the story timer 514 operationally controls the overall lifespan of an associated ephemeral message story 504, as well as an individual ephemeral message 502 included in the ephemeral message story 504. In one embodiment, each and every ephemeral message 502 within the ephemeral message story 504 remains viewable and accessible for a time period specified by the story duration parameter 508. In a further embodiment, a certain ephemeral message 502 may expire, within the context of the ephemeral message story 504, based on a story participation parameter 510. Note that a message duration parameter 506 may still determine the duration of time for which a particular ephemeral message 502 is displayed to a receiving user, even within the context of the ephemeral message story 504. Accordingly, the message duration parameter 506 determines the duration of time that a particular ephemeral message 502 is displayed to a receiving user, regardless of whether the receiving user is viewing that ephemeral message 502 inside or outside the context of an ephemeral message story 504.
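
The interplay of these timers can be summarized in a short sketch. This is a minimal illustration, assuming timestamps in epoch seconds and hypothetical function names; per the description, the message duration parameter 506 governs on-screen display time, while the story duration parameter 508 and story participation parameter 510 jointly govern accessibility within a story.

    import time
    from typing import Optional

    def display_seconds(message_duration_s: int) -> int:
        # The message duration parameter 506 applies whether the message is
        # viewed inside or outside the context of a story.
        return message_duration_s

    def accessible_in_story(posted_at: float, story_created_at: float,
                            story_duration_s: float, participation_s: float,
                            now: Optional[float] = None) -> bool:
        # A message is accessible within a story only while both the story
        # duration (508) and the message's participation window (510) are
        # unexpired.
        now = time.time() if now is None else now
        story_alive = (now - story_created_at) < story_duration_s
        message_alive = (now - posted_at) < participation_s
        return story_alive and message_alive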


The ephemeral timer system 202 may furthermore operationally remove a particular ephemeral message 502 from the ephemeral message story 504 based on a determination that it has exceeded an associated story participation parameter 510. For example, when a sending user has established a story participation parameter 510 of 24 hours from posting, the ephemeral timer system 202 will remove the relevant ephemeral message 502 from the ephemeral message story 504 after the specified 24 hours. The ephemeral timer system 202 also operates to remove an ephemeral message story 504 either when the story participation parameter 510 for each and every ephemeral message 502 within the ephemeral message story 504 has expired, or when the ephemeral message story 504 itself has expired in terms of the story duration parameter 508.


In certain use cases, a creator of a particular ephemeral message story 504 may specify an indefinite story duration parameter 508. In this case, the expiration of the story participation parameter 510 for the last remaining ephemeral message 502 within the ephemeral message story 504 will determine when the ephemeral message story 504 itself expires. In this case, a new ephemeral message 502, added to the ephemeral message story 504, with a new story participation parameter 510, effectively extends the life of an ephemeral message story 504 to equal the value of the story participation parameter 510.


Responsive to the ephemeral timer system 202 determining that an ephemeral message story 504 has expired (e.g., is no longer accessible), the ephemeral timer system 202 communicates with the messaging system 100 (and, for example, specifically the messaging client application 104) to cause an indicium (e.g., an icon) associated with the relevant ephemeral message story 504 to no longer be displayed within a user interface of the messaging client application 104. Similarly, when the ephemeral timer system 202 determines that the message duration parameter 506 for a particular ephemeral message 502 has expired, the ephemeral timer system 202 causes the messaging client application 104 to no longer display an indicium (e.g., an icon or textual identification) associated with the ephemeral message 502.



FIG. 6 is a block diagram illustrating various modules of an annotation system 206, according to some embodiments. The annotation system 206 is shown as including a media content item grouping module 602, a caption determination module 604, a geographic location determination module 606, a category determination module 608, a novelty determination module 610, a periodic event determination module 612, an ongoing event determination module 614, a concluded event determination module 616, a highlight determination module 618, a collection generation module 620, and a collection provider module 622. The various modules of the annotation system 206 are configured to communicate with each other (e.g., via a bus, shared memory, or a switch). Any one or more of these modules may be implemented using one or more processors 600 (e.g., by configuring such one or more processors 600 to perform functions described for that module) and hence may include one or more of the processors 600.


Any one or more of the modules described may be implemented using hardware alone (e.g., one or more of the computer processors of a machine, such as machine 1000) or a combination of hardware and software. For example, any described module of the annotation system 206 may physically include an arrangement of one or more of the processors 600 (e.g., a subset of or among the one or more processors of the machine, such as the machine 1000) configured to perform the operations described herein for that module. As another example, any module of the annotation system 206 may include software, hardware, or both, that configure an arrangement of one or more processors 600 (e.g., among the one or more processors of the machine, such as the machine 1000) to perform the operations described herein for that module. Accordingly, different modules of the annotation system 206 may include and configure different arrangements of such processors 600 or a single arrangement of such processors 600 at different points in time. Moreover, any two or more modules of the annotation system 206 may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices.


The media content item grouping module 602 identifies a plurality of media content items. According to some embodiments, the plurality of media content items is identified by grouping (e.g., clustering) specific media content items based on one or more factors or concepts. Example factors/concepts can include, without limitation, topics, events, places, celebrities, space/time proximity, media sources, breaking news, and the like. For instance, specific media content items may be grouped into a plurality of media content items based on: proximity of geographic locations associated with the specific media content items; proximity of times associated with the specific media content items; topics associated with the specific media content items; media sources associated with the specific media content items; or media types associated with the specific media content items. Depending on the embodiment, the plurality of media content items may be identified by the media content item grouping module 602 from a larger plurality of media content items, which may be stored on a datastore (e.g., database) that collects media content items from a plurality of users (e.g., through their respective client devices 102). For instance, the larger plurality of media content items (from which the plurality of media content items is identified by the module 602) may comprise media content items posted to an online service, such as a social network platform, by users of the online service.


The plurality of media content items may be identified by the media content item grouping module 602 automatically, which may occur on a periodic basis (e.g., every fifteen to forty minutes) or on a near real-time basis. The plurality of media content items may be identified as part of a dynamic collection pipeline that gathers together media content items from one or more sources into coherent groups (e.g., coherent clusters) based on, for example, a topic (e.g., popular topic, fresh topic, widespread topic, breaking news, fashion, sports, etc.), a visual feature, proximity of space (e.g., at or around a similar geographic location, such as places with centroids less than 200 m apart), proximity of time (e.g., same time, same day, same day of the week, etc.), quality (e.g., quality of user providing the media content item or quality of the media content item), or some combination thereof.
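
A grouping pass of this kind can be sketched as follows. This is a minimal, hypothetical illustration using only space/time proximity (the 200 m centroid distance mentioned above and an assumed one-hour window); the actual pipeline may also weigh topics, visual features, quality, and other signals.

    import math

    def _meters(a, b):
        # Equirectangular approximation; adequate for sub-kilometer distances.
        lat1, lon1 = map(math.radians, a)
        lat2, lon2 = map(math.radians, b)
        x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
        y = lat2 - lat1
        return math.hypot(x, y) * 6_371_000

    def group_items(items, max_m=200, max_s=3600):
        # items: dicts with a "latlon" (lat, lon) pair and a "ts" timestamp
        # in seconds. Greedy single-linkage grouping: an item joins the first
        # group containing any sufficiently near-in-space-and-time member.
        groups = []
        for item in items:
            placed = False
            for group in groups:
                if any(_meters(item["latlon"], other["latlon"]) < max_m
                       and abs(item["ts"] - other["ts"]) < max_s
                       for other in group):
                    group.append(item)
                    placed = True
                    break
            if not placed:
                groups.append([item])
        return groups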


The caption determination module 604 determines a particular caption (e.g., particular caption phrase) for a plurality of media content items. According to some embodiments, the particular caption is determined by extracting a set of captions from the plurality of media content items and selecting the particular caption from the set of captions. In this way, the caption determination module 604 can determine a best caption to describe or represent the plurality of media content items.


For some embodiments, the caption determination module 604 extracts, from the plurality of media content items, a set of captions associated with one or more individual media content items in the plurality of media content items. Subsequently, the caption determination module 604 ranks captions in the set of captions by scoring one or more captions in the set of captions. The scoring of an individual caption (e.g., comprising a single word or a phrase) may be determined, for example, based on popularity of the individual caption within the plurality of media content items, uniqueness of the individual caption within the plurality of media content items, popularity of individual terms within the individual caption, independence of the individual caption within the plurality of media content items, or some combination thereof. For some embodiments, one or more of these factors are utilized to assign scores to individual captions extracted from the plurality of media content items. Thereafter, a scored caption having the highest rank (e.g., based on its assigned score) may be determined (e.g., selected) by the caption determination module 604 as the particular caption for the plurality of media content items.


The popularity of the individual caption within the plurality of media content items may be determined by the caption determination module 604 based on the number of media content items in the plurality that are associated with the individual caption, based on the number of users providing the media content items in the plurality (e.g., users who posted media content items to a social networking platform that are now included in the plurality) who have used the individual caption with respect to media content items not in the plurality (e.g., other media content items the users posted on the social network platform), or a combination of both.


The uniqueness of the individual caption within the plurality of media content items may be determined by the caption determination module 604 based on how frequently the individual caption is used for media content items associated with different geographic locations and different times, historical data on media content item submissions (e.g., historical media content item postings to a social networking platform), or a combination of both.


The popularity of individual terms within the individual caption may be determined by the caption determination module 604 based on media content items in the plurality of media content items that are not associated with the individual caption but that are associated with a caption that includes at least some (but not all) of the individual caption (e.g., sub-phrases of the individual caption). The popularity of individual terms may be determined by determining how many media content items in the plurality of media content items satisfy such a condition.


The independence of the individual caption, within the plurality of media content items, can indicate how sufficient the individual caption is in describing the majority of the media content items in the plurality. The caption determination module 604 may consider all the media content items in the plurality associated with a given caption that includes the individual caption and assign each of those associations a coverage score (e.g., between 0 and 1) that measures how much of the given caption is covered by the individual caption. The independence of the individual caption may be defined based on the distribution of coverage scores. A high independence score (e.g., independence score close to 1) may represent that the individual caption is typically used as a full caption when associated with a media content item in the plurality, whereas a low independence score (e.g., independence score close to 0) may represent that the individual caption is typically used along with other terms or phrases to form a full caption.
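
The ranking described above can be sketched for two of the four factors, popularity and independence (uniqueness and term popularity would additionally require historical and cross-collection data). The scoring combination below is an assumed illustration, not the patent's exact formula.

    from collections import Counter

    def coverage(candidate: str, full_caption: str) -> float:
        # Fraction of the full caption's tokens covered by the candidate,
        # i.e., the coverage score described above (between 0 and 1).
        cand_tokens = set(candidate.split())
        full_tokens = full_caption.split()
        if not full_tokens:
            return 0.0
        return sum(t in cand_tokens for t in full_tokens) / len(full_tokens)

    def rank_captions(item_captions):
        counts = Counter(c.lower() for c in item_captions if c)
        total = sum(counts.values())
        scored = []
        for cand in counts:
            popularity = counts[cand] / total
            # Independence: average coverage over the full captions that
            # contain the candidate.
            covers = [coverage(cand, full) for full in counts if cand in full]
            independence = sum(covers) / len(covers) if covers else 0.0
            scored.append((cand, popularity * independence))
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    captions = ["beach day", "beach day", "beach day vibes"]
    best_caption = rank_captions(captions)[0][0]   # "beach day"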


The geographic location determination module 606 determines a particular geographic location for the plurality of media content items. As used herein, a geographic location can comprise a physical location associated with geographic coordinates or a place identified by a place type (e.g., business establishment, restaurant, coffee shop, library, shopping mall, park, etc.) or a proper name (e.g., STARBUCKS, MCDONALDS, EIFFEL TOWER, WALMART, STAPLES CENTER, etc.). According to some embodiments, the particular geographic location is determined by the geographic location determination module 606 using a place recognition model that can infer where a particular media content item was created or captured based on processing visual content (e.g., a digital image) included in the media content item. The place recognition model may be trained on a place dataset (e.g., comprising digital images and associated identifiers for different places), which enables it to identify a learned place based on what is depicted in a new digital image. Additionally, the particular geographic location can be determined by the geographic location determination module 606 using metadata included with one or more media content items of the plurality of media content items.
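
For the metadata-based path, a simple sketch is possible. This assumes hypothetical items whose metadata carries latitude/longitude geotags; the collection's location is taken as the centroid of the available geotags (the place-recognition path would require a trained model and is not shown).

    def collection_location(items):
        # items: dicts whose optional "latlon" metadata is a (lat, lon) pair.
        coords = [item["latlon"] for item in items if item.get("latlon")]
        if not coords:
            return None  # fall back to the place recognition model
        lat = sum(c[0] for c in coords) / len(coords)
        lon = sum(c[1] for c in coords) / len(coords)
        return (lat, lon)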


The category determination module 608 determines a particular category (e.g., concert, sports game, fashion show, animal story, food, party, politics, protest, breaking news, etc.) for the plurality of media content items. According to some embodiments, the particular category is determined based on analysis (e.g., statistical analysis) of a set of visual labels identified for the plurality of media content items, based on analysis (e.g., statistical analysis) of at least one caption in a set of captions extracted from the plurality of media content items, or a combination of both. The set of visual labels (e.g., building, vehicle, grass, etc.) may be identified for a media content item by using a machine learning system (e.g., deep neural network model) that can identify an object or a concept appearing in visual content (e.g., digital image or video) of the media content item. For some embodiments, the category determination module 608 uses caption/visual label representations (e.g., vectors) for categories to determine the particular category for the plurality of media content items.


For example, a caption/visual label representation for a given category may be formed by determining captions and visual labels relevant to the given category and determining their individual relevance scores. The process for determining this may comprise identifying captions and visual labels relevant to the given category from media content items or collections of media content items known or identified to be related to the given category, and extracting and aggregating captions or visual labels from known/identified media content items and collections. A caption/visual label representation may be formed for each possible category that can be associated with the plurality of media content items.


Additionally, caption counts may be determined for each caption extracted from the plurality of media content items, and the resulting caption counts may be aggregated such that each caption count for a given caption is weighted based on the uniqueness of that given caption. This aggregation can result in a caption/visual label representation for the plurality of media content items.


Subsequently, for each possible category, the caption/visual label representation for the plurality of media content items is compared against the caption/visual label representation of the possible category. Eventually, the category determination module 608 can determine (e.g., select) the possible category having the smallest caption/visual label representation difference (e.g., vector difference) from the caption/visual label representation for the plurality of media content items.
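
The comparison step can be sketched as follows. This is a minimal illustration in which the caption/visual label representations are sparse weighted vectors (dicts) and "smallest difference" is realized as largest cosine similarity; the example category vectors are invented.

    import math

    def cosine(a: dict, b: dict) -> float:
        # Cosine similarity over sparse weighted vectors; keys absent from
        # either vector contribute zero to the dot product.
        dot = sum(a[k] * b.get(k, 0.0) for k in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    # Hypothetical caption/visual label representations per category:
    category_vectors = {
        "concert": {"musician": 0.9, "stage": 0.7, "crowd": 0.5},
        "sports game": {"stadium": 0.9, "crowd": 0.6, "ball": 0.8},
    }

    def pick_category(collection_vector: dict) -> str:
        return max(category_vectors,
                   key=lambda c: cosine(collection_vector, category_vectors[c]))

    print(pick_category({"stage": 0.8, "musician": 0.6, "crowd": 0.4}))  # concert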


The novelty determination module 610 determines a novelty measurement for the plurality of media content items. According to some embodiments, the novelty measurement is determined based on at least one caption in a set of captions extracted from the plurality of media content items, or based on at least one visual label in a set of visual labels identified for the plurality of media content items.


For example, the novelty measurement of the plurality of media content items can be determined by the novelty determination module 610 based on an aggregation of novelty measurements of individual captions in the set of captions, individual visual labels in the set of visual labels of the plurality of media content items, or both. The novelty measurement of individual captions or visual labels can be determined by the novelty determination module 610 determining a frequency of new media content items (e.g., those newly posted to a social networking platform by various users), across different time horizons (e.g., past twenty-four hours, average daily count in past week, same day last week, etc.), associated with the individual captions and visual labels. Based on determined frequencies of the new media content items (associated with the individual captions and visual labels), the novelty determination module 610 can determine a ratio of frequencies, corresponding to different time periods, that can represent a novelty measurement. For instance, for a given caption, a ratio of the twenty-four hour frequency of new media content items (associated with the given caption) to the average daily frequency of new media content items (associated with the given caption) can determine the novelty of the given caption.
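
For instance, the frequency-ratio computation just described can be sketched as follows, with the counts supplied as inputs (in practice they would be queried from the media content item datastore):

    def novelty(count_last_24h: int, daily_counts_last_week) -> float:
        # Ratio of today's volume to the average daily volume over the past
        # week; values well above 1.0 indicate a novel caption or label.
        if not daily_counts_last_week:
            return float("inf") if count_last_24h else 0.0
        avg_daily = sum(daily_counts_last_week) / len(daily_counts_last_week)
        if avg_daily == 0:
            return float("inf") if count_last_24h else 0.0
        return count_last_24h / avg_daily

    # A caption seen 120 times today versus ~20 times a day historically:
    print(novelty(120, [18, 22, 19, 21, 20, 20, 20]))  # -> 6.0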


The periodic event determination module 612 determines whether the plurality of media content items is associated with a periodic event (e.g., an event that occurs repeatedly on a periodic basis). Initially, the periodic event determination module 612 may determine that the plurality of media content items is associated with a particular event based on, for example, at least one caption extracted from the plurality of media content items, or at least one visual label identified for the plurality of media content items. For example, the periodic event determination module 612 may use a model that can recognize an event (e.g., party, concert, protest, etc.) based on visual content from the plurality of media content items. Once the particular event has been determined for the plurality of media content items, the periodic event determination module 612 can determine whether the particular event is a periodic event.


According to some embodiments, the periodic event determination module 612 determines whether the plurality of media content items is associated with a periodic event based on a similarity of at least one caption, in the set of captions, to a given caption of one or more other media content items associated with the periodic event. Additionally, for some embodiments, the periodic event determination module 612 determines whether the plurality of media content items is associated with a periodic event based on a similarity of at least one visual label, in the set of visual labels, to a given visual label of the one or more other media content items associated with the periodic event. For example, the periodic event determination module 612 may check whether one or more captions or visual labels of the plurality of media content items are similar to the captions and visual labels corresponding to media content items taken at the same geographic location or at (or around) the same time of the day (e.g., over the past several days or past week) as the plurality of media content items.
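
One way to realize this check is sketched below, using Jaccard overlap between caption/visual-label sets as an assumed stand-in for whatever similarity measure the system employs:

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    def looks_periodic(collection_labels: set,
                       historical_daily_labels,
                       threshold: float = 0.5) -> bool:
        # historical_daily_labels: one label set per past day, drawn from
        # media content items taken at the same place and time of day.
        # Periodic if most past days show similar captions/visual labels.
        if not historical_daily_labels:
            return False
        similar = sum(jaccard(collection_labels, day) >= threshold
                      for day in historical_daily_labels)
        return similar / len(historical_daily_labels) >= 0.5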


The ongoing event determination module 614 determines whether the plurality of media content items is associated with an ongoing event. Initially, the ongoing event determination module 614 may determine that the plurality of media content items is associated with a particular event based on, for example, at least one caption extracted from the plurality of media content items, or at least one visual label identified for the plurality of media content items. For example, the ongoing event determination module 614 may use a model that can recognize an event (e.g., party, concert, protest, etc.) based on visual content from the plurality of media content items. Once the particular event has been determined for the plurality of media content items, the ongoing event determination module 614 can determine whether the particular event is an ongoing event.


According to some embodiments, the ongoing event determination module 614 determines whether the plurality of media content items is associated with an ongoing event based on a trend of media content items being added to the plurality of media content items over a period of time. For instance, the ongoing event determination module 614 may determine a trend based on a number (e.g., volume) of new media content items (e.g., those newly posted to a social networking platform by various users). Additionally, an increasing or approximately stable number can signal that an event associated with the plurality of media content items is an ongoing event.


The concluded event determination module 616 determines whether the plurality of media content items is associated with a concluded event. Initially, the concluded event determination module 616 may determine that the plurality of media content items is associated with a particular event based on, for example, at least one caption extracted from the plurality of media content items, or at least one visual label identified for the plurality of media content items. For example, the concluded event determination module 616 may use a model that can recognize an event (e.g., party, concert, protest, etc.) based on visual content from the plurality of media content items. Once the particular event has been determined for the plurality of media content items, the concluded event determination module 616 can determine whether the particular event is a concluded event.


According to some embodiments, the concluded event determination module 616 determines that the plurality of media content items is associated with a concluded event in response to the ongoing event determination module 614 determining that the plurality of media content items is not associated with an ongoing event.


For some embodiments, the concluded event determination module 616 determines whether the plurality of media content items is associated with a concluded event based on a trend of media content items being added to the plurality of media content items over a period of time. For instance, the concluded event determination module 616 may determine a trend based on a number (e.g., volume) of new media content items (e.g., those newly posted to a social networking platform by various users). Additionally, a decreasing number can signal that an event associated with the plurality of media content items is a concluded event (or soon to conclude event).
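
The trend test used by both the ongoing event determination module 614 and the concluded event determination module 616 can be sketched as follows. The drop-off threshold is an assumed tuning parameter, not a value from the patent:

    def classify_event(new_item_counts, falling_frac: float = 0.25) -> str:
        # new_item_counts: number of media content items added to the
        # plurality per interval, oldest first.
        if len(new_item_counts) < 2:
            return "unknown"
        first, last = new_item_counts[0], new_item_counts[-1]
        if first and (first - last) / first > falling_frac:
            return "concluded"   # decreasing volume
        return "ongoing"         # increasing or approximately stable volume

    print(classify_event([40, 42, 45, 47]))  # -> ongoing
    print(classify_event([50, 30, 12, 4]))   # -> concluded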


The highlight determination module 618 determines a set of highlight media content items for the plurality of media content items. According to some embodiments, the set of highlight media content items is selected, from the plurality of media content items, based on a set of scores determined for each individual media content item in the plurality of media content items, which can then be combined into a combined score for each individual media content item. For example, the highlight determination module 618 may determine the set of highlight media content items by determining, for individual media content items in the plurality of media content items: a score representing how cohesive or representative (e.g., in terms of topics or visual labels) the individual media content item is with respect to the plurality of media content items; a score representing how descriptive the captions or tags (e.g., place tags) associated with the individual media content item are with respect to the plurality of media content items; or a score representing how many media content items similar to the individual media content item appear in the plurality of media content items. Additionally, the highlight determination module 618 may determine the set of highlight media content items by determining, for individual media content items in the plurality of media content items: a score representing whether the individual media content item meets a user or system preference (e.g., video content items preferred); or a score representing a quality of the individual media content item.
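Read as pseudocode, the per-item scoring just described amounts to computing several component scores and combining them; a weighted sum is one plausible combination. The weights, score names, and item layout below are assumptions for illustration.

```python
def combined_score(item_scores: dict, weights: dict) -> float:
    """Combine the component scores named above (cohesiveness,
    descriptiveness, similarity, preference, quality) as a weighted sum."""
    return sum(weights.get(name, 0.0) * value
               for name, value in item_scores.items())


def select_highlights(items, weights, k=5):
    """Rank items by combined score and keep the top k as highlights;
    each item is assumed to carry a precomputed `scores` dict."""
    ranked = sorted(items,
                    key=lambda item: combined_score(item["scores"], weights),
                    reverse=True)
    return ranked[:k]
```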


For an individual media content item in the plurality of media content items, the highlight determination module 618 can determine a score representing how cohesive or representative (e.g., in terms of topics or visual labels) the individual media content item is with respect to the plurality of media content items. In this way, an individual media content item associated with a “musician” visual label, or with a caption including the term “musician,” can be preferred as a highlight media content item for a collection of media content items associated with a concert event.


For some embodiments, a pre-compiled set of visual labels that are common or interesting with respect to a given type of event is used by the highlight determination module 618. For example, for a concert event, the pre-compiled set of visual labels can include such visual labels as “musician,” “drummer,” “guitarist,” “bassist,” and “backup singer,” which reflect a preference for highlight media content items that depict such visual content for collections associated with a concert event. Accordingly, a score determined by the highlight determination module 618 can represent whether the individual media content item is associated with at least one visual label included in the pre-compiled set of visual labels.


Similarly, for some embodiments, a pre-compiled set of relevant terms that are common or interesting with respect to a given event or a given type of event is used by the highlight determination module 618. For instance, for a football game, the pre-compiled set of relevant terms can include such terms as “touchdown” or “Super Bowl,” which reflects a preference for highlight media content items associated with a football game event. Accordingly, a score determined by the highlight determination module 618 can represent whether the individual media content item is associated with at least one term included in the pre-compiled set of relevant terms.
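Both the pre-compiled label check and the pre-compiled term check reduce to set membership tests, roughly as sketched below; the example sets echo the ones named in the text, and the binary 1.0/0.0 scoring is an illustrative simplification.

```python
CONCERT_LABELS = {"musician", "drummer", "guitarist", "bassist", "backup singer"}
FOOTBALL_TERMS = {"touchdown", "super bowl"}


def label_match_score(item_labels: set, event_labels: set) -> float:
    """1.0 when the item carries at least one visual label from the
    pre-compiled set for the event type, else 0.0."""
    return 1.0 if item_labels & event_labels else 0.0


def term_match_score(caption: str, event_terms: set) -> float:
    """1.0 when the caption mentions at least one pre-compiled term."""
    text = caption.lower()
    return 1.0 if any(term in text for term in event_terms) else 0.0
```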


For an individual media content item in the plurality of media content items, the highlight determination module 618 can determine a score representing how descriptive the captions or tags (e.g., place tags) associated with the individual media content item are with respect to the plurality of media content items, based on the uniqueness of a caption associated with the individual media content item, on whether the individual media content item has a geo-filter, on whether the individual media content item has a venue filter, or on some combination thereof.


For an individual media content item in the plurality of media content items, the highlight determination module 618 can determine a score representing how many similar media content items appear in the plurality of media content items, based on the aggregated similarity of media content items in the plurality of media content items to the individual media content item (e.g., similarity with respect to a time, a geographic location, a caption, or a visual feature).
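One assumed reading of this aggregated-similarity score is a mean pairwise similarity, as in the short sketch below; the `pairwise_similarity` callable stands in for whatever time, location, caption, or visual comparison an embodiment uses.

```python
def aggregated_similarity_score(item, others, pairwise_similarity) -> float:
    """Mean similarity of one item to every other item in the plurality;
    `pairwise_similarity` returns a value in [0, 1] for two items."""
    if not others:
        return 0.0
    return sum(pairwise_similarity(item, other) for other in others) / len(others)
```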


For an individual media content item in the plurality of media content items, the highlight determination module 618 can determine a score representing a quality of the individual media content item by determining a creator quality score representing user interactions with media content items posted by a user providing the individual media content item. Additionally, the highlight determination module 618 can determine a score representing a quality of the individual media content item by determining a media quality score for the individual media content item based on input signals of the individual media content item, which can be used to filter out individual media content items that are, for example, too dark, too bright, too shaky, blurry, or lack valuable visual content.
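The media quality score can be thought of as a threshold filter over normalized input signals. In the sketch below, the signal names and threshold values are purely illustrative assumptions.

```python
def passes_media_quality(signals: dict,
                         min_brightness: float = 0.15,
                         max_brightness: float = 0.90,
                         max_shake: float = 0.30,
                         max_blur: float = 0.40) -> bool:
    """Reject items that are too dark, too bright, too shaky, or blurry;
    `signals` holds measurements normalized to [0, 1]."""
    return (min_brightness <= signals["brightness"] <= max_brightness
            and signals["shake"] <= max_shake
            and signals["blur"] <= max_blur)
```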


For some embodiments, determining the set of highlight media content items by the highlight determination module 618 comprises selecting a cover media content item, from the set of highlight media content items, that will represent the plurality of media content items as a collection of media content items accessible by a client device associated with a user. For example, to represent the plurality of media content items as a collection, at least some portion of the cover media content item may be used to generate a graphical tile that can be presented to a user on a graphical user interface (e.g., displayed on a client device 102 associated with the user) as the representation of the collection. The cover media content item may be selected from the set of highlight media content items by scoring media content items in the set of highlight media content items, and selecting the cover media content item based on the resulting scores (e.g., the cover media content item corresponding to the highest score). The components for scoring a given highlight media content item can include, without limitation: the original score that resulted in the given highlight media content item being included in the set of highlight media content items; the representativeness of visual features (e.g., visual labels) of the given highlight media content item relative to the plurality of media content items (e.g., what the story is generally about); or one or more user or system preferences for media content items (e.g., preferences for non-selfie media content items and non-captioned media content items). The representativeness of visual features of the given highlight media content item relative to the plurality of media content items may be determined by measuring a cosine similarity of the aggregated visual labels of the plurality of media content items to the visual labels of the given highlight media content item.
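The cosine-similarity measurement named above can be computed directly over bag-of-labels vectors, for example as follows; the Counter-based representation is an assumption made for the sketch.

```python
import math
from collections import Counter


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-labels vectors."""
    dot = sum(a[key] * b[key] for key in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def representativeness(item_labels: Counter, all_item_labels: list) -> float:
    """Compare one highlight's visual labels to the labels aggregated
    over the whole plurality of media content items."""
    aggregate = Counter()
    for labels in all_item_labels:
        aggregate.update(labels)
    return cosine_similarity(item_labels, aggregate)
```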


The collection generation module 620 generates a collection of media content items that comprises the plurality of media content items and collection annotation data that at least associates the collection with (if not also stores) determined annotations, such as those determined by various components of the annotation system 206 (e.g., the caption determination module 604, the geographic location determination module 606, the category determination module 608, the novelty determination module 610, the periodic event determination module 612, the ongoing event determination module 614, the concluded event determination module 616, and the highlight determination module 618).


The collection generation module 620 may generate a collection of media content items by generating one or more data structures (e.g., data records in the story table 306) that represent the collection of media content items. The annotations determined by the various components of the annotation system 206 may be stored in a data structure separate from the data structure for collections of media content items (e.g., storing annotations as records in the annotation table 312). By generating and storing the data structures, some embodiments store the collection of media content items for future access (e.g., by users of client devices 102). For instance, a set of data records generated (e.g., in the story table 306) for the collection of media content items can comprise a set of identifiers that identify the individual media content items that make up the collection, and can comprise data storing, representing, or associating (with the collection) annotations determined by the various components of the annotation system 206 (e.g., data records in the story table 306 refer to records in the annotation table 312 that store annotations associated with collections).
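The separation between collection records and annotation records might look like the following minimal sketch, with a story row holding item identifiers plus references into an annotation table; the field names and types are assumptions, not the actual schema of the story table 306 or annotation table 312.

```python
from dataclasses import dataclass, field


@dataclass
class AnnotationRecord:
    """One annotation-table row: a typed annotation value."""
    annotation_id: int
    kind: str   # e.g., "caption", "geo", "category", "event", "highlight"
    value: str


@dataclass
class StoryRecord:
    """One story-table row: member item identifiers plus references to
    the annotation rows associated with the collection."""
    story_id: int
    media_item_ids: list = field(default_factory=list)
    annotation_ids: list = field(default_factory=list)


story = StoryRecord(story_id=1,
                    media_item_ids=[101, 102, 103],
                    annotation_ids=[7, 8])
```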


For example, the collection annotation data may at least associate the collection with the particular caption determined by the caption determination module 604. The collection annotation data may at least associate the collection with the particular geographic location determined by the geographic location determination module 606. The collection annotation data may at least associate the collection with the particular category determined by the category determination module 608. The collection annotation data may at least associate the collection with the novelty measurement determined by the novelty determination module 610. The collection annotation data may at least associate the collection with a periodic event in response to the periodic event determination module 612 determining that the collection is associated with the periodic event. The collection annotation data may at least associate the collection with an ongoing event in response to the ongoing event determination module 614 determining that the collection is associated with the ongoing event. The collection annotation data may at least associate the collection with a concluded event in response to the concluded event determination module 616 determining that the collection is associated with the concluded event. The collection annotation data may at least associate the collection with the set of highlight media content items determined by the highlight determination module 618.


The collection provider module 622 provides the collection of media content items, generated by the collection generation module 620, to a client device (e.g., the client device 102) for access by a user at the client device. Depending on the embodiment, the collection provider module 622 may make the collection of media content items generated by the collection generation module 620 available for access by the client device. The providing, for instance, may comprise publishing the collection of media content items to an online service (e.g., a social networking platform) accessible by the client device. Additionally, the providing may comprise transmitting some or all of the collection of media content items to the client device for local storage at the client device and subsequent viewing.


For example, the collection of media content items generated by the collection generation module 620 may be stored with the messaging server system 108. In response to a request from the client device 102 to the messaging server system 108, the stored collection of media content items may be provided (e.g., transmitted in whole or in part) from the messaging server system 108 to the client device 102 for access at the client device 102 by a user (e.g., for viewing through a graphical user interface presented by the messaging client application 104). In another example, the collection of media content items may be published to an online resource, such as a website, which may be accessible by one or more users through their associated client devices. In some instances, the collection of media content items may be provided by the collection provider module 622 as a story or gallery, which may comprise a collection of messages or ephemeral messages.



FIG. 7 is a flowchart illustrating a method 700 for annotating a collection of media content items, according to certain embodiments. The method 700 may be embodied in computer-readable instructions for execution by one or more computer processors such that the operations of the method 700 may be performed in part or in whole by the messaging server system 108 or, more specifically, the annotation system 206 of the messaging server application 116. Accordingly, the method 700 is described below by way of example with reference to the annotation system 206. At least some of the operations of the method 700 may be deployed on various other hardware configurations, and the method 700 is not intended to be limited to being operated by the messaging server system 108. Though the steps of the method 700 may be depicted and described in a certain order, the order in which the steps are performed may vary between embodiments. For example, a step may be performed before, after, or concurrently with another step. Additionally, the components described above with respect to the method 700 are merely examples of components that may be used with the method 700, and other components may also be utilized in some embodiments.


At operation 702, the caption determination module 604 determines a particular caption for a plurality of media content items. According to some embodiments, the particular caption is determined by extracting a set of captions from the plurality of media content items and selecting the particular caption from the set of captions. For instance, the set of captions can be extracted from the plurality of media content items, a set of scores for the set of captions can be determined, a ranking for the set of captions can be determined based on the set of scores, and the particular caption can be selected from the set of captions based on the ranking.
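The extract, score, rank, and select steps of operation 702 can be condensed into a few lines, as in this assumed sketch; the popularity-based scorer is only one of the caption scores the document contemplates.

```python
from collections import Counter


def select_caption(media_items, score_caption):
    """Extract captions, score each with the supplied function, and
    return the top-ranked caption (or None when no captions exist)."""
    captions = [item["caption"] for item in media_items if item.get("caption")]
    if not captions:
        return None
    return max(captions, key=score_caption)


# Illustrative scorer: prefer the caption shared by the most items.
items = [{"caption": "beach day"}, {"caption": "beach day"}, {"caption": "sunset"}]
counts = Counter(item["caption"] for item in items)
print(select_caption(items, lambda caption: counts[caption]))  # "beach day"
```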


At operation 704, the geographic location determination module 606 determines a particular geographic location for the plurality of media content items. As used herein, a geographic location can comprise a physical location associated with geographic coordinates or a place identified by a place type (e.g., business establishment, restaurant, coffee shop, library, shopping mall, park, etc.) or a proper name (e.g., STARBUCKS, MCDONALDS, EIFFEL TOWER, WALMART, STAPLES CENTER, etc.).


At operation 706, the category determination module 608 determines a particular category for the plurality of media content items. According to some embodiments, the particular category is determined based on a set of visual labels identified for the plurality of media content items, analysis of at least one caption in a set of captions extracted from the plurality of media content items, or both.
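As a rough illustration, determining a category from visual labels could be a majority vote through a label-to-category mapping; the mapping table below is a hypothetical stand-in for whatever taxonomy an embodiment uses.

```python
from collections import Counter


def determine_category(visual_labels, label_to_category):
    """Map each identified visual label to a candidate category and
    return the most frequent one (None when nothing maps)."""
    votes = Counter(label_to_category[label] for label in visual_labels
                    if label in label_to_category)
    return votes.most_common(1)[0][0] if votes else None


print(determine_category(["guitarist", "drummer", "crowd"],
                         {"guitarist": "concert", "drummer": "concert",
                          "crowd": "event"}))  # "concert"
```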


At operation 708, the collection generation module 620 generates a collection of media content items that comprises the plurality of media content items and collection annotation data that at least associates the collection with (if not also stores) determined annotations, such as those determined by one or more of operations 702, 704, and 706. For example, the collection annotation data may at least associate the collection with the particular caption determined by operation 702. The collection annotation data may at least associate the collection with the particular geographic location determined by operation 704. The collection annotation data may at least associate the collection with the particular category determined by operation 706.


At operation 710, the collection provider module 622 provides the collection of media content items, generated by operation 708, to a client device (e.g., the client device 102) for access by a user at the client device. Depending on the embodiment, the collection provider module 622 may make the collection of media content items generated by operation 708 available for access by the client device. The providing, for instance, may comprise publishing the collection of media content items to an online service (e.g., a social networking platform) accessible by the client device. Additionally, the providing may comprise transmitting some or all of the collection of media content items to the client device for local storage at the client device and subsequent viewing.



FIG. 8 is a flowchart illustrating a method 800 for annotating a collection of media content items, according to certain embodiments. The method 800 may be embodied in computer-readable instructions for execution by one or more computer processors such that the operations of the method 800 may be performed in part or in whole by the messaging server system 108 or, more specifically, the annotation system 206 of the messaging server application 116. Accordingly, the method 800 is described below by way of example with reference to the annotation system 206. At least some of the operations of the method 800 may be deployed on various other hardware configurations, and the method 800 is not intended to be limited to being operated by the messaging server system 108. Though the steps of the method 800 may be depicted and described in a certain order, the order in which the steps are performed may vary between embodiments. For example, a step may be performed before, after, or concurrently with another step. Additionally, the components described above with respect to the method 800 are merely examples of components that may be used with the method 800, and other components may also be utilized in some embodiments.


At operation 802, the media content item grouping module 602 identifies a plurality of media content items. According to some embodiments, the plurality of media content items is identified by grouping (e.g., clustering) specific media content items based on one or more factors or concepts. Example factors/concepts can include, without limitation, topics, events, places, celebrities, space/time proximity, media sources, breaking news, and the like.
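For the space/time-proximity factor specifically, grouping can be as simple as a greedy single pass over time-sorted items; the distance approximation and thresholds below are illustrative assumptions, and factors such as topics or media sources would need their own similarity logic.

```python
from datetime import timedelta


def group_by_space_time(items, max_km=1.0, max_gap=timedelta(hours=2)):
    """Greedy grouping: a time-sorted item joins the current group while
    it stays within a distance and time gap of the group's last member;
    otherwise a new group starts."""
    def close_enough(a, b):
        # Crude equirectangular distance in km (ignores longitude scaling).
        dlat = (a["lat"] - b["lat"]) * 111.0
        dlon = (a["lon"] - b["lon"]) * 111.0
        return (dlat ** 2 + dlon ** 2) ** 0.5 <= max_km

    groups = []
    for item in sorted(items, key=lambda it: it["time"]):
        if (groups and close_enough(item, groups[-1][-1])
                and item["time"] - groups[-1][-1]["time"] <= max_gap):
            groups[-1].append(item)
        else:
            groups.append([item])
    return groups
```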


At operation 804, the caption determination module 604 determines a particular caption for a plurality of media content items. For some embodiments, operation 804 is similar to operation 702 of the method 700 described above with respect to FIG. 7.


At operation 806, the geographic location determination module 606 determines a particular geographic location for the plurality of media content items. For some embodiments, operation 806 is similar to operation 704 of the method 700 described above with respect to FIG. 7.


At operation 808, the category determination module 608 determines a particular category for the plurality of media content items. For some embodiments, operation 808 is similar to operation 706 of the method 700 described above with respect to FIG. 7.


At operation 810, the novelty determination module 610 determines a novelty measurement for the plurality of media content items. According to some embodiments, the novelty measurement is determined based on at least one caption in a set of captions extracted from the plurality of media content items, or based on at least one visual label in a set of visual labels identified for the plurality of media content items.
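One assumed way to turn visual labels into a novelty measurement is to score the rarity of the collection's labels against a historical corpus, as sketched here; the formula is illustrative and not taken from the document.

```python
def novelty_measurement(current_labels, historical_label_counts, total_items):
    """Mean rarity of the collection's visual labels: labels rarely (or
    never) seen in the historical corpus push the measurement toward 1.0."""
    if not current_labels:
        return 0.0
    rarities = [1.0 - historical_label_counts.get(label, 0) / max(total_items, 1)
                for label in current_labels]
    return sum(rarities) / len(rarities)
```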


At operation 812, the modules 612, 614 and 616 determine an event for the plurality of media content items. According to some embodiments, determining the event comprises the periodic event determination module 612 determining whether the plurality of media content items is associated with a periodic event (e.g., an event that occurs repeatedly on a periodic basis). The periodic event determination module 612 may determine whether the plurality of media content items is associated with a periodic event based on a similarity of at least one caption, in the set of captions, to a given caption of one or more other media content items associated with the periodic event. The periodic event determination module 612 may determine whether the plurality of media content items is associated with a periodic event based on a similarity of at least one visual label, in the set of visual labels, to a given visual label of the one or more other media content items associated with the periodic event.


According to some embodiments, determining the event comprises the ongoing event determination module 614 determining whether the plurality of media content items is associated with an ongoing event. The ongoing event determination module 614 may determine whether the plurality of media content items is associated with an ongoing event based on a trend of media content items being added to the plurality of media content items over a period of time. For some embodiments, determining the event comprises the concluded event determination module 616 determining whether the collection of media content items is further associated with a concluded event.


At operation 814, the highlight determination module 618 determines a set of highlight media content items for the plurality of media content items. According to some embodiments, the set of highlight media content items is selected, from the plurality of media content items, based on a set of scores determined for individual media content items in the plurality of media content items.


At operation 816, the collection generation module 620 generates a collection of media content items that comprises the plurality of media content items and collection annotation data that at least associates the collection with (if not also stores) determined annotations, such as those determined by one or more of operations 804, 806, 808, 810, 812, and 814. For example, the collection annotation data may at least associate the collection with the particular caption determined by operation 804. The collection annotation data may at least associate the collection with the particular geographic location determined by operation 806. The collection annotation data may at least associate the collection with the particular category determined by operation 808. The collection annotation data may at least associate the collection with the novelty measurement determined by operation 810. The collection annotation data may at least associate the collection with a periodic event in response to operation 812 determining that the collection is associated with the periodic event. The collection annotation data may at least associate the collection with an ongoing event in response to operation 812 determining that the collection is associated with the ongoing event. The collection annotation data may at least associate the collection with a concluded event in response to operation 812 determining that the collection is associated with the concluded event. The collection annotation data may at least associate the collection with the set of highlight media content items determined by operation 814. For some embodiments, operation 816 is similar to operation 708 of the method 700 described above with respect to FIG. 7.


At operation 818, the collection provider module 622 provides the collection of media content items, generated by operation 816, to a client device (e.g., the client device 102) for access by a user at the client device. For some embodiments, operation 818 is similar to operation 710 of the method 700 described above with respect to FIG. 7.



FIG. 9 is a block diagram illustrating an example software architecture 906, which may be used in conjunction with various hardware architectures herein described. FIG. 9 is a non-limiting example of a software architecture, and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software architecture 906 may execute on hardware such as the machine 1000 of FIG. 10 that includes, among other things, processors 1004, memory/storage 1006, and I/O components 1018. A representative hardware layer 952 is illustrated and can represent, for example, the machine 1000 of FIG. 10. The representative hardware layer 952 includes a processing unit 954 having associated executable instructions 904. The executable instructions 904 represent the executable instructions of the software architecture 906, including implementations of the methods, components, and so forth described herein. The hardware layer 952 also includes memory/storage modules 956, which also have the executable instructions 904. The hardware layer 952 may also comprise other hardware 958.


In the example architecture of FIG. 9, the software architecture 906 may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture 906 may include layers such as an operating system 902, libraries 920, applications 916, and a presentation layer 914. Operationally, the applications 916 or other components within the layers may invoke application programming interface (API) calls 908 through the software stack and receive a response in the example form of messages 912 to the API calls 908. The layers illustrated are representative in nature, and not all software architectures have all layers. For example, some mobile or special-purpose operating systems may not provide frameworks/middleware 918, while others may provide such a layer. Other software architectures may include additional or different layers.


The operating system 902 may manage hardware resources and provide common services. The operating system 902 may include, for example, a kernel 922, services 924 and drivers 926. The kernel 922 may act as an abstraction layer between the hardware and the other software layers. For example, the kernel 922 may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services 924 may provide other common services for the other software layers. The drivers 926 are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers 926 include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth, depending on the hardware configuration.


The libraries 920 provide a common infrastructure that is used by the applications 916 or other components or layers. The libraries 920 provide functionality that allows other software components to perform tasks more easily than by interfacing directly with the underlying operating system 902 functionality (e.g., kernel 922, services 924, or drivers 926). The libraries 920 may include system libraries 944 (e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries 920 may include API libraries 946 such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite, which may provide various relational database functions), web libraries (e.g., WebKit, which may provide web browsing functionality), and the like. The libraries 920 may also include a wide variety of other libraries 948 to provide many other APIs to the applications 916 and other software components/modules.


The frameworks/middleware 918 provide a higher-level common infrastructure that may be used by the applications 916 or other software components/modules. For example, the frameworks/middleware 918 may provide various graphical user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks/middleware 918 may provide a broad spectrum of other APIs that may be used by the applications 916 or other software components/modules, some of which may be specific to a particular operating system 902 or platform.


The applications 916 include built-in applications 938 or third-party applications 940. Examples of representative built-in applications 938 may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, or a game application. Third-party applications 940 may include an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform, and may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or other mobile operating systems. The third-party applications 940 may invoke the API calls 908 provided by the mobile operating system (such as operating system 902) to facilitate functionality described herein.


The applications 916 may use built-in operating system functions (e.g., kernel 922, services 924, or drivers 926), libraries 920, and frameworks/middleware 918 to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems, interactions with a user may occur through a presentation layer, such as presentation layer 914. In these systems, the application/component “logic” can be separated from the aspects of the application/component that interact with a user.



FIG. 10 is a block diagram illustrating components of a machine 1000, according to some embodiments, able to read instructions from a machine-readable medium (e.g., a computer-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically, FIG. 10 shows a diagrammatic representation of the machine 1000 in the example form of a computer system, within which instructions 1010 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 1000 to perform any one or more of the methodologies discussed herein may be executed. As such, the instructions 1010 may be used to implement modules or components described herein. The instructions 1010 transform the general, non-programmed machine 1000 into a particular machine 1000 programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine 1000 operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 1000 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 1000 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 1010, sequentially or otherwise, that specify actions to be taken by machine 1000. Further, while only a single machine 1000 is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions 1010 to perform any one or more of the methodologies discussed herein.


The machine 1000 may include processors 1004, memory/storage 1006, and I/O components 1018, which may be configured to communicate with each other such as via a bus 1002. The memory/storage 1006 may include a memory 1014, such as a main memory, or other memory storage, and a storage unit 1016, both accessible to the processors 1004 such as via the bus 1002. The storage unit 1016 and memory 1014 store the instructions 1010 embodying any one or more of the methodologies or functions described herein. The instructions 1010 may also reside, completely or partially, within the memory 1014, within the storage unit 1016, within at least one of the processors 1004 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 1000. Accordingly, the memory 1014, the storage unit 1016, and the memory of processors 1004 are examples of machine-readable media.


The I/O components 1018 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components 1018 that are included in a particular machine 1000 will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 1018 may include many other components that are not shown in FIG. 10. The I/O components 1018 are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various embodiments, the I/O components 1018 may include output components 1026 and input components 1028. The output components 1026 may include visual components (e.g., a display such as a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components 1028 may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instrument), tactile input components (e.g., a physical button, a touch screen that provides location or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like.


In further embodiments, the I/O components 1018 may include biometric components 1030, motion components 1034, environmental components 1036, or position components 1038, among a wide array of other components. For example, the biometric components 1030 may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components 1034 may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components 1036 may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 1038 may include location sensor components (e.g., a Global Positioning System (GPS) receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.


Communication may be implemented using a wide variety of technologies. The I/O components 1018 may include communication components 1040 operable to couple the machine 1000 to a network 1032 or devices 1020 via coupling 1022 and coupling 1024, respectively. For example, the communication components 1040 may include a network interface component or other suitable device to interface with the network 1032. In further examples, the communication components 1040 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 1020 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)).


Moreover, the communication components 1040 may detect identifiers or include components operable to detect identifiers. For example, the communication components 1040 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components 1040, such as, location via Internet Protocol (IP) geo-location, location via Wi-Fi® signal triangulation, location via detecting a NFC beacon signal that may indicate a particular location, and so forth.


As used herein, “ephemeral message” can refer to a message (e.g., message item) that is accessible for a time-limited duration (e.g., a maximum of 10 seconds). An ephemeral message may comprise text content, image content, audio content, video content, and the like. The access time for the ephemeral message may be set by the message sender or, alternatively, the access time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, an ephemeral message is transitory. A message duration parameter associated with an ephemeral message may provide a value that determines the amount of time that the ephemeral message can be displayed or accessed by a receiving user of the ephemeral message. An ephemeral message may be accessed or displayed using a messaging client software application capable of receiving and displaying content of the ephemeral message, such as an ephemeral messaging application.
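In code, the effect of the message duration parameter is a simple expiry test, sketched below under the assumption that the receiving client tracks when the message was received.

```python
from datetime import datetime, timedelta


def is_accessible(received_at, duration_seconds, now=None):
    """An ephemeral message remains accessible only while the message
    duration parameter has not elapsed since receipt."""
    now = now or datetime.utcnow()
    return now - received_at < timedelta(seconds=duration_seconds)
```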


As also used herein, “ephemeral message story” can refer to a collection of ephemeral message content items that is accessible for a time-limited duration, similar to an ephemeral message. An ephemeral message story may be sent from one user to another, and may be accessed or displayed using a messaging client software application capable of receiving and displaying the collection of ephemeral message content items, such as an ephemeral messaging application.


Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.


Although an overview of the inventive subject matter has been described with reference to specific embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure.


The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The detailed description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.


As used herein, modules may constitute software modules (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium), hardware modules, or any suitable combination thereof. A “hardware module” is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner. In various embodiments, one or more computer systems or one or more hardware modules thereof may be configured by software (e.g., an application or portion thereof) as a hardware module that operates to perform operations described herein for that module.


In some embodiments, a hardware module may be implemented electronically. For example, a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware module may be or include a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC. A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. As an example, a hardware module may include software encompassed within a CPU or other programmable processor.


Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module includes a CPU configured by software to become a special-purpose processor, the CPU may be configured as respectively different special-purpose processors (e.g., each included in a different hardware module) at different times. Software (e.g., a software module) may accordingly configure one or more processors, for example, to become or otherwise constitute a particular hardware module at one instance of time and to become or otherwise constitute a different hardware module at a different instance of time.


Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over suitable circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory (e.g., a memory device) to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information from a computing resource).


The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module in which the hardware includes one or more processors. Accordingly, the operations described herein may be at least partially processor-implemented, hardware-implemented, or both, since a processor is an example of hardware, and at least some operations within any one or more of the methods discussed herein may be performed by one or more processor-implemented modules, hardware-implemented modules, or any suitable combination thereof.


As used herein, the term “or” may be construed in either an inclusive or exclusive sense. The terms “a” or “an” should be read as meaning “at least one,” “one or more,” or the like. The use of words and phrases such as “one or more,” “at least,” “but not limited to,” or other like phrases shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.


Boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.


The description above includes systems, methods, devices, instructions, and computer media (e.g., computing machine program products) that embody illustrative embodiments of the disclosure. In the description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail.

Claims
  • 1. A method comprising:
      determining, by one or more processors, a set of scores for a plurality of media content items by determining an individual score for an individual media content item in the plurality of media content items;
      selecting, by the one or more processors, a set of highlight media content items from the plurality of media content items based on the set of scores;
      determining, by the one or more processors, an individual caption for the plurality of media content items, the determining of the individual caption comprising:
        extracting, from the plurality of media content items, a set of captions associated with one or more individual media content items in the plurality of media content items;
        determining a set of caption scores for the set of captions by assigning a select caption score for each select caption in the set of captions, at least one caption in the set of captions being assigned an individual caption score based on an independence of the at least one caption within the plurality of media content items by:
          determining a distribution of coverage scores, the determining of the distribution of coverage scores comprising assigning, for each individual media content item in the plurality of media content items associated with a given caption that includes the at least one caption, a coverage score that measures how much of the given caption is covered by the at least one caption; and
          determining the independence of the at least one caption based on the distribution of coverage scores, the independence indicating how sufficient the at least one caption is in describing a majority of the media content items in the plurality of media content items, the at least one caption comprising a phrase;
        determining a ranking for the set of captions based on the set of caption scores; and
        selecting, from the set of captions, the individual caption based on the ranking;
      generating, by the one or more processors, a collection of media content items that comprises the plurality of media content items, the individual caption, and collection annotation data, the collection annotation data associating the collection of media content items with the set of highlight media content items; and
      providing, by the one or more processors, the collection of media content items to a client device for access by a user at the client device, one or more media content items from the set of highlight media content items being used to generate a graphical tile on the client device, the graphical tile being used to graphically represent the collection of media content items on the client device.
  • 2. The method of claim 1, wherein the individual caption score is a first caption score, and wherein at least another caption in the set of captions is assigned a second caption score based on at least one of:
      a popularity of the select caption within the plurality of media content items; or
      a uniqueness of the select caption within the plurality of media content items.
  • 3. The method of claim 1, comprising: identifying, by the one or more processors, the plurality of media content items by grouping specific media content items based on at least one of proximity of geographic locations associated with the specific media content items, proximity of times associated with the specific media content items, topics associated with the specific media content items, media sources associated with the specific media content items, or media types associated with the specific media content items.
  • 4. The method of claim 1, comprising:
      determining, by the one or more processors, whether the plurality of media content items is associated with a periodic event based on at least one of:
        a similarity of at least one caption, in a set of captions extracted from the plurality of media content items, to a given caption of one or more other media content items associated with the periodic event; or
        a similarity of at least one visual label, in a set of visual labels identified for the plurality of media content items, to a given visual label of the one or more other media content items associated with the periodic event;
      the collection annotation data at least associating the collection of media content items with the periodic event in response to determining that the plurality of media content items is associated with the periodic event.
  • 5. The method of claim 1, comprising: determining, by the one or more processors, whether the plurality of media content items is associated with an ongoing event based on a trend of media content items being added to the plurality of media content items over a period of time, the collection annotation data at least associating the collection of media content items with the ongoing event in response to determining that the plurality of media content items is associated with the ongoing event.
  • 6. The method of claim 1, comprising: determining, by the one or more processors, whether the collection of media content items is associated with a concluded event, the collection annotation data at least associating the collection of media content items with the concluded event in response to determining that the plurality of media content items is associated with the concluded event.
  • 7. The method of claim 1, wherein the assigning of the select caption score for the select caption is based on a popularity of the select caption within the plurality of media content items, the popularity being determined based on at least one of:
      a number of media content items in the plurality of media content items that are associated with the select caption; or
      a number of users providing one or more media content items in the plurality of media content items who have used the select caption with respect to media content items not in the plurality of media content items.
  • 8. A system comprising:
      one or more processors; and
      one or more machine-readable mediums storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
        determining a set of scores for a plurality of media content items by determining an individual score for an individual media content item in the plurality of media content items;
        selecting a set of highlight media content items from the plurality of media content items based on the set of scores;
        determining an individual caption for the plurality of media content items, the determining of the individual caption comprising:
          extracting, from the plurality of media content items, a set of captions associated with one or more individual media content items in the plurality of media content items;
          determining a set of caption scores for the set of captions by assigning a select caption score for each select caption in the set of captions, at least one caption in the set of captions being assigned an individual caption score based on an independence of the at least one caption within the plurality of media content items, the independence indicating how sufficient the at least one caption is in describing a majority of the media content items in the plurality of media content items by:
            determining a distribution of coverage scores, the determining of the distribution of coverage scores comprising assigning, for each individual media content item in the plurality of media content items associated with a given caption that includes the at least one caption, a coverage score that measures how much of the given caption is covered by the at least one caption; and
            determining the independence of the at least one caption based on the distribution of coverage scores, the at least one caption comprising a phrase;
          determining a ranking for the set of captions based on the set of caption scores; and
          selecting, from the set of captions, the individual caption based on the ranking;
        generating a collection of media content items that comprises the plurality of media content items, the individual caption, and collection annotation data, the collection annotation data associating the collection of media content items with the set of highlight media content items; and
        providing the collection of media content items to a client device for access by a user at the client device, one or more media content items from the set of highlight media content items being used to generate a graphical tile on the client device, the graphical tile being used to graphically represent the collection of media content items on the client device.
  • 9. The system of claim 8, wherein the individual caption score is a first caption score, and wherein at least another caption in the set of captions is assigned a second caption score based on at least one of:
      a popularity of the select caption within the plurality of media content items; or
      a uniqueness of the select caption within the plurality of media content items.
  • 10. The system of claim 8, wherein the operations comprise: identifying the plurality of media content items by grouping specific media content items based on at least one of proximity of geographic locations associated with the specific media content items, proximity of times associated with the specific media content items, topics associated with the specific media content items, media sources associated with the specific media content items, or media types associated with the specific media content items.
  • 11. The system of claim 8, wherein the operations comprise: determining whether the plurality of media content items is associated with a periodic event based on at least one of: a similarity of at least one caption, in a set of captions extracted from the plurality of media content items, to a given caption of one or more other media content items associated with the periodic event; or a similarity of at least one visual label, in a set of visual labels identified for the plurality of media content items, to a given visual label of the one or more other media content items associated with the periodic event; the collection annotation data at least associating the collection of media content items with the periodic event in response to determining that the plurality of media content items is associated with the periodic event. (A periodic-event similarity sketch follows the claims below.)
  • 12. The system of claim 8, wherein the operations comprise: determining whether the plurality of media content items is associated with an ongoing event based on a trend of media content items being added to the plurality of media content items over a period of time, the collection annotation data at least associating the collection of media content items with the ongoing event in response to determining that the plurality of media content items is associated with the ongoing event. (A trend-detection sketch follows the claims below.)
  • 13. A non-transitory computer-readable medium storing instructions that, when executed by one or more computer processors, cause the one or more computer processors to perform operations comprising:
    determining a set of scores for a plurality of media content items by determining an individual score for an individual media content item in the plurality of media content items;
    selecting a set of highlight media content items from the plurality of media content items based on the set of scores;
    determining an individual caption for the plurality of media content items, the determining of the individual caption comprising:
      extracting, from the plurality of media content items, a set of captions associated with one or more individual media content items in the plurality of media content items;
      determining a set of caption scores for the set of captions by assigning a select caption score for each select caption in the set of captions, at least one caption in the set of captions being assigned an individual caption score based on an independence of the at least one caption within the plurality of media content items, the independence indicating how sufficient the at least one caption is in describing a majority of the media content items in the plurality of media content items by:
        determining a distribution of coverage scores, the determining of the distribution of coverage scores comprising assigning, for each individual media content item in the plurality of media content items associated with a given caption that includes the at least one caption, a coverage score that measures how much of the given caption is covered by the at least one caption; and
        determining the independence of the at least one caption based on the distribution of coverage scores, the at least one caption comprising a phrase;
      determining a ranking for the set of captions based on the set of caption scores; and
      selecting, from the set of captions, the individual caption based on the ranking;
    generating a collection of media content items that comprises the plurality of media content items, the individual caption, and collection annotation data, the collection annotation data associating the collection of media content items with the set of highlight media content items; and
    providing the collection of media content items to a client device for access by a user at the client device, one or more media content items from the set of highlight media content items being used to generate a graphical tile on the client device, the graphical tile being used to graphically represent the collection of media content items on the client device.
  • 14. The non-transitory computer-readable medium of claim 13, wherein the individual caption score is a first caption score, and wherein at least another caption in the set of captions is assigned a second caption score based on at least one of: a popularity of the select caption within the plurality of media content items; or a uniqueness of the select caption within the plurality of media content items.
  • 15. The non-transitory computer-readable medium of claim 13, wherein the operations comprise: identifying the plurality of media content items by grouping specific media content items based on at least one of proximity of geographic locations associated with the specific media content items, proximity of times associated with the specific media content items, topics associated with the specific media content items, media sources associated with the specific media content items, or media types associated with the specific media content items.
  • 16. The non-transitory computer-readable medium of claim 13, wherein the operations comprise: determining whether the plurality of media content items is associated with a periodic event based on at least one of: a similarity of at least one caption, in a set of captions extracted from the plurality of media content items, to a given caption of one or more other media content items associated with the periodic event; or a similarity of at least one visual label, in a set of visual labels identified for the plurality of media content items, to a given visual label of the one or more other media content items associated with the periodic event; the collection annotation data at least associating the collection of media content items with the periodic event in response to determining that the plurality of media content items is associated with the periodic event.
  • 17. The non-transitory computer-readable medium of claim 13, wherein the operations comprise: determining whether the plurality of media content items is associated with an ongoing event based on a trend of media content items being added to the plurality of media content items over a period of time, the collection annotation data at least associating the collection of media content items with the ongoing event in response to determining that the plurality of media content items is associated with the ongoing event.
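Illustrative sketches of claims 8 through 12 (and their computer-readable-medium counterparts, claims 13 through 17) follow. First, the caption-selection operations of claims 8 and 13. The minimal Python sketch below assumes token overlap as the coverage measure and the mean of the coverage distribution as the independence statistic; the claims leave both choices open, and the function names (coverage_score, caption_independence, select_collection_caption) are hypothetical rather than drawn from the disclosure.

def coverage_score(phrase: str, full_caption: str) -> float:
    """Fraction of the full caption's tokens that the candidate phrase covers.

    Token overlap is an assumed proxy for "how much of the given caption is
    covered"; the claims do not fix a specific measure.
    """
    phrase_tokens = set(phrase.lower().split())
    caption_tokens = full_caption.lower().split()
    if not caption_tokens:
        return 0.0
    return sum(t in phrase_tokens for t in caption_tokens) / len(caption_tokens)


def caption_independence(phrase: str, captions: list) -> float:
    """Summarize the distribution of coverage scores over the items whose
    captions contain the phrase; the mean is an assumed summary statistic."""
    scores = [coverage_score(phrase, c) for c in captions
              if phrase.lower() in c.lower()]
    return sum(scores) / len(scores) if scores else 0.0


def select_collection_caption(captions: list, candidates: list) -> str:
    """Rank candidate phrases by independence and select the top-ranked one."""
    ranked = sorted(candidates,
                    key=lambda p: caption_independence(p, captions),
                    reverse=True)
    return ranked[0] if ranked else ""

For example, over the captions ["happy birthday jess", "happy birthday"], the phrase "happy birthday" earns coverage scores of 2/3 and 1 (a mean near 0.83) and so outranks "jess", whose single coverage score is 1/3.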
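Claims 9 and 14 score other captions by their popularity or uniqueness within the collection. The sketch below assumes popularity as within-collection frequency and uniqueness as an IDF-style rarity measured against a background corpus, combined by multiplication; the claims require only at least one of the two signals, and both the rarity formula and the combination are assumptions.

import math
from collections import Counter


def popularity_uniqueness_scores(captions: list,
                                 background_doc_freq: dict,
                                 background_total: int) -> dict:
    """Score each distinct caption in the collection.

    popularity: share of items in this collection carrying the caption.
    uniqueness: IDF-style rarity against a background corpus (assumed form).
    """
    counts = Counter(captions)
    scores = {}
    for caption, count in counts.items():
        popularity = count / len(captions)
        doc_freq = background_doc_freq.get(caption, 0)
        uniqueness = math.log((background_total + 1) / (doc_freq + 1))
        scores[caption] = popularity * uniqueness  # assumed combination
    return scores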
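Claims 10 and 15 identify the plurality of media content items by grouping on signals such as geographic and temporal proximity. A single-pass greedy sketch over just those two signals appears below; the thresholds, the haversine distance, and the seed-based strategy are assumptions, and MediaItem is a hypothetical record type.

import math
from dataclasses import dataclass


@dataclass
class MediaItem:
    item_id: str
    lat: float
    lng: float
    timestamp: float  # seconds since epoch


def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


def group_items(items, max_km=1.0, max_secs=3600.0):
    """Single-pass grouping: attach each item to the first group whose seed
    item is within both the distance and time thresholds."""
    groups = []
    for item in items:
        for group in groups:
            seed = group[0]
            if (haversine_km(item.lat, item.lng, seed.lat, seed.lng) <= max_km
                    and abs(item.timestamp - seed.timestamp) <= max_secs):
                group.append(item)
                break
        else:
            groups.append([item])
    return groups

A production system would likely group on all five claimed signals jointly (topics, media sources, and media types included), but the two numeric signals suffice to show the shape of the operation.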
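Claims 11 and 16 associate a collection with a periodic event (an annual festival, for example) when its captions or visual labels resemble those of earlier occurrences. The sketch below assumes Jaccard set similarity and an arbitrary 0.5 threshold; the claims require only a similarity, without fixing either choice.

def jaccard(a: set, b: set) -> float:
    """Set similarity in [0, 1]."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0


def matches_periodic_event(captions: set,
                           visual_labels: set,
                           event_captions: set,
                           event_labels: set,
                           threshold: float = 0.5) -> bool:
    """True if either the captions or the visual labels of the collection
    resemble those of the prior occurrences of the event."""
    return (jaccard(captions, event_captions) >= threshold
            or jaccard(visual_labels, event_labels) >= threshold)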
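Claims 12 and 17 detect an ongoing event from the trend of media content items being added over a period of time. One plausible reading, sketched below, buckets arrival timestamps into fixed windows and fits a least-squares slope to the per-window counts; the window size and the non-negative-slope test are assumptions.

def is_ongoing_event(arrival_times: list,
                     window_secs: float = 600.0,
                     min_slope: float = 0.0) -> bool:
    """Fit a least-squares slope to per-window item counts and treat a slope
    of at least min_slope as a sustained (ongoing) trend."""
    if len(arrival_times) < 2:
        return False
    t0 = min(arrival_times)
    counts = {}
    for t in arrival_times:
        bucket = int((t - t0) // window_secs)
        counts[bucket] = counts.get(bucket, 0) + 1
    xs = sorted(counts)
    if len(xs) < 2:
        return False
    ys = [counts[x] for x in xs]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / denom
    return slope >= min_slope

Under this reading, a rising or steady arrival rate keeps the collection annotated as ongoing; once additions taper off, the slope turns negative and the annotation can be dropped.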
Parent Case Info

This application is a continuation of and claims the benefit of priority of U.S. patent application Ser. No. 15/941,743, filed Mar. 30, 2018, which is hereby incorporated by reference in its entirety.

US Referenced Citations (638)
Number Name Date Kind
666223 Shedlock Jan 1901 A
4581634 Williams Apr 1986 A
4975690 Torres Dec 1990 A
5072412 Henderson, Jr. et al. Dec 1991 A
5493692 Theimer et al. Feb 1996 A
5713073 Warsta Jan 1998 A
5754939 Herz et al. May 1998 A
5855008 Goldhaber et al. Dec 1998 A
5883639 Walton et al. Mar 1999 A
5999932 Paul Dec 1999 A
6012098 Bayeh et al. Jan 2000 A
6014090 Rosen et al. Jan 2000 A
6029141 Bezos et al. Feb 2000 A
6038295 Mattes Mar 2000 A
6049711 Yehezkel et al. Apr 2000 A
6154764 Nitta et al. Nov 2000 A
6167435 Druckenmiller et al. Dec 2000 A
6204840 Petelycky et al. Mar 2001 B1
6205432 Gabbard et al. Mar 2001 B1
6216141 Straub et al. Apr 2001 B1
6285381 Sawano et al. Sep 2001 B1
6285987 Roth et al. Sep 2001 B1
6310694 Okimoto et al. Oct 2001 B1
6317789 Rakavy et al. Nov 2001 B1
6334149 Davis, Jr. et al. Dec 2001 B1
6349203 Asaoka et al. Feb 2002 B1
6353170 Eyzaguirre et al. Mar 2002 B1
6446004 Cao et al. Sep 2002 B1
6449657 Stanbach et al. Sep 2002 B2
6456852 Bar et al. Sep 2002 B2
6484196 Maurille Nov 2002 B1
6487601 Hubacher et al. Nov 2002 B1
6523008 Avrunin Feb 2003 B1
6542749 Tanaka et al. Apr 2003 B2
6549768 Fraccaroli Apr 2003 B1
6618593 Drutman et al. Sep 2003 B1
6622174 Ukita et al. Sep 2003 B1
6631463 Floyd et al. Oct 2003 B1
6636247 Hamzy et al. Oct 2003 B1
6636855 Holloway et al. Oct 2003 B2
6643684 Malkin et al. Nov 2003 B1
6658095 Yoakum et al. Dec 2003 B1
6665531 Soderbacka et al. Dec 2003 B1
6668173 Greene Dec 2003 B2
6684238 Dutta Jan 2004 B1
6684257 Camut et al. Jan 2004 B1
6698020 Zigmond et al. Feb 2004 B1
6700506 Winkler Mar 2004 B1
6720860 Narayanaswami Apr 2004 B1
6724403 Santoro et al. Apr 2004 B1
6757713 Ogilvie et al. Jun 2004 B1
6832222 Zimowski Dec 2004 B1
6834195 Brandenberg et al. Dec 2004 B2
6836792 Chen Dec 2004 B1
6898626 Ohashi May 2005 B2
6959324 Kubik et al. Oct 2005 B1
6970088 Kovach Nov 2005 B2
6970907 Ullmann et al. Nov 2005 B1
6980909 Root et al. Dec 2005 B2
6981040 Konig et al. Dec 2005 B1
7020494 Spriestersbach et al. Mar 2006 B2
7027124 Foote et al. Apr 2006 B2
7072963 Anderson et al. Jul 2006 B2
7085571 Kalhan et al. Aug 2006 B2
7110744 Freeny, Jr. Sep 2006 B2
7124164 Chemtob Oct 2006 B1
7149893 Leonard et al. Dec 2006 B1
7173651 Knowles Feb 2007 B1
7188143 Szeto Mar 2007 B2
7203380 Chiu et al. Apr 2007 B2
7206568 Sudit Apr 2007 B2
7227937 Yoakum et al. Jun 2007 B1
7237002 Estrada et al. Jun 2007 B1
7240089 Boudreau Jul 2007 B2
7269426 Kokkonen et al. Sep 2007 B2
7280658 Amini et al. Oct 2007 B2
7315823 Brondrup Jan 2008 B2
7349768 Bruce et al. Mar 2008 B2
7356564 Hartselle et al. Apr 2008 B2
7394345 Ehlinger et al. Jul 2008 B1
7411493 Smith Aug 2008 B2
7423580 Markhovsky et al. Sep 2008 B2
7454442 Cobleigh et al. Nov 2008 B2
7508419 Toyama Mar 2009 B2
7512649 Faybishenko et al. Mar 2009 B2
7519670 Hagale et al. Apr 2009 B2
7535890 Rojas May 2009 B2
7546554 Chiu et al. Jun 2009 B2
7607096 Oreizy et al. Oct 2009 B2
7617176 Zeng Nov 2009 B2
7639943 Kalajan Dec 2009 B1
7650231 Gadler Jan 2010 B2
7668537 DeVries Feb 2010 B2
7739304 Naaman Jun 2010 B2
7770137 Forbes et al. Aug 2010 B2
7778973 Choi Aug 2010 B2
7779444 Glad Aug 2010 B2
7787886 Markhovsky et al. Aug 2010 B2
7796946 Eisenbach Sep 2010 B2
7801954 Cadiz et al. Sep 2010 B2
7856360 Kramer et al. Dec 2010 B2
8001204 Burtner et al. Aug 2011 B2
8032586 Challenger et al. Oct 2011 B2
8082255 Carlson, Jr. et al. Dec 2011 B1
8090351 Klein Jan 2012 B2
8098904 Ioffe et al. Jan 2012 B2
8099109 Altman et al. Jan 2012 B2
8112716 Kobayashi Feb 2012 B2
8131597 Hudetz Mar 2012 B2
8135166 Rhoads Mar 2012 B2
8136028 Loeb et al. Mar 2012 B1
8146001 Reese Mar 2012 B1
8161115 Yamamoto Apr 2012 B2
8161417 Lee Apr 2012 B1
8195203 Tseng Jun 2012 B1
8199747 Rojas et al. Jun 2012 B2
8208943 Petersen Jun 2012 B2
8214443 Hamburg Jul 2012 B2
8234350 Gu et al. Jul 2012 B1
8276092 Narayanan et al. Sep 2012 B1
8279319 Date Oct 2012 B2
8280406 Ziskind et al. Oct 2012 B2
8285199 Hsu et al. Oct 2012 B2
8287380 Nguyen et al. Oct 2012 B2
8301159 Hamynen et al. Oct 2012 B2
8306922 Kunal et al. Nov 2012 B1
8312086 Velusamy et al. Nov 2012 B2
8312097 Siegel et al. Nov 2012 B1
8326315 Phillips et al. Dec 2012 B2
8326327 Hymel et al. Dec 2012 B2
8332475 Rosen et al. Dec 2012 B2
8352546 Dollard Jan 2013 B1
8379130 Forutanpour et al. Feb 2013 B2
8385950 Wagner et al. Feb 2013 B1
8402097 Szeto Mar 2013 B2
8405773 Hayashi et al. Mar 2013 B2
8418067 Cheng et al. Apr 2013 B2
8423409 Rao Apr 2013 B2
8471914 Sakiyama et al. Jun 2013 B2
8472935 Fujisaki Jun 2013 B1
8510383 Hurley et al. Aug 2013 B2
8527345 Rothschild et al. Sep 2013 B2
8554627 Svendsen et al. Oct 2013 B2
8560612 Kilmer et al. Oct 2013 B2
8594680 Ledlie et al. Nov 2013 B2
8613089 Holloway et al. Dec 2013 B1
8660358 Bergboer et al. Feb 2014 B1
8660369 Llano et al. Feb 2014 B2
8660793 Ngo et al. Feb 2014 B2
8682350 Altman et al. Mar 2014 B2
8718333 Wolf et al. May 2014 B2
8724622 Rojas May 2014 B2
8732168 Johnson May 2014 B2
8744523 Fan et al. Jun 2014 B2
8745132 Obradovich Jun 2014 B2
8761800 Kuwahara Jun 2014 B2
8768876 Shim et al. Jul 2014 B2
8775972 Spiegel Jul 2014 B2
8788680 Naik Jul 2014 B1
8790187 Walker et al. Jul 2014 B2
8797415 Arnold Aug 2014 B2
8798646 Wang et al. Aug 2014 B1
8856349 Jain et al. Oct 2014 B2
8874677 Rosen et al. Oct 2014 B2
8886227 Schmidt et al. Nov 2014 B2
8909563 Jing Dec 2014 B1
8909679 Root et al. Dec 2014 B2
8909725 Sehn Dec 2014 B1
8972357 Shim et al. Mar 2015 B2
8995433 Rojas Mar 2015 B2
9015285 Ebsen et al. Apr 2015 B1
9020745 Johnston et al. Apr 2015 B2
9040574 Wang et al. May 2015 B2
9055416 Rosen et al. Jun 2015 B2
9094137 Sehn et al. Jul 2015 B1
9100806 Rosen et al. Aug 2015 B2
9100807 Rosen et al. Aug 2015 B2
9110985 Boscolo Aug 2015 B2
9113301 Spiegel et al. Aug 2015 B1
9119027 Sharon et al. Aug 2015 B2
9123074 Jacobs et al. Sep 2015 B2
9143382 Bhogal et al. Sep 2015 B2
9143681 Ebsen et al. Sep 2015 B1
9152477 Campbell et al. Oct 2015 B1
9191776 Root et al. Nov 2015 B2
9195912 Huang Nov 2015 B1
9204252 Root Dec 2015 B2
9225897 Sehn et al. Dec 2015 B1
9258459 Hartley Feb 2016 B2
9344606 Hartley et al. May 2016 B2
9385983 Sehn Jul 2016 B1
9396354 Murphy et al. Jul 2016 B1
9407712 Sehn Aug 2016 B1
9407816 Sehn Aug 2016 B1
9430783 Sehn Aug 2016 B1
9439041 Parvizi et al. Sep 2016 B2
9443227 Evans et al. Sep 2016 B2
9450907 Pridmore et al. Sep 2016 B2
9459778 Hogeg et al. Oct 2016 B2
9489661 Evans et al. Nov 2016 B2
9491134 Rosen et al. Nov 2016 B2
9532171 Allen et al. Dec 2016 B2
9537811 Allen et al. Jan 2017 B2
9628950 Noeth et al. Apr 2017 B1
9710821 Heath Jul 2017 B2
9817883 Barthel Nov 2017 B2
9854219 Sehn Dec 2017 B2
10089399 Gadepalli Oct 2018 B2
10515379 Gupta Dec 2019 B2
11163941 Al Majid et al. Nov 2021 B1
20020047868 Miyazawa Apr 2002 A1
20020078456 Hudson et al. Jun 2002 A1
20020087631 Sharma Jul 2002 A1
20020097257 Miller et al. Jul 2002 A1
20020122659 Mcgrath et al. Sep 2002 A1
20020128047 Gates Sep 2002 A1
20020144154 Tomkow Oct 2002 A1
20030001846 Davis et al. Jan 2003 A1
20030016247 Lai et al. Jan 2003 A1
20030017823 Mager et al. Jan 2003 A1
20030020623 Cao et al. Jan 2003 A1
20030023874 Prokupets et al. Jan 2003 A1
20030037124 Yamaura et al. Feb 2003 A1
20030052925 Daimon et al. Mar 2003 A1
20030101230 Benschoter et al. May 2003 A1
20030110503 Perkes Jun 2003 A1
20030126215 Udell Jul 2003 A1
20030148773 Spriestersbach et al. Aug 2003 A1
20030164856 Prager et al. Sep 2003 A1
20030229607 Zellweger et al. Dec 2003 A1
20040027371 Jaeger Feb 2004 A1
20040064429 Hirstius et al. Apr 2004 A1
20040078367 Anderson et al. Apr 2004 A1
20040111467 Willis Jun 2004 A1
20040158739 Wakai et al. Aug 2004 A1
20040189465 Capobianco et al. Sep 2004 A1
20040203959 Coombes Oct 2004 A1
20040215625 Svendsen et al. Oct 2004 A1
20040243531 Dean Dec 2004 A1
20040243688 Wugofski Dec 2004 A1
20050021444 Bauer et al. Jan 2005 A1
20050022211 Veselov et al. Jan 2005 A1
20050048989 Jung Mar 2005 A1
20050078804 Yomoda Apr 2005 A1
20050097176 Schatz et al. May 2005 A1
20050102381 Jiang et al. May 2005 A1
20050104976 Currans May 2005 A1
20050114783 Szeto May 2005 A1
20050119936 Buchanan et al. Jun 2005 A1
20050122405 Voss et al. Jun 2005 A1
20050193340 Amburgey et al. Sep 2005 A1
20050193345 Klassen et al. Sep 2005 A1
20050198128 Anderson Sep 2005 A1
20050223066 Buchheit et al. Oct 2005 A1
20050288954 McCarthy et al. Dec 2005 A1
20060026067 Nicholas et al. Feb 2006 A1
20060107297 Toyama et al. May 2006 A1
20060114338 Rothschild Jun 2006 A1
20060119882 Harris et al. Jun 2006 A1
20060242239 Morishima et al. Oct 2006 A1
20060252438 Ansamaa et al. Nov 2006 A1
20060265417 Amato et al. Nov 2006 A1
20060270419 Crowley et al. Nov 2006 A1
20060287878 Wadhwa et al. Dec 2006 A1
20070004426 Pfleging et al. Jan 2007 A1
20070038715 Collins et al. Feb 2007 A1
20070040931 Nishizawa Feb 2007 A1
20070073517 Panje Mar 2007 A1
20070073823 Cohen et al. Mar 2007 A1
20070075898 Markhovsky et al. Apr 2007 A1
20070082707 Flynt et al. Apr 2007 A1
20070136228 Petersen Jun 2007 A1
20070182541 Harris Aug 2007 A1
20070192128 Celestini Aug 2007 A1
20070198340 Lucovsky et al. Aug 2007 A1
20070198495 Buron et al. Aug 2007 A1
20070208751 Cowan et al. Sep 2007 A1
20070210936 Nicholson Sep 2007 A1
20070214180 Crawford Sep 2007 A1
20070214216 Carrer et al. Sep 2007 A1
20070233556 Koningstein Oct 2007 A1
20070233801 Eren et al. Oct 2007 A1
20070233859 Zhao et al. Oct 2007 A1
20070243887 Bandhole et al. Oct 2007 A1
20070244750 Grannan et al. Oct 2007 A1
20070255456 Funayama Nov 2007 A1
20070281690 Altman et al. Dec 2007 A1
20080022329 Glad Jan 2008 A1
20080025701 Ikeda Jan 2008 A1
20080032703 Krumm et al. Feb 2008 A1
20080033930 Warren Feb 2008 A1
20080043041 Hedenstroem et al. Feb 2008 A2
20080049704 Witteman et al. Feb 2008 A1
20080062141 Chaudhri Mar 2008 A1
20080076505 Nguyen et al. Mar 2008 A1
20080092233 Tian et al. Apr 2008 A1
20080094387 Chen Apr 2008 A1
20080104503 Beall et al. May 2008 A1
20080109844 Baldeschweiler et al. May 2008 A1
20080120409 Sun et al. May 2008 A1
20080147730 Lee et al. Jun 2008 A1
20080148150 Mall Jun 2008 A1
20080158230 Sharma et al. Jul 2008 A1
20080168033 Ott et al. Jul 2008 A1
20080168489 Schraga Jul 2008 A1
20080189177 Anderton et al. Aug 2008 A1
20080207176 Brackbill et al. Aug 2008 A1
20080208692 Garaventi et al. Aug 2008 A1
20080214210 Rasanen et al. Sep 2008 A1
20080222545 Lemay Sep 2008 A1
20080255976 Altberg et al. Oct 2008 A1
20080256446 Yamamoto Oct 2008 A1
20080256577 Funaki et al. Oct 2008 A1
20080266421 Takahata et al. Oct 2008 A1
20080270938 Carlson Oct 2008 A1
20080288338 Wiseman et al. Nov 2008 A1
20080306826 Kramer et al. Dec 2008 A1
20080313329 Wang et al. Dec 2008 A1
20080313346 Kujawa et al. Dec 2008 A1
20080318616 Chipalkatti et al. Dec 2008 A1
20090006191 Arankalle et al. Jan 2009 A1
20090006565 Velusamy et al. Jan 2009 A1
20090015703 Kim et al. Jan 2009 A1
20090024956 Kobayashi Jan 2009 A1
20090030774 Rothschild et al. Jan 2009 A1
20090030999 Gatzke et al. Jan 2009 A1
20090040324 Nonaka Feb 2009 A1
20090042588 Lottin et al. Feb 2009 A1
20090058822 Chaudhri Mar 2009 A1
20090063455 Li Mar 2009 A1
20090079846 Chou Mar 2009 A1
20090089678 Sacco et al. Apr 2009 A1
20090089710 Wood et al. Apr 2009 A1
20090093261 Ziskind Apr 2009 A1
20090132341 Klinger May 2009 A1
20090132453 Hangartner et al. May 2009 A1
20090132665 Thomsen et al. May 2009 A1
20090148045 Lee et al. Jun 2009 A1
20090153492 Popp Jun 2009 A1
20090157450 Athsani et al. Jun 2009 A1
20090157752 Gonzalez Jun 2009 A1
20090160970 Fredlund et al. Jun 2009 A1
20090163182 Gatti et al. Jun 2009 A1
20090177299 Van De Sluis Jul 2009 A1
20090192900 Collision Jul 2009 A1
20090199242 Johnson et al. Aug 2009 A1
20090215469 Fisher et al. Aug 2009 A1
20090232354 Camp, Jr. et al. Sep 2009 A1
20090234815 Boerries et al. Sep 2009 A1
20090239552 Churchill et al. Sep 2009 A1
20090249222 Schmidt et al. Oct 2009 A1
20090249244 Robinson et al. Oct 2009 A1
20090254540 Musgrove Oct 2009 A1
20090265647 Martin et al. Oct 2009 A1
20090288022 Almstrand et al. Nov 2009 A1
20090290812 Naaman Nov 2009 A1
20090291672 Treves et al. Nov 2009 A1
20090292608 Polachek Nov 2009 A1
20090319607 Belz et al. Dec 2009 A1
20090327073 Li Dec 2009 A1
20100062794 Han Mar 2010 A1
20100082427 Burgener et al. Apr 2010 A1
20100082693 Hugg et al. Apr 2010 A1
20100100568 Papin et al. Apr 2010 A1
20100113065 Narayan et al. May 2010 A1
20100130233 Parker May 2010 A1
20100131880 Lee et al. May 2010 A1
20100131895 Wohlert May 2010 A1
20100153144 Miller et al. Jun 2010 A1
20100159944 Pascal et al. Jun 2010 A1
20100161658 Hamynen et al. Jun 2010 A1
20100161831 Haas et al. Jun 2010 A1
20100162149 Sheleheda et al. Jun 2010 A1
20100183280 Beauregard et al. Jul 2010 A1
20100185552 Deluca et al. Jul 2010 A1
20100185665 Horn et al. Jul 2010 A1
20100191631 Weidmann Jul 2010 A1
20100197318 Petersen et al. Aug 2010 A1
20100197319 Petersen et al. Aug 2010 A1
20100198683 Aarabi Aug 2010 A1
20100198694 Muthukrishnan Aug 2010 A1
20100198826 Petersen et al. Aug 2010 A1
20100198828 Petersen et al. Aug 2010 A1
20100198862 Jennings et al. Aug 2010 A1
20100198870 Petersen et al. Aug 2010 A1
20100198917 Petersen et al. Aug 2010 A1
20100201482 Robertson et al. Aug 2010 A1
20100201536 Robertson et al. Aug 2010 A1
20100214436 Kim et al. Aug 2010 A1
20100223128 Dukellis et al. Sep 2010 A1
20100223343 Bosan et al. Sep 2010 A1
20100250109 Johnston et al. Sep 2010 A1
20100257196 Waters et al. Oct 2010 A1
20100259386 Holley et al. Oct 2010 A1
20100273509 Sweeney et al. Oct 2010 A1
20100281045 Dean Nov 2010 A1
20100306669 Della Pasqua Dec 2010 A1
20110004071 Faiola et al. Jan 2011 A1
20110010205 Richards Jan 2011 A1
20110029512 Folgner et al. Feb 2011 A1
20110040783 Uemichi et al. Feb 2011 A1
20110040804 Peirce et al. Feb 2011 A1
20110050909 Ellenby et al. Mar 2011 A1
20110050915 Wang et al. Mar 2011 A1
20110064388 Brown et al. Mar 2011 A1
20110066743 Hurley et al. Mar 2011 A1
20110083101 Sharon et al. Apr 2011 A1
20110102630 Rukes May 2011 A1
20110119133 Igelman et al. May 2011 A1
20110137881 Cheng et al. Jun 2011 A1
20110145327 Stewart Jun 2011 A1
20110145564 Moshir et al. Jun 2011 A1
20110159890 Fortescue et al. Jun 2011 A1
20110164163 Bilbrey et al. Jul 2011 A1
20110194761 Wang Aug 2011 A1
20110196737 Vadlamani Aug 2011 A1
20110197194 D'Angelo et al. Aug 2011 A1
20110202598 Evans et al. Aug 2011 A1
20110202968 Nurmi Aug 2011 A1
20110211534 Schmidt et al. Sep 2011 A1
20110213845 Logan et al. Sep 2011 A1
20110215966 Kim et al. Sep 2011 A1
20110225048 Nair Sep 2011 A1
20110235858 Hanson Sep 2011 A1
20110238763 Shin et al. Sep 2011 A1
20110255736 Thompson et al. Oct 2011 A1
20110273575 Lee Nov 2011 A1
20110282799 Huston Nov 2011 A1
20110283188 Farrenkopf Nov 2011 A1
20110302103 Carmel Dec 2011 A1
20110314419 Dunn et al. Dec 2011 A1
20110320373 Lee et al. Dec 2011 A1
20120011433 Skrenta Jan 2012 A1
20120028659 Whitney et al. Feb 2012 A1
20120030018 Passmore Feb 2012 A1
20120033718 Kauffman et al. Feb 2012 A1
20120036015 Sheikh Feb 2012 A1
20120036443 Ohmori et al. Feb 2012 A1
20120054797 Skog et al. Mar 2012 A1
20120059722 Rao Mar 2012 A1
20120062805 Candelore Mar 2012 A1
20120066219 Naaman Mar 2012 A1
20120084731 Filman et al. Apr 2012 A1
20120084835 Thomas et al. Apr 2012 A1
20120099800 Llano et al. Apr 2012 A1
20120108293 Law et al. May 2012 A1
20120110096 Smarr et al. May 2012 A1
20120113143 Adhikari et al. May 2012 A1
20120113272 Hata May 2012 A1
20120123830 Svendsen et al. May 2012 A1
20120123871 Svendsen et al. May 2012 A1
20120123875 Svendsen et al. May 2012 A1
20120124126 Alcazar et al. May 2012 A1
20120124176 Curtis et al. May 2012 A1
20120124458 Cruzada May 2012 A1
20120131507 Sparandara et al. May 2012 A1
20120131512 Takeuchi et al. May 2012 A1
20120136985 Popescu May 2012 A1
20120143760 Abulafia et al. Jun 2012 A1
20120150978 Monaco Jun 2012 A1
20120165100 Lalancette et al. Jun 2012 A1
20120166971 Sachson et al. Jun 2012 A1
20120169855 Oh Jul 2012 A1
20120172062 Altman et al. Jul 2012 A1
20120173991 Roberts et al. Jul 2012 A1
20120176401 Hayward et al. Jul 2012 A1
20120184248 Speede Jul 2012 A1
20120197724 Kendall Aug 2012 A1
20120200743 Blanchflower et al. Aug 2012 A1
20120209924 Evans et al. Aug 2012 A1
20120210244 De Francisco Lopez et al. Aug 2012 A1
20120212632 Mate et al. Aug 2012 A1
20120220264 Kawabata Aug 2012 A1
20120226748 Bosworth et al. Sep 2012 A1
20120233000 Fisher et al. Sep 2012 A1
20120236162 Imamura Sep 2012 A1
20120239761 Linner et al. Sep 2012 A1
20120250951 Chen Oct 2012 A1
20120252418 Kandekar et al. Oct 2012 A1
20120254325 Majeti et al. Oct 2012 A1
20120278387 Garcia et al. Nov 2012 A1
20120278692 Shi Nov 2012 A1
20120290637 Perantatos et al. Nov 2012 A1
20120299954 Wada et al. Nov 2012 A1
20120304052 Tanaka et al. Nov 2012 A1
20120304080 Wormald et al. Nov 2012 A1
20120307096 Ford et al. Dec 2012 A1
20120307112 Kunishige et al. Dec 2012 A1
20120319904 Lee et al. Dec 2012 A1
20120323933 He et al. Dec 2012 A1
20120324018 Metcalf et al. Dec 2012 A1
20130006759 Srivastava et al. Jan 2013 A1
20130024757 Doll et al. Jan 2013 A1
20130031093 Ishida Jan 2013 A1
20130036364 Johnson Feb 2013 A1
20130045753 Obermeyer et al. Feb 2013 A1
20130050260 Reitan Feb 2013 A1
20130055083 Fino Feb 2013 A1
20130057587 Leonard et al. Mar 2013 A1
20130059607 Herz et al. Mar 2013 A1
20130060690 Oskolkov et al. Mar 2013 A1
20130063369 Malhotra et al. Mar 2013 A1
20130067027 Song et al. Mar 2013 A1
20130071093 Hanks et al. Mar 2013 A1
20130080254 Thramann Mar 2013 A1
20130085790 Palmer et al. Apr 2013 A1
20130086072 Peng et al. Apr 2013 A1
20130090171 Holton et al. Apr 2013 A1
20130095857 Garcia et al. Apr 2013 A1
20130104053 Thornton et al. Apr 2013 A1
20130110885 Brundrett, III May 2013 A1
20130110978 Gordon May 2013 A1
20130111514 Slavin et al. May 2013 A1
20130128059 Kristensson May 2013 A1
20130129142 Miranda-Steiner May 2013 A1
20130129252 Lauper May 2013 A1
20130132477 Bosworth et al. May 2013 A1
20130145286 Feng et al. Jun 2013 A1
20130159110 Rajaram et al. Jun 2013 A1
20130159919 Leydon Jun 2013 A1
20130169822 Zhu et al. Jul 2013 A1
20130173729 Starenky et al. Jul 2013 A1
20130182133 Tanabe Jul 2013 A1
20130185131 Sinha et al. Jul 2013 A1
20130191198 Carlson et al. Jul 2013 A1
20130194301 Robbins et al. Aug 2013 A1
20130198176 Kim Aug 2013 A1
20130202198 Adam Aug 2013 A1
20130204825 Su Aug 2013 A1
20130218965 Abrol et al. Aug 2013 A1
20130218968 Mcevilly et al. Aug 2013 A1
20130222323 Mckenzie Aug 2013 A1
20130227476 Frey Aug 2013 A1
20130232194 Knapp et al. Sep 2013 A1
20130263031 Oshiro et al. Oct 2013 A1
20130265450 Barnes, Jr. Oct 2013 A1
20130267253 Case et al. Oct 2013 A1
20130275505 Gauglitz et al. Oct 2013 A1
20130290443 Collins et al. Oct 2013 A1
20130297581 Ghosh Nov 2013 A1
20130297694 Ghosh Nov 2013 A1
20130304646 De Geer Nov 2013 A1
20130304818 Brumleve Nov 2013 A1
20130311255 Cummins et al. Nov 2013 A1
20130325964 Berberat Dec 2013 A1
20130344896 Kirmse et al. Dec 2013 A1
20130346869 Asver et al. Dec 2013 A1
20130346877 Borovoy et al. Dec 2013 A1
20140006129 Heath Jan 2014 A1
20140011538 Mulcahy et al. Jan 2014 A1
20140019264 Wachman et al. Jan 2014 A1
20140032682 Prado et al. Jan 2014 A1
20140040371 Gurevich Feb 2014 A1
20140043204 Basnayake et al. Feb 2014 A1
20140045530 Gordon et al. Feb 2014 A1
20140046914 Das Feb 2014 A1
20140047016 Rao Feb 2014 A1
20140047045 Baldwin et al. Feb 2014 A1
20140047335 Lewis et al. Feb 2014 A1
20140049652 Moon et al. Feb 2014 A1
20140052485 Shidfar Feb 2014 A1
20140052633 Gandhi Feb 2014 A1
20140057660 Wager Feb 2014 A1
20140082651 Sharifi Mar 2014 A1
20140092130 Anderson et al. Apr 2014 A1
20140096029 Schultz Apr 2014 A1
20140114565 Aziz et al. Apr 2014 A1
20140122658 Haeger et al. May 2014 A1
20140122787 Shalvi et al. May 2014 A1
20140129953 Spiegel May 2014 A1
20140143143 Fasoli et al. May 2014 A1
20140149519 Redfern et al. May 2014 A1
20140155102 Cooper et al. Jun 2014 A1
20140173424 Hogeg Jun 2014 A1
20140173457 Wang et al. Jun 2014 A1
20140189592 Benchenaa et al. Jul 2014 A1
20140201227 Hamilton-Dick Jul 2014 A1
20140207679 Cho Jul 2014 A1
20140214471 Schreiner, III Jul 2014 A1
20140222564 Kranendonk et al. Aug 2014 A1
20140258405 Perkin Sep 2014 A1
20140265359 Cheng et al. Sep 2014 A1
20140266703 Dalley, Jr. et al. Sep 2014 A1
20140279061 Elimeliah et al. Sep 2014 A1
20140279436 Dorsey et al. Sep 2014 A1
20140279540 Jackson Sep 2014 A1
20140280537 Pridmore Sep 2014 A1
20140282096 Rubinstein et al. Sep 2014 A1
20140287779 O'keefe et al. Sep 2014 A1
20140289833 Briceno Sep 2014 A1
20140306986 Gottesman et al. Oct 2014 A1
20140317302 Naik Oct 2014 A1
20140324627 Haver et al. Oct 2014 A1
20140324629 Jacobs Oct 2014 A1
20140325383 Brown et al. Oct 2014 A1
20150020086 Chen et al. Jan 2015 A1
20150025977 Doyle Jan 2015 A1
20150046278 Pei et al. Feb 2015 A1
20150046436 Li Feb 2015 A1
20150071619 Brough Mar 2015 A1
20150087263 Branscomb et al. Mar 2015 A1
20150088622 Ganschow et al. Mar 2015 A1
20150095020 Leydon Apr 2015 A1
20150096042 Mizrachi Apr 2015 A1
20150116529 Wu et al. Apr 2015 A1
20150169827 Laborde Jun 2015 A1
20150172534 Miyakawa et al. Jun 2015 A1
20150178260 Brunson Jun 2015 A1
20150186368 Zhang Jul 2015 A1
20150222814 Li et al. Aug 2015 A1
20150227840 Codella Aug 2015 A1
20150261917 Smith Sep 2015 A1
20150312184 Langholz et al. Oct 2015 A1
20150331945 Lytkin Nov 2015 A1
20150350136 Flynn, III et al. Dec 2015 A1
20150365795 Allen et al. Dec 2015 A1
20150378502 Hu et al. Dec 2015 A1
20160006927 Sehn Jan 2016 A1
20160014063 Hogeg et al. Jan 2016 A1
20160042249 Babenko Feb 2016 A1
20160085773 Chang et al. Mar 2016 A1
20160085863 Allen et al. Mar 2016 A1
20160099901 Allen et al. Apr 2016 A1
20160180887 Sehn Jun 2016 A1
20160182422 Sehn et al. Jun 2016 A1
20160182875 Sehn Jun 2016 A1
20160239248 Sehn Aug 2016 A1
20160277419 Allen et al. Sep 2016 A1
20160321708 Sehn Nov 2016 A1
20160359993 Hendrickson et al. Dec 2016 A1
20170006094 Abou Mahmoud et al. Jan 2017 A1
20170061308 Chen et al. Mar 2017 A1
20170132230 Muralidhar May 2017 A1
20170161618 Swaminathan Jun 2017 A1
20170200065 Wang Jul 2017 A1
20170287006 Azmoodeh et al. Oct 2017 A1
20170352050 Nixon Dec 2017 A1
20190138617 Farre Guiu May 2019 A1
Foreign Referenced Citations (31)
Number Date Country
2887596 Jul 2015 CA
2051480 Apr 2009 EP
2151797 Feb 2010 EP
2399928 Sep 2004 GB
19990073076 Oct 1999 KR
20010078417 Aug 2001 KR
WO-1996024213 Aug 1996 WO
WO-1999063453 Dec 1999 WO
WO-2000058882 Oct 2000 WO
WO-2001029642 Apr 2001 WO
WO-2001050703 Jul 2001 WO
WO-2006118755 Nov 2006 WO
WO-2007092668 Aug 2007 WO
WO-2009043020 Apr 2009 WO
WO-2011040821 Apr 2011 WO
WO-2011119407 Sep 2011 WO
WO-2013008238 Jan 2013 WO
WO-2013045753 Apr 2013 WO
WO-2014006129 Jan 2014 WO
WO-2014068573 May 2014 WO
WO-2014115136 Jul 2014 WO
WO-2014194262 Dec 2014 WO
WO-2015192026 Dec 2015 WO
WO-2016044424 Mar 2016 WO
WO-2016054562 Apr 2016 WO
WO-2016065131 Apr 2016 WO
WO-2016100318 Jun 2016 WO
WO-2016100342 Jun 2016 WO
WO-2016149594 Sep 2016 WO
WO-2016179166 Nov 2016 WO
Non-Patent Literature Citations (39)
Entry
Y. Feng and M. Lapata, “Automatic Caption Generation for News Images,” in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, No. 4, pp. 797-812, Apr. 2013.
“A Whole New Story”, Snap, Inc., [Online] Retrieved from the Internet: <URL: https://www.snap.com/en-US/news/>, (2017), 13 pgs.
“Adding photos to your listing”, eBay, [Online] Retrieved from the Internet: <URL: http://pages.ebay.com/help/sell/pictures.html>, (accessed May 24, 2017), 4 pgs.
“U.S. Appl. No. 15/941,743, Advisory Action dated Aug. 10, 2020”, 5 pgs.
“U.S. Appl. No. 15/941,743, Examiner Interview Summary dated Jul. 15, 2020”, 3 pgs.
“U.S. Appl. No. 15/941,743, Examiner Interview Summary dated Dec. 21, 2020”, 3 pgs.
“U.S. Appl. No. 15/941,743, Final Office Action dated Mar. 17, 2021”, 16 pgs.
“U.S. Appl. No. 15/941,743, Final Office Action dated May 20, 2020”, 17 pgs.
“U.S. Appl. No. 15/941,743, Non Final Office Action dated Sep. 28, 2020”, 14 pgs.
“U.S. Appl. No. 15/941,743, Non Final Office Action dated Dec. 26, 2019”, 15 pgs.
“U.S. Appl. No. 15/941,743, Notice of Allowance dated Jul. 1, 2021”, 9 pgs.
“U.S. Appl. No. 15/941,743, Response filed Mar. 19, 2020 to Non Final Office Action dated Dec. 26, 2019”, 15 pgs.
“U.S. Appl. No. 15/941,743, Response filed May 17, 2021 to Final Office Action dated Mar. 17, 2021”, 16 pages.
“U.S. Appl. No. 15/941,743, Response filed Jul. 16, 2020 to Final Office Action dated May 20, 2020”, 16 pgs.
“U.S. Appl. No. 15/941,743, Response filed Dec. 21, 2020 to Non Final Office Action dated Sep. 28, 2020”, 15 pgs.
“BlogStomp”, StompSoftware, [Online] Retrieved from the Internet: <URL: http://stompsoftware.com/blogstomp>, (accessed May 24, 2017), 12 pgs.
“Cup Magic Starbucks Holiday Red Cups come to life with AR app”, Blast Radius, [Online] Retrieved from the Internet: <URL: https://web.archive.org/web/20160711202454/http://www.blastradius.com/work/cup-magic>, (2016), 7 pgs.
“Daily App: InstaPlace (iOS/Android): Give Pictures a Sense of Place”, TechPP, [Online] Retrieved from the Internet: <URL: http://techpp.com/2013/02/15/instaplace-app-review>, (2013), 13 pgs.
“InstaPlace Photo App Tell the Whole Story”, [Online] Retrieved from the Internet: <URL: youtu.be/uF_gFkg1hBM>, (Nov. 8, 2013), 113 pgs., 1:02 min.
“International Application Serial No. PCT/US2015/037251, International Search Report dated Sep. 29, 2015”, 2 pgs.
“Introducing Snapchat Stories”, [Online] Retrieved from the Internet: <URL: https://web.archive.org/web/20131026084921/https://www.youtube.com/watch?v=88Cu3yN-LIM>, (Oct. 3, 2013), 92 pgs.; 00:47 min.
“Macy's Believe-o-Magic”, [Online] Retrieved from the Internet: <URL: https://web.archive.org/web/20190422101854/https://www.youtube.com/watch?v=xvzRXy3J0Z0&feature=youtu.be>, (Nov. 7, 2011), 102 pgs.; 00:51 min.
“Macy's Introduces Augmented Reality Experience in Stores across Country as Part of Its 2011 Believe Campaign”, Business Wire, [Online] Retrieved from the Internet: <URL: https://www.businesswire.com/news/home/20111102006759/en/Macys-Introduces-Augmented-Reality-Experience-Stores-Country>, (Nov. 2, 2011), 6 pgs.
“Starbucks Cup Magic”, [Online] Retrieved from the Internet: <URL: https://www.youtube.com/watch?v=RWwQXi9RG0w>, (Nov. 8, 2011), 87 pgs.; 00:47 min.
“Starbucks Cup Magic for Valentine's Day”, [Online] Retrieved from the Internet: <URL: https://www.youtube.com/watch?v=8nvqOzjq10w>, (Feb. 6, 2012), 88 pgs.; 00:45 min.
“Starbucks Holiday Red Cups Come to Life, Signaling the Return of the Merriest Season”, Business Wire, [Online] Retrieved from the Internet: <URL: http://www.businesswire.com/news/home/20111115005744/en/2479513/Starbucks-Holiday-Red-Cups-Life-Signaling-Return>, (Nov. 15, 2011), 5 pgs.
“Surprise!”, [Online] Retrieved from the Internet: <URL: https://www.snap.com/en-US/news/post/surprise>, (Oct. 3, 2013), 1 pg.
Buscemi, Scott, “Snapchat introduces ‘Stories’, a narrative built with snaps”, [Online] Retrieved from the Internet: <URL: https://9to5mac.com/2013/10/03/snapchat-introduces-stories-a-narrative-built-with-snaps/>, (Oct. 3, 2013), 2 pgs.
Carthy, Roi, “Dear All Photo Apps: Mobli Just Won Filters”, TechCrunch, [Online] Retrieved from the Internet: <URL: https://techcrunch.com/2011/09/08/mobli-filters>, (Sep. 8, 2011), 10 pgs.
Etherington, Darrell, “Snapchat Gets Its Own Timeline With Snapchat Stories, 24-Hour Photo & Video Tales”, [Online] Retrieved from the Internet: <URL: https://techcrunch.com/2013/10/03/snapchat-gets-its-own-timeline-with-snapchat-stories-24-hour-photo-video-tales/>, (Oct. 3, 2013), 2 pgs.
Hamburger, Ellis, “Snapchat's next big thing: ‘Stories’ that don't just disappear”, [Online] Retrieved from the Internet: <URL: https://www.theverge.com/2013/10/3/4791934/snapchats-next-big-thing-stories-that-dont-just-disappear>, (Oct. 3, 2013), 5 pgs.
Janthong, Isaranu, “Instaplace ready on Android Google Play store”, Android App Review Thailand, [Online] Retrieved from the Internet: <URL: http://www.android-free-app-review.com/2013/01/instaplace-android-google-play-store.html>, (Jan. 23, 2013), 9 pgs.
Macleod, Duncan, “Macys Believe-o-Magic App”, [Online] Retrieved from the Internet: <URL: http://theinspirationroom.com/daily/2011/macys-believe-o-magic-app>, (Nov. 14, 2011), 10 pgs.
Macleod, Duncan, “Starbucks Cup Magic Lets Merry”, [Online] Retrieved from the Internet: <URL: http://theinspirationroom.com/daily/2011/starbucks-cup-magic>, (Nov. 12, 2011), 8 pgs.
Notopoulos, Katie, “A Guide to the New Snapchat Filters and Big Fonts”, [Online] Retrieved from the Internet: <URL: https://www.buzzfeed.com/katienotopoulos/a-guide-to-the-new-snapchat-filters-and-big-fonts?utm_term=.bkQ9qVZWe#.nv58YXpkV>, (Dec. 22, 2013), 13 pgs.
Panzarino, Matthew, “Snapchat Adds Filters, A Replay Function and for Whatever Reason, Time, Temperature and Speed Overlays”, TechCrunch, [Online] Retrieved form the Internet: <URL: https://techcrunch.com/2013/12/20/snapchat-adds-filters-new-font-and-for-some-reason-time-temperature-and-speed-overlays/>, (Dec. 20, 2013), 12 pgs.
Tripathi, Rohit, “Watermark Images in PHP and Save File on Server”, [Online] Retrieved from the Internet: <URL: http://code.rohitink.com/2012/12/28/watermark-images-in-php-and-save-file-on-server>, (Dec. 28, 2012), 4 pgs.
“U.S. Appl. No. 15/941,743, Corrected Notice of Allowability dated Oct. 8, 2021”.
U.S. Appl. No. 15/941,743, filed Mar. 30, 2018, Annotating a Collection of Media Content Items.
Related Publications (1)
Number Date Country
20220004703 A1 Jan 2022 US
Continuations (1)
Number Date Country
Parent 15941743 Mar 2018 US
Child 17479383 US