METADATA-BASED FILE-IDENTIFICATION SYSTEMS AND METHODS

Information

  • Patent Application Publication Number
    20150074152
  • Date Filed
    September 08, 2014
  • Date Published
    March 12, 2015
Abstract
In a system comprising media files resident on various devices, devices equipped with media servers can deliver files to devices with media clients for purposes of playback (rendering) and/or storage. Some media servers may be capable of delivering files in various formats and may offer clients delivery-format options. Media clients are aware of a preferential list of formats that can be supported on a device and can choose from the delivery options provided by media servers. Media files are introduced onto the devices either via means external to this system or by leveraging the system's media servers and media clients to transfer content between devices. When media files are introduced onto the devices by means external to this system, media scanners detect such media files and make them available to media servers and thus to the rest of the system.
Description
FIELD

This disclosure is directed to the field of software, and more particularly to identifying and serving audio and/or video media files in a distributed-media-library system.


BACKGROUND

Various media systems allow different devices to share locally-hosted media files with other devices connected via a network. In some cases, two or more devices may each have copies of a given piece of media content, copies that may differ from one another in terms of media format, resolution, bitrate, or the like. However, existing systems may fail to identify multiple copies of the same piece of media content within the system and may lack methods for automatically selecting which copy to use for playback in a given context.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a simplified multi-device distributed media library organization system in which distributed-media client/server device 600A, distributed-media client/server device 600B, distributed-media client/server device 600C, and distributed-media client/server device 600D are connected to network 150.



FIG. 2 illustrates a media-introduction routine for introducing audio and/or video media files into a distributed-media-library system, such as may be performed by a distributed-media client/server device in accordance with one embodiment.



FIG. 3 illustrates a metadata-based ID subroutine for generating a unique identifier for a given audio and/or video media file of a given media type, such as may be performed by a distributed-media client/server device in accordance with one embodiment.



FIG. 4 illustrates a routine for serving media within a shared media system, such as may be performed by a distributed-media client/server device in accordance with one embodiment.



FIG. 5 illustrates a routine for accessing and playing and/or storing media files, such as may be performed by a distributed-media client/server device in accordance with one embodiment.



FIG. 6 illustrates several components of an exemplary distributed-media client/server device in accordance with one embodiment.





DESCRIPTION

The phrases “in one embodiment”, “in various embodiments”, “in some embodiments”, and the like are used repeatedly. Such phrases do not necessarily refer to the same embodiment. The terms “comprising”, “having”, and “including” are synonymous, unless the context dictates otherwise.


In a distributed-media-library system, a piece of content can enter a user's repository via various devices. When a piece of content is introduced to the system, the content file is given a unique identifier that is derived from metadata associated with the content file (e.g., based on the file name and file size), as opposed to being based on the media content itself. All devices in the distributed-media-library system use the same process for generating a unique identifier, so a given piece of content will always get the same identifier, no matter where it first enters the distributed-media-library system.


For example, two video files recorded on two different mobile devices could easily have the same file name (e.g., “movie001.mp4”). However, it is unlikely that two video files recorded on two different devices will have the same file size, and it is extremely unlikely that two video files that have the same file name will also have the same file size. Therefore, an identifier that is sufficiently likely to be unique can be derived by combining the file name and file size of a given video file.


For media types whose file sizes may vary less than those of video files, an additional metadata element may be introduced. For example, for a lossy-compressed audio file, a sufficiently unique identifier could be derived from three metadata elements: file name, file size, and content duration. Similarly, for a lossy-compressed still image, a sufficiently unique identifier could be derived from three metadata elements: file name, file size, and creation date or similar timestamp metadata.
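
By way of non-limiting illustration only, the media-type-specific rules above might be expressed as a small lookup table plus a combining function, as in the following Python sketch; the element names, the FINGERPRINT_ELEMENTS table, and the hyphen-joined format are assumptions chosen for illustration, not requirements of any embodiment:

    # Hypothetical mapping from media type to the metadata elements that
    # are combined into an identifier, per the examples described above.
    FINGERPRINT_ELEMENTS = {
        "video": ("file_name", "file_size"),
        "audio": ("file_name", "file_size", "duration"),
        "image": ("file_name", "file_size", "creation_date"),
    }

    def fingerprint(media_type, metadata):
        # Join the type-specific elements, in order, into one string;
        # e.g. {"file_name": "movie001.mp4", "file_size": 3294196}
        # yields "movie001.mp4-3294196" for a video file.
        elements = FINGERPRINT_ELEMENTS[media_type]
        return "-".join(str(metadata[name]) for name in elements)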


In many distributed-media-library systems, a given piece of content may be transcoded when that content is transferred from one device to another. The transcoded derivative version of a given piece of content would very likely have a different set of metadata elements than the original file. However, most, if not all, media and/or container formats provide support for embedding arbitrary metadata (e.g., the unique identifier) into a given audio and/or video media file. Using such arbitrary-metadata support, when derivative versions of a given piece of content are made from an original, the unique identifier of the original file is embedded into the derivative version so that the derivative version can be identified as corresponding to the original when the derivative version is subsequently encountered and/or processed by the distributed-media-library system. Consequently, when the distributed-media-library system presents a list of available content to a user, each piece of content can be listed only once, regardless of how many derivative versions may exist on various devices in the distributed-media-library system.
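
For instance, an embodiment might carry the identifier into a transcoded derivative along the lines of the following sketch, where transcode and write_container_tag are hypothetical stand-ins for an encoder and for whatever metadata-embedding facility the target container format provides (they are not calls into any particular real library), and the tag key shown is arbitrary:

    def transcode_with_identifier(original_path, derived_path, unique_id,
                                  transcode, write_container_tag):
        # Create the derivative (which will have a new file name, size,
        # and date), then copy the original's identifier into it using
        # the container format's arbitrary-metadata facility.
        transcode(original_path, derived_path)
        write_container_tag(derived_path,
                            key="distributed-library-uid",
                            value=unique_id)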


More specifically, as discussed herein, in various embodiments, a processor and/or processing device may be configured (e.g., via non-transitory computer-readable storage media) to perform a first method for introducing audio and/or video media files into a distributed-media-library system, the first method including steps similar to some or all of the following (an illustrative sketch of these steps appears after the list):

    • obtaining an audio and/or video media file of a given media type for import into the distributed-media-library system;
    • determining that the audio and/or video media file has not previously been processed by any device of the distributed-media-library system;
    • obtaining a predetermined set of media-type-specific, fingerprint instructions for generating unique identifiers based on metadata associated with audio and/or video media files of various media types;
    • selecting a fingerprint instruction from the set of media-type-specific fingerprint instructions based at least in part on the given media type;
    • determining a first set of at least two metadata elements associated with the audio and/or video media file;
    • combining the first set of at least two metadata elements according to the selected fingerprint instruction to generate a deterministic, system-wide, metadata-derived unique identifier;
    • embedding the metadata-derived unique identifier in the audio and/or video media file;
    • determining that no version of the audio and/or video media file already exists in the distributed-media-library system;
    • recording the metadata-derived unique identifier in a distributed media-metadata database;
    • generating a derivative version of the audio and/or video media file, the derivative version being associated with a second set of at least two metadata elements, the second set of at least two metadata elements differing from the first set of at least two metadata elements; and/or
    • embedding the metadata-derived unique identifier in the derivative version to associate the derivative version with the audio and/or video media file when the derivative version is subsequently encountered in the distributed-media-library system.
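
As one illustrative sketch only (not a definitive implementation), the listed steps could be arranged roughly as follows in Python; the db object and the helper callables named in the comments are hypothetical stand-ins for the components described more fully below:

    def introduce_media_file(path, media_type, metadata, db, helpers):
        # Sketch only: `db` and the callables in `helpers`
        # (read_embedded_uid, fingerprint, embed_uid, make_derivative)
        # are hypothetical stand-ins for components described herein.
        uid = helpers.read_embedded_uid(path)          # previously processed?
        if uid is None:
            uid = helpers.fingerprint(media_type, metadata)
            helpers.embed_uid(path, uid)
        if db.has(uid):                                # a version already known
            db.add_version(uid, path)
        else:
            db.create_record(uid, path)
            derived = helpers.make_derivative(path)    # optional transcode
            if derived is not None:
                helpers.embed_uid(derived, uid)        # ties derivative to original
        return uid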


In some cases, determining that the audio and/or video media file has not previously been processed by any device of the distributed-media-library system may include attempting, but failing to locate an embedded deterministic, system-wide, metadata-derived unique identifier within the audio and/or video media file, or the like.


In some cases, determining that no version of the audio and/or video media file already exists in the distributed-media-library system may include querying the distributed media-metadata database to determine that the metadata-derived unique identifier has not been previously recorded in the distributed media-metadata database, or the like.


In some cases, when the audio and/or video media file is of a video media type, the first set of at least two metadata elements consists of a file-name metadata element and a file-size metadata element.


In some cases, when the audio and/or video media file is of an image media type, the first set of at least two metadata elements consists of a file-name metadata element, a file-size metadata element, and a creation or modification timestamp metadata element.


In some cases, when the audio and/or video media file is of an audio media type, the first set of at least two metadata elements consists of a file-name metadata element, a file-size metadata element, and a duration metadata element.


Described more fully below are many additional details, variations, and embodiments that may or may not include some or all of the steps, features, and/or functionality described above.


Reference is now made in detail to the description of the embodiments as illustrated in the drawings. While embodiments are described in connection with the drawings and related descriptions, there is no intent to limit the scope to the embodiments disclosed herein. On the contrary, the intent is to cover all alternatives, modifications, and equivalents. In alternate embodiments, additional devices, or combinations of illustrated devices, may be added or combined without limiting the scope to the embodiments disclosed herein.



FIG. 1 illustrates a simplified multi-device distributed media library organization system in which distributed-media client/server device 600A, distributed-media client/server device 600B, distributed-media client/server device 600C, and distributed-media client/server device 600D are connected to network 150.


In various embodiments, network 150 may include the Internet, a local area network (“LAN”), a wide area network (“WAN”), and/or other data network.


In various embodiments, additional infrastructure (e.g., cell sites, routers, gateways, firewalls, and the like), as well as additional devices, may be present. However, it is not necessary to show such infrastructure and implementation details in FIG. 1 in order to describe an illustrative embodiment.


In an exemplary scenario, media files may be resident on various interconnected devices that share media library content. At least some of the devices may include media scanners, media clients, and/or media servers. The devices themselves can interconnect via media servers and media clients resident thereon.



FIG. 2 illustrates a media-introduction routine 200 for introducing audio and/or video media files into a distributed-media-library system, such as may be performed by a distributed-media client/server device 600 in accordance with one embodiment.


In block 205, media-introduction routine 200 detects and/or obtains an audio and/or video media file of a given media type for import into the distributed-media-library system.


In decision block 210, media-introduction routine 200 determines whether the audio and/or video media file includes an embedded unique identifier. The presence of an embedded deterministic, system-wide, metadata-derived unique identifier would indicate that the audio and/or video media file has previously been processed by some device of the distributed-media-library system. If so, media-introduction routine 200 proceeds to decision block 225; otherwise, media-introduction routine 200 proceeds to metadata-based ID subroutine 300.


In subroutine block 300, media-introduction routine 200 calls subroutine 300 (see FIG. 3, discussed below) to generate a metadata-derived unique identifier corresponding to the audio and/or video media file obtained in block 205.


In block 220, media-introduction routine 200 uses a metadata-embedding facility provided by the media and/or container format to embed the deterministic, system-wide, metadata-derived unique identifier (generated in subroutine block 300) into the audio and/or video media file.


In decision block 225, media-introduction routine 200 determines whether a version of the audio and/or video media file already exists in the distributed-media-library system. In some embodiments, media-introduction routine 200 may query a distributed media-metadata database (e.g., distributed media-metadata database 640) to determine whether the metadata-derived unique identifier has been previously recorded. If so, media-introduction routine 200 proceeds to block 230; otherwise, media-introduction routine 200 proceeds to block 235.


In block 230, media-introduction routine 200 updates the distributed media-metadata database to indicate that a new derivative version (namely, the audio and/or video media file obtained in block 205) of the identified media content has been encountered.


In block 235, media-introduction routine 200 creates a record for the metadata-derived unique identifier (representing the media content of the audio and/or video media file) in the distributed media-metadata database.
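
By way of example only, a record created in block 235 (and extended in block 230 as new derivative versions are encountered) might take a shape like the following; the field names and values are hypothetical and not mandated by this disclosure:

    # Hypothetical record shape, keyed by the metadata-derived unique
    # identifier; field names and values are illustrative only.
    example_record = {
        "uid": "movie001.mp4-52428800",
        "versions": [
            {"device": "600A", "format": "video/mp4",
             "path": "/media/movie001.mp4"},
            {"device": "600C", "format": "video/webm",
             "path": "/cache/movie001.webm"},
        ],
    }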


In decision block 240, media-introduction routine 200 determines whether to generate a derivative version of the audio and/or video media file. In some cases, when a new piece of content is encountered, the distributed-media-library system may automatically create one or more derivative versions that will be suitable for transfer to the user's other media devices. In other cases, the user may provide a subsequent indication to transcode the audio and/or video media file or otherwise create a derivative version.


If media-introduction routine 200 determines to automatically generate a derivative version or otherwise receives an indication to do so, then media-introduction routine 200 proceeds to block 245; otherwise, media-introduction routine 200 proceeds to block 255.


In block 245, media-introduction routine 200 generates a derivative version of the audio and/or video media file. Generally, a derivative version is associated with a set of at least two metadata elements that differ from those of the original file. In other words, the derivative version will generally have a different file name, file size, creation/modification date, and/or other similar metadata compared to the original file from which it was generated.


In block 250, media-introduction routine 200 embeds into the derivative version the metadata-derived unique identifier that was derived in subroutine block 300 based on metadata elements associated with the audio and/or video media file obtained in block 205. Embedding the metadata-derived unique identifier in this manner enables the distributed-media-library system to associate the derivative version with the original audio and/or video media file when the derivative version is subsequently encountered in the distributed-media-library system.


In block 255, media-introduction routine 200 makes the audio and/or video media file and its associated unique identifier available to serve to the rest of the system. When the media file is subsequently transferred within the system, it is accompanied by its unique identifier. Thus, while media files may be changed in format by media servers to be adapted to the capabilities of a device, the media file's original identity is preserved, allowing the system to understand on which media servers a media file is resident regardless of its format.


Media-introduction routine 200 ends in ending block 299.



FIG. 3 illustrates a metadata-based ID subroutine 300 for generating a unique identifier for a given audio and/or video media file of a given media type, such as may be performed by a distributed-media client/server device 600 in accordance with one embodiment.


In block 305, metadata-based ID subroutine 300 obtains a predetermined set of media-type-specific, fingerprint instructions for generating unique identifiers based on metadata associated with audio and/or video media files of various media types. In various embodiments, a fingerprint instruction may identify a set of at least two metadata elements and provide instructions for combining those metadata elements into a unique identifier.


In block 310, metadata-based ID subroutine 300 selects a fingerprint instruction from the set of media-type-specific fingerprint instructions based at least in part on the given media type.


In block 315, metadata-based ID subroutine 300 determines the set of at least two metadata elements associated with the given audio and/or video media file according to the fingerprint instruction selected in block 310.


For example, in some embodiments, when the given audio and/or video media file is of a video media type, the set of at least two metadata elements may consist of a file-name metadata element and a file-size metadata element. In some embodiments, when the given audio and/or video media file is of an image media type, the set of at least two metadata elements consists of a file-name metadata element, a file-size metadata element, and a creation or modification timestamp metadata element. In some embodiments, when the given audio and/or video media file is of an audio media type, the set of at least two metadata elements consists of a file-name metadata element, a file-size metadata element, and a duration metadata element.


In block 320, metadata-based ID subroutine 300 combines the set of at least two metadata elements according to the selected fingerprint instruction to generate a deterministic, system-wide, metadata-derived unique identifier. For example, in some embodiments, the selected fingerprint instruction may specify that the set of at least two metadata elements are to be concatenated or joined in a particular order to generate a unique identifier such as “movie004.mp4-3294196” (which consists of the file name string joined to an integer byte count of the file with a hyphen character). In other embodiments, different methods of combining the set of at least two metadata elements may be employed.
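
For instance, the concatenation just described can be expressed in a few lines of Python (a sketch of one possible fingerprint instruction only):

    # The example identifier above, built per one possible video
    # fingerprint instruction: file name joined to byte count by a hyphen.
    file_name = "movie004.mp4"
    file_size = 3294196          # bytes
    unique_id = f"{file_name}-{file_size}"
    assert unique_id == "movie004.mp4-3294196"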


Metadata-based ID subroutine 300 ends in ending block 399, returning the unique identifier generated in block 320 to the caller.



FIG. 4 illustrates a routine 400 for serving media within a shared media system, such as may be performed by a distributed-media client/server device 600 in accordance with one embodiment.


In block 405, routine 400 receives a request, typically from a remote media client, for a media file indicated via a unique identifier.


In block 410, routine 400 identifies the indicated media file using the unique identifier.


In block 415, routine 400 provides the identified media file, with its associated unique identifier, to the requesting device for playback (rendering) and/or storage.
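
An embodiment of routine 400 might, for example, resolve the identifier against a local catalog and return the matching file together with its identifier, roughly as in the following sketch; the catalog mapping and return shape are assumptions for illustration, not a definitive implementation:

    def serve_media(unique_id, catalog):
        # `catalog` is assumed to map unique identifiers to local file
        # paths on this media server (a hypothetical structure).
        path = catalog.get(unique_id)        # block 410: identify the file
        if path is None:
            raise KeyError("no media file for identifier " + unique_id)
        with open(path, "rb") as f:          # block 415: provide the file
            return {"uid": unique_id, "data": f.read()}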


Routine 400 ends in ending block 499.



FIG. 5 illustrates a routine 500 for accessing and playing and/or storing media files, such as may be performed by a distributed-media client/server device 600 in accordance with one embodiment.


In block 505, routine 500 obtains a list of media files, identified according to unique identifiers, that are available locally and/or from media servers within a distributed-media-library system.


In block 510, using unique identifiers embedded in and/or otherwise associated with the media files, routine 500 identifies one or more derivative versions of the same content, such that a given piece of content is presented only once regardless of how many copies and/or derivative versions exist in the distributed-media-library system.


In block 515, routine 500 obtains an indication, such as from a user, to obtain one of the remotely-served and/or local media files.


In block 520, routine 500 obtains one or more delivery-preference factors according to which an appropriate source for the indicated remotely-served and/or local media file may be selected. For example, in one embodiment, a delivery-preference factor may include a list of one or more preferential formats that can be supported on a device. In other embodiments, a delivery-preference factor may include network connection characteristics between a media server and a media client, device media capability, and the like.


In block 525, routine 500 selects a source for the indicated remotely-served and/or local media file based at least in part on the delivery-preference factors obtained in block 520 and the unique identifier of the indicated remotely-served and/or local media file.


In block 530, routine 500 sends to the source selected in block 525 a request for the indicated remotely-served and/or local media file. Once obtained, the file may be made available for playback (rendering), local storage, or for other like purposes.
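
As one possible illustration of the selection in blocks 520 through 525, a media client might rank the candidate sources for a given unique identifier against its ordered format preferences, as in the sketch below; the candidate fields and preference list are hypothetical:

    def select_source(candidates, preferred_formats):
        # Sketch of blocks 520-525: `candidates` is assumed to be a list
        # of dicts such as {"server": "600B", "format": "video/mp4"},
        # and `preferred_formats` an ordered list, most preferred first.
        def rank(candidate):
            try:
                return preferred_formats.index(candidate["format"])
            except ValueError:
                return len(preferred_formats)    # unlisted formats sort last
        return min(candidates, key=rank)

Other embodiments might additionally weigh delivery-preference factors such as network connection characteristics or device media capability when ranking candidate sources.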


Routine 500 ends in ending block 599.



FIG. 6 illustrates several components of an exemplary distributed-media client/server device 600 in accordance with one embodiment. In various embodiments, distributed-media client/server device 600 may include a desktop PC, server, workstation, mobile phone, laptop, tablet, set-top box, appliance, or other computing device that is capable of performing operations such as those described herein. In some embodiments, distributed-media client/server device 600 may include many more components than those shown in FIG. 6. However, it is not necessary that all of these generally conventional components be shown in order to disclose an illustrative embodiment.


In various embodiments, distributed-media client/server device 600 may comprise one or more physical and/or logical devices that collectively provide the functionalities described herein. In some embodiments, distributed-media client/server device 600 may comprise one or more replicated and/or distributed physical or logical devices.


In some embodiments, distributed-media client/server device 600 may comprise one or more computing resources provisioned from a “cloud computing” provider, for example, Amazon Elastic Compute Cloud (“Amazon EC2”), provided by Amazon.com, Inc. of Seattle, Wash.; Sun Cloud Compute Utility, provided by Sun Microsystems, Inc. of Santa Clara, Calif.; Windows Azure, provided by Microsoft Corporation of Redmond, Wash.; and the like.


Distributed-media client/server device 600 includes a bus 605 interconnecting several components including a network interface 610, a display 615, a central processing unit 620, and a memory 625.


Memory 625 generally comprises a random access memory (“RAM”) and a permanent non-transitory mass storage device, such as a hard disk drive or solid-state drive. Memory 625 stores program code for a media-introduction routine 200 for introducing audio and/or video media files into a distributed-media-library system (see FIG. 2, discussed above); a routine 400 for serving media within a shared media system (see FIG. 4, discussed above); and a routine 500 for accessing and playing and/or storing media files (see FIG. 5, discussed above). Memory 625 also stores an operating system 635.


These and other software components may be loaded into memory 625 of distributed-media client/server device 600 using a drive mechanism (not shown) associated with a non-transitory computer-readable medium 630, such as a floppy disc, tape, DVD/CD-ROM drive, memory card, or the like.


Memory 625 also includes distributed media-metadata database 640.


Memory 625 also includes local media datastore 645. In some embodiments, distributed-media client/server device 600 may communicate with local media datastore 645 via network interface 610, a storage area network (“SAN”), a high-speed serial bus, and/or other suitable communication technology.


Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present disclosure. This application is intended to cover any adaptations or variations of the embodiments discussed herein.

Claims
  • 1. A media-library-device-implemented method for introducing audio and/or video media files into a distributed-media-library system, the method comprising: obtaining, by the media-library device, an audio and/or video media file of a given media type for import into the distributed-media-library system; determining, by the media-library device, that the audio and/or video media file has not previously been processed by any device of the distributed-media-library system; obtaining, by the media-library device, a predetermined set of media-type-specific, fingerprint instructions for generating unique identifiers based on metadata associated with audio and/or video media files of various media types; selecting, by the media-library device, a fingerprint instruction from the set of media-type-specific fingerprint instructions based at least in part on the given media type; determining, by the media-library device, a first set of at least two metadata elements associated with the audio and/or video media file; combining, by the media-library device, the first set of at least two metadata elements according to the selected fingerprint instruction to generate a deterministic, system-wide, metadata-derived unique identifier; embedding, by the media-library device, the metadata-derived unique identifier in the audio and/or video media file; determining, by the media-library device, that no version of the audio and/or video media file already exists in the distributed-media-library system; and recording, by the media-library device, the metadata-derived unique identifier in a distributed media-metadata database.
  • 2. The method of claim 1, further comprising: generating a derivative version of the audio and/or video media file, the derivative version being associated with a second set of at least two metadata elements, the second set of at least two metadata elements differing from the first set of at least two metadata elements; and embedding the metadata-derived unique identifier in the derivative version to associate the derivative version with the audio and/or video media file when the derivative version is subsequently encountered in the distributed-media-library system.
  • 3. The method of claim 1, wherein determining that the audio and/or video media file has not previously been processed by any device of the distributed-media-library system comprises attempting, but failing to locate an embedded deterministic, system-wide, metadata-derived unique identifier within the audio and/or video media file.
  • 4. The method of claim 1, wherein determining that no version of the audio and/or video media file already exists in the distributed-media-library system comprises querying the distributed media-metadata database to determine that the metadata-derived unique identifier has not been previously recorded in the distributed media-metadata database.
  • 5. The method of claim 1, wherein when the audio and/or video media file is of a video media type, the first set of at least two metadata elements consists of a file-name metadata element and a file-size metadata element.
  • 6. The method of claim 1, wherein when the audio and/or video media file is of an image media type, the first set of at least two metadata elements consists of a file-name metadata element, a file-size metadata element, and a creation or modification timestamp metadata element.
  • 7. The method of claim 1, wherein when the audio and/or video media file is of an audio media type, the first set of at least two metadata elements consists of a file-name metadata element, a file-size metadata element, and a duration metadata element.
  • 8. A computing apparatus for introducing audio and/or video media files into a distributed-media-library system, the apparatus comprising a processor and a memory storing instructions that, when executed by the processor, configure the apparatus to: obtain an audio and/or video media file of a given media type for import into the distributed-media-library system; determine that the audio and/or video media file has not previously been processed by any device of the distributed-media-library system; obtain a predetermined set of media-type-specific, fingerprint instructions for generating unique identifiers based on metadata associated with audio and/or video media files of various media types; select a fingerprint instruction from the set of media-type-specific fingerprint instructions based at least in part on the given media type; determine a first set of at least two metadata elements associated with the audio and/or video media file; combine the first set of at least two metadata elements according to the selected fingerprint instruction to generate a deterministic, system-wide, metadata-derived unique identifier; embed the metadata-derived unique identifier in the audio and/or video media file; determine that no version of the audio and/or video media file already exists in the distributed-media-library system; and record the metadata-derived unique identifier in a distributed media-metadata database.
  • 9. The apparatus of claim 8, wherein the memory stores further instructions that further configure the apparatus to: generate a derivative version of the audio and/or video media file, the derivative version being associated with a second set of at least two metadata elements, the second set of at least two metadata elements differing from the first set of at least two metadata elements; and embed the metadata-derived unique identifier in the derivative version to associate the derivative version with the audio and/or video media file when the derivative version is subsequently encountered in the distributed-media-library system.
  • 10. The apparatus of claim 8, wherein the instructions that configure the apparatus to determine that the audio and/or video media file has not previously been processed by any device of the distributed-media-library system further comprise instructions configuring the apparatus to attempt, but failing to locate an embedded deterministic, system-wide, metadata-derived unique identifier within the audio and/or video media file.
  • 11. The apparatus of claim 8, wherein the instructions that configure the apparatus to determine that no version of the audio and/or video media file already exists in the distributed-media-library system further comprise instructions configuring the apparatus to query the distributed media-metadata database to determine that the metadata-derived unique identifier has not been previously recorded in the distributed media-metadata database.
  • 12. The apparatus of claim 8, wherein when the audio and/or video media file is of a video media type, the first set of at least two metadata elements consists of a file-name metadata element and a file-size metadata element.
  • 13. The apparatus of claim 8, wherein when the audio and/or video media file is of an image media type, the first set of at least two metadata elements consists of a file-name metadata element, a file-size metadata element, and a creation or modification timestamp metadata element.
  • 14. The apparatus of claim 8, wherein when the audio and/or video media file is of an audio media type, the first set of at least two metadata elements consists of a file-name metadata element, a file-size metadata element, and a duration metadata element.
  • 15. A non-transitory computer-readable storage medium having stored thereon instructions including instructions that, when executed by a processor, configure the processor to: obtain an audio and/or video media file of a given media type for import into a distributed-media-library system; determine that the audio and/or video media file has not previously been processed by any device of the distributed-media-library system; obtain a predetermined set of media-type-specific, fingerprint instructions for generating unique identifiers based on metadata associated with audio and/or video media files of various media types; select a fingerprint instruction from the set of media-type-specific fingerprint instructions based at least in part on the given media type; determine a first set of at least two metadata elements associated with the audio and/or video media file; combine the first set of at least two metadata elements according to the selected fingerprint instruction to generate a deterministic, system-wide, metadata-derived unique identifier; embed the metadata-derived unique identifier in the audio and/or video media file; determine that no version of the audio and/or video media file already exists in the distributed-media-library system; and record the metadata-derived unique identifier in a distributed media-metadata database.
  • 16. The non-transitory computer-readable storage medium of claim 15, having stored thereon further instructions that further configure the processor to: generate a derivative version of the audio and/or video media file, the derivative version being associated with a second set of at least two metadata elements, the second set of at least two metadata elements differing from the first set of at least two metadata elements; and embed the metadata-derived unique identifier in the derivative version to associate the derivative version with the audio and/or video media file when the derivative version is subsequently encountered in the distributed-media-library system.
  • 17. The non-transitory computer-readable storage medium of claim 15, wherein the instructions that configure the processor to determine that the audio and/or video media file has not previously been processed by any device of the distributed-media-library system further comprise instructions configuring the processor to attempt, but failing to locate an embedded deterministic, system-wide, metadata-derived unique identifier within the audio and/or video media file.
  • 18. The non-transitory computer-readable storage medium of claim 15, wherein the instructions that configure the processor to determine that no version of the audio and/or video media file already exists in the distributed-media-library system further comprise instructions configuring the processor to query the distributed media-metadata database to determine that the metadata-derived unique identifier has not been previously recorded in the distributed media-metadata database.
  • 19. The non-transitory computer-readable storage medium of claim 15, wherein when the audio and/or video media file is of a video media type, the first set of at least two metadata elements consists of a file-name metadata element and a file-size metadata element.
  • 20. The non-transitory computer-readable storage medium of claim 15, wherein when the audio and/or video media file is of an image media type, the first set of at least two metadata elements consists of a file-name metadata element, a file-size metadata element, and a creation or modification timestamp metadata element.
  • 21. The non-transitory computer-readable storage medium of claim 15, wherein when the audio and/or video media file is of an audio media type, the first set of at least two metadata elements consists of a file-name metadata element, a file-size metadata element, and a duration metadata element.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of priority to Provisional Patent Application No. 61/874,929; filed Sep. 6, 2013 under Attorney Docket No. REAL-2013418 (RN437P); titled MULTI-DEVICE MEDIA CONTENT IDENTIFICATION AND SOURCING SYSTEMS AND METHODS; and naming inventor Milko BOIC. The above-cited application is hereby incorporated by reference, in its entirety, for all purposes.

Provisional Applications (1)
Number       Date          Country
61/874,929   Sep. 6, 2013  US