System and method for removing contextually identical multimedia content elements

Information

  • Patent Grant
  • Patent Number
    11,403,336
  • Date Filed
    Tuesday, October 18, 2016
  • Date Issued
    Tuesday, August 2, 2022
Abstract
A system and method for removing contextually identical multimedia content elements. The method includes analyzing a plurality of multimedia content elements to identify at least two multimedia content elements of the plurality of multimedia content elements that are contextually identical; selecting, from among the at least two contextually identical multimedia content elements, at least one optimal multimedia content element; and removing, from a storage, all multimedia content elements of the group of contextually identical multimedia content elements other than the at least one optimal multimedia content element.
Description
TECHNICAL FIELD

The present disclosure relates generally to the analysis of multimedia content, and more specifically to identifying a plurality of multimedia content elements with respect to context.


BACKGROUND

With the abundance of data made available through various means in general and through the Internet and world-wide web (WWW) in particular, a need to understand likes and dislikes of users has become essential for on-line businesses.


Existing solutions provide various tools to identify user preferences. In particular, some of these existing solutions determine user preferences based on user inputs. These existing solutions actively require an input from the user that indicates the user's interests. However, profiles generated for users based on their inputs may be inaccurate, as the users tend to provide only their current interests, or only partial information due to their privacy concerns.


Other existing solutions passively track user activity through web sites such as social networks. The disadvantage with such solutions is that typically limited information regarding the users is revealed because users provide minimal information due to, e.g., privacy concerns. For example, users creating an account on Facebook® typically provide only the mandatory information required for the creation of the account.


Further, user inputs that may be utilized to determine user preferences may be duplicative. For example, a user may provide multiple images of his or her pet to illustrate that he or she has a user preference related to dogs. Such duplicative user inputs require additional memory usage, and may obfuscate the user's true interests. For example, if the user provides 10 images of his or her pet taken around the same time, the system receiving the images typically stores all 10 images, and any user preferences determined therefrom may appear to disproportionately revolve around pets.


It would therefore be advantageous to provide a solution that overcomes the deficiencies of the prior art.


SUMMARY

A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term “some embodiments” may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.


Certain embodiments disclosed herein include a method for removing contextually identical multimedia content elements. The method comprises analyzing a plurality of multimedia content elements to identify at least two multimedia content elements of the plurality of multimedia content elements that are contextually identical; selecting, from among the at least two contextually identical multimedia content elements, at least one optimal multimedia content element; and removing, from a storage, all multimedia content elements of the group of contextually identical multimedia content elements other than the at least one optimal multimedia content element.


Certain embodiments disclosed herein also include a non-transitory computer readable medium having stored thereon instructions for causing one or more processing units to execute a method, the method comprising: analyzing a plurality of multimedia content elements to identify at least two multimedia content elements of the plurality of multimedia content elements that are contextually identical; selecting, from among the at least two contextually identical multimedia content elements, at least one optimal multimedia content element; and removing, from a storage, all multimedia content elements of the group of contextually identical multimedia content elements other than the at least one optimal multimedia content element.


Certain embodiments disclosed herein also include a system for removing contextually identical multimedia content elements. The system comprises: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: analyze a plurality of multimedia content elements to identify at least two multimedia content elements of the plurality of multimedia content elements that are contextually identical; select, from among the at least two contextually identical multimedia content elements, at least one optimal multimedia content element; and remove, from a storage, all multimedia content elements of the group of contextually identical multimedia content elements other than the at least one optimal multimedia content element.





BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.



FIG. 1 is a network diagram utilized to describe the various embodiments disclosed herein.



FIG. 2 is a schematic diagram of a system for removing contextually identical multimedia content elements according to an embodiment.



FIG. 3 is a flowchart illustrating a method for identifying contextually identical multimedia content elements according to an embodiment.



FIG. 4 is a flowchart illustrating a method for generating contextual insights according to an embodiment.



FIG. 5 is a block diagram depicting the basic flow of information in the signature generator system.



FIG. 6 is a diagram showing the flow of patches generation, response vector generation, and signature generation in a large-scale speech-to-text system.





DETAILED DESCRIPTION

It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts throughout the several views.


Certain embodiments disclosed herein include a system and method for determining whether multimedia content elements are contextually identical. A plurality of multimedia content elements is analyzed to identify contextually identical multimedia content elements. In an embodiment, the analysis includes generating at least one signature for each multimedia content element. In a further embodiment, the analysis includes matching among the generated signatures to identify signatures representing multimedia content elements that are contextually identical. In another embodiment, the analysis may include determining contextual identifiers for the plurality of multimedia content elements.


Contextually identical multimedia content elements are multimedia content elements associated with the same or nearly the same content. Contextually identical multimedia content elements may be determined to be contextually identical based on, e.g., features of the multimedia content elements (e.g., people and things captured in an image or video, sounds in audio or video, etc.), contextual insights related to the multimedia content elements (e.g., time of capture or receipt, location of capture, device which captured the multimedia content elements, etc.), and the like. For example, two images taken at a concert of a singer that were captured by two users standing next to each other may be contextually identical. As another example, two audio recordings of a song performed by the singer captured at different locations in the concert venue may be contextually identical.


Removing contextually identical multimedia content elements may be useful for, e.g., eliminating duplicative multimedia content elements or multimedia content elements that otherwise include essentially the same content. This elimination reduces the amount of storage space needed by removing unnecessary duplicate multimedia content elements. For example, if a user accidentally presses the “capture” button on a camera multiple times when trying to take a picture of a group of friends, multiple images showing essentially the same scene will be captured. As another example, multiple people in a social media group may store multiple instances of the same video. In either example, essentially duplicate multimedia content elements are stored.


In an embodiment, upon identification of contextually identical multimedia content elements, a notification may be generated and sent. In another embodiment, at least one optimal multimedia content element may be determined from among the contextually identical multimedia content elements. The notification may also include a recommendation of the determined at least one optimal multimedia content element. The optimal multimedia content element may be determined based on, but not limited to, features of the multimedia content elements (e.g., resolution, focus, clarity, frame, texture, etc.); matching with other multimedia content elements (e.g., multimedia content elements ranked highly in a social network or liked by a particular user); a combination thereof; and the like. In some embodiments, multimedia content elements that are contextually identical to the optimal multimedia content element may be removed from, e.g., a storage.


As a non-limiting example, a user of a user device captures a series of 10 self-portrait photographs, typically referred to as “selfies,” within a time span of a few minutes. The selfie images are analyzed. In this example, the images are analyzed by at least generating and matching signatures. Based on the analysis, it is determined that the 10 images are contextually identical. Upon determining that the 10 images are contextually identical, an optimal image from among the 10 images is determined and a recommendation of the optimal image is provided. Upon receiving a gesture from a user responsive to the recommendation, the contextually identical selfie images other than the optimal image are removed from the storage.



FIG. 1 shows an example network diagram 100 utilized to describe the various embodiments disclosed herein. As illustrated in FIG. 1, a network 110 is communicatively connected to a plurality of user devices (UDs) 120-1 through 120-n (hereinafter referred to individually as a user device 120 and collectively as user devices 120, merely for simplicity purposes), a server 130, a plurality of data sources (DSs) 150-1 through 150-m (hereinafter referred to individually as a data source 150 and collectively as data sources 150, merely for simplicity purposes), and a database 160. In an embodiment, the network 110 may also be communicatively connected to a signature generator system 140. The network 110 may be the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), a metro area network (MAN), and other networks capable of enabling communication between the elements of the system 100.


The user device 120 may be, but is not limited to, a personal computer (PC), a personal digital assistant (PDA), a mobile phone, a tablet computer, a smart phone, a wearable computing device, and the like. Each user device 120 may have installed therein an agent 125-1 through 125-n (hereinafter referred to individually as an agent 125 and collectively as agents 125, merely for simplicity purposes), respectively. The agent 125 may be a dedicated application, script, or any program code stored in a memory (not shown) of the user device 120 that is executable, for example, by the operating system (not shown) of the user device 120. The agent 125 may be configured to perform some or all of the processes disclosed herein.


The user device 120 is configured to capture multimedia content elements, to receive multimedia content elements, to display multimedia content elements, or a combination thereof. The multimedia content elements displayed on the user device 120 may be, e.g., downloaded from one of the data sources 150, or may be embedded in a web-page displayed on the user device 120. Each of the data sources 150 may be, but is not limited to, a server (e.g., a web server), an application server, a data repository, a database, a website, an e-commerce website, a content website, and the like. The multimedia content elements can be locally saved in the user device 120 or can be captured by the user device 120.


For example, the multimedia content elements may include an image captured by a camera (not shown) installed in the user device 120, a video clip saved in the device, an image received by the user device 120, and so on. A multimedia content element may be, but is not limited to, an image, a graphic, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, an image of signals (e.g., spectrograms, phasograms, scalograms, etc.), a combination thereof, a portion thereof, and the like.


The various embodiments disclosed herein may be realized using the server 130, a signature generator system (SGS) 140, or both.


In an embodiment, a tracking agent such as, for example, the agent 125, may be configured to collect and send a plurality of multimedia content elements captured or displayed by the user device 120 to the server 130. In an embodiment, the server 130 may be configured to receive the collected multimedia content elements and to analyze the received multimedia content elements to determine whether and which of the multimedia content elements are contextually identical. The analysis may be based on, but is not limited to, signatures generated for each multimedia content element, concepts determined based on the multimedia content elements, contextual insights for each multimedia content element, a combination thereof, and the like.


In an embodiment, the server 130 is configured to preprocess the multimedia content elements to determine similarities between multimedia content elements of the plurality of multimedia content elements, and only multimedia content elements having similarities above a predetermined threshold are analyzed to determine contextually identical multimedia content elements. In an embodiment, the preprocessing may include analyzing one of the factors (the signatures generated for each multimedia content element, the concepts determined based on the multimedia content elements, or the contextual insights for each multimedia content element) before analyzing the other factors. For example, it may first be checked whether the multimedia content elements were captured within a time period below a predetermined threshold and, if not, the multimedia content elements may be determined not to be contextually identical without generating signatures or determining concepts.
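As a minimal illustration of this kind of preprocessing (hypothetical names and an assumed 30-second threshold; the disclosure leaves the exact value open), a cheap capture-time check can rule pairs out before any signature is generated:

```python
from datetime import datetime, timedelta

# Assumed threshold for illustration; the disclosure does not fix a value.
CAPTURE_TIME_THRESHOLD = timedelta(seconds=30)

def potentially_identical(time_a: datetime, time_b: datetime) -> bool:
    """Cheap pre-check: elements captured too far apart in time are ruled
    out before any signature generation or concept determination."""
    return abs(time_a - time_b) <= CAPTURE_TIME_THRESHOLD

# Two captures 12 seconds apart pass the pre-check; a capture an hour
# later would be filtered out without generating signatures.
t1 = datetime(2016, 10, 18, 14, 0, 0)
t2 = datetime(2016, 10, 18, 14, 0, 12)
assert potentially_identical(t1, t2)
```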


In an embodiment, the server 130 may be configured to send the received multimedia content elements to the signature generator system 140. In an embodiment, the signature generator system 140 is configured to generate at least one signature for each of the multimedia content elements. The process for generating the signatures is explained in more detail herein below with respect to FIGS. 5 and 6. The generated signatures may be robust to noise and distortions as discussed further herein below.


In a further embodiment, the server 130 is further configured to receive the generated signatures from the signature generator system 140. In another embodiment, the server 130 may be configured to generate the at least one signature for each multimedia content element or portion thereof as discussed further herein below.


In an embodiment, whether multimedia content elements are contextually identical may be based on matching between signatures of the multimedia content elements. In a further embodiment, if the matching between the signatures is above a predetermined threshold, the multimedia content elements may be determined to be contextually identical.
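A minimal sketch of such threshold-based matching, assuming binary signature vectors and a hypothetical match_score function (the disclosure specifies neither the scoring function nor the threshold value):

```python
def match_score(sig_a, sig_b):
    """Fraction of positions at which two equal-length binary signature
    vectors agree; a stand-in for the actual matching algorithm."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def contextually_identical(sigs_a, sigs_b, threshold=0.8):
    """Two elements match if their best signature pairing exceeds the
    predetermined threshold (assumed here to be 0.8)."""
    best = max(match_score(a, b) for a in sigs_a for b in sigs_b)
    return best > threshold

# Two near-duplicate 8-bit signatures agree at 7/8 = 0.875 > 0.8.
print(contextually_identical([[1, 0, 1, 1, 0, 0, 1, 0]],
                             [[1, 0, 1, 1, 0, 1, 1, 0]]))  # True
```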


It should be appreciated that signatures may be used for profiling the user's interests, because signatures allow more accurate recognition of multimedia content elements than, for example, utilization of metadata. The signatures generated by the signature generator system 140 for the multimedia content elements allow for recognition and classification of multimedia elements in applications such as content-tracking, video filtering, multimedia taxonomy generation, video fingerprinting, speech-to-text, audio classification, element recognition, video/image search, and any other application requiring content-based signature generation and matching for large content volumes such as web and other large-scale databases. For example, a signature generated by the signature generator system 140 for a picture showing a car enables accurate recognition of the model of the car from any angle at which the picture was taken.


In yet a further embodiment, the server 130 may be configured to match the generated signatures against a database of concepts (not shown) to identify a concept that can be associated with each signature, and hence the corresponding multimedia element.


A concept is a collection of signatures representing at least one multimedia content element and metadata describing the concept. The collection of signatures is a signature reduced cluster generated by inter-matching signatures generated for the at least one multimedia content element represented by the concept. The concept is represented using at least one signature. Generating concepts by inter-matching signatures is described further in U.S. patent application Ser. No. 14/096,901, filed on Dec. 4, 2013, assigned to the common assignee, which is hereby incorporated by reference.


In a further embodiment, matching the generated signatures against the database of concepts further includes matching the generated signatures to signatures representing the concepts. The signatures representing the concepts may be, but are not limited to, signatures included in the concepts or signature clusters representing the concepts.


In an embodiment, whether multimedia content elements are contextually identical may be based at least in part on whether the multimedia content elements are associated with the same or similar concepts. In a further embodiment, determining whether multimedia content elements are associated with the same or similar concepts may be utilized to preprocess and determine multimedia content elements that are not likely contextually identical. That is, in an embodiment, if two or more multimedia content elements are not associated with a similar concept, other factors for determining whether they are contextually identical (e.g., matching between signatures of the multimedia content elements or determination of contextual identifiers) may not be performed. As an example, if a first image is associated with concepts of “books” and “library” while a second image is associated with concepts of “flowers” and “sidewalk”, the first image and the second image may be determined to not be contextually identical without requiring matching between signatures of the first and second images or consideration of time and location of capture of the images.
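Under these assumptions, the concept-based pre-filter described above reduces to a set-intersection test (a sketch; in practice the concepts are signature clusters with metadata, not strings):

```python
def share_concept(concepts_a: set, concepts_b: set) -> bool:
    """Pre-filter: if two elements share no concept, the costlier signature
    matching and contextual-identifier checks are skipped entirely."""
    return bool(concepts_a & concepts_b)

# The example from the text: no shared concept, so no further analysis.
print(share_concept({"books", "library"}, {"flowers", "sidewalk"}))  # False
print(share_concept({"flowers", "garden"}, {"flowers", "sidewalk"}))  # True
```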


In another embodiment, the server 130 is further configured to generate at least one contextual insight of the received multimedia content elements. Contextual insights are conclusions related to the context of each multimedia content element, in particular relative to other contexts. In a further embodiment, the contextual insights may be based on metadata associated with each multimedia content element. To this end, in an embodiment, the server 130 is configured to parse the multimedia content elements to determine metadata associated with each multimedia content element.


The metadata may include, but is not limited to, a time pointer associated with a capture or display of a multimedia content element, a location pointer associated with a capture of a multimedia content element, details related to a device (e.g., the user device 120) that captured the multimedia content element, combinations thereof, and the like. In an embodiment, multimedia content elements may be contextually identical if the multimedia content elements were captured or displayed by the same user device 120, at the same (or roughly the same) time, at the same (or roughly the same) location, or a combination thereof. Multimedia content elements may be captured or displayed at roughly the same time or location if a difference in the time or location between captures or displays is below a predetermined threshold. For example, if 15 images were captured within a time period of 30 seconds, the 15 images may be determined to be contextually identical. As another example, if two images were captured within 15 feet of each other, the two images may be determined to be contextually identical.
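A sketch of such metadata comparison, using the thresholds from the examples above and a planar location model (how the factors are combined is left open by the disclosure, so the any-factor rule below is only one possibility):

```python
import math

TIME_THRESHOLD_S = 30        # captures within 30 seconds (per the example)
DISTANCE_THRESHOLD_FT = 15   # captures within 15 feet (per the example)

def distance_ft(loc_a, loc_b):
    """Planar distance between two (x, y) positions expressed in feet."""
    return math.hypot(loc_a[0] - loc_b[0], loc_a[1] - loc_b[1])

def insights_indicate_identical(meta_a, meta_b):
    """True if the capture device, time, or location (or a combination)
    indicates the elements are contextually identical."""
    same_device = meta_a["device_id"] == meta_b["device_id"]
    close_time = abs(meta_a["timestamp"] - meta_b["timestamp"]) <= TIME_THRESHOLD_S
    close_place = distance_ft(meta_a["location"], meta_b["location"]) <= DISTANCE_THRESHOLD_FT
    return same_device or close_time or close_place
```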


Based on the analysis, the server 130 is configured to determine whether at least two of the received multimedia content elements are contextually identical. As noted above, multimedia content elements may be contextually identical if, for example, signatures of the multimedia content elements match above a predetermined threshold; the multimedia content elements are associated with the same or similar concepts; contextual insights of the multimedia content elements indicate that the multimedia content elements were captured, displayed, or received at the same or similar time; the contextual insights indicate that the multimedia content elements were captured at the same or similar location; the contextual insights indicate that the multimedia content elements were captured by the same device; or a combination thereof.


In an embodiment, when it is determined that at least two multimedia content elements are contextually identical, the server 130 is configured to send a notification indicating the at least two contextually identical multimedia content elements. In a further embodiment, the server 130 may be configured to receive a selection of one of the at least two contextually identical multimedia content elements. In yet a further embodiment, the server 130 is configured to remove, from a storage (e.g., the database 160), multimedia content elements of the at least two multimedia content elements other than the selected multimedia content element. Removing unselected contextually identical multimedia content elements reduces the amount of storage space required.


In a further embodiment, the server 130 may be configured to determine at least one optimal multimedia content element from among the at least two contextually identical multimedia content elements. The at least one optimal multimedia content element is a multimedia content element selected to represent the at least two contextually identical multimedia content elements. The at least one optimal multimedia content element may be determined based on, but not limited to, features of the multimedia content elements (e.g., resolution, focus, clarity, frame, texture, etc.); matching with other multimedia content elements (e.g., multimedia content elements ranked highly in a social network or liked by a particular user); a combination thereof; and the like.


In a further embodiment, the server 130 is configured to determine the optimal multimedia content based on, but not limited to, matching between signatures representing the at least two contextually identical multimedia content elements and signatures representing concepts a particular user is interested in. In yet a further embodiment, the contextually identical multimedia content element having the signature with the highest matching to the user interest concept signatures may be determined as the optimal multimedia content element.


To this end, each concept may be associated with at least one user interest. For example, a concept of flowers may be associated with a user interest in ‘flowers’ or ‘gardening.’ In an embodiment, the user interest may simply be the identified concept. In another embodiment, the user interest may be determined using an association table which associates one or more identified concepts with a user interest. For example, the concepts of ‘flowers’ and ‘spring’ may be associated, in an association table, with a user interest of ‘gardening’. Such an association table may be maintained in, e.g., the server 130 or the database 160.
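Combining the two paragraphs above, optimal-element selection can be sketched as scoring each candidate against the user-interest concept signatures (hypothetical data shapes; match_score stands in for the matching described herein):

```python
# Hypothetical association table: identified concepts -> user interest.
ASSOCIATION_TABLE = {frozenset({"flowers", "spring"}): "gardening"}

def match_score(a, b):
    """Agreement between two equal-length binary signature vectors."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def select_optimal(elements, interest_signatures):
    """Return the contextually identical element whose signatures best
    match signatures of concepts the user is interested in."""
    def best_match(element):
        return max(match_score(s, u)
                   for s in element["signatures"]
                   for u in interest_signatures)
    return max(elements, key=best_match)
```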


In an embodiment, the notification may further indicate the at least one optimal multimedia content element. In a further embodiment, the notification including the at least one optimal multimedia content element is then provided to the user device 120 and the user device 120 is prompted to confirm selection of the at least one optimal multimedia content element. When the selection is confirmed, the server 130 is configured to remove the multimedia content element(s) of the at least two contextually identical multimedia content elements which were not determined as optimal from, e.g., a storage. In an embodiment, the server 130 is configured to remove the non-optimal multimedia content elements in real-time. In another embodiment, the server 130 may be configured to automatically remove the non-optimal multimedia content elements when at least one optimal multimedia content element is determined.


Each of the server 130 and the signature generator system 140 typically includes a processing circuitry (not shown) that is coupled to a memory (not shown). The memory typically contains instructions that can be executed by the processing circuitry. The server 130 also includes an interface (not shown) to the network 110. In an embodiment, the signature generator system 140 can be integrated in the server 130. In an embodiment, the server 130, the signature generator system 140, or both may include a plurality of computational cores having properties that are at least partly statistically independent of the other computational cores. The computational cores are discussed further herein below.



FIG. 2 is an example schematic diagram of a system for removing contextually identical multimedia content elements according to an embodiment. In the example schematic diagram shown in FIG. 2, the system is the server 130. It should be noted that, in another embodiment, the system may be the user device 120. In a further embodiment, the agent 125 installed on the user device 120 may be configured to identify contextually identical multimedia content elements as described herein.


The server 130 includes an interface 210 at least for receiving multimedia content elements captured or displayed by the user device 120 and for sending notifications indicating contextually identical multimedia content elements, optimal multimedia content elements, or both, to the user device 120. The server 130 further includes a processing circuitry 220 such as a processor coupled to a memory (mem) 230. The memory 230 contains instructions that, when executed by the processing circuitry 220, configure the server 130 to identify contextually identical multimedia content elements as further described herein.


In an embodiment, the server 130 also includes a signature generator (SG) 240. The signature generator 240 includes a plurality of computational cores having properties that are at least partly statistically independent of the other computational cores. The signature generator 240 is configured to generate signatures for multimedia content elements. In an embodiment, the signatures are robust to noise, distortion, or both. In another embodiment, the server 130 may be configured to send, to an external signature generator (e.g., the signature generator system 140), one or more multimedia content elements and to receive, from the external signature generator, signatures generated for the sent one or more multimedia content elements.


In another embodiment, the server 130 includes a data storage 250. The data storage 250 may store, for example, signatures of multimedia content elements, signatures of concepts, contextually identical multimedia content elements, optimal multimedia content elements, combinations thereof, and the like.



FIG. 3 is an example flowchart 300 illustrating a method for identifying and removing contextually identical multimedia content elements (MMCEs) according to an embodiment. In an embodiment, the method may be performed by the server 130, the user device 120, or both. In an embodiment, the contextually identical multimedia content elements are identified based on a plurality of received multimedia content elements. The received multimedia content elements may be, e.g., multimedia content elements captured by a user device, multimedia content elements stored on a server (e.g., a server of a social network entity), and so on.


At optional S310, the plurality of multimedia content elements may be preprocessed. The preprocessing allows for, e.g., reduced usage of computing resources. To this end, in an embodiment, S310 includes, but is not limited to, determining at least one contextual insight (e.g., time, location, or device of capture or display) for each of the plurality of multimedia content elements, determining a concept associated with each of the plurality of multimedia content elements, or both. Determining contextual insights and concepts for multimedia content elements is described further herein above with respect to FIG. 1. In a further embodiment, S310 further includes determining, based on the concepts, contextual insights, or both, whether any of the plurality of multimedia content elements are potentially contextually identical. In yet a further embodiment, S310 may include filtering out any of the multimedia content elements that are not determined to be potentially contextually identical.


At S320, the multimedia content elements are analyzed to identify at least one group of contextually identical multimedia content elements. Each group of contextually identical multimedia content elements includes at least two multimedia content elements that are contextually identical to each other. In an embodiment, the analysis may be based on, but not limited to, at least one contextual insight of each multimedia content element, at least one concept associated with each multimedia content element, at least one signature of each multimedia content element, or a combination thereof. Analyzing multimedia content elements to identify contextually identical multimedia content elements is described further herein below with respect to FIG. 4.


In another embodiment, S320 may include sending, to a signature generator system (e.g., the signature generator system 140) the multimedia content elements and receiving, from the signature generator system, at least one signature for each sent multimedia content element.


At S330, it is determined, based on the analysis, whether any multimedia content elements were identified as being contextually identical to each other. If so, execution continues with S340; otherwise, execution terminates.


At S340, at least one optimal multimedia content element may be determined from among the identified contextually identical multimedia content elements. In an embodiment, the at least one optimal multimedia content element may be determined based on, but not limited to, features of the multimedia content elements (e.g., resolution, focus, clarity, frame, texture, etc.); matching with other multimedia content elements (e.g., multimedia content elements ranked highly in a social network or liked by a particular user); a combination thereof; and the like.


In a further embodiment, one optimal multimedia content element may be selected for each group of contextually identical multimedia content elements that are contextually identical to each other. As an example, if the plurality of multimedia content elements includes 3 images showing a dog that are contextually identical and 5 videos showing a cat that are contextually identical, an optimal image may be selected from among the 3 contextually identical dog images and an optimal video may be selected from among the 5 contextually identical cat videos.


At S350, for each group of contextually identical multimedia content elements, all multimedia content elements of the group other than the at least one optimal multimedia content element are removed from, e.g., a storage. The removal may be automatic and in real-time. Alternatively, in another embodiment, S350 may include sending, to a user device, a notification indicating the selected optimal multimedia content elements and prompting a user to confirm selection of the optimal multimedia content elements. In a further embodiment, upon receiving confirmation of the selection of the optimal multimedia content elements, S350 includes automatically removing all non-optimal multimedia content elements. In yet a further embodiment, S350 may further include receiving a selection of at least one alternative optimal multimedia content element. In such an embodiment, all multimedia content elements other than the at least one alternative optimal multimedia content element may be removed from the storage.
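The steps S310 through S350 can be tied together in a short skeleton (a sketch under assumed interfaces: group_fn, score_fn, and storage are placeholders, not elements of the disclosure):

```python
def remove_contextual_duplicates(elements, storage, group_fn, score_fn):
    """S320/S330: group_fn yields groups of contextually identical elements;
    S340: score_fn ranks group members (e.g., by resolution or clarity);
    S350: every non-optimal member of each group is removed from storage."""
    for group in group_fn(elements):
        if len(group) < 2:
            continue
        optimal = max(group, key=score_fn)
        for element in group:
            if element is not optimal:
                storage.remove(element)

# Example wiring: keep the highest-resolution image of each duplicate group.
# remove_contextual_duplicates(images, image_store,
#                              group_fn=find_identical_groups,
#                              score_fn=lambda img: img.width * img.height)
```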


As a non-limiting example, a plurality of images is received. The plurality of images is stored in a web server of a social network. The plurality of images includes 10 images showing a group of friends and one image showing an ocean. The plurality of images is preprocessed by determining contextual insights for each image. Each image is parsed to identify metadata, and the metadata is analyzed to determine the contextual insights. Based on the contextual insights, it is determined that the image showing the ocean was captured one hour after the images showing the group of friends, and that the images showing the group of friends were captured within 1 minute of each other. Accordingly, the images showing the group of friends are determined to be potentially contextually identical, and the image of the ocean is filtered out.


The remaining images showing the group of friends are analyzed by generating and matching signatures for each of the images. Based on the signature matching, it is determined that all of the images showing the group of friends match above a predetermined threshold. Thus, it is determined that the 10 images of the group of friends are contextually identical. Features of the contextually identical images are analyzed. Based on the feature analysis, it is determined that one of the contextually identical images has a higher resolution than the other contextually identical images. The higher resolution image is selected as the optimal image, and the other images of the group of friends are removed from the web server.



FIG. 4 is an example flowchart S320 illustrating a method for analyzing a plurality of multimedia content elements to identify contextually identical multimedia content elements according to an embodiment.


At S410, at least one signature is caused to be generated for each identified multimedia content element. In an embodiment, S410 may further include sending, to a signature generator system, the plurality of multimedia content elements and receiving, from the signature generator system, signatures generated for the plurality of multimedia content elements. Generation of signatures is described further herein below with respect to FIGS. 5-6.


At S420, the generated signatures are matched. Matching between signatures is described further herein below with respect to FIG. 5.


At S430, it is determined, based on the signature matching, whether any of the plurality of multimedia content elements are contextually identical and, if so, execution continues with S440; otherwise, execution terminates. In an embodiment, S430 includes determining whether signatures representing any of the plurality of multimedia content elements match above a predetermined threshold, where two or more multimedia content elements are contextually identical to each other when their signatures match above the predetermined threshold.


At S440, when it is determined that at least two of the multimedia content elements are contextually identical, at least one group of contextually identical multimedia content elements is identified. Each group includes at least two multimedia content elements that are contextually identical to each other.



FIGS. 5 and 6 illustrate the generation of signatures for the multimedia elements by the signature generator system 140 according to one embodiment. An exemplary high-level description of the process for large scale matching is depicted in FIG. 5. In this example, the matching is for video content.


Video content segments 2 from a Master database (DB) 6 and a Target DB 1 are processed in parallel by a large number of independent computational Cores 3 that constitute an architecture for generating the Signatures (hereinafter the “Architecture”). Further details on the computational Cores generation are provided below. The independent Cores 3 generate a database of Robust Signatures and Signatures 4 for Target content-segments 5 and a database of Robust Signatures and Signatures 7 for Master content-segments 8. An exemplary and non-limiting process of signature generation for an audio component is shown in detail in FIG. 6. Finally, Target Robust Signatures and/or Signatures are effectively matched, by a matching algorithm 9, to the Master Robust Signatures and/or Signatures database to find all matches between the two databases.


To demonstrate an example of signature generation process, it is assumed, merely for the sake of simplicity and without limitation on the generality of the disclosed embodiments, that the signatures are based on a single frame, leading to certain simplification of the computational cores generation. The Matching System is extensible for signatures generation capturing the dynamics in-between the frames.


The Signatures' generation process is now described with reference to FIG. 6. The first step in the process of signature generation from a given speech-segment is to break down the speech-segment into K patches 14 of random length P and random position within the speech segment 12. The breakdown is performed by the patch generator component 21. The values of the number of patches K, the random length P, and the random position parameters are determined based on optimization, considering the tradeoff between accuracy rate and the number of fast matches required in the flow process of the server 130 and the signature generator system 140. Thereafter, all the K patches are injected in parallel into all computational Cores 3 to generate K response vectors 22, which are fed into a signature generator system 23 to produce a database of Robust Signatures and Signatures 4.
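A sketch of the patch breakdown step (assumed length bounds; in the described system, K and the patch parameters come out of the accuracy/speed optimization mentioned above):

```python
import random

def generate_patches(segment, k, min_len, max_len):
    """Break a segment into K patches of random length and random position,
    mirroring the patch generator component described above."""
    patches = []
    for _ in range(k):
        length = random.randint(min_len, min(max_len, len(segment)))
        start = random.randint(0, len(segment) - length)
        patches.append(segment[start:start + length])
    return patches

# Example: 4 patches of 3-6 samples from a 20-sample segment.
print(generate_patches(list(range(20)), k=4, min_len=3, max_len=6))
```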


In order to generate Robust Signatures, i.e., Signatures that are robust to additive noise L (where L is an integer equal to or greater than 1) by the Computational Cores 3, a frame ‘i’ is injected into all the Cores 3. Then, Cores 3 generate two binary response vectors: {right arrow over (S)} which is a Signature vector, and {right arrow over (RS)} which is a Robust Signature vector.


For generation of signatures robust to additive noise, such as White-Gaussian-Noise, scratch, etc., but not robust to distortions, such as crop, shift and rotation, etc., a core C_i = {n_i} (1 ≤ i ≤ L) may consist of a single leaky integrate-to-threshold unit (LTU) node or more nodes. The node n_i equations are:







V_i = Σ_j w_ij k_j

n_i = θ(V_i − Th_x)







where θ is a Heaviside step function; w_ij is a coupling node unit (CNU) between node i and image component j; k_j is an image component j (for example, the grayscale value of a certain pixel j); Th_x is a constant threshold value, where x is ‘S’ for Signature and ‘RS’ for Robust Signature; and V_i is a coupling node value.
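A small numeric sketch of the node equations (weights, components, and thresholds are assumed values, not taken from the patent):

```python
def node_outputs(weights, components, th_s, th_rs):
    """For each node i, compute V_i = sum_j w_ij * k_j, then apply the
    Heaviside step at the Signature threshold Th_S and at the Robust
    Signature threshold Th_RS to obtain the S and RS bit vectors."""
    s_bits, rs_bits = [], []
    for w_i in weights:                      # one weight row per node i
        v_i = sum(w * k for w, k in zip(w_i, components))
        s_bits.append(1 if v_i > th_s else 0)
        rs_bits.append(1 if v_i > th_rs else 0)
    return s_bits, rs_bits

# Two nodes over three grayscale components: V = [15.0, 20.0].
s, rs = node_outputs([[0.2, 0.5, 0.1], [0.4, 0.2, 0.4]],
                     [10, 20, 30], th_s=12.0, th_rs=16.0)
print(s, rs)  # [1, 1] [0, 1]
```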


The threshold values Th_x are set differently for Signature generation and for Robust Signature generation. For example, for a certain distribution of values (for the set of nodes), the thresholds for Signature (Th_S) and Robust Signature (Th_RS) are set apart, after optimization, according to at least one or more of the following criteria (a numeric illustration follows the list):

    • 1: For: V_i > Th_RS
      1 − p(V > Th_S) = 1 − (1 − ε)^l ≪ 1


i.e., given that l nodes (cores) constitute a Robust Signature of a certain image I, the probability that not all of these l nodes will belong to the Signature of the same, but noisy, image Ĩ is sufficiently low (according to a system's specified accuracy).

    • 2: p(V_i > Th_RS) ≈ l/L


i.e., approximately l out of the total L nodes can be found to generate a Robust Signature according to the above definition.

    • 3: Both Robust Signature and Signature are generated for certain frame i.
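To make criteria 1 and 2 concrete, a small numeric check under assumed values (ε, l, and L here are illustrative; the patent derives the actual settings by optimization):

```python
# Assumed values for illustration only.
epsilon = 0.001   # per-node probability of dropping out under noise
l, L = 20, 1000   # robust-signature nodes out of L total cores

p_lose_any = 1 - (1 - epsilon) ** l   # criterion 1: should be << 1
print(round(p_lose_any, 4))           # 0.0198
print(l / L)                          # criterion 2: p(V_i > Th_RS) ~ 0.02
```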


It should be understood that the generation of a signature is unidirectional and typically yields lossy compression: the characteristics of the compressed data are maintained, but the uncompressed data cannot be reconstructed. Therefore, a signature can be used for the purpose of comparison to another signature without the need of comparison to the original data. The detailed description of the Signature generation can be found in U.S. Pat. Nos. 8,326,775 and 8,312,031, assigned to the common assignee, which are hereby incorporated by reference for all the useful information they contain.


A Computational Core generation is a process of definition, selection, and tuning of the parameters of the cores for a certain realization in a specific system and application. The process is based on several design considerations, such as:

    • (a) The Cores should be designed so as to obtain maximal independence, i.e., the projection from a signal space should generate a maximal pair-wise distance between any two cores' projections into a high-dimensional space.
    • (b) The Cores should be optimally designed for the type of signals, i.e., the Cores should be maximally sensitive to the spatio-temporal structure of the injected signal, for example, and in particular, sensitive to local correlations in time and space. Thus, in some cases a core represents a dynamic system, such as in state space, phase space, edge of chaos, etc., which is uniquely used herein to exploit their maximal computational power.
    • (c) The Cores should be optimally designed with regard to invariance to a set of signal distortions, of interest in relevant applications.


Detailed description of the Computational Core generation and the process for configuring such cores is discussed in more detail in the U.S. Pat. No. 8,655,801 referenced above.


The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.


All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosed embodiment and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosed embodiments, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Claims
  • 1. A method for removing contextually identical multimedia content elements, comprising: analyzing a plurality of multimedia content elements to identify at least two contextually identical multimedia content elements of the plurality of multimedia content elements, wherein the contextually identical multimedia content elements are contextually identical; selecting at least one of the at least two contextually identical multimedia content elements, to provide at least one selected multimedia content element; wherein the selecting is based on at least one out of (a) a texture of the at least two contextually identical multimedia content elements, (b) a combination of multiple features of the at least two contextually identical multimedia content elements, (c) a match between each one of the at least two contextually identical multimedia content elements and another multimedia content element that is a most popular multimedia element in a social network, or (d) a match between each one of the at least two contextually identical multimedia content elements and another multimedia content element liked by a particular user; and wherein when the at least two contextually identical multimedia content elements comprise, in addition to the at least one selected multimedia content element, one or more non-selected multimedia content elements, then automatically removing, from a storage, the one or more non-selected multimedia content elements; wherein the analysis is based on at least one of: contextual insights of the plurality of multimedia content elements, and concepts associated with the plurality of multimedia content elements; wherein analyzing the plurality of multimedia content elements further comprises: causing generation of at least one signature for each of the plurality of multimedia content elements; and matching between signatures of the plurality of multimedia content elements, wherein the at least two contextually identical multimedia content elements are identified based on the signature matching.
  • 2. The method of claim 1, wherein the multiple features comprise resolution, focus and clarity.
  • 3. The method of claim 1, wherein the multiple features comprise resolution and frame.
  • 4. The method of claim 1, wherein at least two multimedia content elements are contextually identical when signatures of the at least two multimedia content elements match above a predetermined threshold.
  • 5. The method of claim 1, comprising generating, by multiple computational cores of a signature generating system, each signature of the at least one signature for each of the plurality of multimedia content elements, wherein each computational core has properties that are at least partly statistically independent of the other computational cores, wherein the properties of each computational core are set independently of each other core.
  • 6. The method of claim 1, wherein the analyzing of the plurality of multimedia content elements further comprises: generating, based on metadata associated with each multimedia content element, at least one contextual insight, wherein the analysis is based on the generated at least one contextual insight.
  • 7. The method of claim 1, wherein the analyzing of the plurality of multimedia content elements further comprises: causing generation of at least one signature for each of the plurality of multimedia content elements; determining, based on the generated signatures, at least one concept for each multimedia content element, wherein the analysis is based on the generated concepts, wherein each concept is a collection of signatures and metadata representing the concept.
  • 8. The method according to claim 7, wherein at least one concept is a signature reduced concept that has undergone a process of reducing at least one signature from the concept.
  • 9. The method of claim 1, further comprising: preprocessing the plurality of multimedia content elements to identify a plurality of potentially contextually identical multimedia content elements, wherein the at least two contextually identical multimedia content elements are identified from among the plurality of potentially contextually identical multimedia content elements.
  • 10. The method according to claim 1 wherein the at least two contextually identical multimedia content elements are images and wherein the selecting of the at least one selected multimedia content element is based on a focus of each one of the at least two contextually identical multimedia content elements and on the texture of the at least two contextually identical multimedia content elements.
  • 11. The method according to claim 1 wherein the at least two contextually identical multimedia content elements are images; and wherein the selecting of the at least one selected multimedia content element is based on (i) the combination of the multiple features of the at least two contextually identical multimedia content elements, the multiple features comprise a resolution of the at least two contextually identical multimedia content elements; and is also based on (ii) the match between each one of the at least two contextually identical multimedia content elements and the other multimedia content element that is the most popular multimedia element in the social network.
  • 12. The method according to claim 1 wherein the selecting of the at least one selected multimedia content element is based on (i) the combination of the multiple features of the at least two contextually identical multimedia content elements, the multiple features comprise the clarity of each one of the at least two contextually identical multimedia content elements.
  • 13. The method according to claim 1, wherein the selecting of the at least one selected multimedia content element is based on the match between each one of the at least two contextually identical multimedia content elements and the other multimedia content element that is the most popular multimedia content element in the social network.
  • 14. The method according to claim 1 wherein the selecting of the at least one selected multimedia content element is based on the match between each one of the at least two contextually identical multimedia content elements and the other multimedia content element liked by the particular user.
  • 15. The method according to claim 1 wherein the analyzing is based on timing difference between acquisitions of the plurality of multimedia content elements.
  • 16. The method according to claim 1 wherein the analyzing is based on locations of acquisition of the plurality of multimedia content elements.
  • 17. The method according to claim 1, wherein the analyzing is based on devices that acquired the plurality of multimedia content elements.
  • 18. A system for removing contextually identical multimedia content elements, comprising: a processing circuitry; and a memory, the memory containing instructions that, when executed by the processing circuitry, configure the system to: analyze a plurality of multimedia content elements to identify at least two contextually identical multimedia content elements of the plurality of multimedia content elements, wherein the contextually identical multimedia content elements are contextually identical; select at least one of the at least two contextually identical multimedia content elements, to provide at least one selected multimedia content element; wherein the selecting is based on at least one out of (a) a texture of the at least two contextually identical multimedia content elements, (b) a combination of multiple features of the at least two contextually identical multimedia content elements, (c) a match between each one of the at least two contextually identical multimedia content elements and another multimedia content element that is a most popular multimedia element in a social network, or (d) a match between each one of the at least two contextually identical multimedia content elements and another multimedia content element liked by a particular user; and wherein when the at least two contextually identical multimedia content elements comprise, in addition to the at least one selected multimedia content element, one or more non-selected multimedia content elements, then automatically remove, from a storage, the one or more non-selected multimedia content elements; wherein the analysis is based on at least one of: contextual insights of the plurality of multimedia content elements, and concepts associated with the plurality of multimedia content elements; wherein analyzing the plurality of multimedia content elements further comprises: causing generation of at least one signature for each of the plurality of multimedia content elements; and matching between signatures of the plurality of multimedia content elements, wherein the at least two contextually identical multimedia content elements are identified based on the signature matching.
  • 19. The system of claim 18, wherein the multiple features comprise resolution, focus and clarity.
  • 20. The system of claim 18, wherein the multiple features comprise resolution and frame.
  • 21. The system of claim 18, wherein at least two multimedia content elements are contextually identical when signatures of the at least two multimedia content elements match above a predetermined threshold.
  • 22. The system of claim 18, further comprising a signature generator that comprises a plurality of computational cores that are configured to generate each signature of the at least one signature for each of the plurality of multimedia content elements, wherein each computational core has properties that are at least partly statistically independent of the other computational cores, wherein the properties of each computational core are set independently of each other core.
  • 23. The system of claim 18, wherein the system is further configured to: generate, based on metadata associated with each multimedia content element, at least one contextual insight, wherein the analysis is based on the generated at least one contextual insight.
  • 24. The system of claim 18, wherein the system is further configured to: cause generation of at least one signature for each of the plurality of multimedia content elements; determine, based on the generated signatures, at least one concept for each multimedia content element, wherein the analysis is based on the generated concepts, wherein each concept is a collection of signatures and metadata representing the concept.
  • 25. The system of claim 18, wherein the system is further configured to: preprocess the plurality of multimedia content elements to identify a plurality of potentially contextually identical multimedia content elements, wherein the at least two contextually identical multimedia content elements are identified from among the plurality of potentially contextually identical multimedia content elements.
  • 26. A non-transitory computer readable medium having stored thereon instructions for causing one or more processing units to execute a method, the method comprising: analyzing a plurality of multimedia content elements to identify at least two contextually identical multimedia content elements of the plurality of multimedia content elements; selecting at least one of the at least two contextually identical multimedia content elements to provide at least one selected multimedia content element, wherein the selecting is based on at least one of: (a) a texture of the at least two contextually identical multimedia content elements; (b) a combination of multiple features of the at least two contextually identical multimedia content elements; (c) a match between each one of the at least two contextually identical multimedia content elements and another multimedia content element that is a most popular multimedia element in a social network; or (d) a match between each one of the at least two contextually identical multimedia content elements and another multimedia content element liked by a particular user; and wherein, when the at least two contextually identical multimedia content elements comprise, in addition to the at least one selected multimedia content element, one or more non-selected multimedia content elements, automatically removing, from a storage, the one or more non-selected multimedia content elements; wherein the analysis is based on at least one of: contextual insights of the plurality of multimedia content elements, and concepts associated with the plurality of multimedia content elements; wherein analyzing the plurality of multimedia content elements further comprises: causing generation of at least one signature for each of the plurality of multimedia content elements; and matching between signatures of the plurality of multimedia content elements, wherein the at least two contextually identical multimedia content elements are identified based on the signature matching.
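A minimal illustrative sketch of the pipeline recited in claims 18-26 may help: signatures are generated per element by a plurality of independently seeded computational cores (claim 22), two elements are treated as contextually identical when their signatures match above a predetermined threshold (claim 21), one optimal element is selected per group by a combination of features such as resolution, focus, and clarity (claims 19 and 20), and the non-selected elements are automatically removed from storage (claims 18 and 26). The Python below is a hypothetical sketch under those assumptions, not the patented implementation; the random-projection "cores", the unweighted feature score, and every name in it (MediaElement, MATCH_THRESHOLD, remove_contextual_duplicates, and so on) are invented for this example.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical constants -- not taken from the patent.
CORE_COUNT = 32        # number of computational cores (cf. claim 22)
BITS_PER_CORE = 8      # sub-signature width produced by each core
FEATURE_DIM = 64       # dimension of the per-element content descriptor
MATCH_THRESHOLD = 0.9  # the "predetermined threshold" of claim 21

# Each core is seeded independently, so its projection is at least partly
# statistically independent of the other cores (loosely mirroring claim 22).
CORES = [np.random.default_rng(seed).standard_normal((BITS_PER_CORE, FEATURE_DIM))
         for seed in range(CORE_COUNT)]

@dataclass
class MediaElement:
    element_id: str
    descriptor: np.ndarray  # content features assumed extracted upstream
    resolution: float       # selection features of claims 19-20
    focus: float
    clarity: float

def generate_signature(element: MediaElement) -> np.ndarray:
    """Concatenate one sub-signature per core: the sign pattern of a
    random projection of the element's descriptor."""
    parts = [core @ element.descriptor > 0 for core in CORES]
    return np.concatenate(parts)

def contextually_identical(sig_a: np.ndarray, sig_b: np.ndarray) -> bool:
    """Claim 21: identical when signature overlap exceeds the threshold."""
    return float(np.mean(sig_a == sig_b)) >= MATCH_THRESHOLD

def select_optimal(group: list) -> MediaElement:
    """Claims 19-20: choose the element with the best combination of
    features (a naive unweighted sum, purely for illustration)."""
    return max(group, key=lambda e: e.resolution + e.focus + e.clarity)

def remove_contextual_duplicates(storage: dict) -> None:
    """Claims 18/26: group contextually identical elements, keep one
    optimal element per group, and delete the non-selected elements."""
    elements = list(storage.values())
    signatures = {e.element_id: generate_signature(e) for e in elements}
    grouped: set = set()
    for anchor in elements:
        if anchor.element_id in grouped:
            continue
        group = [anchor] + [
            other for other in elements
            if other.element_id != anchor.element_id
            and other.element_id not in grouped
            and contextually_identical(signatures[anchor.element_id],
                                       signatures[other.element_id])
        ]
        grouped.update(e.element_id for e in group)
        keep = select_optimal(group)
        for e in group:
            if e.element_id != keep.element_id:
                del storage[e.element_id]  # automatic removal of non-selected
```

As a usage sketch, loading storage with several elements whose descriptors nearly coincide (signature overlap at or above the 0.9 threshold) plus one unrelated element, and then calling remove_contextual_duplicates(storage), would leave the unrelated element and the single highest-scoring member of the near-duplicate group; a real system would derive the descriptor from the media itself and tune the threshold and feature weighting empirically.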
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application No. 62/310,742 filed on Mar. 20, 2016. This application is a continuation-in-part of U.S. patent application Ser. No. 14/643,694 filed on Mar. 10, 2015, now pending, which is a continuation of U.S. patent application Ser. No. 13/766,463 filed on Feb. 13, 2013, now U.S. Pat. No. 9,031,999. The Ser. No. 13/766,463 application is a continuation-in-part of U.S. patent application Ser. No. 13/602,858 filed on Sep. 4, 2012, now U.S. Pat. No. 8,868,619. The Ser. No. 13/602,858 application is a continuation of U.S. patent application Ser. No. 12/603,123 filed on Oct. 21, 2009, now U.S. Pat. No. 8,266,185. The Ser. No. 12/603,123 application is a continuation-in-part of: (1) U.S. patent application Ser. No. 12/084,150 having a filing date of Apr. 7, 2009, now U.S. Pat. No. 8,655,801, which is the National Stage of International Application No. PCT/IL2006/001235 filed on Oct. 26, 2006, which claims foreign priority from Israeli Application No. 171577 filed on Oct. 26, 2005, and Israeli Application No. 173409 filed on Jan. 29, 2006; (2) U.S. patent application Ser. No. 12/195,863 filed on Aug. 21, 2008, now U.S. Pat. No. 8,326,775, which claims priority under 35 USC 119 from Israeli Application No. 185414, filed on Aug. 21, 2007, and which is also a continuation-in-part of the above-referenced U.S. patent application Ser. No. 12/084,150; (3) U.S. patent application Ser. No. 12/348,888 filed on Jan. 5, 2009, now pending, which is a continuation-in-part of the above-referenced U.S. patent application Ser. No. 12/084,150 and the above-referenced U.S. patent application Ser. No. 12/195,863; and (4) U.S. patent application Ser. No. 12/538,495 filed on Aug. 10, 2009, now U.S. Pat. No. 8,312,031, which is a continuation-in-part of the above-referenced U.S. patent application Ser. No. 12/084,150, the above-referenced U.S. patent application Ser. No. 12/195,863, and the above-referenced U.S. patent application Ser. No. 12/348,888. All of the applications referenced above are hereby incorporated by reference.

US Referenced Citations (564)
Number Name Date Kind
4733353 Jaswa Mar 1988 A
4932645 Schorey et al. Jun 1990 A
4972363 Nguyen et al. Nov 1990 A
5214746 Fogel et al. May 1993 A
5307451 Clark Apr 1994 A
5412564 Ecer May 1995 A
5436653 Ellis et al. Jul 1995 A
5568181 Greenwood et al. Oct 1996 A
5638425 Meador et al. Jun 1997 A
5745678 Herzberg et al. Apr 1998 A
5763069 Jordan Jun 1998 A
5806061 Chaudhuri et al. Sep 1998 A
5835901 Duvoisin et al. Nov 1998 A
5852435 Vigneaux et al. Dec 1998 A
5870754 Dimitrova et al. Feb 1999 A
5873080 Coden et al. Feb 1999 A
5887193 Takahashi et al. Mar 1999 A
5940821 Wical Aug 1999 A
5978754 Kumano Nov 1999 A
5987454 Hobbs Nov 1999 A
5991306 Burns et al. Nov 1999 A
6038560 Wical Mar 2000 A
6052481 Grajski et al. Apr 2000 A
6070167 Qian et al. May 2000 A
6076088 Paik et al. Jun 2000 A
6122628 Castelli et al. Sep 2000 A
6128651 Cezar Oct 2000 A
6137911 Zhilyaev Oct 2000 A
6144767 Bottou et al. Nov 2000 A
6147636 Gershenson Nov 2000 A
6163510 Lee et al. Dec 2000 A
6240423 Hirata May 2001 B1
6243375 Speicher Jun 2001 B1
6243713 Nelson et al. Jun 2001 B1
6275599 Adler et al. Aug 2001 B1
6329986 Cheng Dec 2001 B1
6363373 Steinkraus Mar 2002 B1
6381656 Shankman Apr 2002 B1
6411229 Kobayashi Jun 2002 B2
6422617 Fukumoto et al. Jul 2002 B1
6493692 Kobayashi et al. Dec 2002 B1
6493705 Kobayashi et al. Dec 2002 B1
6507672 Watkins et al. Jan 2003 B1
6523022 Hobbs Feb 2003 B1
6523046 Liu et al. Feb 2003 B2
6524861 Anderson Feb 2003 B1
6526400 Takata et al. Feb 2003 B1
6550018 Abonamah et al. Apr 2003 B1
6557042 He et al. Apr 2003 B1
6560597 Dhillon et al. May 2003 B1
6594699 Sahai et al. Jul 2003 B1
6601026 Appelt et al. Jul 2003 B2
6601060 Tomaru Jul 2003 B1
6611628 Sekiguchi et al. Aug 2003 B1
6611837 Schreiber Aug 2003 B2
6618711 Ananth Sep 2003 B1
6640015 Lafruit Oct 2003 B1
6643620 Contolini et al. Nov 2003 B1
6643643 Lee et al. Nov 2003 B1
6665657 Dibachi Dec 2003 B1
6675159 Lin et al. Jan 2004 B1
6681032 Bortolussi et al. Jan 2004 B2
6704725 Lee Mar 2004 B1
6728706 Aggarwal et al. Apr 2004 B2
6732149 Kephart May 2004 B1
6742094 Igari May 2004 B2
6751363 Natsev et al. Jun 2004 B1
6751613 Lee et al. Jun 2004 B1
6754435 Kim Jun 2004 B2
6763069 Divakaran et al. Jul 2004 B1
6763519 McColl et al. Jul 2004 B1
6774917 Foote et al. Aug 2004 B1
6795818 Lee Sep 2004 B1
6804356 Krishnamachari Oct 2004 B1
6813395 Kinjo Nov 2004 B1
6819797 Smith et al. Nov 2004 B1
6836776 Schreiber Dec 2004 B2
6845374 Oliver et al. Jan 2005 B1
6877134 Fuller et al. Apr 2005 B1
6901207 Watkins May 2005 B1
6938025 Lulich et al. Aug 2005 B1
6961463 Loui Nov 2005 B1
6963975 Weare Nov 2005 B1
6970881 Mohan et al. Nov 2005 B1
6978264 Chandrasekar et al. Dec 2005 B2
6985172 Rigney et al. Jan 2006 B1
7006689 Kasutani Feb 2006 B2
7013051 Sekiguchi et al. Mar 2006 B2
7020654 Najmi Mar 2006 B1
7023979 Wu et al. Apr 2006 B1
7043473 Rassool et al. May 2006 B1
7124149 Smith et al. Oct 2006 B2
7158681 Persiantsev Jan 2007 B2
7199798 Echigo et al. Apr 2007 B1
7215828 Luo May 2007 B2
7260564 Lynn et al. Aug 2007 B1
7277928 Lennon Oct 2007 B2
7296012 Ohashi Nov 2007 B2
7299261 Oliver et al. Nov 2007 B1
7302117 Sekiguchi et al. Nov 2007 B2
7313805 Rosin et al. Dec 2007 B1
7340358 Yoneyama Mar 2008 B2
7346629 Kapur et al. Mar 2008 B2
7353224 Chen et al. Apr 2008 B2
7376672 Weare May 2008 B2
7392238 Zhou et al. Jun 2008 B1
7406459 Chen et al. Jul 2008 B2
7433895 Li et al. Oct 2008 B2
7450740 Shah et al. Nov 2008 B2
7464086 Black et al. Dec 2008 B2
7523102 Bjarnestam et al. Apr 2009 B2
7526607 Singh et al. Apr 2009 B1
7529659 Wold May 2009 B2
7536384 Venkataraman et al. May 2009 B2
7542969 Rappaport et al. Jun 2009 B1
7548910 Chu et al. Jun 2009 B1
7555477 Bayley et al. Jun 2009 B2
7555478 Bayley et al. Jun 2009 B2
7562076 Kapur Jul 2009 B2
7574436 Kapur et al. Aug 2009 B2
7574668 Nunez et al. Aug 2009 B2
7577656 Kawai et al. Aug 2009 B2
7657100 Gokturk et al. Feb 2010 B2
7660468 Gokturk et al. Feb 2010 B2
7694318 Eldering et al. Apr 2010 B2
7801893 Gulli Sep 2010 B2
7836054 Kawai et al. Nov 2010 B2
7920894 Wyler Apr 2011 B2
7921107 Chang et al. Apr 2011 B2
7933407 Keidar et al. Apr 2011 B2
7974994 Li et al. Jul 2011 B2
7987194 Walker et al. Jul 2011 B1
7987217 Long et al. Jul 2011 B2
7991715 Schiff et al. Aug 2011 B2
8000655 Wang et al. Aug 2011 B2
8023739 Hohimer et al. Sep 2011 B2
8036893 Reich Oct 2011 B2
8098934 Vincent et al. Jan 2012 B2
8112376 Raichelgauz et al. Feb 2012 B2
8266185 Raichelgauz et al. Sep 2012 B2
8275764 Jeon Sep 2012 B2
8312031 Raichelgauz et al. Nov 2012 B2
8315442 Gokturk et al. Nov 2012 B2
8316005 Moore Nov 2012 B2
8326775 Raichelgauz et al. Dec 2012 B2
8345982 Gokturk et al. Jan 2013 B2
RE44225 Aviv May 2013 E
8457827 Ferguson et al. Jun 2013 B1
8495489 Everingham Jul 2013 B1
8527978 Sallam Sep 2013 B1
8548828 Longmire Oct 2013 B1
8634980 Urmson Jan 2014 B1
8635531 Graham et al. Jan 2014 B2
8655801 Raichelgauz et al. Feb 2014 B2
8655878 Kulkarni et al. Feb 2014 B1
8677377 Cheyer et al. Mar 2014 B2
8682667 Haughay Mar 2014 B2
8688446 Yanagihara Apr 2014 B2
8706503 Cheyer et al. Apr 2014 B2
8775442 Moore et al. Jul 2014 B2
8781152 Momeyer Jul 2014 B2
8782077 Rowley Jul 2014 B1
8799195 Raichelgauz et al. Aug 2014 B2
8799196 Raichelgauz et al. Aug 2014 B2
8818916 Raichelgauz et al. Aug 2014 B2
8868619 Raichelgauz et al. Oct 2014 B2
8868861 Shimizu et al. Oct 2014 B2
8880539 Raichelgauz et al. Nov 2014 B2
8880566 Raichelgauz et al. Nov 2014 B2
8886648 Procopio et al. Nov 2014 B1
8898568 Bull et al. Nov 2014 B2
8922414 Raichelgauz et al. Dec 2014 B2
8923551 Grosz Dec 2014 B1
8959037 Raichelgauz et al. Feb 2015 B2
8990125 Raichelgauz et al. Mar 2015 B2
8990199 Ramesh et al. Mar 2015 B1
9009086 Raichelgauz et al. Apr 2015 B2
9031999 Raichelgauz et al. May 2015 B2
9087049 Raichelgauz et al. Jul 2015 B2
9104747 Raichelgauz et al. Aug 2015 B2
9165406 Gray et al. Oct 2015 B1
9191626 Raichelgauz et al. Nov 2015 B2
9197244 Raichelgauz et al. Nov 2015 B2
9218606 Raichelgauz et al. Dec 2015 B2
9235557 Raichelgauz et al. Jan 2016 B2
9256668 Raichelgauz et al. Feb 2016 B2
9298763 Zack Mar 2016 B1
9323754 Ramanathan et al. Apr 2016 B2
9330189 Raichelgauz et al. May 2016 B2
9392324 Maltar Jul 2016 B1
9438270 Raichelgauz et al. Sep 2016 B2
9440647 Sucan Sep 2016 B1
9466068 Raichelgauz et al. Oct 2016 B2
9646006 Raichelgauz et al. May 2017 B2
9679062 Schillings et al. Jun 2017 B2
9734533 Givot Aug 2017 B1
9807442 Bhatia et al. Oct 2017 B2
9875445 Amer et al. Jan 2018 B2
9984369 Li et al. May 2018 B2
10133947 Yang Nov 2018 B2
10157291 Kenthapadi et al. Dec 2018 B1
10347122 Takenaka Jul 2019 B2
10491885 Hicks Nov 2019 B1
20010019633 Tenze et al. Sep 2001 A1
20010038876 Anderson Nov 2001 A1
20010056427 Yoon et al. Dec 2001 A1
20020010682 Johnson Jan 2002 A1
20020010715 Chinn et al. Jan 2002 A1
20020019881 Bokhari et al. Feb 2002 A1
20020032677 Morgenthaler et al. Mar 2002 A1
20020037010 Yamauchi Mar 2002 A1
20020038299 Zernik et al. Mar 2002 A1
20020042914 Walker et al. Apr 2002 A1
20020059580 Kalker et al. May 2002 A1
20020072935 Rowse et al. Jun 2002 A1
20020087530 Smith et al. Jul 2002 A1
20020087828 Arimilli et al. Jul 2002 A1
20020099870 Miller et al. Jul 2002 A1
20020103813 Frigon Aug 2002 A1
20020107827 Benitez-Jimenez et al. Aug 2002 A1
20020113812 Walker et al. Aug 2002 A1
20020123928 Eldering et al. Sep 2002 A1
20020126872 Brunk et al. Sep 2002 A1
20020129140 Peled et al. Sep 2002 A1
20020129296 Kwiat et al. Sep 2002 A1
20020143976 Barker et al. Oct 2002 A1
20020147637 Kraft et al. Oct 2002 A1
20020152087 Gonzalez Oct 2002 A1
20020152267 Lennon Oct 2002 A1
20020157116 Jasinschi Oct 2002 A1
20020159640 Vaithilingam et al. Oct 2002 A1
20020161739 Oh Oct 2002 A1
20020163532 Thomas et al. Nov 2002 A1
20020174095 Lulich et al. Nov 2002 A1
20020178410 Haitsma et al. Nov 2002 A1
20020184505 Mihcak et al. Dec 2002 A1
20030005432 Ellis et al. Jan 2003 A1
20030028660 Igawa et al. Feb 2003 A1
20030037010 Schmelzer Feb 2003 A1
20030041047 Chang et al. Feb 2003 A1
20030050815 Seigel et al. Mar 2003 A1
20030078766 Appelt et al. Apr 2003 A1
20030086627 Berriss et al. May 2003 A1
20030089216 Birmingham et al. May 2003 A1
20030093790 Logan et al. May 2003 A1
20030101150 Agnihotri May 2003 A1
20030105739 Essafi et al. Jun 2003 A1
20030115191 Copperman et al. Jun 2003 A1
20030126147 Essafi et al. Jul 2003 A1
20030182567 Barton et al. Sep 2003 A1
20030184598 Graham Oct 2003 A1
20030191764 Richards Oct 2003 A1
20030200217 Ackerman Oct 2003 A1
20030217335 Chung et al. Nov 2003 A1
20030229531 Heckerman et al. Dec 2003 A1
20040003394 Ramaswamy Jan 2004 A1
20040025180 Begeja et al. Feb 2004 A1
20040047461 Weisman Mar 2004 A1
20040059736 Willse Mar 2004 A1
20040068510 Hayes et al. Apr 2004 A1
20040091111 Levy May 2004 A1
20040095376 Graham et al. May 2004 A1
20040098671 Graham et al. May 2004 A1
20040107181 Rodden Jun 2004 A1
20040111432 Adams et al. Jun 2004 A1
20040111465 Chuang et al. Jun 2004 A1
20040117367 Smith et al. Jun 2004 A1
20040117638 Monroe Jun 2004 A1
20040119848 Buehler Jun 2004 A1
20040128142 Whitham Jul 2004 A1
20040128511 Sun et al. Jul 2004 A1
20040133927 Sternberg et al. Jul 2004 A1
20040153426 Nugent Aug 2004 A1
20040215663 Liu et al. Oct 2004 A1
20040230572 Omoigui Nov 2004 A1
20040249779 Nauck et al. Dec 2004 A1
20040260688 Gross Dec 2004 A1
20040267774 Lin et al. Dec 2004 A1
20050021394 Miedema et al. Jan 2005 A1
20050114198 Koningstein et al. May 2005 A1
20050131884 Gross et al. Jun 2005 A1
20050144455 Haitsma Jun 2005 A1
20050163375 Grady Jul 2005 A1
20050172130 Roberts Aug 2005 A1
20050177372 Wang et al. Aug 2005 A1
20050193015 Logston Sep 2005 A1
20050193093 Mathew et al. Sep 2005 A1
20050238198 Brown et al. Oct 2005 A1
20050238238 Xu et al. Oct 2005 A1
20050245241 Durand et al. Nov 2005 A1
20050249398 Khamene et al. Nov 2005 A1
20050256820 Dugan et al. Nov 2005 A1
20050262428 Little et al. Nov 2005 A1
20050281439 Lange Dec 2005 A1
20050289163 Gordon et al. Dec 2005 A1
20050289590 Cheok et al. Dec 2005 A1
20060004745 Kuhn et al. Jan 2006 A1
20060013451 Haitsma Jan 2006 A1
20060020860 Tardif et al. Jan 2006 A1
20060020958 Allamanche et al. Jan 2006 A1
20060026203 Tan et al. Feb 2006 A1
20060031216 Semple et al. Feb 2006 A1
20060033163 Chen Feb 2006 A1
20060041596 Stirbu et al. Feb 2006 A1
20060048191 Xiong Mar 2006 A1
20060064037 Shalon et al. Mar 2006 A1
20060082672 Peleg Apr 2006 A1
20060100987 Leurs May 2006 A1
20060112035 Cecchi et al. May 2006 A1
20060120626 Perlmutter Jun 2006 A1
20060129822 Snijder et al. Jun 2006 A1
20060143674 Jones et al. Jun 2006 A1
20060153296 Deng Jul 2006 A1
20060159442 Kim et al. Jul 2006 A1
20060173688 Whitham Aug 2006 A1
20060184638 Chua et al. Aug 2006 A1
20060204035 Guo et al. Sep 2006 A1
20060217818 Fujiwara Sep 2006 A1
20060217828 Hicken Sep 2006 A1
20060218191 Gopalakrishnan Sep 2006 A1
20060224529 Kermani Oct 2006 A1
20060236343 Chang Oct 2006 A1
20060242130 Sadri Oct 2006 A1
20060242139 Butterfield et al. Oct 2006 A1
20060242554 Gerace et al. Oct 2006 A1
20060247983 Dalli Nov 2006 A1
20060248558 Barton et al. Nov 2006 A1
20060251292 Gokturk Nov 2006 A1
20060251338 Gokturk Nov 2006 A1
20060251339 Gokturk Nov 2006 A1
20060253423 McLane et al. Nov 2006 A1
20060288002 Epstein et al. Dec 2006 A1
20070009159 Fan Jan 2007 A1
20070011151 Hagar et al. Jan 2007 A1
20070019864 Koyama et al. Jan 2007 A1
20070022374 Huang et al. Jan 2007 A1
20070033163 Epstein et al. Feb 2007 A1
20070038608 Chen Feb 2007 A1
20070038614 Guha Feb 2007 A1
20070042757 Jung et al. Feb 2007 A1
20070061302 Ramer et al. Mar 2007 A1
20070067304 Ives Mar 2007 A1
20070067682 Fang Mar 2007 A1
20070071330 Oostveen et al. Mar 2007 A1
20070074147 Wold Mar 2007 A1
20070083611 Farago et al. Apr 2007 A1
20070091106 Moroney Apr 2007 A1
20070130112 Lin Jun 2007 A1
20070130159 Gulli et al. Jun 2007 A1
20070156720 Maren Jul 2007 A1
20070168413 Barletta et al. Jul 2007 A1
20070174320 Chou Jul 2007 A1
20070195987 Rhoads Aug 2007 A1
20070196013 Li Aug 2007 A1
20070220573 Chiussi et al. Sep 2007 A1
20070244902 Seide et al. Oct 2007 A1
20070253594 Lu et al. Nov 2007 A1
20070255785 Hayashi et al. Nov 2007 A1
20070268309 Tanigawa et al. Nov 2007 A1
20070282826 Hoeber et al. Dec 2007 A1
20070294295 Finkelstein et al. Dec 2007 A1
20070298152 Baets Dec 2007 A1
20080046406 Seide et al. Feb 2008 A1
20080049629 Morrill Feb 2008 A1
20080049789 Vedantham et al. Feb 2008 A1
20080072256 Boicey et al. Mar 2008 A1
20080079729 Brailovsky Apr 2008 A1
20080091527 Silverbrook et al. Apr 2008 A1
20080109433 Rose May 2008 A1
20080152231 Gokturk Jun 2008 A1
20080159622 Agnihotri et al. Jul 2008 A1
20080163288 Ghosal et al. Jul 2008 A1
20080165861 Wen et al. Jul 2008 A1
20080166020 Kosaka Jul 2008 A1
20080172615 Igelman et al. Jul 2008 A1
20080189609 Larson Aug 2008 A1
20080201299 Lehikoinen et al. Aug 2008 A1
20080201314 Smith et al. Aug 2008 A1
20080201361 Castro et al. Aug 2008 A1
20080204706 Magne et al. Aug 2008 A1
20080228995 Tan et al. Sep 2008 A1
20080237359 Silverbrook et al. Oct 2008 A1
20080253737 Kimura et al. Oct 2008 A1
20080263579 Mears et al. Oct 2008 A1
20080270373 Oostveen et al. Oct 2008 A1
20080270569 McBride Oct 2008 A1
20080294278 Borgeson et al. Nov 2008 A1
20080307454 Ahanger et al. Dec 2008 A1
20080313140 Pereira et al. Dec 2008 A1
20090013414 Washington et al. Jan 2009 A1
20090022472 Bronstein Jan 2009 A1
20090024641 Quigley et al. Jan 2009 A1
20090034791 Doretto Feb 2009 A1
20090037408 Rodgers Feb 2009 A1
20090043637 Eder Feb 2009 A1
20090043818 Raichelgauz Feb 2009 A1
20090080759 Bhaskar Mar 2009 A1
20090089587 Brunk et al. Apr 2009 A1
20090119157 Dulepet May 2009 A1
20090125544 Brindley May 2009 A1
20090148045 Lee et al. Jun 2009 A1
20090157575 Schobben et al. Jun 2009 A1
20090172030 Schiff et al. Jul 2009 A1
20090175538 Bronstein et al. Jul 2009 A1
20090208106 Dunlop et al. Aug 2009 A1
20090208118 Csurka Aug 2009 A1
20090216761 Raichelgauz Aug 2009 A1
20090220138 Zhang et al. Sep 2009 A1
20090245573 Saptharishi et al. Oct 2009 A1
20090245603 Koruga et al. Oct 2009 A1
20090253583 Yoganathan Oct 2009 A1
20090254572 Redlich et al. Oct 2009 A1
20090277322 Cai et al. Nov 2009 A1
20090278934 Ecker Nov 2009 A1
20090282218 Raichelgauz et al. Nov 2009 A1
20090297048 Slotine et al. Dec 2009 A1
20100042646 Raichelgauz Feb 2010 A1
20100082684 Churchill Apr 2010 A1
20100104184 Bronstein et al. Apr 2010 A1
20100111408 Matsuhira May 2010 A1
20100125569 Nair et al. May 2010 A1
20100162405 Cook et al. Jun 2010 A1
20100173269 Puri et al. Jul 2010 A1
20100198626 Cho et al. Aug 2010 A1
20100212015 Jin et al. Aug 2010 A1
20100268524 Nath et al. Oct 2010 A1
20100284604 Chrysanthakopoulos Nov 2010 A1
20100306193 Pereira Dec 2010 A1
20100312736 Kello Dec 2010 A1
20100318493 Wessling Dec 2010 A1
20100322522 Wang et al. Dec 2010 A1
20100325138 Lee et al. Dec 2010 A1
20100325581 Finkelstein et al. Dec 2010 A1
20110029620 Bonforte Feb 2011 A1
20110038545 Bober Feb 2011 A1
20110052063 McAuley et al. Mar 2011 A1
20110055585 Lee Mar 2011 A1
20110145068 King et al. Jun 2011 A1
20110164180 Lee Jul 2011 A1
20110164810 Zang et al. Jul 2011 A1
20110202848 Ismalon Aug 2011 A1
20110208744 Chandiramani Aug 2011 A1
20110218946 Stern et al. Sep 2011 A1
20110246566 Kashef Oct 2011 A1
20110251896 Impollonia et al. Oct 2011 A1
20110276680 Rimon Nov 2011 A1
20110296315 Lin et al. Dec 2011 A1
20110313856 Cohen et al. Dec 2011 A1
20120041969 Priyadarshan et al. Feb 2012 A1
20120082362 Diem et al. Apr 2012 A1
20120131454 Shah May 2012 A1
20120133497 Sasaki May 2012 A1
20120150890 Jeong et al. Jun 2012 A1
20120167133 Carroll et al. Jun 2012 A1
20120179642 Sweeney et al. Jul 2012 A1
20120179751 Ahn Jul 2012 A1
20120185445 Borden et al. Jul 2012 A1
20120197857 Huang et al. Aug 2012 A1
20120221470 Lyon Aug 2012 A1
20120227074 Hill et al. Sep 2012 A1
20120239690 Asikainen et al. Sep 2012 A1
20120239694 Avner et al. Sep 2012 A1
20120294514 Saunders Nov 2012 A1
20120299961 Ramkumar et al. Nov 2012 A1
20120301105 Rehg et al. Nov 2012 A1
20120330869 Durham Dec 2012 A1
20120331011 Raichelgauz et al. Dec 2012 A1
20130031489 Gubin et al. Jan 2013 A1
20130066856 Ong et al. Mar 2013 A1
20130067035 Amanat et al. Mar 2013 A1
20130067364 Berntson et al. Mar 2013 A1
20130086499 Dyor et al. Apr 2013 A1
20130089248 Remiszewski et al. Apr 2013 A1
20130103814 Carrasco Apr 2013 A1
20130104251 Moore et al. Apr 2013 A1
20130137464 Kramer May 2013 A1
20130159298 Mason et al. Jun 2013 A1
20130173635 Sanjeev Jul 2013 A1
20130212493 Krishnamurthy Aug 2013 A1
20130226820 Sedota, Jr. Aug 2013 A1
20130226930 Arngren et al. Aug 2013 A1
20130273968 Rhoads Oct 2013 A1
20130283401 Pabla et al. Oct 2013 A1
20130325550 Varghese et al. Dec 2013 A1
20130332951 Gharaat et al. Dec 2013 A1
20140019264 Wachman et al. Jan 2014 A1
20140025692 Pappas Jan 2014 A1
20140059443 Tabe Feb 2014 A1
20140095425 Sipple Apr 2014 A1
20140111647 Atsmon Apr 2014 A1
20140147829 Jerauld May 2014 A1
20140152698 Kim et al. Jun 2014 A1
20140169681 Drake Jun 2014 A1
20140176604 Venkitaraman et al. Jun 2014 A1
20140188786 Raichelgauz et al. Jul 2014 A1
20140193077 Shiiyama et al. Jul 2014 A1
20140198986 Marchesotti Jul 2014 A1
20140201330 Lozano Lopez Jul 2014 A1
20140250032 Huang et al. Sep 2014 A1
20140282655 Roberts Sep 2014 A1
20140300722 Garcia Oct 2014 A1
20140310825 Raichelgauz et al. Oct 2014 A1
20140317480 Chau Oct 2014 A1
20140330830 Raichelgauz et al. Nov 2014 A1
20140341476 Kulick et al. Nov 2014 A1
20140379477 Sheinfeld Dec 2014 A1
20150033150 Lee Jan 2015 A1
20150100562 Kohlmeier et al. Apr 2015 A1
20150117784 Lin Apr 2015 A1
20150120627 Hunzinger et al. Apr 2015 A1
20150134688 Jing May 2015 A1
20150254344 Kulkarni et al. Sep 2015 A1
20150286742 Zhang et al. Oct 2015 A1
20150289022 Gross Oct 2015 A1
20150324356 Gutierrez et al. Nov 2015 A1
20150363644 Wnuk Dec 2015 A1
20160007083 Gurha Jan 2016 A1
20160026707 Ong et al. Jan 2016 A1
20160210525 Yang Jul 2016 A1
20160221592 Puttagunta Aug 2016 A1
20160283483 Jiang Sep 2016 A1
20160306798 Guo et al. Oct 2016 A1
20160342683 Lim et al. Nov 2016 A1
20160357188 Ansari Dec 2016 A1
20170017638 Satyavarta et al. Jan 2017 A1
20170032257 Sharifi Feb 2017 A1
20170041254 Agara Venkatesha Rao Feb 2017 A1
20170109602 Kim Apr 2017 A1
20170154241 Shambik et al. Jun 2017 A1
20170255620 Raichelgauz Sep 2017 A1
20170262437 Raichelgauz Sep 2017 A1
20170323568 Inoue Nov 2017 A1
20180081368 Watanabe Mar 2018 A1
20180101177 Cohen Apr 2018 A1
20180157916 Doumbouya Jun 2018 A1
20180158323 Takenaka Jun 2018 A1
20180204111 Zadeh Jul 2018 A1
20190005726 Nakano Jan 2019 A1
20190039627 Yamamoto Feb 2019 A1
20190043274 Hayakawa Feb 2019 A1
20190045244 Balakrishnan Feb 2019 A1
20190056718 Satou Feb 2019 A1
20190065951 Luo Feb 2019 A1
20190188501 Ryu Jun 2019 A1
20190220011 Della Penna Jul 2019 A1
20190317513 Zhang Oct 2019 A1
20190364492 Azizi Nov 2019 A1
20190384303 Muller Dec 2019 A1
20190384312 Herbach Dec 2019 A1
20190385460 Magzimof Dec 2019 A1
20190389459 Berntorp Dec 2019 A1
20200004248 Healey Jan 2020 A1
20200004251 Zhu Jan 2020 A1
20200004265 Zhu Jan 2020 A1
20200005631 Visintainer Jan 2020 A1
20200018606 Wolcott Jan 2020 A1
20200018618 Ozog Jan 2020 A1
20200020212 Song Jan 2020 A1
20200050973 Stenneth Feb 2020 A1
20200073977 Montemerlo Mar 2020 A1
20200090484 Chen Mar 2020 A1
20200097756 Hashimoto Mar 2020 A1
20200133307 Kelkar Apr 2020 A1
20200043326 Tao Jun 2020 A1
Foreign Referenced Citations (12)
Number Date Country
1085464 Jan 2007 EP
0231764 Apr 2002 WO
2003005242 Jan 2003 WO
2003067467 Aug 2003 WO
2004019527 Mar 2004 WO
2005027457 Mar 2005 WO
2007049282 May 2007 WO
2014076002 May 2014 WO
2014137337 Sep 2014 WO
2016040376 Mar 2016 WO
2016070193 May 2016 WO
2016127478 Aug 2016 WO
Non-Patent Literature Citations (97)
Bilyana Taneva et al. “Gathering and Ranking Photos of Named Entities with High Precision, High Recall, and Diversity”, WSDM '10, Feb. 4-6, 2010, New York City, New York, USA, ACM 2010, 10 pages.
Brecheisen, et al., “Hierarchical Genre Classification for Large Music Collections”, ICME 2006, pp. 1385-1388.
Lau, et al., “Semantic Web Service Adaptation Model for a Pervasive Learning Scenario”, 2008 IEEE Conference on Innovative Technologies in Intelligent Systems and Industrial Applications Year: 2008, pp. 98-103, DOI: 10.1109/CITISIA.2008.4607342 IEEE Conference Publications.
McNamara, et al., “Diversity Decay in Opportunistic Content Sharing Systems”, 2011 IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks Year: 2011, pp. 1-3, DOI: 10.1109/WoWMoM.2011.5986211 IEEE Conference Publications.
Odinaev, et al., “Cliques in Neural Ensembles as Perception Carriers”, Technion—Israel Institute of Technology, 2006 International Joint Conference on Neural Networks, Canada, 2006, pp. 285-292.
Santos, et al., “SCORM-MPEG: an Ontology of Interoperable Metadata for Multimedia and e-Leaming”, 2015 23rd International Conference on Software, Telecommunications and Computer Networks (SoftCOM) Year: 2015, pp. 224-228, DOI: 10.1109/SOFTCOM.2015.7314122 IEEE Conference Publications.
Wilk, et al.,“The Potential of Social-Aware Multimedia Prefetching on Mobile Devices”, 2015 International Conference and Workshops on Networked Systems (NetSys) Year: 2015, pp. 1-5, DOI: 10.1109/NetSys.2015.7089081 IEEE Conference Publications.
Zeevi, Y. et al.: “Natural Signal Classification by Neural Cliques and Phase-Locked Attractors”, IEEE World Congress on Computational Intelligence, IJCNN2006, Vancouver, Canada, Jul. 2006 (Jul. 2006), XP002466252.
Boari et al, “Adaptive Routing for Dynamic Applications in Massively Parallel Architectures”, 1995 IEEE, Spring 1995.
Cernansky et al., “Feed-forward Echo State Networks”; Proceedings of International Joint Conference on Neural Networks, Montreal, Canada, Jul. 31-Aug. 4, 2005; Entire Document.
Chuan-Yu Cho, et al., “Efficient Motion-Vector-Based Video Search Using Query by Clip”, 2004, IEEE, Taiwan, pp. 1-4.
Clement, et al. “Speaker Diarization of Heterogeneous Web Video Files: A Preliminary Study”, Acoustics, Speech and Signal Processing (ICASSP), 2011, IEEE International Conference on Year: 2011, pp. 4432-4435, DOI 10.1109/ICASSP.2011.5947337 IEEE Conference Publications, France.
Cococcioni, et al, “Automatic Diagnosis of Defects of Rolling Element Bearings Based on Computational Intelligence Techniques”, University of Pisa, Pisa, Italy, 2009.
Emami, et al, “Role of Spatiotemporal Oriented Energy Features for Robust Visual Tracking in Video Surveillance”, University of Queensland, St. Lucia, Australia, 2012.
Fathy et al, “A Parallel Design and Implementation For Backpropagation Neural Network Using MIMD Architecture”, 8th Mediterranean Electrotechnical Conference, 1996. MELECON '96, Date of Conference: May 13-16, 1996, vol. 3, pp. 1472-1475.
Foote, Jonathan, et al. “Content-Based Retrieval of Music and Audio”, 1997 Institute of Systems Science, National University of Singapore, Singapore (Abstract).
Gomes et al., “Audio Watermarking and Fingerprinting: For Which Applications?” University of Rene Descartes, Paris, France, 2003.
Gong, et al., “A Knowledge-based Mediator for Dynamic Integration of Heterogeneous Multimedia Information Sources”, Video and Speech Processing, 2004, Proceedings of 2004 International Symposium on Year: 2004, pp. 467-470, DOI: 10.1109/ISIMP.2004.1434102 IEEE Conference Publications, Hong Kong.
Guo et al, “AdOn: An Intelligent Overlay Video Advertising System”, SIGIR, Boston, Massachusetts, Jul. 19-23, 2009.
Ihab Al Kabary, et al., “SportSense: Using Motion Queries to Find Scenes in Sports Videos”, Oct. 2013, ACM, Switzerland, pp. 1-3.
International Search Authority: “Written Opinion of the International Searching Authority” (PCT Rule 43bis.1) Including International Search Report for International Patent Application No. PCT/US2008/073852; dated Jan. 28, 2009.
International Search Authority: International Preliminary Report on Patentability (Chapter I of the Patent Cooperation Treaty) including “Written Opinion of the International Searching Authority” (PCT Rule 43bis. 1) for the corresponding International Patent Application No. PCT/IL2006/001235; dated Jul. 28, 2009.
International Search Report for the corresponding International Patent Application PCT/IL2006/001235; dated Nov. 2, 2008.
IPO Examination Report under Section 18(3) for corresponding UK application No. GB1001219.3, dated Sep. 12, 2011.
Iwamoto, K.; Kasutani, E.; Yamada, A.: “Image Signature Robust to Caption Superimposition for Video Sequence Identification”; 2006 IEEE International Conference on Image Processing; pp. 3185-3188, Oct. 8-11, 2006; doi: 10.1109/ICIP.2006.313046.
Jaeger, H.: “The ‘echo state’ approach to analysing and training recurrent neural networks”, GMD Report, No. 148, 2001, pp. 1-43, XP002466251, German National Research Center for Information Technology.
Jianping Fan et al., “Concept-Oriented Indexing of Video Databases: Towards Semantic Sensitive Retrieval and Browsing”, IEEE, vol. 13, No. 7, Jul. 2004, pp. 1-19.
Li, et al., “Matching Commercial Clips from TV Streams Using a Unique, Robust and Compact Signature,” Proceedings of the Digital Imaging Computing: Techniques and Applications, Feb. 2005, vol. 0-7695-2467, Australia.
Lin, C.; Chang, S.: “Generating Robust Digital Signature for Image/Video Authentication”, Multimedia and Security Workshop at ACM Multimedia '98; Bristol, U.K., Sep. 1998; pp. 49-54.
Lin, et al., “Robust Digital Signature for Multimedia Authentication: A Summary”, IEEE Circuits and Systems Magazine, 4th Quarter 2003, pp. 23-26.
Lin, et al., “Summarization of Large Scale Social Network Activity”, Acoustics, Speech and Signal Processing, 2009, ICASSP 2009, IEEE International Conference on Year 2009, pp. 3481-3484, DOI: 10.1109/ICASSP.2009.4960375, IEEE Conference Publications, Arizona.
Liu, et al., “Instant Mobile Video Search With Layered Audio-Video Indexing and Progressive Transmission”, Multimedia, IEEE Transactions on Year: 2014, vol. 16, Issue: 8, pp. 2242-2255, DOI: 10.1109/TMM.2014.2359332 IEEE Journals & Magazines.
Lyon, Richard F.; “Computational Models of Neural Auditory Processing”; IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '84, Date of Conference: Mar. 1984, vol. 9, pp. 41-44.
Maass, W. et al.: “Computational Models for Generic Cortical Microcircuits”, Institute for Theoretical Computer Science, Technische Universitaet Graz, Graz, Austria, published Jun. 10, 2003.
Mahdhaoui, et al, “Emotional Speech Characterization Based on Multi-Features Fusion for Face-to-Face Interaction”, Universite Pierre et Marie Curie, Paris, France, 2009.
Marti, et al, “Real Time Speaker Localization and Detection System for Camera Steering in Multiparticipant Videoconferencing Environments”, Universidad Politecnica de Valencia, Spain, 2011.
May et al., “The Transputer”, Springer-Verlag, Berlin Heidelberg, 1989, teaches multiprocessing system.
Mei, et al., “Contextual In-Image Advertising”, Microsoft Research Asia, pp. 439-448, 2008.
Mei, et al., “VideoSense—Towards Effective Online Video Advertising”, Microsoft Research Asia, pp. 1075-1084, 2007.
Mladenovic, et al., “Electronic Tour Guide for Android Mobile Platform with Multimedia Travel Book”, Telecommunications Forum (TELFOR), 2012 20th Year: 2012, pp. 1460-1463, DOI: 10.1109/TELFOR.2012.6419494 IEEE Conference Publications.
Morad, T.Y. et al.: “Performance, Power Efficiency and Scalability of Asymmetric Cluster Chip Multiprocessors”, Computer Architecture Letters, vol. 4, Jul. 4, 2005 (Jul. 4, 2005), pp. 1-4, XP002466254.
Nagy et al, “A Transputer, Based, Flexible, Real-Time Control System for Robotic Manipulators”, UKACC International Conference on Control '96, Sep. 2-5, 1996, Conference 1996, Conference Publication No. 427, IEE 1996.
Nam, et al., “Audio Visual Content-Based Violent Scene Characterization”, Department of Electrical and Computer Engineering, Minneapolis, MN, 1998, pp. 353-357.
Natschlager, T. et al.: “The ‘liquid computer’: A novel strategy for real-time computing on time series”, Special Issue on Foundations of Information Processing of Telematik, vol. 8, No. 1, 2002, pp. 39-43, XP002466253.
Nouza, et al., “Large-scale Processing, Indexing and Search System for Czech Audio-Visual Heritage Archives”, Multimedia Signal Processing (MMSP), 2012, pp. 337-342, IEEE 14th Intl. Workshop, DOI: 10.1109/MMSP.2012.6343465, Czech Republic.
Ortiz-Boyer et al., “CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features”, Journal of Artificial Intelligence Research 24 (2005), pp. 1-48 Submitted Nov. 2004; published Jul. 2005.
Park, et al., “Compact Video Signatures for Near-Duplicate Detection on Mobile Devices”, Consumer Electronics (ISCE 2014), The 18th IEEE International Symposium on Year: 2014, pp. 1-2, DOI: 10.1109/ISCE.2014.6884293 IEEE Conference Publications.
Raichelgauz, I. et al.: “Co-evolutionary Learning in Liquid Architectures”, Lecture Notes in Computer Science, [Online] vol. 3512, Jun. 21, 2005 (Jun. 21, 2005), pp. 241-248, XP019010280 Springer Berlin / Heidelberg ISSN: 1611-3349 ISBN: 978-3-540-26208-4.
Ribert et al. “An Incremental Hierarchical Clustering”, Vision Interface 1999, pp. 586-591.
Scheper et al, “Nonlinear dynamics in neural computation”, ESANN'2006 proceedings—European Symposium on Artificial Neural Networks, Bruges (Belgium), Apr. 26-28, 2006, d-side publi, ISBN 2-930307-06-4.
Semizarov et al. “Specificity of Short Interfering RNA Determined through Gene Expression Signatures”, PNAS, 2003, pp. 6347-6352.
Shih-Fu Chang, et al., “VideoQ: A Fully Automated Video Retrieval System Using Motion Sketches”, 1998, IEEE, New York, pp. 1-2.
Theodoropoulos et al, “Simulating Asynchronous Architectures on Transputer Networks”, Proceedings of the Fourth Euromicro Workshop On Parallel and Distributed Processing, 1996. PDP '96.
Vailaya, et al., “Content-Based Hierarchical Classification of Vacation Images,” I.E.E.E.: Multimedia Computing and Systems, vol. 1, 1999, East Lansing, MI, pp. 518-523.
Vallet, et al., “Personalized Content Retrieval in Context Using Ontological Knowledge,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, No. 3, Mar. 2007, pp. 336-346.
Verstraeten et al.: “Isolated word recognition with the Liquid State Machine: a case study”, Information Processing Letters, Amsterdam, NL, vol. 95, No. 6, Sep. 30, 2005 (Sep. 30, 2005), pp. 521-528, XP005028093 ISSN: 0020-0190.
Wang et al. “A Signature for Content-based Image Retrieval Using a Geometrical Transform”, ACM 1998, pp. 229-234.
Wei-Te Li et al., “Exploring Visual and Motion Saliency for Automatic Video Object Extraction”, IEEE, vol. 22, No. 7, Jul. 2013, pp. 1-11.
Whitby-Strevens, “The Transputer”, 1985 IEEE, Bristol, UK.
Xian-Sheng Hua et al.: “Robust Video Signature Based on Ordinal Measure” In: 2004 International Conference on Image Processing, ICIP '04; Microsoft Research Asia, Beijing, China; published Oct. 24-27, 2004, pp. 685-688.
Yanai, “Generic Image Classification Using Visual Knowledge on the Web,” MM'03, Nov. 2-8, 2003, Tokyo, Japan, pp. 167-176.
Zang, et al., “A New Multimedia Message Customizing Framework for Mobile Devices”, Multimedia and Expo, 2007 IEEE International Conference on Year: 2007, pp. 1043-1046, DOI: 10.1109/ICME.2007.4284832 IEEE Conference Publications.
Zhou et al., “Ensembling neural networks: Many could be better than all”; National Laboratory for Novel Software Technology, Nanjing University, Hankou Road 22, Nanjing 210093, PR China; Available online Mar. 12, 2002; Entire Document.
Zhou et al., “Medical Diagnosis With C4.5 Rule Preceded by Artificial Neural Network Ensemble”; IEEE Transactions on Information Technology in Biomedicine, vol. 7, Issue: 1, pp. 37-42, Date of Publication: Mar. 2003.
Zhu et al., Technology-Assisted Dietary Assessment. Computational Imaging VI, edited by Charles A. Bouman, Eric L. Miller, Ilya Pollak, Proc. of SPIE-IS&T Electronic Imaging, SPIE vol. 6814, 681411, Copyright 2008 SPIE-IS&T. pp. 1-10.
Johnson, John L., “Pulse-Coupled Neural Nets: Translation, Rotation, Scale, Distortion, and Intensity Signal Invariance for Images.” Applied Optics, vol. 33, No. 26, 1994, pp. 6239-6253.
The International Search Report and the Written Opinion for PCT/US2016/050471, ISA/RU, Moscow, RU, dated May 4, 2017.
The International Search Report and the Written Opinion for PCT/US2016/054634 dated Mar. 16, 2017, ISA/RU, Moscow, RU.
The International Search Report and the Written Opinion for PCT/US2017/015831, ISA/RU, Moscow, Russia, dated Apr. 20, 2017.
Hogue, “Tree Pattern Inference and Matching for Wrapper Induction on the World Wide Web”, Master's Thesis, Massachusetts Institute of Technology, 2004, pp. 1-106.
Rui, Yong et al. “Relevance feedback: a power tool for interactive content-based image retrieval.” IEEE Transactions on circuits and systems for video technology 8.5 (1998): 644-655.
“Computer Vision Demonstration Website”, Electronics and Computer Science, University of Southampton, 2005, USA.
Big Bang Theory Series 04 Episode 12, aired Jan. 6, 2011; [retrieved from Internet: ].
Burgsteiner et al., “Movement Prediction from Real-World Images Using a Liquid State machine”, Innovations in Applied Artificial Intelligence Lecture Notes in Computer Science, Lecture Notes in Artificial Intelligence, LNCS, Springer-Verlag, BE, vol. 3533, Jun. 2005, pp. 121-130.
Chinchor, Nancy A. et al.; Multimedia Analysis + Visual Analytics = Multimedia Analytics; IEEE Computer Society 2010; pp. 52-60. (Year: 2010).
Freisleben et al, “Recognition of Fractal Images Using a Neural Network”, Lecture Notes in Computer Science, vol. 6861, 1993, pp. 631-637.
Garcia, “Solving the Weighted Region Least Cost Path Problem Using Transputers”, Naval Postgraduate School, Monterey, California, Dec. 1989.
Howlett et al, “A Multi-Computer Neural Network Architecture in a Virtual Sensor System Application”, International Journal of Knowledge-Based Intelligent Engineering Systems, vol. 4, No. 2, pp. 86-93, ISSN 1327-2314.
Lu et al, “Structural Digital Signature for Image Authentication: An Incidental Distortion Resistant Scheme”, IEEE Transactions on Multimedia, vol. 5, No. 2, Jun. 2003, pp. 161-173.
Marian Stewart B et al., “Independent component representations for face recognition”, Proceedings of the SPIE Symposium on Electronic Imaging: Science and Technology; Conference on Human Vision and Electronic Imaging III, San Jose, California, Jan. 1998, pp. 1-12.
Pandya et al. “A Survey on QR Codes: in context of Research and Application”, International Journal of Emerging Technology and Advanced Engineering, ISSN 2250-2459, ISO 9001:2008 Certified Journal, vol. 4, Issue 3, Mar. 2014 (Year: 2014).
Queluz, “Content-Based Integrity Protection of Digital Images”, SPIE Conf. on Security and Watermarking of Multimedia Contents, San Jose, Jan. 1999, pp. 85-93.
Schneider et al, “A Robust Content based Digital Signature for Image Authentication”, Proc. ICIP 1996, Lausanne, Switzerland, Oct. 1996, pp. 227-230.
Srihari et al., “Intelligent Indexing and Semantic Retrieval of Multimodal Documents”, Kluwer Academic Publishers, May 2000, vol. 2, Issue 2-3, pp. 245-275.
Srihari, Rohini K. “Automatic indexing and content-based retrieval of captioned images”, Computer 28.9 (1995): 49-56.
Stolberg et al, “Hibrid-Soc: A Multi-Core SoC Architecture for Multimedia Signal Processing”, 2003 IEEE, pp. 189-194.
Wang et al., “Classifying Objectionable Websites Based on Image Content”, Stanford University, pp. 1-12.
Ware et al, “Locating and Identifying Components in a Robot's Workspace using a Hybrid Computer Architecture” Proceedings of the 1995 IEEE International Symposium on Intelligent Control, Aug. 27-29, 1995, pp. 139-144.
Yanagawa et al, “Columbia University's Baseline Detectors for 374 LSCOM Semantic Visual Concepts”, Columbia University ADVENT Technical Report # 222-2006-8, Mar. 20, 2007, pp. 1-17.
Ma et al., “Semantics modeling based image retrieval system using neural networks”, 2005. (Year: 2005).
Zou et al., “A Content-Based Image Authentication System with Lossless Data Hiding”, ICME 2003, pp. 213-216.
Jasinschi et al., A Probabilistic Layered Framework for Integrating Multimedia Content and Context Information, 2002, IEEE, pp. 2057-2060. (Year: 2002).
Jones et al., “Contextual Dynamics of Group-Based Sharing Decisions”, 2011, University of Bath, pp. 1777-1786. (Year: 2011).
Cooperative Multi-Scale Convolutional Neural Networks for Person Detection, Markus Eisenbach, Daniel Seichter, Tim Wengefeld, and Horst-Michael Gross, Ilmenau University of Technology, Neuroinformatics and Cognitive Robotics Lab. (Year: 2016).
Chen, Yixin, James Ze Wang, and Robert Krovetz. “Clue: cluster-based retrieval of images by unsupervised learning.” IEEE Transactions on Image Processing 14.8 (2005): 1187-1201. (Year: 2005).
Wusk et al., “Non-Invasive Detection of Respiration and Heart Rate with a Vehicle Seat Sensor”, www.mdpi.com/journal/sensors, published May 8, 2018. (Year: 2018).
Chen, Tiffany Yu-Han, et al. “Glimpse: Continuous, real-time object recognition on mobile devices.” Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems. 2015. (Year: 2015).
Related Publications (1)
Number Date Country
20170046343 A1 Feb 2017 US
Provisional Applications (1)
Number Date Country
62310742 Mar 2016 US
Continuations (1)
Number Date Country
Parent 13766463 Feb 2013 US
Child 14643694 US
Continuation in Parts (1)
Number Date Country
Parent 14643694 Mar 2015 US
Child 15296551 US