The present invention relates generally to the analysis of multimedia content, and, more specifically, to a system for diagnosing a patient based on an analysis of multimedia content.
The current methods used to diagnose a disease or other medical condition usually rely on a patient's visit to a medical professional who is specifically trained to diagnose the specific medical conditions that the patient may suffer from.
Today, an abundance of data relating to such medical conditions is likely to be available through various sources in general and the Internet and World Wide Web (WWW) in particular. This data allows the patient, if he or she is so inclined, to at least begin to understand the medical condition by searching for information about it.
The problem is that, while a person searches the web for a self-diagnosis, the person may ignore one or more identifiers which are related to the medical condition and, therefore, may receive information that is inappropriate or inaccurate with respect to the person's specific medical condition. This inappropriate or inaccurate information often leads to a misdiagnosis by the patient, increased anxiety, and wasted time for a doctor or other caregiver, who must correct the misinformed patient's understanding of the medical condition.
As an example, a person may experience a rash and look up medical conditions related to rashes. Without expertise in dermatology, the person may determine that the rash is similar to one caused by poison ivy, for which cleaning the rash followed by applying calamine lotion may be the only necessary treatment. However, if the rash is actually caused by an allergic reaction to a food, a different treatment may be required, such as administration of epinephrine.
Moreover, a patient may receive digital content respective of the medical condition including, but not limited to, medical reports, images, and other multimedia content. However, other than sending such content to other advice providers, patients typically cannot effectively use such content to aid in diagnosis. Rather, the patient can frequently only provide the content to a caregiver or someone else who is capable of adequately understanding its relevance.
It would therefore be advantageous to provide a solution for identifying a plurality of disease characteristics related to patients and providing diagnoses respective thereof.
Certain embodiments disclosed herein include a method and system for diagnosing a patient based on analysis of multimedia content. The method includes receiving at least one multimedia content element respective of the patient from a user device; generating at least one signature for the at least one multimedia content element; generating at least one identifier respective of the at least one multimedia content element using the at least one generated signature; searching a plurality of data sources for possible diagnoses respective of the at least one identifier; and providing at least one possible diagnosis respective of the at least one multimedia content element to the user device.
The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention will be apparent from the following detailed description taken in conjunction with the accompanying drawings.
It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed inventions. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts throughout several views.
Certain exemplary embodiments disclosed herein enable the possible diagnosis of patients based on the analysis of multimedia content. The diagnosis may be used, for example, as a preliminary diagnostic tool by a patient or as a recommendation tool for a medical specialist. The diagnosis begins with generating signatures for the multimedia content. The generated signatures are analyzed, and one or more identifiers related to the patient are provided. The identifiers are used in order to provide the possible diagnoses. An identifier is an element identified within the multimedia content which may be used for diagnosing the medical condition of a patient. The identifiers may be visual, for example, abnormal marks on a body part, or vocal, for example, hoarseness in the patient's voice. The multimedia content is analyzed and one or more matching signatures are generated respective thereto. Thereafter, the signatures generated for the identifiers are used for searching possible diagnoses through one or more data sources. The diagnoses are then provided to the user. According to another embodiment, the one or more possible diagnoses are stored in a data warehouse or a database.
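By way of a non-limiting illustration only, the overall flow may be sketched in code. In the following Python sketch, the string-based signature, the fixed dictionaries, and the function names are placeholders standing in for the SGS 140, the data warehouse, and the data sources described herein; they are assumptions of the sketch, not the disclosed implementations.

```python
def generate_signatures(element):
    """Stand-in for the signature generator system (SGS); a real system
    derives robust signatures via computational cores."""
    return ["sig:" + element]

# Placeholder stores: warehouse maps signatures to identifiers; the data
# source maps identifier combinations to possible diagnoses.
WAREHOUSE = {"sig:image-of-infant-face": ["infant", "facial skin redness"]}
DATA_SOURCE = {("infant", "facial skin redness"): ["atopic dermatitis"]}

def diagnose(element):
    signatures = generate_signatures(element)        # 1. generate signatures
    identifiers = []
    for sig in signatures:                           # 2. derive identifiers
        identifiers.extend(WAREHOUSE.get(sig, []))
    return DATA_SOURCE.get(tuple(identifiers), [])   # 3. search for diagnoses

print(diagnose("image-of-infant-face"))              # ['atopic dermatitis']
```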
As a non-limiting example, an image of a patient's face is received by a user device. One or more signatures are generated respective of the received image. An analysis of the one or more generated signatures is then performed. The analysis may include a process of matching the signatures to one or more signatures existing in a data warehouse and extraction of identifiers respective of the matching process. Identifiers may be extracted if, e.g., such identifiers are associated with signatures from the data warehouse that demonstrated matching with the one or more generated signatures. Based on the analysis of the one or more signatures, the patient is identified as an infant. In addition, abnormal skin redness is identified on the patient's face through the image.
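The matching process of this example may be illustrated, again as a non-limiting sketch, using binary signatures compared by their fraction of agreeing positions. The eight-bit vectors, the 0.8 threshold, and the identifier labels below are arbitrary assumptions of the sketch.

```python
def match_score(sig_a, sig_b):
    """Fraction of positions at which two binary signatures agree."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

def extract_identifiers(generated_sigs, warehouse, threshold=0.8):
    """An identifier is extracted when any generated signature matches its
    stored warehouse signature at or above the threshold."""
    identifiers = set()
    for gen in generated_sigs:
        for stored, ident in warehouse:
            if match_score(gen, stored) >= threshold:
                identifiers.add(ident)
    return identifiers

# Toy run: the input matches the stored 'infant_face' signature in 7 of 8 bits.
warehouse = [([1, 0, 1, 1, 0, 0, 1, 0], "infant_face"),
             ([0, 1, 0, 0, 1, 1, 0, 1], "adult_face")]
print(extract_identifiers([[1, 0, 1, 1, 0, 0, 1, 1]], warehouse))  # {'infant_face'}
```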
Respective of the identifiers, a search is performed through a plurality of data sources for possible diagnoses. The search may be made by, for example, using the image as a search query as further described in U.S. patent application Ser. No. 13/773,112, assigned to the common assignee, which is hereby incorporated by reference for all the useful information it contains. While searching through the plurality of data sources for a possible diagnosis, skin redness is identified as a common symptom of atopic dermatitis among infants. The possible diagnosis is then provided to the user device and then stored in a database for further use.
Further connected to the network 110 are one or more user devices (UD) 120-1 through 120-n (collectively referred to hereinafter as user devices 120 or individually as a user device 120). A user device 120 may be, for example, a personal computer (PC), a mobile phone, a smart phone, a tablet computer, a wearable device, and the like. The user devices 120 are configured to provide multimedia content elements to a server 130 which is also connected to the network 110.
The uploaded multimedia content can be locally saved in the user device 120 or can be captured by the user device 120. For example, the multimedia content may be an image captured by a camera installed in the user device 120, a video clip saved in the user device 120, and so on. A multimedia content element may be, for example, an image, a graphic, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, text or an image thereof, an image of signals (e.g., spectrograms, phasograms, scalograms, etc.), and/or combinations thereof and portions thereof.
The system 100 also includes one or more web sources 150-1 through 150-m (collectively referred to hereinafter as web sources 150 or individually as a web source 150) that are connected to the network 110. Each of the web sources 150 may be, for example, a web server, an application server, a data repository, a database, a professional medical database, and the like. According to one embodiment, one or more multimedia content elements of normal (or baseline) identifiers are stored in a database such as, for example, a database 160. A baseline identifier may be, for example, a clean skin image, a normal voice recording, etc. The baseline identifiers are used as references in order to identify one or more abnormal identifiers while analyzing the generated signatures of an input multimedia content.
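A minimal sketch of such a baseline store follows. The SQLite schema and the signature strings are assumptions made purely for illustration; the disclosure does not specify how the database 160 is organized.

```python
import sqlite3

# Hypothetical baseline-identifier store: each row pairs a signature with the
# normal condition it represents (e.g., clean skin, a normal voice recording).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE baselines (signature TEXT PRIMARY KEY, label TEXT)")
db.executemany("INSERT INTO baselines VALUES (?, ?)",
               [("sig-clean-skin", "clean skin image"),
                ("sig-normal-voice", "normal voice recording")])
db.commit()

def baseline_labels():
    """Return the labels of all stored baseline identifiers."""
    return [row[1] for row in db.execute("SELECT signature, label FROM baselines")]

print(baseline_labels())  # ['clean skin image', 'normal voice recording']
```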
The server 130 and a signature generator system (SGS) 140 are core to the embodiments disclosed herein. In an embodiment, the server 130 is configured to generate one or more identifiers, either visual or vocal, which are used to search for one or more possible diagnoses.
The SGS 140 is configured to generate a signature respective of the multimedia content elements and/or content fed by the server 130. The process of generating the signatures is explained in more detail herein below.
The server 130 is configured to receive at least one multimedia content element from, for example, the user device 120. The at least one multimedia content element is sent to the SGS 140. The SGS 140 is configured to generate at least one signature for the at least one multimedia content element or each portion thereof. The generated signature(s) may be robust to noise and distortions as discussed below. The generated signatures are then analyzed and one or more identifiers related to the content provided are generated.
As a non-limiting example, a user captures an image by taking a picture using a smart phone (e.g., a user device 120) and uploads the picture to a server 130. In this example, the picture features an image of the user's eye when the user is infected with pinkeye. The server 130 is configured to receive the image and send the image to an SGS 140. The SGS 140 generates a signature respective of the image.
The signature generated respective of the image is compared to signatures of baseline identifiers stored in a database 160. In this example, the signature is determined to demonstrate sufficient matching with an image of a normal (uninfected) human eye used as a normal identifier. Upon further analysis, it is determined that part of the image (namely, the color of the eye in the pinkeye image) is different and, therefore, is an abnormal identifier. Consequently, this abnormal identifier is provided to a data source so that a search may be performed. When the search has been completed, the server 130 returns the results of the search indicating that the user may have pinkeye.
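The baseline comparison of this example may be sketched as follows. The per-portion numeric values, the tolerance, and the portion names are hypothetical, introduced only to illustrate how a portion that differs from the baseline becomes an abnormal identifier.

```python
def abnormal_portions(sig, baseline_sig, names, tol=0.1):
    """Compare a signature portion-by-portion with a baseline signature and
    return the names of portions that differ beyond the tolerance, i.e.,
    abnormal identifiers. sig/baseline_sig: per-portion values."""
    return [name for s, b, name in zip(sig, baseline_sig, names)
            if abs(s - b) > tol]

# Toy pinkeye example: only the 'sclera color' portion departs from baseline.
portions = ["eyelid shape", "iris color", "sclera color"]
infected = [0.41, 0.62, 0.93]   # hypothetical per-portion signature values
baseline = [0.40, 0.62, 0.12]   # normal (uninfected) eye baseline
print(abnormal_portions(infected, baseline, portions))  # ['sclera color']
```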
The signature generated for an image or any other multimedia content enables accurate recognition of abnormal identifiers. This is because the signatures generated for the multimedia content, according to the disclosed embodiments, allow for recognition and classification of multimedia elements in applications such as content-tracking, video filtering, multimedia taxonomy generation, video fingerprinting, speech-to-text, audio classification, element recognition, video/image search, and any other application requiring content-based signature generation and matching for large content volumes such as the web and other large-scale databases.
In S230, based on the generated signatures, at least one identifier is generated and/or retrieved. In an embodiment, the identifier(s) may be retrieved from a data warehouse (e.g., the database 160). The identifiers may be visual or vocal. In S240, respective of the identifiers, one or more possible diagnoses are searched for through one or more data sources. The data sources may be, for example, any one of the one or more web sources 150, the database 160, and so on. According to one embodiment, the identifiers may be converted to one or more text queries which are used in order to search for possible diagnoses through one or more search engines. In another embodiment, a signature can be generated for the identifier and the search for possible diagnoses may be performed using such signature. For example, if redness is identified in a portion of the received multimedia content element, a signature is generated for such portion, and the search for possible diagnoses is performed using the signature generated for the portion of the image including the redness. Identification of diagnoses based on identifiers is discussed further herein below.
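The text-query embodiment may be sketched as follows; the search-engine URL is a placeholder rather than a real service, and appending the words "possible diagnosis" to the query is an assumption of the sketch.

```python
from urllib.parse import urlencode

def build_query_url(identifiers,
                    engine="https://example-medical-search.test/search"):
    """Convert identifiers into a text query for a search engine (one
    embodiment described above); the engine URL is hypothetical."""
    query = " ".join(identifiers) + " possible diagnosis"
    return engine + "?" + urlencode({"q": query})

print(build_query_url(["infant", "facial skin redness"]))
# https://example-medical-search.test/search?q=infant+facial+skin+redness+possible+diagnosis
```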
In S250, it is checked whether at least one possible diagnosis has been identified and, if so, execution continues with S260; otherwise, execution terminates. In S260, the one or more identified possible diagnoses are returned. According to yet another embodiment, in cases where a plurality of possible diagnoses was identified, the diagnoses may be prioritized by, for example, their commonness, the degree of match between the plurality of identifiers and the possible diagnoses, and so on.
As a non-limiting example of diagnosis prioritization, if a user provides an image featuring a discoloration of the skin, the area where skin is discolored may be a visual identifier. It is determined that multiple possible diagnoses are associated with this size of skin discoloration. However, one medical condition may be identified as the highest priority diagnosis due to a high degree of matching as a result of the similarity in color between the provided discoloration and the diagnostic discoloration. As an example, an image featuring a blue discoloration may yield identification of discolorations caused by bruising as closer in color than discolorations caused by medical conditions such as eczema, chicken pox, allergic reaction, and so on, which frequently cause red discolorations. In such an example, diagnoses related to bruising (e.g., sprains, broken bones, etc.) may be prioritized over other causes of skin discoloration. In S270, it is checked whether to continue with the operation and, if so, execution continues with S220; otherwise, execution terminates.
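Such prioritization may be sketched, without limitation, as a weighted ranking. The weighted-sum scoring rule, the weights, and the numeric match/commonness values below are assumptions; the disclosure names the criteria but does not specify a formula.

```python
def prioritize(diagnoses, w_match=0.7, w_common=0.3):
    """Rank candidate diagnoses by a weighted score of identifier-match
    degree and commonness (both in [0, 1]); weights are illustrative."""
    return sorted(diagnoses,
                  key=lambda d: w_match * d["match"] + w_common * d["commonness"],
                  reverse=True)

candidates = [
    {"name": "bruise (sprain/fracture)", "match": 0.9, "commonness": 0.6},
    {"name": "eczema",                   "match": 0.3, "commonness": 0.7},
    {"name": "chicken pox",              "match": 0.2, "commonness": 0.5},
]
for d in prioritize(candidates):
    print(d["name"])
# The blue discoloration matches bruising best, so bruising is ranked first.
```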
As a non-limiting example, an image of a patient's face and a recording of the patient's voice are received. The image and the recording are then analyzed by the server 130, and a plurality of signatures is generated by the SGS 140 respective thereto. Based on an analysis of the signatures, an abnormal redness is identified in the patient's eye and hoarseness is identified in the patient's voice. Based on the identifiers, a search for possible diagnoses is initiated. Responsive to the search, “Scarlet fever” and “Mumps disease” may be identified as possible diagnoses. As the identifiers related to the patient's eye and hoarseness of the throat are more frequent in cases of scarlet fever, scarlet fever will be provided as the more likely result. According to one embodiment, one or more advertisements may be provided to the user based on the one or more possible diagnoses. The advertisements may be received from publisher servers (not shown) and displayed together with the one or more possible diagnoses.
Video content segments 2 from a Master database (DB) 6 and a Target DB 1 are processed in parallel by a large number of independent computational Cores 3 that constitute an architecture for generating the Signatures (hereinafter the “Architecture”). Further details on the computational Cores generation are provided below. The independent Cores 3 generate a database of Robust Signatures and Signatures 4 for Target content-segments 5 and a database of Robust Signatures and Signatures 7 for Master content-segments 8. An exemplary and non-limiting process of signature generation is described in detail herein below.
To demonstrate an example of the signature generation process, it is assumed, merely for the sake of simplicity and without limitation on the generality of the disclosed embodiments, that the signatures are based on a single frame, leading to certain simplification of the computational cores generation. The matching system is extensible for signature generation capturing the dynamics in-between the frames.
The Signatures' generation process is now described.
In order to generate Robust Signatures, i.e., Signatures that are robust to additive noise, by the L Computational Cores 3 (where L is an integer equal to or greater than 1), a frame $i$ is injected into all the Cores 3. The Cores 3 then generate two binary response vectors: $\vec{S}$, a Signature vector, and $\vec{RS}$, a Robust Signature vector.
For generation of signatures robust to additive noise, such as White-Gaussian-Noise, scratch, etc., but not robust to distortions, such as crop, shift, and rotation, etc., a core $C_i = \{n_i\}$ $(1 \leq i \leq L)$ may consist of a single leaky integrate-to-threshold unit (LTU) node or more nodes. The equations of the nodes $n_i$ are:
$$V_i = \sum_j w_{ij} k_j$$

$$n_i = \theta(V_i - Th_x)$$
where $\theta$ is a Heaviside step function; $w_{ij}$ is a coupling node unit (CNU) between node $i$ and image component $j$; $k_j$ is image component $j$ (for example, the grayscale value of a certain pixel $j$); $Th_x$ is a constant threshold value, where $x$ is 'S' for Signature and 'RS' for Robust Signature; and $V_i$ is a coupling node value.
The threshold values $Th_x$ are set differently for Signature generation and for Robust Signature generation. For example, for a certain distribution of $V_i$ values (for the set of nodes), the thresholds for Signature ($Th_S$) and Robust Signature ($Th_{RS}$) are set apart, after optimization, according to at least one or more of the following criteria:
1. For $V_i > Th_{RS}$:
$$1 - p(V > Th_S) = 1 - (1 - \varepsilon)^l \ll 1$$
i.e., given that $l$ nodes (cores) constitute a Robust Signature of a certain image $I$, the probability that not all of these $l$ nodes will belong to the Signature of the same, but noisy, image $\tilde{I}$ is sufficiently low (according to the system's specified accuracy).

2. $$p(V_i > Th_{RS}) \approx l/L$$
i.e., approximately $l$ out of the total $L$ nodes can be found to generate a Robust Signature according to the above definition.

3. Both Robust Signature and Signature are generated for a certain frame $i$.
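Merely to make the node equations and threshold criteria concrete, the following sketch computes both response vectors for a single frame. The random weights, the frame values, and the numeric thresholds are arbitrary assumptions rather than optimized values; only the relation $Th_{RS} > Th_S$, under which roughly $l$ of the $L$ nodes fire robustly, follows the criteria above.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 32                                  # total number of nodes (cores)
k = rng.random(64)                      # image components k_j (e.g., pixel grayscales)
W = rng.normal(size=(L, 64))            # coupling node units w_ij (untuned stand-ins)

V = W @ k                               # V_i = sum_j w_ij * k_j
Th_S, Th_RS = 0.0, 6.0                  # Th_RS set above Th_S, per the criteria
S = (V > Th_S).astype(int)              # Signature vector: n_i = theta(V_i - Th_S)
RS = (V > Th_RS).astype(int)            # Robust Signature vector
print(S.sum(), RS.sum())                # many nodes fire for S; only l << L for RS
```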
It should be understood that the generation of a signature is unidirectional and constitutes a lossy, one-way compression: the characteristics of the original data are maintained in the signature, but the original data cannot be reconstructed from it. Therefore, a signature can be used for the purpose of comparison to another signature without the need of comparison to the original data. A detailed description of the signature generation can be found in U.S. Pat. Nos. 8,326,775 and 8,312,031, assigned to the common assignee, which are hereby incorporated by reference for all the useful information they contain.
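This property, comparison without access to the original data, may be sketched as follows, assuming, for illustration only, binary signatures compared by the fraction of agreeing bits.

```python
def hamming_similarity(sig_a, sig_b):
    """Signatures are compared directly, with no access to the original
    multimedia: the fraction of matching bits between two binary vectors."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

original = [1, 1, 0, 1, 0, 0, 1, 0]
noisy    = [1, 1, 0, 1, 0, 1, 1, 0]   # same content after mild noise
other    = [0, 0, 1, 0, 1, 1, 0, 1]   # unrelated content
print(hamming_similarity(original, noisy))  # 0.875 -> likely the same content
print(hamming_similarity(original, other))  # 0.0   -> different content
```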
A Computational Core generation is a process of definition, selection, and tuning of the parameters of the cores for a certain realization in a specific system and application. The process is based on several design considerations.
A detailed description of the Computational Core generation and the process for configuring such cores is provided in the above-referenced U.S. Pat. No. 8,655,801, assigned to the common assignee, which is hereby incorporated by reference for all that it contains.
In S520, the potentially related portions of signatures and/or the full signature are compared to signatures of existing baseline identifiers. In an embodiment, such existing identifiers may be retrieved from a data warehouse (e.g., the database 160). In another embodiment, this comparison may be conducted by performing signature matching between the portions of signatures and the signatures of normal identifiers. Signature matching is described further herein above.
In S530, signatures of existing baseline identifiers that demonstrated sufficient matching with the portions of signatures are retrieved. Matching may be sufficient if, e.g., the matching score is above a certain threshold, the matching score of one signature is the highest among the compared signatures, and so on. Optionally, in S535, one or more baseline identifiers may be generated based on the matching. In a further embodiment, generation occurs if no normal identifier demonstrated sufficient matching with the portion of the multimedia content signature.
In S540, one or more abnormal identifiers are determined and retrieved. In an embodiment, the abnormal identifiers may be determined based on differences between the retrieved normal identifier signatures and the portions of the multimedia content signatures. In S550, the abnormal identifiers are provided to a data source to perform a search. In S560, the results of the search are returned. In S570, it is checked whether additional multimedia content signatures or portions thereof must be analyzed. If so, execution continues with S510; otherwise, execution terminates.
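S520 through S550 may be sketched as follows. The bit-list signatures, the 0.75 threshold, and the exact-difference test are assumptions chosen only to make the steps executable.

```python
def similarity(a, b):
    """Fraction of agreeing positions between two equal-length bit lists."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def sufficient_matches(portion_sigs, baseline_db, threshold=0.75):
    """S520/S530 sketch: keep, per portion, the best-matching baseline
    signature when its score clears the threshold."""
    matched = {}
    for name, psig in portion_sigs.items():
        label, bsig = max(baseline_db.items(),
                          key=lambda kv: similarity(psig, kv[1]))
        if similarity(psig, bsig) >= threshold:
            matched[name] = label
    return matched

def abnormal_identifiers(portion_sigs, baseline_db, matched):
    """S540 sketch: a matched portion whose signature still differs from its
    baseline yields an abnormal identifier to be provided for the search."""
    return [name for name, label in matched.items()
            if portion_sigs[name] != baseline_db[label]]

baselines = {"uninjured wrist": [1, 1, 0, 0, 1, 0, 1, 0]}
portions = {"wrist region": [1, 1, 0, 0, 1, 1, 1, 1]}   # swollen: two bits differ
m = sufficient_matches(portions, baselines)
print(m)                                             # {'wrist region': 'uninjured wrist'}
print(abnormal_identifiers(portions, baselines, m))  # ['wrist region']
```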
As a non-limiting example, a user provides multimedia content featuring a swollen wrist. Several portions of the signature that may be relevant to diagnosis are determined. In this example, such portions may include a hand, an arm, veins, fingers, a thumb, a patch of skin demonstrating a bump, and a discolored patch of skin. The signatures of the swollen wrist are compared to signatures in a database, and a signature related to a picture of an uninjured wrist is retrieved as a normal identifier.
The portions of the signature identifying the discoloration and disproportionately large segments of the wrist are determined to be differences. Thus, the portions of the multimedia content related to those portions of signatures are determined to be relevant abnormal identifiers. The determined abnormal identifiers are retrieved and provided to a data source. In this example, the data source performs a search based on the abnormal identifiers and determines that the abnormal identifiers are typical for sprained wrists. Thus, the results of the search indicating that the user's wrist may be sprained are returned.
The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Number | Date | Country | Kind |
---|---|---|---|
171577 | Oct 2005 | IL | national |
173409 | Jan 2006 | IL | national |
185414 | Aug 2007 | IL | national |
This application claims the benefit of U.S. Provisional Application No. 61/839,871, filed on Jun. 27, 2013. This application is also a continuation-in-part (CIP) of U.S. patent application Ser. No. 13/624,397, filed on Sep. 21, 2012. The Ser. No. 13/624,397 application is a continuation-in-part of:
(a) U.S. patent application Ser. No. 13/344,400, filed on Jan. 5, 2012, which is a continuation of U.S. patent application Ser. No. 12/434,221, filed on May 1, 2009, now U.S. Pat. No. 8,112,376. The Ser. No. 13/344,400 application is also a continuation-in-part of the below-referenced U.S. patent application Ser. Nos. 12/195,863 and 12/084,150;
(b) U.S. patent application Ser. No. 12/195,863, filed on Aug. 21, 2008, now U.S. Pat. No. 8,326,775, which claims priority under 35 USC 119 from Israeli Application No. 185414, filed on Aug. 21, 2007, and which is also a continuation-in-part of the below-referenced U.S. patent application Ser. No. 12/084,150; and
(c) U.S. patent application Ser. No. 12/084,150, filed on Apr. 7, 2009, now U.S. Pat. No. 8,655,801, which is the National Stage of International Application No. PCT/IL2006/001235, filed on Oct. 26, 2006, which claims foreign priority from Israeli Application No. 171577, filed on Oct. 26, 2005, and Israeli Application No. 173409, filed on Jan. 29, 2006.
All of the applications referenced above are herein incorporated by reference for all that they contain.
Number | Name | Date | Kind |
---|---|---|---|
4733353 | Jaswa | Mar 1988 | A |
4932645 | Schorey et al. | Jun 1990 | A |
4972363 | Nguyen et al. | Nov 1990 | A |
5307451 | Clark | Apr 1994 | A |
5568181 | Greenwood et al. | Oct 1996 | A |
5806061 | Chaudhuri et al. | Sep 1998 | A |
5852435 | Vigneaux et al. | Dec 1998 | A |
5870754 | Dimitrova et al. | Feb 1999 | A |
5873080 | Coden et al. | Feb 1999 | A |
5940821 | Wical | Aug 1999 | A |
5987454 | Hobbs | Nov 1999 | A |
6038560 | Wical | Mar 2000 | A |
6122628 | Castelli et al. | Sep 2000 | A |
6128651 | Cezar | Oct 2000 | A |
6137911 | Zhilyaev | Oct 2000 | A |
6144767 | Bottou et al. | Nov 2000 | A |
6173068 | Prokoski | Jan 2001 | B1 |
6240423 | Hirata | May 2001 | B1 |
6243375 | Speicher | Jun 2001 | B1 |
6243713 | Nelson et al. | Jun 2001 | B1 |
6329986 | Cheng | Dec 2001 | B1 |
6363373 | Steinkraus | Mar 2002 | B1 |
6381656 | Shankman | Apr 2002 | B1 |
6493692 | Kobayashi et al. | Dec 2002 | B1 |
6493705 | Kobayashi et al. | Dec 2002 | B1 |
6523022 | Hobbs | Feb 2003 | B1 |
6523046 | Liu et al. | Feb 2003 | B2 |
6526400 | Takata et al. | Feb 2003 | B1 |
6550018 | Abonamah et al. | Apr 2003 | B1 |
6560597 | Dhillon et al. | May 2003 | B1 |
6594699 | Sahai et al. | Jul 2003 | B1 |
6601060 | Tomaru | Jul 2003 | B1 |
6611628 | Sekiguchi et al. | Aug 2003 | B1 |
6611837 | Schreiber | Aug 2003 | B2 |
6618711 | Ananth | Sep 2003 | B1 |
6665657 | Dibachi | Dec 2003 | B1 |
6675159 | Lin et al. | Jan 2004 | B1 |
6704725 | Lee | Mar 2004 | B1 |
6728706 | Aggarwal et al. | Apr 2004 | B2 |
6732149 | Kephart | May 2004 | B1 |
6751613 | Lee et al. | Jun 2004 | B1 |
6754435 | Kim | Jun 2004 | B2 |
6774917 | Foote et al. | Aug 2004 | B1 |
6795818 | Lee | Sep 2004 | B1 |
6819797 | Smith et al. | Nov 2004 | B1 |
6836776 | Schreiber | Dec 2004 | B2 |
6901207 | Watkins | May 2005 | B1 |
6938025 | Lulich et al. | Aug 2005 | B1 |
6970881 | Mohan et al. | Nov 2005 | B1 |
6978264 | Chandrasekar et al. | Dec 2005 | B2 |
7013051 | Sekiguchi et al. | Mar 2006 | B2 |
7020654 | Najmi | Mar 2006 | B1 |
7047033 | Wyler | May 2006 | B2 |
7124149 | Smith et al. | Oct 2006 | B2 |
7199798 | Echigo et al. | Apr 2007 | B1 |
7260564 | Lynn et al. | Aug 2007 | B1 |
7277928 | Lennon | Oct 2007 | B2 |
7296012 | Ohashi | Nov 2007 | B2 |
7302117 | Sekiguchi et al. | Nov 2007 | B2 |
7313805 | Rosin et al. | Dec 2007 | B1 |
7340458 | Vaithilingam et al. | Mar 2008 | B2 |
7346629 | Kapur et al. | Mar 2008 | B2 |
7353224 | Chen et al. | Apr 2008 | B2 |
7376672 | Weare | May 2008 | B2 |
7376722 | Sim et al. | May 2008 | B1 |
7392238 | Zhou et al. | Jun 2008 | B1 |
7406459 | Chen et al. | Jul 2008 | B2 |
7433895 | Li et al. | Oct 2008 | B2 |
7450740 | Shah et al. | Nov 2008 | B2 |
7464086 | Black et al. | Dec 2008 | B2 |
7523102 | Bjarnestam et al. | Apr 2009 | B2 |
7526607 | Singh et al. | Apr 2009 | B1 |
7536384 | Venkataraman et al. | May 2009 | B2 |
7536417 | Walsh et al. | May 2009 | B2 |
7542969 | Rappaport et al. | Jun 2009 | B1 |
7548910 | Chu et al. | Jun 2009 | B1 |
7555477 | Bayley et al. | Jun 2009 | B2 |
7555478 | Bayley et al. | Jun 2009 | B2 |
7562076 | Kapur | Jul 2009 | B2 |
7574436 | Kapur et al. | Aug 2009 | B2 |
7574668 | Nunez et al. | Aug 2009 | B2 |
7657100 | Gokturk et al. | Feb 2010 | B2 |
7660468 | Gokturk et al. | Feb 2010 | B2 |
7660737 | Lim et al. | Feb 2010 | B1 |
7697791 | Chan et al. | Apr 2010 | B1 |
7769221 | Shakes et al. | Aug 2010 | B1 |
7788132 | Desikan et al. | Aug 2010 | B2 |
7860895 | Scofield et al. | Dec 2010 | B1 |
7904503 | Van De Sluis | Mar 2011 | B2 |
7920894 | Wyler | Apr 2011 | B2 |
7921107 | Chang et al. | Apr 2011 | B2 |
7974994 | Li et al. | Jul 2011 | B2 |
7987194 | Walker et al. | Jul 2011 | B1 |
7987217 | Long et al. | Jul 2011 | B2 |
7991715 | Schiff et al. | Aug 2011 | B2 |
8000655 | Wang et al. | Aug 2011 | B2 |
8112376 | Raichelgauz et al. | Feb 2012 | B2 |
8315442 | Gokturk et al. | Nov 2012 | B2 |
8316005 | Moore | Nov 2012 | B2 |
8326775 | Raichelgauz et al. | Dec 2012 | B2 |
8345982 | Gokturk et al. | Jan 2013 | B2 |
8548828 | Longmire | Oct 2013 | B1 |
8655801 | Raichelgauz et al. | Feb 2014 | B2 |
8799195 | Raichelgauz et al. | Aug 2014 | B2 |
8799196 | Raichelquaz et al. | Aug 2014 | B2 |
8818916 | Raichelgauz et al. | Aug 2014 | B2 |
8886648 | Procopio et al. | Nov 2014 | B1 |
9330189 | Raichelgauz et al. | May 2016 | B2 |
9438270 | Raichelgauz et al. | Sep 2016 | B2 |
20010019633 | Tenze et al. | Sep 2001 | A1 |
20020019881 | Bokhari et al. | Feb 2002 | A1 |
20020038299 | Zernik et al. | Mar 2002 | A1 |
20020059580 | Kalker et al. | May 2002 | A1 |
20020099870 | Miller et al. | Jul 2002 | A1 |
20020123928 | Eldering et al. | Sep 2002 | A1 |
20020129296 | Kwiat et al. | Sep 2002 | A1 |
20020152267 | Lennon | Oct 2002 | A1 |
20020157116 | Jasinschi | Oct 2002 | A1 |
20020159640 | Vaithilingam et al. | Oct 2002 | A1 |
20020161739 | Oh | Oct 2002 | A1 |
20020163532 | Thomas et al. | Nov 2002 | A1 |
20020174095 | Lulich et al. | Nov 2002 | A1 |
20030041047 | Chang et al. | Feb 2003 | A1 |
20030050815 | Seigel et al. | Mar 2003 | A1 |
20030086627 | Berriss et al. | May 2003 | A1 |
20030200217 | Ackerman | Oct 2003 | A1 |
20030217335 | Chung et al. | Nov 2003 | A1 |
20040003394 | Ramaswamy | Jan 2004 | A1 |
20040025180 | Begeja et al. | Feb 2004 | A1 |
20040117367 | Smith et al. | Jun 2004 | A1 |
20040133927 | Sternberg et al. | Jul 2004 | A1 |
20040153426 | Nugent | Aug 2004 | A1 |
20040215663 | Liu et al. | Oct 2004 | A1 |
20040260688 | Gross | Dec 2004 | A1 |
20040267774 | Lin et al. | Dec 2004 | A1 |
20050131884 | Gross et al. | Jun 2005 | A1 |
20050177372 | Wang et al. | Aug 2005 | A1 |
20050238238 | Xu et al. | Oct 2005 | A1 |
20050245241 | Durand et al. | Nov 2005 | A1 |
20050281439 | Lange | Dec 2005 | A1 |
20060004745 | Kuhn et al. | Jan 2006 | A1 |
20060020958 | Allamanche et al. | Jan 2006 | A1 |
20060031216 | Semple et al. | Feb 2006 | A1 |
20060041596 | Stirbu et al. | Feb 2006 | A1 |
20060048191 | Xiong | Mar 2006 | A1 |
20060112035 | Cecchi et al. | May 2006 | A1 |
20060129822 | Snijder et al. | Jun 2006 | A1 |
20060153296 | Deng | Jul 2006 | A1 |
20060184638 | Chua et al. | Aug 2006 | A1 |
20060217818 | Fujiwara | Sep 2006 | A1 |
20060224529 | Kermani | Oct 2006 | A1 |
20060236343 | Chang | Oct 2006 | A1 |
20060242554 | Gerace et al. | Oct 2006 | A1 |
20060247983 | Dalli | Nov 2006 | A1 |
20060248558 | Barton et al. | Nov 2006 | A1 |
20060253423 | McLane et al. | Nov 2006 | A1 |
20070009159 | Fan | Jan 2007 | A1 |
20070011151 | Hagar et al. | Jan 2007 | A1 |
20070019864 | Koyama et al. | Jan 2007 | A1 |
20070038608 | Chen | Feb 2007 | A1 |
20070061302 | Ramer et al. | Mar 2007 | A1 |
20070067304 | Ives | Mar 2007 | A1 |
20070074147 | Wold | Mar 2007 | A1 |
20070091106 | Moroney | Apr 2007 | A1 |
20070130112 | Lin | Jun 2007 | A1 |
20070130159 | Gulli et al. | Jun 2007 | A1 |
20070168413 | Barletta et al. | Jul 2007 | A1 |
20070174320 | Chou | Jul 2007 | A1 |
20070195987 | Rhoads | Aug 2007 | A1 |
20070220573 | Chiussi et al. | Sep 2007 | A1 |
20070244902 | Seide et al. | Oct 2007 | A1 |
20070253594 | Lu et al. | Nov 2007 | A1 |
20070255785 | Hayashi et al. | Nov 2007 | A1 |
20070268309 | Tanigawa et al. | Nov 2007 | A1 |
20070282826 | Hoeber et al. | Dec 2007 | A1 |
20070294295 | Finkelstein et al. | Dec 2007 | A1 |
20080040277 | DeWitt | Feb 2008 | A1 |
20080046406 | Seide et al. | Feb 2008 | A1 |
20080049629 | Morrill | Feb 2008 | A1 |
20080072256 | Boicey et al. | Mar 2008 | A1 |
20080152231 | Gokturk et al. | Jun 2008 | A1 |
20080163288 | Ghosal et al. | Jul 2008 | A1 |
20080172615 | Igelman et al. | Jul 2008 | A1 |
20080201299 | Lehikoinen et al. | Aug 2008 | A1 |
20080201314 | Smith et al. | Aug 2008 | A1 |
20080204706 | Magne et al. | Aug 2008 | A1 |
20080313140 | Pereira et al. | Dec 2008 | A1 |
20090022472 | Bronstein et al. | Jan 2009 | A1 |
20090037408 | Rodgers | Feb 2009 | A1 |
20090089587 | Brunk et al. | Apr 2009 | A1 |
20090125529 | Vydiswaran et al. | May 2009 | A1 |
20090125544 | Brindley | May 2009 | A1 |
20090148045 | Lee et al. | Jun 2009 | A1 |
20090157575 | Schobben et al. | Jun 2009 | A1 |
20090172030 | Schiff et al. | Jul 2009 | A1 |
20090204511 | Tsang | Aug 2009 | A1 |
20090216639 | Kapczynski et al. | Aug 2009 | A1 |
20090245603 | Koruga et al. | Oct 2009 | A1 |
20090253583 | Yoganathan | Oct 2009 | A1 |
20100023400 | DeWitt | Jan 2010 | A1 |
20100088321 | Solomon et al. | Apr 2010 | A1 |
20100106857 | Wyler | Apr 2010 | A1 |
20100125569 | Nair et al. | May 2010 | A1 |
20100191567 | Lee et al. | Jul 2010 | A1 |
20100318493 | Wessling | Dec 2010 | A1 |
20100322522 | Wang et al. | Dec 2010 | A1 |
20110035289 | King et al. | Feb 2011 | A1 |
20110106782 | Ke et al. | May 2011 | A1 |
20110145068 | King et al. | Jun 2011 | A1 |
20110202848 | Ismalon | Aug 2011 | A1 |
20110208822 | Rathod | Aug 2011 | A1 |
20120082362 | Diem | Apr 2012 | A1 |
20120150890 | Jeong et al. | Jun 2012 | A1 |
20130089248 | Remiszewski | Apr 2013 | A1 |
20130104251 | Moore et al. | Apr 2013 | A1 |
20130159298 | Mason et al. | Jun 2013 | A1 |
20130173635 | Sanjeev | Jul 2013 | A1 |
20140176604 | Venkitaraman et al. | Jun 2014 | A1 |
20140188786 | Raichelgauz et al. | Jul 2014 | A1 |
20140226900 | Saban | Aug 2014 | A1 |
20140310825 | Raichelgauz et al. | Oct 2014 | A1 |
Number | Date | Country |
---|---|---|
0231764 | Apr 2002 | WO |
03005242 | Jan 2003 | WO |
2004019527 | Mar 2004 | WO |
2007049282 | May 2007 | WO |
2014137337 | Sep 2014 | WO |
2016040376 | Mar 2016 | WO |
Entry |
---|
Boari et al, “Adaptive Routing for Dynamic Applications in Massively Parallel Architectures”, 1995 IEEE, Spring 1995. |
Cococcioni, et al, “Automatic Diagnosis of Defects of Rolling Element Bearings Based on Computational Intelligence Techniques”, University of Pisa, Pisa, Italy, 2009. |
Emami, et al, “Role of Spatiotemporal Oriented Energy Features for Robust Visual Tracking in Video Surveillance”, University of Queensland, St. Lucia, Australia, 2012. |
Garcia, “Solving the Weighted Region Least Cost Path Problem Using Transputers”, Naval Postgraduate School, Monterey, California, Dec. 1989. |
Mahdhaoui, et al, “Emotional Speech Characterization Based on Multi-Features Fusion for Face-to-Face Interaction”, Universite Pierre et Marie Curie, Paris, France, 2009. |
Marti, et al, “Real Time Speaker Localization and Detection System for Camera Steering in Multiparticipant Videoconferencing Environments”, Universidad Politecnica de Valencia, Spain, 2011. |
Nagy et al, “A Transputer, Based, Flexible, Real-Time Control System for Robotic Manipulators”, UKACC International Conference on Control '96, Sep. 2-5, 1996, Conference 1996, Conference Publication No. 427, IEE 1996. |
Scheper, et al. “Nonlinear dynamics in neural computation”, ESANN'2006 proceedings—European Symposium on Artificial Neural Networks, Bruges (Belgium), Apr. 26-28, 2006, d-side publi, ISBN 2-930307-06-4. |
Theodoropoulos et al, “Simulating Asynchronous Architectures on Transputer Networks”, Proceedings of the Fourth Euromicro Workshop on Parallel and Distributed Processing, 1996. PDP '96. |
Burgsteiner et al.: “Movement Prediction From Real-World Images Using a Liquid State Machine”, Innovations in Applied Artificial Intelligence Lecture Notes in Computer Science, Lecture Notes in Artificial Intelligence, LNCS, Springer-Verlag, BE, vol. 3533, Jun. 2005, pp. 121-130. |
Cernansky et al., “Feed-forward Echo State Networks”; Proceedings of International Joint Conference on Neural Networks, Montreal, Canada, Jul. 31-Aug. 4, 2005. |
Fathy et al., “A Parallel Design and Implementation for Backpropagation Neural Network Using MIMD Architecture”, 8th Mediterranean Electrotechnical Conference, 1996. MELECON '96, Date of Conference: May 13-16, 1996, vol. 3, pp. 1472-1475. |
Foote, Jonathan et al., “Content-Based Retrieval of Music and Audio”, 1997, Institute of Systems Science, National University of Singapore, Singapore (Abstract). |
Freisleben et al., “Recognition of Fractal Images Using a Neural Network”, Lecture Notes in Computer Science, 1993, vol. 6861, 1993, pp. 631-637. |
Howlett et al., “A Multi-Computer Neural Network Architecture in a Virtual Sensor System Application”, International Journal of Knowledge-based Intelligent Engineering Systems, 4 (2), pp. 86-93, ISSN 1327-2314; first submitted Nov. 30, 1999; revised version submitted Mar. 10, 2000. |
International Search Authority: “Written Opinion of the International Searching Authority” (PCT Rule 43bis.1) including International Search Report for International Patent Application No. PCT/US2008/073852; Date of Mailing: Jan. 28, 2009. |
International Search Authority: International Preliminary Report on Patentability (Chapter I of the Patent Cooperation Treaty) including “Written Opinion of the International Searching Authority” (PCT Rule 43bis. 1) for the corresponding International Patent Application No. PCT/IL2006/001235; Date of Issuance: Jul. 28, 2009. |
International Search Report for the corresponding International Patent Application PCT/IL2006/001235; Date of Mailing: Nov. 2, 2008. |
IPO Examination Report under Section 18(3) for corresponding UK application No. GB1001219.3, dated Sep. 12, 2011; Entire Document. |
Iwamoto, K.; Kasutani, E.; Yamada, A.: “Image Signature Robust to Caption Superimposition for Video Sequence Identification”; 2006 IEEE International Conference on Image Processing; pp. 3185-3188, Oct. 8-11, 2006; doi: 10.1109/ICIP.2006.313046. |
Jaeger, H.: “The “echo state” approach to analysing and training recurrent neural networks”, GMD Report, No. 148, 2001, pp. 1-43, XP002466251. German National Research Center for Information Technology. |
Lin, C.; Chang, S.: “Generating Robust Digital Signature for Image/Video Authentication”, Multimedia and Security Workshop at ACM Multimedia '98; Bristol, U.K., Sep. 1998; pp. 49-54. |
Lyon, Richard F.; “Computational Models of Neural Auditory Processing”; IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '84, Date of Conference: Mar. 1984, vol. 9, pp. 41-44. |
Maass, W. et al.: “Computational Models for Generic Cortical Microcircuits”, Institute for Theoretical Computer Science, Technische Universitaet Graz, Graz, Austria, published Jun. 10, 2003. |
Morad, T.Y. et al.: “Performance, Power Efficiency and Scalability of Asymmetric Cluster Chip Multiprocessors”, Computer Architecture Letters, vol. 4, Jul. 4, 2005 (Jul. 4, 2005), pp. 1-4, XP002466254. |
Natschlager, T. et al.: “The “liquid computer”: A novel strategy for real-time computing on time series”, Special Issue on Foundations of Information Processing of Telematik, vol. 8, No. 1, 2002, pp. 39-43, XP002466253. |
Ortiz-Boyer et al., “CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features”, Journal of Artificial Intelligence Research 24 (2005) 1-48 Submitted Nov. 2004; published Jul. 2005. |
Raichelgauz, I. et al.: “Co-evolutionary Learning in Liquid Architectures”, Lecture Notes in Computer Science, [Online] vol. 3512, Jun. 21, 2005 (Jun. 21, 2005), pp. 241-248, XP019010280 Springer Berlin / Heidelberg ISSN: 1611-3349 ISBN: 978-3-540-26208-4. |
Ribert et al. “An Incremental Hierarchical Clustering”, Visicon Interface 1999, pp. 586-591. |
Verstraeten et al., “Isolated word recognition with the Liquid State Machine: a case study”; Department of Electronics and Information Systems, Ghent University, Sint-Pietersnieuwstraat 41, 9000 Gent, Belgium, Available online Jul. 14, 2005. |
Verstraeten et al.: “Isolated word recognition with the Liquid State Machine: a case study”, Information Processing Letters, Amsterdam, NL, vol. 95, No. 6, Sep. 20, 2005 (Sep. 30, 2005), pp. 521-528, XP005028093 ISSN: 0020-0190. |
Ware et al., “Locating and Identifying Components in a Robot's Workspace using a Hybrid Computer Architecture”; Proceedings of the 1995 IEEE International Symposium on Intelligent Control, Aug. 27-29, 1995, pp. 139-144. |
Xian-Sheng Hua et al.: “Robust Video Signature Based on Ordinal Measure” In: 2004 International Conference on Image Processing, ICIP '04; Microsoft Research Asia, Beijing, China; published Oct. 24-27, 2004, pp. 685-688. |
Zeevi, Y. et al.: “Natural Signal Classification by Neural Cliques and Phase-Locked Attractors”, IEEE World Congress on Computational Intelligence, IJCNN2006, Vancouver, Canada, Jul. 2006 (Jul. 2006), XP002466252. |
Zhou et al., “Ensembling neural networks: Many could be better than all”; National Laboratory for Novel Software Technology, Nanjing University, Hankou Road 22, Nanjing 210093, PR China; Available online Mar. 12, 2002. |
Zhou et al., “Medical Diagnosis With C4.5 Rule Preceded by Artificial Neural Network Ensemble”; IEEE Transactions on Information Technology in Biomedicine, vol. 7, Issue: 1, pp. 37-42, Date of Publication: Mar. 2003. |
Chuan-Yu Cho, et al., “Efficient Motion-Vector-Based Video Search Using Query by Clip”, 2004, IEEE, Taiwan, pp. 1-4. |
Ihab Al Kabary, et al., “SportSense: Using Motion Queries to Find Scenes in Sports Videos”, Oct. 2013, ACM, Switzerland, pp. 1-3. |
Jianping Fan et al., “Concept-Oriented Indexing of Video Databases: Towards Semantic Sensitive Retrieval and Browsing”, IEEE, vol. 13, No. 7, Jul. 2004, pp. 1-19. |
Shih-Fu Chang, et al., “VideoQ: A Fully Automated Video Retrieval System Using Motion Sketches”, 1998, IEEE, New York, pp. 1-2. |
Wei-Te Li et al., “Exploring Visual and Motion Saliency for Automatic Video Object Extraction”, IEEE, vol. 22, No. 7, Jul. 2013, pp. 1-11. |
Brecheisen, et al., “Hierarchical Genre Classification for Large Music Collections”, ICME 2006, pp. 1385-1388. |
Lau, et al., “Semantic Web Service Adaptation Model for a Pervasive Learning Scenario”, 2008 IEEE Conference on Innovative Technologies in Intelligent Systems and Industrial Applications Year: 2008, pp. 98-103, DOI: 10.1109/CITISIA.2008.4607342 IEEE Conference Publications. |
McNamara, et al., “Diversity Decay in Opportunistic Content Sharing Systems”, 2011 IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks Year: 2011, pp. 1-3, DOI: 10.1109/WoWMoM.2011.5986211 IEEE Conference Publications. |
Santos, et al., “SCORM-MPEG: an Ontology of Interoperable Metadata for Multimedia and e-Learning”, 2015 23rd International Conference on Software, Telecommunications and Computer Networks (SoftCOM) Year: 2015, pp. 224-228, DOI: 10.1109/SOFTCOM.2015.7314122 IEEE Conference Publications. |
Wilk, et al., “The Potential of Social-Aware Multimedia Prefetching on Mobile Devices”, 2015 International Conference and Workshops on Networked Systems (NetSys) Year: 2015, pp. 1-5, DOI: 10.1109/NetSys.2015.7089081 IEEE Conference Publications. |
Odinaev, et al., “Cliques in Neural Ensembles as Perception Carriers”, Technion—Israel Institute of Technology, 2006 International Joint Conference on Neural Networks, Canada, 2006, pp. 285-292. |
The International Search Report and the Written Opinion for PCT/US2016/054634 dated Mar. 16, 2017, ISA/RU, Moscow, RU. |
Number | Date | Country | |
---|---|---|---|
20140310020 A1 | Oct 2014 | US |
Number | Date | Country | |
---|---|---|---|
61839871 | Jun 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12434221 | May 2009 | US |
Child | 13344400 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13624397 | Sep 2012 | US |
Child | 14314567 | US | |
Parent | 13344400 | Jan 2012 | US |
Child | 13624397 | US | |
Parent | 12195863 | Aug 2008 | US |
Child | 13624397 | US | |
Parent | 12084150 | US | |
Child | 12195863 | US |