Systems and Methods of Synchronizing EEG Data with a Patient's Associated Audio and Video Data

Information

  • Patent Application
  • Publication Number
    20240324934
  • Date Filed
    March 27, 2024
  • Date Published
    October 03, 2024
Abstract
Systems and methods of enabling synchronization of a patient's EEG data with associated video and audio data for analysis in a computing device include a synchronization device which generates, synchronously, first data indicative of a first differential electrical signal, second data indicative of a second differential electrical signal, a plurality of visible light signals that are acquired by a video acquisition device to generate third data, and a plurality of audio signals that are acquired by an audio acquisition device to generate fourth data. The computing device compares the first data with the third data to calculate a first time compensation, compares the second data with the fourth data to calculate a second time compensation, and applies the first time compensation to the patient's video data and the second time compensation to the patient's audio data.
Description
FIELD

The present specification is related generally to the field of neurophysiological monitoring. More specifically, the present specification is related to systems and methods for synchronizing EEG data with a patient's associated audio and video over an extended period of time.


BACKGROUND

In diagnosing epilepsy using EEG (electroencephalography) recordings, it is frequently beneficial to simultaneously record the audio and video of a patient. There are many reasons for this, including EEG quality assessment and collecting diagnostic information. Specifically, for EEG quality assessment, the use of video and audio data enables the differentiation of true EEG activity from artifacts. For instance, chewing may create seizure-like EEG activity when, in fact, no seizure is occurring.


Diagnostic information, such as the patient's behavior before, during, and after a seizure, may help classify as well as localize the seizure. For example, if the seizure starts with movement of the fingers of the left hand, the corresponding part of the motor cortex is suspected to be involved in the seizure. In pseudo seizures (psychosomatic seizures), the seizure may be seen on video, but not in the EEG. Audio and/or video may also be used to validate an EEG. If the recorded video shows seizure activity before the EEG signal does, it may be concluded that the recorded EEG does not reflect the source of the start of the seizure but represents secondary activation.


For at least the above reasons, it is imperative to be able to accurately and tightly synchronize and correlate the EEG data, video data, and audio data. However, the EEG, the video, and the audio data streams of the patient are frequently created separately and combined using EEG software implemented by a computer. Consequently, each data stream may have different delays introduced before reaching the computer and ultimately, the EEG software.


There is often no time information integrated into the audio and video data streams of the patient, and even if there is integrated clock information, the drift in time over the duration of a recording, which can span multiple days, is much larger than is clinically acceptable.


Accordingly, there is a need for systems and methods to enable accurate synchronization and/or correlation of EEG, audio and video data streams of a patient recorded over an extended period of time.


SUMMARY

The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools and methods, which are meant to be exemplary and illustrative, and not limiting in scope. The present application discloses numerous embodiments.


In some embodiments, the present specification describes a method of enabling synchronization of a patient's EEG data with associated video and audio data for analysis in a computing device, wherein the patient's EEG, video and audio data are acquired over a period of time of monitoring the patient, comprising: acquiring, using a plurality of sensors positioned on the patient's scalp, the EEG data of the patient; generating, using a device, first data indicative of a first differential electrical signal, second data indicative of a second differential electrical signal, a plurality of visible light signals and a plurality of audio signals, wherein the first data, the second data, the plurality of visible light signals and the plurality of audio signals are generated synchronously; receiving, by a multi-channel amplifier, the first data and second data from the device and the EEG data of the patient from the plurality of sensors; acquiring, by a video acquisition device, the plurality of visible light signals; generating, by the video acquisition device, the patient's video data and third data based on the plurality of visible light signals; acquiring, by an audio acquisition device, the plurality of audio signals; generating, by the audio acquisition device, the patient's audio data and fourth data based on the plurality of audio signals; receiving, by the computing device, the first data, the second data and the patient's EEG data from the multi-channel amplifier, the third data and the patient's video data from the video acquisition device and the fourth data and the patient's audio data from the audio acquisition device; comparing, by the computing device, the first data with the third data in order to calculate a first time compensation; comparing, by the computing device, the second data with the fourth data in order to calculate a second time compensation; and applying, by the computing device, the first time compensation to the patient's video data and the 
second time compensation to the patient's audio data.


Optionally, each of the first, second, third and fourth data is encoded with a predefined unique pattern of a plurality of pulses. Optionally, each of the plurality of pulses has an associated start time, end time and duration time.


Optionally, the first and third data are determined to be out of sync with each other if a start time of a pulse in the first data is different from a start time of a corresponding pulse in the third data. Optionally, the first time compensation is calculated as a difference between the start time of the pulse in the first data and the start time of the corresponding pulse in the third data.


Optionally, the second and fourth data are determined to be out of sync with each other if a start time of a pulse in the second data is different from a start time of a corresponding pulse in the fourth data. Optionally, the second time compensation is calculated as a difference between the start time of the pulse in the second data and the start time of the corresponding pulse in the fourth data.
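The time compensations described above reduce to a start-time difference between corresponding pulses. The sketch below illustrates this under stated assumptions; the function name, the averaging over several pulse pairs, and the sign convention are illustrative choices, not taken from the specification.

```python
def time_compensation(ref_starts, obs_starts):
    """Time compensation as the start-time difference between pulses.

    ref_starts: start times (seconds) of pulses in the amplifier-side
        data (the first or second data).
    obs_starts: start times of the corresponding pulses detected in the
        video- or audio-derived data (the third or fourth data).
    Averaging over several corresponding pulse pairs (an illustrative
    refinement, not mandated by the specification) reduces jitter.
    """
    offsets = [obs - ref for ref, obs in zip(ref_starts, obs_starts)]
    return sum(offsets) / len(offsets)
```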


Optionally, the predefined unique pattern has a total time duration. Optionally, the predefined unique pattern of said total time duration is repeated continuously over at least the period of time of monitoring the patient. Optionally, the predefined unique pattern includes a first stream of pulses that is followed by a second stream of pulses and that is followed by a third stream of pulses. Optionally, the first stream includes a first number of single pulses that are spaced apart from each other by a first time interval, and wherein the first stream has a first time duration. Optionally, the second stream includes a second number of dual pulses that are spaced apart from each other by a second time interval, and wherein the second stream has a second time duration. Optionally, the third stream includes a third number of triple pulses that are spaced apart from each other by a third time interval, and wherein the third stream has a third time duration. Optionally, the total time duration is a sum of the first, second and third time durations.
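The single/dual/triple pulse stream structure above can be sketched as a generator of pulse start times. All counts, intervals, and pulse widths below are hypothetical placeholders, since the specification leaves the concrete values open:

```python
def build_pattern(n_single, n_double, n_triple,
                  gap1, gap2, gap3,
                  pulse_width=0.01, intra_gap=0.02):
    """One period of the predefined pattern, as pulse start times (seconds).

    The first stream holds n_single single pulses spaced gap1 apart, the
    second stream n_double dual pulses spaced gap2 apart, and the third
    stream n_triple triple pulses spaced gap3 apart. pulse_width and
    intra_gap (spacing inside a dual/triple group) are illustrative.
    """
    starts, t = [], 0.0
    for group_size, count, gap in ((1, n_single, gap1),
                                   (2, n_double, gap2),
                                   (3, n_triple, gap3)):
        for _ in range(count):
            for k in range(group_size):
                starts.append(t + k * (pulse_width + intra_gap))
            t += gap  # spacing between consecutive groups in a stream
    return starts
```

Repeating this period continuously over the monitoring session gives each pulse a unique position within the pattern, which is what lets corresponding pulses be matched across streams.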


Optionally, the device includes at least one light emitting diode for generating the plurality of visible light signals and an acoustic generator for generating the plurality of audio signals.


Optionally, each of the first and second differential signals has a frequency ranging from 20 Hz to 40 Hz and an amplitude of 1 mV.


Optionally, the first data is compared with the third data in order to calculate a first time compensation if the first and third data are determined to be out of sync with each other.


Optionally, the second data is compared with the fourth data in order to calculate a second time compensation if the second and fourth data are determined to be out of sync with each other.


In some embodiments, the present specification describes a system of enabling synchronization of a patient's EEG data with associated video and audio data for analysis in a computing device, wherein the patient's EEG, video and audio data are acquired over a period of time of monitoring the patient, comprising: a plurality of sensors positioned on the patient's scalp configured to acquire the EEG data of the patient; a device configured to generate first data indicative of a first differential electrical signal, second data indicative of a second differential electrical signal, a plurality of visible light signals and a plurality of audio signals, wherein the first data, the second data, the plurality of visible light signals and the plurality of audio signals are generated mutually synchronously; a multi-channel amplifier configured to receive the first data and second data from the device and the EEG data of the patient from the plurality of sensors; a video acquisition device configured to acquire the plurality of visible light signals, wherein the video acquisition device generates the patient's video data and third data based on the plurality of visible light signals; an audio acquisition device configured to acquire the plurality of audio signals, wherein the audio acquisition device generates the patient's audio data and fourth data based on the plurality of audio signals; and one or more processors in the computing device, said one or more processors configured to execute a plurality of executable programmatic instructions to: receive the first data, the second data and the patient's EEG data from the multi-channel amplifier, the third data and the patient's video data from the video acquisition device along with the fourth data and the patient's audio data from the audio acquisition device; compare the first data with the third data in order to calculate a first time compensation if the first and third data are determined to be out of sync with each other; compare the second data with the fourth data in order to calculate a second time compensation if the second and fourth data are determined to be out of sync with each other; and apply the first time compensation to the patient's video data and the second time compensation to the patient's audio data.


Optionally, each of the first, second, third and fourth data is encoded with a predefined unique pattern of a plurality of pulses. Optionally, each of the plurality of pulses has an associated start time, end time and duration time. Optionally, the first and third data are determined to be out of sync with each other if a start time of a pulse in the first data is different from a start time of a corresponding pulse in the third data. Optionally, the first time compensation is calculated as a difference between the start time of the pulse in the first data and the start time of the corresponding pulse in the third data. Optionally, the second and fourth data are determined to be out of sync with each other if a start time of a pulse in the second data is different from a start time of a corresponding pulse in the fourth data. Optionally, the second time compensation is calculated as a difference between the start time of the pulse in the second data and the start time of the corresponding pulse in the fourth data. Optionally, the predefined unique pattern has a total time duration. Optionally, the predefined unique pattern of said total time duration is repeated continuously over at least the period of time of monitoring the patient.


Optionally, the predefined unique pattern includes a first stream of pulses that is followed by a second stream of pulses and that is followed by a third stream of pulses. Optionally, the first stream includes a first number of single pulses that are spaced apart from each other by a first time interval, and wherein the first stream has a first time duration. Optionally, the second stream includes a second number of dual pulses that are spaced apart from each other by a second time interval, and wherein the second stream has a second time duration. Optionally, the third stream includes a third number of triple pulses that are spaced apart from each other by a third time interval, and wherein the third stream has a third time duration. Optionally, the total time duration is a sum of the first, second and third time durations.


Optionally, the device includes at least one light emitting diode for generating the plurality of visible light signals and an acoustic generator for generating the plurality of audio signals.


Optionally, each of the first and second differential signals has a frequency ranging from 20 Hz to 40 Hz and an amplitude of 1 mV.


In some embodiments, the present specification describes a system of enabling synchronization of a patient's EEG data with associated video and audio data for analysis in a computing device, wherein the patient's EEG, video and audio data are acquired over a period of time of monitoring the patient, comprising: a plurality of sensors positioned on the patient's scalp configured to acquire the EEG data of the patient; a device configured to generate first data indicative of a first differential electrical signal, second data indicative of a second differential electrical signal, a plurality of visible light signals and a plurality of audio signals, wherein the first data, the second data, the plurality of visible light signals and the plurality of audio signals are generated mutually synchronously; a multi-channel amplifier configured to receive the first data and second data from the device and the EEG data of the patient from the plurality of sensors; a video acquisition device configured to acquire the plurality of visible light signals, wherein the video acquisition device generates the patient's video data and third data based on the plurality of visible light signals; an audio acquisition device configured to acquire the plurality of audio signals, wherein the audio acquisition device generates the patient's audio data and fourth data based on the plurality of audio signals, and wherein each of the first, second, third and fourth data is encoded with a predefined unique pattern of a plurality of pulses; and one or more processors in the computing device, said one or more processors configured to execute a plurality of executable programmatic instructions to: receive the first data, the second data and the patient's EEG data from the multi-channel amplifier, the third data and the patient's video data from the video acquisition device along with the fourth data and the patient's audio data from the audio acquisition device; compare the first data with the third 
data in order to calculate a first time compensation if the first and third data are determined to be out of sync with each other, wherein the first and third data are determined to be out of sync with each other if a start time of a pulse in the first data is different from a start time of a corresponding pulse in the third data, and wherein the first time compensation is calculated as a difference between the start time of the pulse in the first data and the start time of the corresponding pulse in the third data; compare the second data with the fourth data in order to calculate a second time compensation if the second and fourth data are determined to be out of sync with each other, wherein the second and fourth data are determined to be out of sync with each other if a start time of a pulse in the second data is different from a start time of a corresponding pulse in the fourth data, and wherein the second time compensation is calculated as a difference between the start time of the pulse in the second data and the start time of the corresponding pulse in the fourth data; and apply the first time compensation to the patient's video data and the second time compensation to the patient's audio data.


Optionally, the predefined unique pattern has a total time duration, and wherein the predefined unique pattern of said total time duration is repeated continuously over at least the period of time of monitoring the patient. Optionally, the predefined unique pattern includes a first stream of pulses that is followed by a second stream of pulses and that is followed by a third stream of pulses. Optionally, the first stream includes a first number of single pulses that are spaced apart from each other by a first time interval, and wherein the first stream has a first time duration. Optionally, the second stream includes a second number of dual pulses that are spaced apart from each other by a second time interval, and wherein the second stream has a second time duration. Optionally, the third stream includes a third number of triple pulses that are spaced apart from each other by a third time interval, and wherein the third stream has a third time duration. Optionally, the total time duration is a sum of the first, second and third time durations.


Optionally, the device includes at least one light emitting diode for generating the plurality of visible light signals and an acoustic generator for generating the plurality of audio signals.


The aforementioned and other embodiments of the present specification shall be described in greater depth in the drawings and detailed description provided below.





BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings illustrate various embodiments of systems, methods, and various other aspects of the disclosure. Any person with ordinary skill in the art will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. It may be that in some examples one element may be designed as multiple elements or that multiple elements may be designed as one element. In some examples, an element shown as an internal component of one element may be implemented as an external component in another and vice versa. Furthermore, elements may not be drawn to scale. Non-limiting and non-exhaustive descriptions are provided with reference to the following drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating principles.



FIG. 1 illustrates an electroencephalography (EEG) system 100 configured for synchronizing detected EEG data with associated audio and video data of a patient, in accordance with some embodiments of the present specification;



FIG. 2 is a GUI (graphical user interface) generated by EEG and DSA (Data Synchronization Analysis) modules, in accordance with some embodiments of the present specification; and



FIG. 3 is a flowchart of a plurality of exemplary steps of a method for synchronizing a patient's EEG, audio, and video data acquired over a period of time while monitoring the patient, in accordance with some embodiments of the present specification.





DETAILED DESCRIPTION

The present specification is directed towards systems and methods for synchronization of EEG data, video data, and audio data. Because there is often no time information integrated into the audio and video data streams of the patient, the time drift over the duration of a recording, which can span multiple days, is much larger than is clinically acceptable. Thus, the systems and methods of the present specification allow for accurate synchronization and/or correlation of the EEG, audio, and video data streams of a patient recorded over an extended period of time. The methods of the present specification apply a first time compensation to the patient's video data and a second time compensation to the patient's audio data in order to synchronize the patient's EEG data with the video data and audio data of the patient. In embodiments, the first time compensation may be indicative of a drift (lead or delay/lag) in time of the patient's video data with respect to the patient's EEG data. In embodiments, the second time compensation may be indicative of a drift (lead or delay/lag) in time of the patient's audio data with respect to the patient's EEG data.


The present specification is directed towards multiple embodiments. The following disclosure is provided in order to enable a person having ordinary skill in the art to practice the invention. Language used in this specification should not be interpreted as a general disavowal of any one specific embodiment or used to limit the claims beyond the meaning of the terms used therein. The general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the invention. Also, the terminology and phraseology used is for the purpose of describing exemplary embodiments and should not be considered limiting. Thus, the present invention is to be accorded the widest scope encompassing numerous alternatives, modifications and equivalents consistent with the principles and features disclosed. For purpose of clarity, details relating to technical material that is known in the technical fields related to the invention have not been described in detail so as not to unnecessarily obscure the present invention.


In various embodiments, a computing device includes an input/output controller, at least one communications interface, and a system memory. The system memory includes at least one random access memory (RAM) and at least one read-only memory (ROM). These elements are in communication with a central processing unit (CPU) to enable operation of the computing device. In various embodiments, the computing device may be a conventional standalone computer or alternatively, the functions of the computing device may be distributed across multiple computer systems and architectures.


In some embodiments, execution of a plurality of sequences of programmatic instructions or code enable or cause the CPU of the computing device to perform various functions and processes. In alternate embodiments, hard-wired circuitry may be used in place of, or in combination with, software instructions for implementation of the processes of systems and methods described in this application. Thus, the systems and methods described are not limited to any specific combination of hardware and software.


The term “module”, “application” or “engine” used in this disclosure may refer to computer logic utilized to provide a desired functionality, service or operation by programming or controlling a general purpose processor. Stated differently, in some embodiments, a module, application or engine implements a plurality of instructions or programmatic code to cause a general purpose processor to perform one or more functions. In various embodiments, a module, application or engine can be implemented in hardware, firmware, software or any combination thereof. The module, application or engine may be interchangeably used with unit, logic, logical block, component, or circuit, for example. The module, application or engine may be the minimum unit, or part thereof, which performs one or more particular functions.


In the description and claims of the application, each of the words “comprise”, “include”, “have”, “contain”, and forms thereof, are not necessarily limited to members in a list with which the words may be associated. Thus, they are intended to be equivalent in meaning and be open-ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It should be noted herein that any feature or component described in association with a specific embodiment may be used and implemented with any other embodiment unless clearly indicated otherwise.


It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the preferred systems and methods are now described.


Overview


FIG. 1 illustrates an electroencephalography (EEG) system 100 configured for synchronizing detected EEG data with associated audio and video data of a patient, in accordance with some embodiments of the present specification. The system 100 comprises a multi-channel EEG amplifier 105 in data communication with a synchronization device 110 over a first communication link 126. In addition, the EEG amplifier 105 is in data communication with a computing device 112 over a second communication link. Further, the EEG amplifier 105 is in data communication, over a third communication link, with a plurality of EEG sensors or electrodes spatially positioned on a layer of tissue such as the scalp of a patient 140. In turn, the computing device 112 is in data communication with the synchronization device 110 over a fourth communication link, a video acquisition device 115 such as a camera over a fifth communication link, and an audio acquisition device 120 such as a microphone over a sixth communication link. In some embodiments, instead of a separate video acquisition device 115 for the acquisition of video and a separate audio acquisition device 120 for the acquisition of audio, the system 100 may use a single device, such as, for example, a camera, for acquisition of both audio and video. In embodiments, the audio data stream may be captured by a microphone integrated into the video camera or a separate microphone.


It should be noted herein that each form of data is recorded into discrete data channels. For example, when a video camera is recording video data and audio data, the discrete portions of data or data sets are recorded into different channels (or streams of data). The systems and methods of the present specification contemplate synchronization of any and/or all of the data sets and/or streams with each other; therefore, any and all audio streams may be synchronized with the video data and EEG data streams using the synchronization device 110. In various embodiments, the first, second, third, fourth, fifth, sixth, and any other communication links may be wired or wireless links.


It should also be noted herein that any of the data sets described throughout this specification can be evaluated absolutely (or relative to any other data set) to determine a degree of time lag and a time compensation factor to make up for the lag and that time compensation factor can then be applied to any other data set absolutely (or relative to any other data set) to synchronize the other data set with the first data set.
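Before a degree of time lag can be measured, pulse start times must be recovered from each sampled stream. A minimal threshold-crossing detector is sketched below as an illustration only; a practical detector would add filtering and debouncing, and the specification does not prescribe any particular detection method.

```python
def pulse_start_times(samples, fs, threshold):
    """Return pulse start times (seconds) as upward threshold crossings.

    samples: the digitized stream (an amplifier channel, LED brightness
        extracted from video frames, or an audio envelope).
    fs: sample rate of that stream in Hz; timing resolution is 1/fs,
        which is why a kHz audio channel can be aligned more precisely
        than video sampled at tens of frames per second.
    """
    starts, below = [], True
    for i, sample in enumerate(samples):
        if below and sample > threshold:
            starts.append(i / fs)  # first sample above threshold
            below = False
        elif sample <= threshold:
            below = True
    return starts
```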


The plurality of EEG sensors or electrodes record electrical signals (EEG signals) from the patient's brain and communicate the analog signals to the multi-channel amplifier 105 that is configured to amplify the signals, convert the signals from an analog EEG signal to digital EEG data, and communicate the resultant digital EEG data to the computing device 112. Simultaneously, the video acquisition device 115 records video data of the patient 140 and the audio acquisition device 120 records audio data of the patient 140. The recorded video and audio data of the patient 140 are communicated to the computing device 112.


In some embodiments, the computing device 112 is configured to implement an EEG module, engine, or application 130 and a data synchronization analysis (DSA) module, engine, or application 132. In some embodiments, the EEG module 130 and DSA module 132 are configured to receive, synchronize and combine the EEG data, video data and audio data of the patient 140 for further processing, analyses and display on a monitor or display screen associated with the computing device 112.


Synchronization Device 110

In accordance with aspects of the present specification, the synchronization device 110 is configured to generate, synchronously, a first event, a second event, and a third event. That is, the first event, second event, and third event are generated simultaneously and therefore have the same start times.


The first event corresponds to generation of first data to be used for video synchronization and second data to be used for audio synchronization. In some embodiments, the first data is indicative of a first differential electrical signal and the second data is indicative of a second differential electrical signal. The first data and second data are received by the multi-channel amplifier 105 (via the first communication link 126) and digitized by the multi-channel amplifier 105. The first data and second data, along with the EEG data of the patient, are communicated by the multi-channel amplifier 105 to the computing device 112. In some embodiments, the first differential electrical signal and second differential electrical signal have a frequency in a range of 20 Hz to 40 Hz and an amplitude in a range of 0.5 mV to 2.5 mV, preferably 1 mV. In some embodiments, the synchronization device 110 includes at least one electrical circuit component for generating the first data and second data.
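As an illustration of the signal parameters above, such a differential pair could be sampled as two anti-phase channels whose difference the amplifier records. The 30 Hz and 1 mV choices sit inside the stated ranges; the sample rate and the sinusoidal waveform are assumptions of this sketch, not taken from the specification.

```python
import math

def differential_samples(freq_hz=30.0, amplitude_v=0.001,
                         fs=256, seconds=1.0):
    """Sample an illustrative differential pair: two anti-phase sinusoids.

    freq_hz and amplitude_v are chosen within the 20-40 Hz and
    0.5-2.5 mV ranges of the specification; fs and the waveform shape
    are assumptions. The amplifier would record the difference of the
    two returned channels, which peaks at amplitude_v.
    """
    n = int(fs * seconds)
    pos = [0.5 * amplitude_v * math.sin(2 * math.pi * freq_hz * i / fs)
           for i in range(n)]
    neg = [-s for s in pos]  # anti-phase channel
    return pos, neg
```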


In some embodiments, if the audio and video streams are synchronous, for instance when the audio or sound is recorded directly by the camera 115 to generate a combined audio and video stream, then EEG and video synchronization is enabled by using the audio signal in the combined stream, since audio is sampled at a much higher frequency (kHz instead of Hz) than the video recording, thereby allowing for more accurate synchronization.


The second event corresponds to generation of a plurality of visible light signals. In some embodiments, the plurality of visible light signals includes one or more blinking or pulsating visual indicators such as, for example, LEDs 122 (light emitting diodes). In some embodiments, infrared light is used (in order to minimize disturbing people in the vicinity) to generate a plurality of light signals, in which case the video acquisition device 115 is capable of recording infrared light. The plurality of visible light signals is recorded by the video acquisition device 115 and digitized in order to generate third data indicative of the plurality of visible light signals. The video acquisition device 115 communicates the third data to the computing device 112. In some embodiments, the synchronization device 110 includes at least one LED for generating the plurality of visible light signals.


The third event corresponds to generation of a plurality of audio signals. In some embodiments, the plurality of audio signals includes a beeping or pulsating sequence of audio or sound. The plurality of audio signals is used to separately identify different instances of synchronization sounds to avoid synchronizing a wrong event with a wrong time within the EEG data. Thus, the plurality of audio signals acts as a “fingerprint” to separate different sound events from one another. In some embodiments, the sound is generated by an acoustics generator and output by, for example, a speaker 124. The plurality of audio signals is recorded by the audio acquisition device 120 and digitized in order to generate fourth data indicative of the plurality of audio signals. The audio acquisition device 120 communicates the fourth data to the computing device 112. In some embodiments, the synchronization device 110 includes a speaker and acoustics generator 124 for generating the plurality of audio signals. In some embodiments, the audio or sound has a high frequency, ranging from 500 Hz to 5 kHz, to optimize synchronization and minimize disturbance from the sound.
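Such an audio fingerprint could be rendered as short tone bursts placed at the pulse start times. In the sketch below, the 2 kHz tone lies within the 500 Hz to 5 kHz range given above, while the burst length and sample rate are illustrative assumptions:

```python
import math

def beep_sequence(pulse_starts_s, tone_hz=2000.0,
                  beep_len_s=0.05, fs=8000):
    """Render pulse start times as short tone bursts (mono samples).

    tone_hz lies within the 500 Hz-5 kHz range of the specification;
    beep_len_s and fs are illustrative. Silence fills the gaps so the
    bursts stand out as a recognizable 'fingerprint' in the recording.
    """
    total = int(fs * (max(pulse_starts_s) + beep_len_s)) + 1
    out = [0.0] * total
    for start in pulse_starts_s:
        first = int(start * fs)
        for i in range(int(beep_len_s * fs)):
            out[first + i] = math.sin(2 * math.pi * tone_hz * i / fs)
    return out
```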


In some embodiments, the synchronization device 110 also generates a trigger pulse or signal that is communicated to the multi-channel amplifier 105 (via the first communication link 126) either through a dedicated trigger input or using the differential amplifier inputs (similar inputs to where the patient electrodes are connected). In some embodiments, the trigger pulse or signal is generated synchronously with the first differential signal and the second differential signal corresponding to the first event. Thus, when the synchronization device 110 generates the first differential signal, the second differential signal, the plurality of visible light signals, and the plurality of audio signals it also, at the same time or simultaneously, delivers the trigger pulse or signal through the first communication link 126 to the multi-channel amplifier 105. In accordance with aspects of the present specification, the trigger pulse or signal tags the first, second, third and fourth data with a very precise point in time, and the trigger pulse or signal is marked as a corresponding fourth event that is displayed as a discrete point in time (for example, shown as an event 270 in FIG. 2) in the EEG data recording.


Thus, in embodiments, the first event, second event, and third event are acquired or recorded separately by the multi-channel amplifier 105, the video acquisition device 115 and the audio acquisition device 120, respectively. However, the first differential electrical signal corresponding to the first data, the second differential electrical signal corresponding to the second data, the plurality of visible light signals corresponding to the third data, and the plurality of audio signals corresponding to the fourth data are generated synchronously by the synchronization device 110. That is, the start time of each of these signals is the same.


It should be appreciated that in various embodiments the synchronization device 110 may be configured to integrate the respective components for generating the first event, second event, and third event. For example, the synchronization device 110 may integrate an electrical circuit for generating the first data and second data, at least one LED for generating the plurality of visible light signals and a speaker/acoustics generator for generating the plurality of audio signals. In alternate embodiments, the respective components for generating the first data, second data, third data, and fourth data may be separate or distributed.


In some embodiments, the first data, second data, third data, and fourth data possess the following plurality of characteristics:

    • A) Each of the first, second, third and fourth data streams is encoded with a predefined unique pattern or sequence of pulses. The unique pattern or sequence of pulses has an overall predefined time duration of ‘T’ seconds. It should be noted that the pulses of the first and second data correspond to a plurality of pulses of the first differential electrical signal and second differential electrical signal, respectively; the pulses of the third data correspond to periodic flashes or pulsations of the visible light (that flash or pulsate in tandem with the plurality of pulses of the first and second differential electrical signals); and the pulses of the fourth data correspond to periodic beeping of the audio or sound (that beeps in tandem with the plurality of pulses of the first and second differential electrical signals). Also, in embodiments, each pulse (of the first, second, third and fourth data) has an associated start time, end time and time duration that are predefined.
    • B) The unique pattern or sequence of pulses, over the predefined time duration of ‘T’ seconds, is the same and synchronous for each of the first, second, third and fourth data. Synchronous refers to a situation where, for example, each pulse of each of the first, second, third and fourth data has the same start time.
    • C) The unique pattern or sequence of pulses, having an overall predefined time duration of ‘T’ seconds, is repeated over an extended period of time. In some embodiments, the extended period of time corresponds to at least the period of time during which the patient's EEG, video and audio data is recorded.
    • D) The unique pattern or sequence of pulses, having an overall predefined time duration of ‘T’ seconds, allows uniquely recognizing a specific synchronization event (of the first synchronization event, second synchronization event, and third synchronization event) over the extended period of time.
    • E) The unique pattern or sequence of pulses, having an overall predefined time duration of ‘T’ seconds, enables unique identification of the recording as well as the point in time over an extended period of time. The unique pattern or sequence of pulses enables unique identification of the recording so that the EEG recording can automatically be linked to the corresponding video and audio streams. It should be appreciated that the unique pattern or sequence of pulses conveys different codes that allow the EEG module 130 and DSA module 132 (which are configured accordingly) to analyze the codes in the unique pattern or sequence of pulses in the video signal, for example, and the EEG data to detect which codes in the different data streams belong together. For instance, in a non-limiting example, the synchronization device 110 may generate visible light signals and audio signals comprising code 1 (corresponding to a first pattern, sequence or stream of pulses), then code 2 (corresponding to a second pattern, sequence or stream of pulses), and then code 3 (corresponding to a third pattern, sequence or stream of pulses). The visible light signals and audio signals may be recorded, for example, by the camera 115 in order to generate corresponding video and audio streams and stored together with the EEG data. The EEG module 130 and DSA module 132 then analyze, as they are configured to do, both the EEG and video data and match the right codes, or the pattern, sequence or stream of pulses, to each other.
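The code-matching idea described in the list above can be sketched as follows. This is a minimal illustration under assumed data structures, each detected synchronization event having been reduced to a (code, start_time) pair by some upstream detector; it is not the implementation of the EEG module 130 or DSA module 132.

```python
# Hypothetical sketch: pairing synchronization codes detected in two data
# streams (e.g. the EEG-side first data and the video-side third data).
# The event representation is an assumption made for this sketch.

def match_codes(eeg_events, video_events):
    """Pair events from two streams that carry the same code.

    Each event is a (code, start_time) tuple. Returns a list of
    (code, eeg_start, video_start) triples for codes found in both streams,
    in the order the codes appear in the EEG stream.
    """
    video_by_code = {code: t for code, t in video_events}
    pairs = []
    for code, eeg_t in eeg_events:
        if code in video_by_code:
            pairs.append((code, eeg_t, video_by_code[code]))
    return pairs

# Codes 1-3 detected in the EEG data and, slightly later, in the video data:
eeg = [(1, 0.00), (2, 30.00), (3, 60.00)]
video = [(1, 0.12), (2, 30.12), (3, 60.12)]
print(match_codes(eeg, video))
```

Once the codes are paired, the per-code start-time differences give the raw material for the time compensations described later in this specification.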



FIG. 2 is a GUI (graphical user interface) 200 generated by the EEG and DSA modules 130, 132 (FIG. 1), in accordance with some embodiments of the present specification. As shown, the GUI 200 has a first area or window 250 that displays graphical illustrations of first data 202, indicative of a plurality of pulses of the first differential electrical signal that are used for video synchronization, and second data 204, indicative of a plurality of pulses of the second differential electrical signal that are used for audio synchronization. A second window 260 displays the synchronization device 110, which generates the first data 202 and second data 204 as well as third data 206 indicative of a plurality of visible light signals and fourth data indicative of a plurality of audio signals. Additionally, the synchronization device 110 generates the trigger pulse or signal (synchronously with the first and second differential electrical signals) that corresponds to the fourth event 270. The trigger pulse or signal, along with the first, second, third and fourth data, are synchronous at the time of generation.


In embodiments, each of the first, second, third and fourth data is encoded with a predefined unique pattern or sequence of pulses having an overall predefined time duration of ‘T’ seconds. In various embodiments, the predefined unique pattern or sequence of pulses includes a plurality of streams of pulses wherein each of the plurality of streams has a predefined time duration and includes a predefined number and type of pulses. Each of the pulses, for each of the first, second, third and fourth data, is synchronous at the time of generation. Stated differently, a start time of each corresponding pulse is the same for each of the first, second, third and fourth data.


In one non-limiting embodiment, the predefined unique pattern or sequence of pulses includes a first stream of pulses 212 that is followed by a second stream of pulses 214, which, in turn, is followed by a third stream of pulses 216. The first stream 212 includes ‘p’ number of singular pulses 213 that are spaced apart from each other by a predefined time interval. The first stream 212 has a total time duration Tp. The second stream 214 includes ‘q’ number of dual pulse sets 215 (that is, two pulses generated in quick succession) such that the dual pulse sets are spaced apart from each other by a predefined time interval. The second stream 214 has a total time duration Tq. The third stream 216 includes ‘r’ number of triple pulse sets 217 (that is, three pulses generated in quick succession) such that the triple pulse sets are spaced apart from each other by a predefined time interval. The third stream 216 has a total time duration Tr. The overall total predefined time duration of the predefined unique pattern or sequence of pulses T=Tp+Tq+Tr.


In various embodiments, p, q and r may or may not be equal. Similarly, Tp, Tq and Tr may or may not be of the same duration. It should also be appreciated that while FIG. 2 illustrates three pulse streams 212, 214, 216, in alternate embodiments the number of streams of pulses may vary from 2 to 6, and the sequence of the pulse streams may also vary. In various non-limiting embodiments, the number of pulses in each of the first, second and third streams of pulses varies from 1 to 9, each unique pattern or sequence of pulses consists of three trains or streams, the time between pulses is 0.2 seconds and the time between the trains or streams is 2 seconds. In one embodiment, p=4, q=2, r=3 and Tp=Tq=Tr=10 seconds. Therefore, the overall predefined time duration of the predefined unique pattern or sequence of pulses is T=Tp+Tq+Tr=10+10+10=30 seconds.
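As a worked illustration of the pulse pattern just described, the sketch below generates the start times of every pulse in one pattern for the example p=4, q=2, r=3. The 0.2-second spacing between pulses within a set follows the non-limiting values above; the 2-second gap applied between successive singles or sets is an assumption made for this sketch (the specification states 2 seconds between the trains or streams), so the resulting stream durations differ from the Tp=Tq=Tr=10-second embodiment.

```python
def pattern_pulse_times(p, q, r, intra=0.2, set_gap=2.0):
    """Return the start times (in seconds) of every pulse in one pattern:
    p single pulses, then q dual-pulse sets, then r triple-pulse sets.
    Pulses inside a set are `intra` seconds apart; successive singles or
    sets are `set_gap` seconds apart (an assumed value for this sketch)."""
    times, t = [], 0.0
    for group_count, pulses_per_set in ((p, 1), (q, 2), (r, 3)):
        for _ in range(group_count):
            for k in range(pulses_per_set):
                times.append(round(t + k * intra, 3))
            t += (pulses_per_set - 1) * intra + set_gap
    return times

# p=4 singles, q=2 dual sets, r=3 triple sets -> 4 + 4 + 9 = 17 pulses total
print(pattern_pulse_times(4, 2, 3))
```

The same start-time list would drive the differential electrical signal, the LED flashes and the audio beeps, since all four data streams share one synchronous pattern.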


In embodiments, the predefined unique pattern or sequence of pulses comprising the first pulse stream 212, the second pulse stream 214, and the third pulse stream 216 is repeated over at least a time duration for which the patient's EEG, video and audio data is recorded.


Synchronization Method

The DSA module 132 is configured to receive and analyze the first data, second data, third data and fourth data in order to determine if the patient's EEG data, audio data, and video data is in synchrony over a period of time during which the patient's EEG data, video data and audio data is recorded. It should be appreciated that in some embodiments the DSA module 132 is separate from the EEG module 130. However, in alternate embodiments, the functionalities of the DSA module 132 may be integrated in the EEG module 130 itself.



FIG. 3 is a flowchart of a plurality of exemplary steps of a method 300 of enabling synchronization of a patient's EEG data, audio data, and video data acquired over a period of time during monitoring of a patient, in accordance with some embodiments of the present specification. In embodiments, the DSA module 132, in tandem with the EEG module 130, is configured to implement a plurality of instructions or programmatic code in order to execute the method 300.


Referring now to FIGS. 1 and 3 simultaneously, at step 302, the EEG data of the patient is acquired using a plurality of sensors positioned on the patient's scalp.


At step 304, first data, second data, a plurality of visible light signals and a plurality of audio signals are generated by the synchronization device 110. In some embodiments, the first data is indicative of a first differential electrical signal and the second data is indicative of a second differential electrical signal. In embodiments, the first data, the second data, the plurality of visible light signals and the plurality of audio signals are generated mutually synchronously. In some embodiments, each of the first and second differential signals has a frequency ranging from 20 Hz to 40 Hz and an amplitude of 1 mV.
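The signal parameters in step 304 can be made concrete with a short synthesis sketch. Only the 20 Hz to 40 Hz frequency range and the 1 mV amplitude come from the specification; the sine waveform shape, sample rate and duration below are assumptions made for illustration.

```python
import math

# Illustrative stand-in for one of the differential synchronization signals:
# a 30 Hz sine (within the stated 20-40 Hz range) at 1 mV amplitude.
# Waveform shape, sample rate and duration are assumed, not specified.

def differential_signal(freq_hz=30.0, amp_v=1e-3, fs=1000, duration_s=0.1):
    """Return duration_s seconds of a freq_hz sine at amp_v volts,
    sampled at fs samples per second."""
    n = int(fs * duration_s)
    return [amp_v * math.sin(2 * math.pi * freq_hz * t / fs) for t in range(n)]

samples = differential_signal()
print(len(samples))  # 100 samples for 0.1 s at 1 kHz
```

In practice the signal would be pulsed on and off according to the unique pattern described earlier, rather than running continuously.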


In some embodiments, the synchronization device also generates a trigger pulse or signal that is communicated to the multi-channel amplifier 105 (on the first communication link 126) either through a dedicated trigger input or using the differential amplifier inputs (similar inputs to where the patient electrodes are connected). In some embodiments, the trigger pulse or signal is generated synchronously with the first and second differential signals corresponding to the first event. Thus, when the synchronization device 110 generates the first differential signal, the second differential signal, the plurality of visible light signals and the plurality of audio signals it also, at the same time or simultaneously, delivers the trigger pulse or signal through the first communication link 126 to the multi-channel amplifier 105.


In some embodiments, the synchronization device 110 includes at least one light emitting diode for generating the plurality of visible light signals and an acoustic generator for generating the plurality of audio signals along with a speaker for outputting the plurality of audio signals.


At step 306, the first data and the second data are received by the multi-channel amplifier 105 from the synchronization device 110 while the EEG data of the patient is received by the multi-channel amplifier from the plurality of sensors.


At step 308, the plurality of visible light signals generated by the synchronization device 110 is acquired or received by a video acquisition device 115. At step 310, the patient's video data and the third data corresponding to the plurality of visible light signals are generated by the video acquisition device 115.


At step 312, the plurality of audio signals generated by the synchronization device 110 is acquired or received by the audio acquisition device 120. At step 314, the patient's audio data and fourth data corresponding to the plurality of audio signals are generated by the audio acquisition device 120.


In some embodiments, each of the first data, second data, third data and fourth data is encoded with a predefined unique pattern of a plurality of pulses, wherein each of the plurality of pulses has an associated start time, end time, and duration. In some embodiments, the predefined unique pattern has a total time duration. In some embodiments, the predefined unique pattern of the total time duration is repeated continuously over at least the period of time of monitoring the patient.


In some embodiments, the predefined unique pattern includes a first stream of pulses that is followed by a second stream of pulses and that is followed by a third stream of pulses. In some embodiments, the first stream includes a first number of single pulses that are spaced apart from each other by a first time interval, and has a first time duration. In some embodiments, the second stream includes a second number of dual pulses that are spaced apart from each other by a second time interval, and has a second time duration. In some embodiments, the third stream includes a third number of triple pulses that are spaced apart from each other by a third time interval, and wherein the third stream has a third time duration. In embodiments, the total time duration of the predefined unique pattern is a sum of the first time duration, second time duration, and third time durations.


At step 316, the computing device 112 receives the first data, the second data and the patient's EEG data from the multi-channel amplifier 105, the third data and the patient's video data from the video acquisition device 115 along with the fourth data and the patient's audio data from the audio acquisition device 120.


At step 318, the computing device 112 compares the first data with the third data in order to calculate a first time compensation if the first and third data are determined to be out of sync with each other. In some embodiments, the first and third data are determined to be out of sync with each other (asynchronous) if a start time of a pulse in the first data is different from a start time of a corresponding pulse in the third data. In some embodiments, the first time compensation is calculated as a difference between the start time of the pulse in the first data and the start time of the corresponding pulse in the third data.


At step 320, the computing device 112 compares the second data with the fourth data in order to calculate a second time compensation if the second and fourth data are determined to be out of sync with each other. In some embodiments, the second and fourth data are determined to be out of sync with each other if a start time of a pulse in the second data is different from a start time of a corresponding pulse in the fourth data. In some embodiments, the second time compensation is calculated as a difference between the start time of the pulse in the second data and the start time of the corresponding pulse in the fourth data.


In some embodiments, steps 318 and 320 are implemented at the beginning and end of the EEG monitoring, video recording, and audio recording of the patient. In some embodiments, the steps 318 and 320 are implemented additionally or alternatively on a regular basis to ensure a high degree of synchronization across the whole duration of the EEG, video and audio recording of the patient.


At step 322, the computing device 112 applies the first compensation time to the patient's video data and the second compensation time to the patient's audio data in order to synchronize the patient's EEG data with the video and audio data of the patient. It should be appreciated that the first compensation time may be indicative of a drift (lead or delay/lag) in time of the patient's video data with respect to the patient's EEG data. Also, the second compensation time may be indicative of a drift (lead or delay/lag) in time of the patient's audio data with respect to the patient's EEG data.
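Steps 318 through 322 can be sketched in a few lines. The specification describes the compensation as the difference between corresponding pulse start times; averaging that difference over several pulses, and the variable names used here, are illustrative choices for this sketch, not the specified implementation.

```python
def time_compensation(ref_starts, observed_starts):
    """Average difference (in seconds) between corresponding pulse start
    times. Positive means the observed stream lags the reference stream."""
    diffs = [obs - ref for ref, obs in zip(ref_starts, observed_starts)]
    return sum(diffs) / len(diffs)

def apply_compensation(timestamps, compensation):
    """Shift a stream's timestamps to align it with the reference stream."""
    return [t - compensation for t in timestamps]

# Pulse start times of the first data (EEG side) and third data (video side):
first_data = [0.0, 2.0, 4.0]
third_data = [0.25, 2.25, 4.25]
comp = time_compensation(first_data, third_data)
print(comp)  # → 0.25 (the video stream lags by a quarter second)
```

The same two functions would be applied to the second and fourth data to compensate the patient's audio stream, and could be re-run periodically over the recording to correct slow clock drift.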


The above examples are merely illustrative of the many applications of the systems and methods of the present specification. Although only a few embodiments of the present invention have been described herein, it should be understood that the present invention might be embodied in many other specific forms without departing from the spirit or scope of the invention. Therefore, the present examples and embodiments are to be considered as illustrative and not restrictive, and the invention may be modified within the scope of the appended claims.

Claims
  • 1. A method of enabling synchronization of a patient's EEG data with associated video and audio data for analysis in a computing device, wherein the patient's EEG, video and audio data are acquired over a period of time of monitoring the patient, comprising: acquiring, using a plurality of sensors positioned on the patient's scalp, the EEG data of the patient; generating, using a device, first data indicative of a first differential electrical signal, second data indicative of a second differential electrical signal, a plurality of visible light signals and a plurality of audio signals, wherein the first data, the second data, the plurality of visible light signals and the plurality of audio signals are generated synchronously; receiving, by a multi-channel amplifier, the first data and second data from the device and the EEG data of the patient from the plurality of sensors; acquiring, by a video acquisition device, the plurality of visible light signals; generating, by the video acquisition device, the patient's video data and third data based on the plurality of visible light signals; acquiring, by an audio acquisition device, the plurality of audio signals; generating, by the audio acquisition device, the patient's audio data and fourth data based on the plurality of audio signals; receiving, by the computing device, the first data, the second data and the patient's EEG data from the multi-channel amplifier, the third data and the patient's video data from the video acquisition device and the fourth data and the patient's audio data from the audio acquisition device; comparing, by the computing device, the first data with the third data in order to calculate a first time compensation; comparing, by the computing device, the second data with the fourth data in order to calculate a second time compensation; and applying, by the computing device, the first time compensation to the patient's video data and the second time compensation to the patient's audio data.
  • 2. The method of claim 1, wherein each of the first, second, third and fourth data is encoded with a predefined unique pattern of a plurality of pulses.
  • 3. The method of claim 2, wherein each of the plurality of pulses has an associated start time, end time and duration time.
  • 4. The method of claim 3, wherein the first and third data are determined to be out of sync with each other if a start time of a pulse in the first data is different from a start time of a corresponding pulse in the third data.
  • 5. The method of claim 4, wherein the first time compensation is calculated as a difference between the start time of the pulse in the first data and the start time of the corresponding pulse in the third data.
  • 6. The method of claim 3, wherein the second and fourth data are determined to be out of sync with each other if a start time of a pulse in the second data is different from a start time of a corresponding pulse in the fourth data.
  • 7. The method of claim 6, wherein the second time compensation is calculated as a difference between the start time of the pulse in the second data and the start time of the corresponding pulse in the fourth data.
  • 8. The method of claim 2, wherein the predefined unique pattern has a total time duration.
  • 9. The method of claim 8, wherein the predefined unique pattern of said total time duration is repeated continuously over at least the period of time of monitoring the patient.
  • 10. The method of claim 8, wherein the predefined unique pattern includes a first stream of pulses that is followed by a second stream of pulses and that is followed by a third stream of pulses.
  • 11. The method of claim 10, wherein the first stream includes a first number of single pulses that are spaced apart from each other by a first time interval, and wherein the first stream has a first time duration.
  • 12. The method of claim 11, wherein the second stream includes a second number of dual pulses that are spaced apart from each other by a second time interval, and wherein the second stream has a second time duration.
  • 13. The method of claim 12, wherein the third stream includes a third number of triple pulses that are spaced apart from each other by a third time interval, and wherein the third stream has a third time duration.
  • 14. The method of claim 13, wherein the total time duration is a sum of the first, second and third time durations.
  • 15. The method of claim 1, wherein the device includes at least one light emitting diode for generating the plurality of visible light signals and an acoustic generator for generating the plurality of audio signals.
  • 16. The method of claim 1, wherein each of the first and second differential signals has a frequency ranging from 20 Hz to 40 Hz and an amplitude of 1 mV.
  • 17. The method of claim 1, wherein the first data is compared with the third data in order to calculate a first time compensation if the first and third data are determined to be out of sync with each other.
  • 18. The method of claim 1, wherein the second data is compared with the fourth data in order to calculate a second time compensation if the second and fourth data are determined to be out of sync with each other.
  • 19.-42. (canceled)
CROSS-REFERENCE

The present specification relies on U.S. Provisional Patent Application No. 63/492,893, titled “Systems and Methods of Synchronizing EEG Data with a Patient's Associated Audio and Video Data”, filed on Mar. 29, 2023, for priority, which is herein incorporated by reference in its entirety.

Provisional Applications (1)
Number Date Country
63492893 Mar 2023 US