Many conventional medical monitors require attachment of a sensor to a patient in order to detect physiologic signals from the patient and transmit detected signals through a cable to the monitor. These monitors process the received signals and determine vital signs such as the patient's pulse rate, respiration rate, and arterial oxygen saturation. For example, a pulse oximetry system may include a finger sensor with two light emitters and a photodetector. The sensor emits light into the patient's finger and transmits the detected light signal to a monitor. The monitor includes a processor that processes the signal, determines vital signs (e.g., pulse rate, respiration rate, arterial oxygen saturation), and displays the vital signs on a display.
Other monitoring systems include other types of monitors and sensors, such as electroencephalogram (EEG) sensors, blood pressure cuffs, temperature probes, air flow measurement devices (e.g., spirometer), and others. Some wireless, wearable sensors have been developed, such as wireless EEG patches and wireless pulse oximetry sensors.
Video-based monitoring is a new field of patient monitoring that uses a remote video camera to detect physical attributes of the patient. This type of monitoring may also be called “non-contact” monitoring in reference to the remote video sensor, which does not contact the patient. The remainder of this disclosure offers solutions and improvements in this new field.
In an embodiment described herein, a method of determining tidal volume of a patient includes receiving, by a processor, at least one image including depth information for at least part of the patient. The method further includes determining, by the processor, a reference point on the patient. The method further includes determining, by the processor, a region of interest based at least in part on the reference point. The region of interest corresponds to a trunk area of the patient. The method further includes monitoring changes in the depth information in the region of interest over time. The method further includes mapping the monitored changes in depth information to a tidal volume for the patient.
In some embodiments, the region of interest is further defined based on at least one body coordinate determined from the reference point.
In some embodiments, each of the at least one body coordinates corresponds to a location on a body of the patient, and the location is at least one of a shoulder, a hip, a neck, a chest, and a waist.
In some embodiments, the region of interest is further determined based on a distance of various portions of the patient from a camera that captures the at least one image.
In some embodiments, the region of interest is further determined by discarding various portions of a flood fill in response to determining that the patient is rotated such that the patient is not orthogonal to a line of sight of a camera that captures the at least one image.
In some embodiments, the region of interest is further determined by determining that the trunk area of the patient is partially obscured and excluding a partially obscured region from the region of interest.
In some embodiments, the at least one image is captured by a first camera, and at least a second image comprising at least part of the patient is captured by a second camera.
In some embodiments, the method further includes determining, by the processor, a second region of interest of the patient based on at least the second image.
In some embodiments, the method further includes determining, by the processor, a second region of interest of the patient from the at least one image.
In some embodiments, the region of interest is a different size than the second region of interest.
In another embodiment described herein, a video-based method of monitoring a patient includes receiving, by a processor, a video feed including a plurality of images captured at different times. At least a portion of a patient is captured by the video feed. The method further includes determining, by the processor, a region of interest of the patient on the video feed. The region of interest corresponds to a trunk area of the patient. The method further includes measuring, by the processor, changes to the region of interest over time. The method further includes determining, by the processor, based on the changes to the region of interest, a tidal volume of the patient.
In some embodiments, the method further includes comparing, by the processor, the tidal volume determined based on the changes to the region of interest to an output of an air flow measurement device and calibrating, by the processor, the tidal volume determination based on the comparison.
In some embodiments, the method further includes receiving, by the processor, demographic information about the patient and adjusting the tidal volume determination based on the demographic information.
In some embodiments, the demographic information comprises at least one of a sex, height, weight, body mass index (BMI), and age of the patient.
In some embodiments, a size of the region of interest is at least partially dependent on a distance of the patient from a camera that captures the video feed.
In some embodiments, the method further includes determining, using the processor, a change in the tidal volume of the patient over time.
In some embodiments, the method further includes determining, using the processor, based on the change in the tidal volume of the patient, a potential hypoventilation condition.
In some embodiments, the region of interest is configured based on an orientation of the patient with respect to a camera that captures the video feed.
In some embodiments, the tidal volume of the patient is determined based on an orientation of the patient with respect to a camera that captures the video feed.
In some embodiments, the video feed is captured by a first camera, and a second video feed is captured by a second camera, and at least a second portion of the patient is captured by the second video feed.
In some embodiments, the method further includes determining, by the processor, a second region of interest of the patient based on the second video feed.
In some embodiments, the tidal volume is further determined based on changes to the second region of interest over time.
In some embodiments, the method further includes determining, by the processor, a second region of interest of the patient from the video feed.
In some embodiments, the region of interest is a different size than the second region of interest.
In some embodiments, the tidal volume is further determined based on changes to the second region of interest over time.
In a further aspect, which may be provided independently, there is provided an apparatus for determining tidal volume of a patient, the apparatus comprising a processor configured to: receive at least one image comprising depth information for at least a portion of the patient; determine a reference point on the patient; determine a region of interest based at least in part on the reference point, wherein the region of interest corresponds to a trunk area of the patient; monitor changes in the depth information in the region of interest over time; and map the monitored changes in depth information to a tidal volume for the patient.
In a further aspect, which may be provided independently, there is provided an apparatus for video-based monitoring of a patient, the apparatus comprising a processor configured to: receive a video feed comprising a plurality of images captured at different times, wherein at least a portion of a patient is captured within the video feed; determine a region of interest of the patient on the video feed, wherein the region of interest corresponds to a trunk area of the patient; measure changes to the region of interest over time; and determine a tidal volume of the patient based on the changes to the region of interest.
The present invention relates to the field of medical monitoring, and in particular to non-contact respiratory monitoring of a patient. Systems, methods, and computer readable media are described herein for determining a region of interest of a patient and monitoring that region of interest to determine tidal volume of the patient. The systems, methods, and computer readable media disclosed herein have the potential to improve recordkeeping, improve patient care, reduce errors in vital sign measurements, increase the frequency and accuracy of respiratory monitoring, help healthcare providers better characterize and respond to adverse medical conditions indicated by decreased tidal volume (e.g., hypoventilation), and generally improve monitoring of patients, along with many other potential advantages discussed below. Tidal volume measurement/monitoring can further be helpful in the following areas: respiratory compromise, non-invasive ventilation, volume capnography, neonatal monitoring, pain management, post-surgery monitoring/treatment, and more. In particular, arterial blood oxygen saturation is a lagging indicator of respiratory compromise; it may take 60 seconds or longer for oxygen saturation levels to drop after a patient stops breathing. By monitoring breathing as disclosed herein, patients who have slow, shallow, or stopped breathing can be attended to more quickly, potentially saving lives and leading to better treatment.
Improvements disclosed herein can greatly increase the ability to detect or measure respiratory compromise, thereby increasing the level of care healthcare professionals can provide to patients. For example, the ability to determine the nature of respiration of a patient allows for the determination of progression of a disease state and/or impending complication including imminent respiratory arrest.
Beneficially, the systems, methods, and computer readable media disclosed herein provide for enhanced ways of measuring tidal volume of a patient using non-contact monitoring. With contact-based monitoring, tidal volume can be measured by utilizing an obtrusive mask incorporating a specialized flow measurement device. These masks and flow devices can be bulky and uncomfortable, and accordingly, this type of device may not be routinely used on patients. Additionally, even when it is used, it may not be used for long periods of time, and therefore may not be suitable for long term monitoring of tidal volume of a patient.
As described herein, non-contact video monitoring can be utilized to determine a volume of airflow indicative of tidal volume of a patient. For example, this may be accomplished using a depth sensing camera to monitor a patient and determine movements of their chest and/or other body parts as the patient breathes. This sensing of movement can be used to determine a tidal volume measurement. Accordingly, disclosed herein are systems, methods, and computer readable media for determining a tidal volume measurement using non-contact video monitoring of a patient. Furthermore, the systems, methods, and computer readable media disclosed herein accommodate patients with different characteristics and disease states, enabling more accurate patient-specific measurements across many different clinical scenarios.
The camera 214 generates a sequence of images over time. The camera 214 may be a depth sensing camera, such as a Kinect camera from Microsoft Corp. (Redmond, Wash.). A depth sensing camera can detect a distance between the camera and objects in its field of view. Such information can be used, as disclosed herein, to determine that a patient is within the field of view of the camera 214 and determine a region of interest (ROI) to monitor on the patient. Once an ROI is identified, that ROI can be monitored over time, and the change in depth of points within the ROI can represent movements of the patient associated with breathing. Accordingly, those movements, or changes of points within the ROI, can be used to determine tidal volume as disclosed herein.
In some embodiments, the system determines a skeleton outline of a patient to identify a point or points from which to extrapolate an ROI. For example, a skeleton may be used to find a center point of a chest, shoulder points, waist points, and/or any other points on a body. These points can be used to determine an ROI. For example, an ROI may be defined by filling in area around a center point of the chest. Certain determined points may define an outer edge of an ROI, such as shoulder points. In other embodiments, instead of using a skeleton, other points are used to establish an ROI. For example, a face may be recognized, and a chest area inferred in proportion and spatial relation to the face. In other embodiments as described herein, the system may establish the ROI around a point based on which parts are within a certain depth range of the point. In other words, once a point is determined that an ROI should be developed from, the system can utilize the depth information from a depth sensing camera to fill out the ROI as disclosed herein. For example, if a point on the chest is selected, depth information is utilized to determine an ROI area around the determined point that is a similar distance from the depth sensing camera as the determined point. This area is likely to be a chest. Using threshold depths in relation to a determined point is further shown and described below at least with respect to
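As an illustration of this depth-threshold fill, the sketch below grows a mask outward from a seed pixel, accepting connected neighbors whose depth lies within a tolerance of the seed's depth. It is a minimal example only; the array layout, function name, and 10 cm default tolerance are assumptions rather than details from this disclosure.

```python
from collections import deque

import numpy as np


def fill_roi(depth, seed, tol=0.10):
    """Grow an ROI mask from `seed` (row, col), keeping connected pixels
    whose depth is within `tol` meters of the seed's depth."""
    rows, cols = depth.shape
    seed_depth = depth[seed]
    mask = np.zeros(depth.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and not mask[nr, nc]
                    and abs(depth[nr, nc] - seed_depth) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

Because the fill only spreads through connected pixels, background surfaces at a similar depth but not attached to the chest are excluded automatically.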
In another example, a patient may wear a specially configured piece of clothing that identifies points on the body such as shoulders or the center of the chest. A system may identify those points by identifying the indicating feature of the clothing. Such identifying features could be a visually encoded message (e.g., bar code, QR code, etc.), or a brightly colored shape that contrasts with the rest of the patient's clothing, etc. In some embodiments, a piece of clothing worn by the patient may have a grid or other identifiable pattern on it to aid in recognition of the patient and/or their movement. In some embodiments, the identifying feature may be stuck on the clothing using a fastening mechanism such as adhesive, a pin, etc. For example, a small sticker may be placed on a patient's shoulders and/or center of the chest that can be easily identified from an image captured by a camera. In some embodiments, the indicator may be a sensor that can transmit a light or other information to a camera that enables its location to be identified in an image so as to help define an ROI. Therefore, different methods can be used to identify the patient and define an ROI.
In some embodiments, the system may receive a user input to identify a starting point for defining an ROI. For example, an image may be reproduced on an interface, allowing a user of the interface to select a patient for monitoring (which may be helpful where multiple humans are in view of a camera) and/or allowing the user to select a point on the patient from which the ROI can be determined (such as a point on the chest). Other methods for identifying a patient, points on the patient, and defining an ROI may also be used, as described further below.
In various embodiments, movement of the ROI or portions of the ROI in accordance with respiratory patterns may be used to determine a tidal volume of the patient, as described further below.
The detected images are sent to a computing device through a wired or wireless connection 220. The computing device includes a processor 218, a display 222, and hardware memory 226 for storing software and computer instructions. Sequential image frames of the patient are recorded by the video camera 214 and sent to the processor 218 for analysis. The display 222 may be remote from the camera 214, such as a video screen positioned separately from the processor and memory. Other embodiments of the computing device may have different, fewer, or additional components than shown in
In some embodiments, the image capture device 385 is a remote sensing device such as a video camera. In some embodiments, the image capture device 385 may be some other type of device, such as a proximity sensor or proximity sensor array, a heat or infrared sensor/camera, a sound/acoustic or radiowave emitter/detector, or any other device that may be used to monitor the location of a patient and an ROI of a patient to determine tidal volume. Body imaging technology may also be utilized to measure tidal volume according to the methods disclosed herein. For example, backscatter x-ray or millimeter wave scanning technology may be utilized to scan a patient, which can be used to define an ROI and monitor movement for tidal volume calculations. Advantageously, such technologies may be able to “see” through clothing, bedding, or other materials while giving an accurate representation of the patient's skin. This may allow for more accurate tidal volume measurements, particularly if the patient is wearing baggy clothing or is under bedding. The image capture device 385 can be described as local because it is relatively close in proximity to a patient so that at least a part of the patient is within the field of view of the image capture device 385. In some embodiments, the image capture device 385 can be adjustable to ensure that the patient is captured in the field of view. For example, the image capture device 385 may be physically movable, may have a changeable orientation (such as by rotating or panning), and/or may be capable of changing a focus, zoom, or other characteristic to allow the image capture device 385 to adequately capture a patient for ROI determination and tidal volume monitoring. In various embodiments, after an ROI is determined, a camera may focus on the ROI, zoom in on the ROI, center the ROI within its field of view by moving the camera, or otherwise be adjusted to allow for better and/or more accurate tracking/measurement of the movement of the determined ROI.
The server 325 includes a processor 335 that is coupled to a memory 330. The processor 335 can store and recall data and applications in the memory 330. The processor 335 is also coupled to a transceiver 340. With this configuration, the processor 335, and subsequently the server 325, can communicate with other devices, such as the computing device 300 through the connection 370.
The devices shown in the illustrative embodiment may be utilized in various ways. For example, any of the connections 370 and 380 may be varied. Any of the connections 370 and 380 may be a hard-wired connection. A hard-wired connection may involve connecting the devices through a USB (universal serial bus) port, serial port, parallel port, or other type of wired connection that can facilitate the transfer of data and information between a processor of a device and a second processor of a second device. In another embodiment, any of the connections 370 and 380 may be a dock where one device may plug into another device. In other embodiments, any of the connections 370 and 380 may be a wireless connection. These connections may take the form of any sort of wireless connection, including, but not limited to, Bluetooth connectivity, Wi-Fi connectivity, infrared, visible light, radio frequency (RF) signals, or other wireless protocols/methods. For example, other possible modes of wireless communication may include near-field communications, such as passive radio-frequency identification (RFID) and active RFID technologies. RFID and similar near-field communications may allow the various devices to communicate at short range when they are placed proximate to one another. In yet another embodiment, the various devices may connect through an internet (or other network) connection. That is, any of the connections 370 and 380 may represent several different computing devices and network components that allow the various devices to communicate through the internet, either through a hard-wired or wireless connection. Any of the connections 370 and 380 may also be a combination of several modes of connection.
The configuration of the devices in
The image includes a patient 390 and a region of interest (ROI) 395. The ROI 395 can be used to determine a volume measurement from the chest of the patient 390. The ROI 395 is located on the patient's chest. In this example, the ROI 395 is a square box. In various embodiments, other ROIs may be different shapes. Because the image includes depth data, such as from a depth sensing camera, information on the spatial location of the patient 390, and therefore the patient's chest and the ROI 395, can also be determined. This information can be contained within a matrix, for example. As the patient 390 breathes, the patient's chest moves toward and away from the camera, changing the depth information associated with the images over time. As a result, the location information associated with the ROI 395 changes over time. The position of individual points within the ROI 395 may be integrated across the area of the ROI 395 to provide a change in volume over time as shown in
V(t) = ∫∫ H(x, y, t) dx dy   [1]
The initial values of H may be set to zero when the analysis of the box is first activated. Therefore, a volume signal V(t) such as the one shown in
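In discrete form, the double integral of Equation 1 becomes a sum of per-pixel depth displacements over the ROI, scaled by the physical area each pixel covers. The sketch below assumes a (T, H, W) stack of depth frames and a precomputed per-pixel area; all names are illustrative, not from this disclosure.

```python
import numpy as np


def volume_signal(depth_frames, roi_mask, pixel_area_m2):
    """Discrete Equation 1: V(t) = sum over the ROI of the depth
    displacement H(x, y, t), times the per-pixel area.

    depth_frames: (T, H, W) array of depth maps in meters.
    roi_mask:     (H, W) boolean ROI mask.
    """
    # H is measured relative to the first frame, so V(0) = 0 (initial
    # values of H set to zero when analysis is activated, per the text).
    displacement = depth_frames[0] - depth_frames  # motion toward camera > 0
    return displacement[:, roi_mask].sum(axis=1) * pixel_area_m2
```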
The method 600 further includes measuring changes to the ROI over time at 620. This may be accomplished in various ways as disclosed herein. The method 600 further includes determining, based on the changes to the region of interest, a tidal volume of the patient at 625. This determination can be performed using any of the methods, systems, and computer readable media disclosed herein.
In some embodiments, the volume signal from the non-contact system may need to be calibrated to provide an absolute measure of volume. For example, the volume signal obtained by integrating points in an ROI over time may accurately track relative changes in a patient's tidal volume, yet still need to be adjusted by a calibration factor to yield an absolute measure. The calibration or correction factor could be a linear relationship such as a linear slope and intercept, a coefficient, or another relationship. As an example, the volume signal obtained from a video camera may under-estimate the total tidal volume of a patient, due to underestimating the volume of breath that expands the patient's chest backward, away from the camera, or upward, orthogonal to the line of sight of the camera. Thus, the non-contact volume signal may be adjusted by simply adding or applying a correction or calibration factor. This correction factor can be determined in a few different ways. In one embodiment, an initial reference measurement is taken with a separate flow measurement device. For example, the tidal volume of the patient may be measured using a flow measurement device (e.g., a spirometer) to produce a reference tidal volume over a short calibration or test time frame (such as 3 to 4 breaths). The V(t) signal (also referred to herein as the volume signal, the tidal volume, and/or the tidal volume signal) over the same time frame is compared to the reference tidal volume, and a calibration factor is determined so that the range of V(t) matches the reference tidal volume measured by the flow measurement device. After a few calibration breaths through the flow measurement device, it may be removed from the patient. The V(t) volume signal measured thereafter from the video feed is adjusted using the calibration factor determined during the initial calibration phase.
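One possible form of this range-matching step is sketched below with illustrative names; the real system may fit the factor differently.

```python
import numpy as np


def calibration_factor(v_calibration, reference_tidal_volume):
    """Scale factor so the breath range of V(t) during the calibration
    window matches the spirometer's reference tidal volume."""
    measured_range = float(np.max(v_calibration) - np.min(v_calibration))
    return reference_tidal_volume / measured_range


# After the spirometer is removed, apply the stored factor:
# v_calibrated = calibration_factor(v_cal_window, ref_tv) * v_raw
```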
In some embodiments, demographic data about a patient may be used to calibrate the volume signal. From knowledge of the patient's demographic data, which may include height, weight, chest circumference, BMI, age, sex, etc., a mapping from the measured V(t) to an actual tidal volume signal may be determined. For example, patients of smaller height and/or weight may have a smaller weighting coefficient for adjusting measured V(t) for a given ROI box size than patients of greater height and/or weight. Different corrections or mappings may also be used for other factors, such as whether the patient is under bedding, the type/style of clothing worn by the patient (e.g., t-shirt, sweatshirt, hospital gown, dress, v-neck shirt/dress, etc.), the thickness/material of the clothing/bedding, the posture of the patient, and/or the activity of the patient (e.g., eating, talking, sleeping, awake, moving, walking, running, etc.). For example, if the mapping takes the form of Equation 2 below:
V_true = K · V_ROI + C   [2]
where K and C are constants, then K and/or C may be varied according to demographic information. Note that C may be zero or non-zero.
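Purely for illustration, K and C of Equation 2 could be drawn from a coarse demographic lookup; the BMI bands and coefficients below are invented placeholders, not values from this disclosure.

```python
def demographic_coefficients(height_m, weight_kg):
    """Return illustrative (K, C) for Equation 2 based on BMI bands.
    The values stand in for whatever mapping is learned or configured
    for a given deployment."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return 0.90, 0.0   # smaller patients: smaller weighting coefficient
    if bmi < 30.0:
        return 1.00, 0.0
    return 1.15, 0.0       # larger patients: larger weighting coefficient


K, C = demographic_coefficients(1.75, 82.0)
v_true = K * 410.0 + C     # map a measured V_ROI (mL) to an estimated V_true
```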
Alternatively, the ROI size may be set according to the patient demographics, i.e., patients of smaller height and/or weight may use a smaller ROI size than patients of greater height and/or weight, such as shown in
The ROI sizes may also differ according to the distance of the patient from the camera system. The ROI dimensions may vary linearly with the distance of the patient from the camera system. This ensures that the ROI scales with the patient and covers the same part of the patient regardless of the patient's distance from the camera. When the ROI is scaled correctly based on the patient's position in the field of view, the resulting tidal volume calculation from the volume signal V(t) can be maintained, regardless of where the patient is in the field of view. That is, a larger ROI when the patient is closer to the camera, and a smaller ROI when the same patient is further from the camera, should result in the same V(t) calculation. This is accomplished by applying a scaling factor that is dependent on the distance of the patient (and the ROI) from the camera. In order to properly measure the tidal volume of a patient, the actual size of the ROI (the area of the ROI) is determined. Then movements of that ROI (see, e.g.,
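Under a pinhole-camera assumption (an assumption of this sketch, not a stated detail of the system), one pixel spans depth/f meters, which yields both the ROI's pixel dimensions at a given distance and the per-pixel area used when integrating V(t):

```python
def roi_scaling(depth_m, focal_length_px, roi_width_px_at_1m):
    """Scale ROI pixel width and per-pixel physical area with distance.

    With a pinhole model a pixel spans (depth / f) meters, so to keep the
    ROI covering the same physical part of the patient, its pixel width
    shrinks as the patient moves away while its physical extent is fixed.
    """
    meters_per_px = depth_m / focal_length_px
    roi_width_px = roi_width_px_at_1m / depth_m   # same physical width
    pixel_area_m2 = meters_per_px ** 2            # feeds the V(t) integral
    return roi_width_px, pixel_area_m2
```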
Instead of a box of a preset or scaled size, the ROI may have a more complex morphology to capture the whole chest region of the patient. An example of this is shown in
Another type of smart ROI determination may use respiration rate (RR) modulation power analysis. This compares the signal power while breathing to the power while not breathing in order to filter noise and determine more accurate ROIs and tidal volumes. In one method, a center of the chest is located based on an image of the patient captured by the camera. A small area in the center of the chest is identified where a good respiratory modulation can be extracted. To do so, the chest may be monitored over time to determine a point where that good respiratory modulation is located. The movement of various points on the chest may be compared with a known or expected respiration rate to ensure that a good point is selected. Then, the full frame/field processing can be performed. A quality metric using a power ratio (P_rr / P_not-rr) will yield a heatmap, which can be reduced to an ROI by using a dynamic threshold. Points that modulate at the respiration rate and above a threshold amplitude are added to the ROI, and points that do not modulate at that rate or at that amplitude are discarded. This ROI can be updated dynamically, so that the ROI is continually refreshing to capture the portions of the chest that are moving with breaths, or to track the chest as the patient moves across the field of view. Because the distance from the camera to each point on the chest is known, expected dimensions of the ROI may also be inferred. That is, because the general shape of a chest is known, a system may also make sure that portions of an image included in an ROI fit into an expected human chest or trunk shape. The portions singled out as likely to be a human chest/trunk may be determined based on the depth information from the image. The system may also include in an ROI points on the chest that fit within a predetermined distance threshold from the camera, as discussed herein (see, e.g., discussion regarding
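A sketch of the power-ratio heatmap follows, assuming a (T, H, W) stack of depth frames, a known frame rate, and a candidate respiration rate in Hz; the half-band width and percentile threshold are assumptions of this sketch.

```python
import numpy as np


def rr_power_heatmap(frames, fs, rr_hz, half_band=0.05):
    """Per-pixel ratio of spectral power at the respiration rate to power
    elsewhere (P_rr / P_not-rr). High values mark breathing-like motion."""
    t = frames.shape[0]
    detrended = frames - frames.mean(axis=0)
    power = np.abs(np.fft.rfft(detrended, axis=0)) ** 2
    freqs = np.fft.rfftfreq(t, d=1.0 / fs)
    in_band = (freqs >= rr_hz - half_band) & (freqs <= rr_hz + half_band)
    p_rr = power[in_band].sum(axis=0)
    p_not = power[~in_band].sum(axis=0) + 1e-12
    return p_rr / p_not


# Dynamic threshold, e.g. keep the strongest decile as the ROI:
# heat = rr_power_heatmap(frames, fs=30.0, rr_hz=0.25)
# roi_mask = heat > np.percentile(heat, 90)
```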
Where a center point is used to derive an ROI, the center point on the chest may become blocked in some instances, such as when a hand moves in front of the determined center point of the chest. In that instance, the ROI may erroneously track the hand instead of the chest. To counteract this, the system may monitor the center point to ensure that it has good respiratory modulation, i.e., that the center point moves similarly to a human breathing. If that center point (or any other point used) ceases to move with a frequency akin to human respiratory modulation, a new center point may be sought where human respiratory modulation is occurring. Once such a new point is identified, the region around that point can be filled in to form a new ROI. In some embodiments, this method may be used to find the point around which the ROI should be filled in in the first instance (rather than attempting to locate a center point of the chest).
In some embodiments, multiple points that show a characteristic similar to respiratory modulations may be selected and used to fill out one or more ROIs on a body. This can advantageously result in identifying any part of the body, not just a chest area, that moves as a result of breathing. Additionally, this method can advantageously provide multiple ROIs that may be monitored together to measure tidal volume or respiration rate, or extrapolated to measure tidal volume as if there were only a single ROI. For example, an arm blocking a camera's view of a chest may extend all the way across the chest. The system can then identify at least two points typical of respiratory modulations, one on the chest above the arm and one on the chest below the arm. Two ROIs can be filled out from those points to cover the portions of the chest that remain visible to the camera.
That measured data can then be extrapolated to account for the amount of chest blocked by the arm to get a more accurate tidal volume measurement. The extrapolation may also account for which portion of the chest is being blocked. This may be helpful because different parts of the chest move to different degrees during a breath. The two ROIs above and below the arm may be utilized to determine which part of the chest is being blocked. For example, if the top ROI is very small and the bottom ROI is comparatively larger, the system can determine that the arm is blocking a higher portion of the chest, closer to the neck. If the opposite is true (large top ROI and small bottom ROI), the system can determine that the portion of the chest being blocked is further down, toward the waist. Therefore, the system can account for which part of the chest is being blocked when calculating tidal volume.
In order to extract accurate volume changes from a breathing patient using a depth sensing camera, it is important to correctly select the sampling region, which is then used to aggregate the volume changes. An ROI that encompasses as much of the patient's trunk as possible can advantageously be more accurate than a smaller ROI in capturing complete respiratory motion of a patient. Accordingly, an ROI may be dynamically selected, so that an optimum sampling region based on depth data and skeleton coordinates is continually determined and refreshed as described below.
With respect to
A line is fitted to the data. This line may be a linear regression line of the form of Equation 3 below:
TV_m = m × TV_r + c   [3]
where TV_m is the measured tidal volume from the non-contact camera system, TV_r is the reference (true) tidal volume, m is the gradient, and c is a constant. In such a method, a regression may be used where the line is forced through the origin of the graph in
TV_m = m × TV_r   [4]
and the gradient m becomes a simple multiplier constant. Alternatively, a more complex non-linear equation, a piecewise function, or any other relationship may be fitted to the data. In various embodiments, a series of relationships depending on other factors may be utilized. For example, different curves or fits may be utilized for various respiratory rates, various patient postures, modes of breathing (chest or abdominal), patient demographics (BMI, age, sex, height, weight, etc.), or any other factor.
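Both fits reduce to ordinary least squares; the paired values below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical paired breaths: reference (spirometer) vs camera-measured
# tidal volumes, in mL.
tv_r = np.array([380.0, 420.0, 500.0, 610.0])
tv_m = np.array([310.0, 350.0, 410.0, 505.0])

# Equation 3: free slope and intercept.
m, c = np.polyfit(tv_r, tv_m, 1)

# Equation 4: regression forced through the origin, m = sum(xy) / sum(x^2).
m0 = float(tv_r @ tv_m / (tv_r @ tv_r))

# Invert the through-origin fit to estimate a true volume from a new
# camera measurement.
tv_true_estimate = 430.0 / m0
```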
The tidal volume measurement (TV_m) may also be used to determine whether a patient is exhibiting hypoventilation.
A threshold minute volume may also be determined as shown in
CD = MV / MV_threshold   [5]
or alternatively using the measured tidal volume and the respiratory compromise threshold (e.g., the threshold tidal volume) as shown below in Equation 6:
CD = TV / TV_threshold   [6]
It can be seen that these ratios are the same when a data point falls on the fitted line and the fit is linear and goes through the origin. However, they may differ due to spread in the data or if other non-linear forms are used. These graphs may be generated on a patient-by-patient basis to generate custom lines and thresholds, or curves may be applied to tidal volumes measured through non-contact video monitoring that are most likely to fit a patient, as disclosed herein.
As mentioned above, the volume signal V(t) from the video image may need to be calibrated or adjusted to obtain a true tidal volume. For example, the image in
If the patient is sitting at an angle to the camera, a motion vector associated with respiration of the patient may not be in line with the camera's line of sight.
An improved method is disclosed herein for correcting this movement of the flood fill region caused by a non-orthogonal angle of the plane of the chest to the line of sight of the camera.
d_i,j = d*_i,j / cos(θ)   [7]
The true tidal volume in the direction of the line of sight may now be calculated by numerically integrating these values according to Equation 8 below:
TV_c = Σ_i Σ_j d_i,j Δ   [8]
where Δ is the area of the i-j grid tiles. This type of measurement can also be performed if the patient is reclining; that is, if the rotation of the plane of the chest is along a different axis or plane (e.g., along an x axis rather than a y axis as in
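Equations 7 and 8 amount to an element-wise cosine correction followed by a weighted sum; a minimal sketch with illustrative names:

```python
import numpy as np


def corrected_tidal_volume(measured_displacements, theta_rad, tile_area_m2):
    """Equations 7 and 8: recover displacements normal to the rotated
    chest plane by dividing by cos(theta), then integrate over the grid."""
    d_true = measured_displacements / np.cos(theta_rad)  # Equation 7
    return float(d_true.sum() * tile_area_m2)            # Equation 8
```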
The embodiments described above with respect to
In some embodiments, the flood fill depth range may be increased in magnitude by using the angle of incidence and/or the location of the peripheral (shoulder) point on the skeleton as illustrated in
In particular, the thresholds H and L of
H2 = MAX(H, DISTANCE(SEED, FAR SHOULDER) + TOLERANCE AMOUNT)   [9]
L2 = MAX(L, DISTANCE(SEED, NEAR SHOULDER) + TOLERANCE AMOUNT)   [10]
In a second example, H2 and L2 are adjusted according to a relative amount (e.g., 10%), such as by adjusting H2 and L2 according to Equations 11 and 12, respectively, below:
H2 = MAX(H, DISTANCE(SEED, FAR SHOULDER) × 1.1)   [11]
L2 = MAX(L, DISTANCE(SEED, NEAR SHOULDER) × 1.1)   [12]
This helps ensure that the motion of the chest is properly captured and that the ROI is properly determined such that tidal volume can be accurately calculated.
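A direct transcription of Equations 9 through 12 follows (the 5 cm default absolute tolerance is an assumption of this sketch):

```python
def adjusted_thresholds(h, l, dist_seed_far_shoulder, dist_seed_near_shoulder,
                        tolerance=0.05, relative=False):
    """Widen the flood-fill depth window so both shoulder points stay
    inside it: Equations 9-10 (absolute tolerance) or 11-12 (10% margin)."""
    if relative:
        h2 = max(h, dist_seed_far_shoulder * 1.1)    # Equation 11
        l2 = max(l, dist_seed_near_shoulder * 1.1)   # Equation 12
    else:
        h2 = max(h, dist_seed_far_shoulder + tolerance)   # Equation 9
        l2 = max(l, dist_seed_near_shoulder + tolerance)  # Equation 10
    return h2, l2
```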
The discussion below with respect to
In an embodiment, a 3D body scan calibration process is performed at the start of measurement for the patient.
In a first embodiment, the ratio of the area of the original unobscured ROI (A_u) to the visible area may be used to estimate the true tidal volume (TV_e) from the tidal volume measured over the visible area (TV_v), as follows in Equation 13:
TV_e = TV_v × (A_u / (A_u − A_o))   [13]
where A_o is the obscured area. This is shown schematically in
In other embodiments, the excursions around the obscured area may be used to estimate the excursions within the obscured area, which are then multiplied by the obscured area to provide that region's contribution to the measured tidal volume. This is shown schematically in
TV_c = A_o × Δ_ave   [14]
where A_o is the area of the obscured region. Alternatively, the relative excursions during the pre-obscured time within the obscured region are determined and used to estimate the excursions during the obscured time. This may be done by assigning excursions pro rata based on proportional excursions across the mesh during the pre-obscured period.
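Both obscuration estimates are simple arithmetic on the ROI areas; a sketch with illustrative names:

```python
def estimate_true_tv(tv_visible, area_unobscured_roi, area_obscured):
    """Equation 13: scale the visible-area measurement by the ratio of
    the full (originally unobscured) ROI to its still-visible portion."""
    return tv_visible * area_unobscured_roi / (area_unobscured_roi - area_obscured)


def obscured_contribution(area_obscured, mean_neighbor_excursion):
    """Equation 14: estimate the obscured region's contribution from the
    average excursion of the visible mesh around it."""
    return area_obscured * mean_neighbor_excursion
```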
In another embodiment, the data from the last previously unobstructed breath can be saved as a map of relative contribution to the measured tidal volume. An obstructed region's contribution can be calculated using this historical unobstructed map of ratios. Moreover, a confidence metric for the estimate can be deduced using this map, where, for example, C = 1 − Sum(obstructed contributions). In this way, obstruction of low-contribution areas would affect confidence less than obstruction of areas known to contribute more to the measured volume. In the absence of a previous unobstructed breath, a generic map of contributions, built from accumulated patient data, can be used.
In another embodiment, feature point measurements (e.g., skeletal points such as shown in
In another embodiment, a reconstructed region is displayed in a different color scheme from the normal depth data. This provides visual feedback to the operator, indicating the region that is based on estimated calculation. This is shown in
Also disclosed herein are various systems, methods, and computer readable media for improving tidal volume measurements using non-contact video monitoring. For example, a volume signal may be corrupted with noise due to movement of the patient. In another example, certain movement of a patient related to respiration may not always be visible to a camera. Disclosed herein and discussed below with respect to
V_C(t) = V_1(t) − V_2(t)   [15]
The initial values of V_C(t) may be set to zero when the analysis is first activated. Alternatively, the minimum value of V_C(t) may be set to zero. The method is outlined schematically in
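A minimal sketch of Equation 15, using the zeroing convention described above:

```python
import numpy as np


def combined_volume_signal(v_front, v_back):
    """Equation 15: whole-body translation registers with opposite sign in
    opposed cameras and cancels in the difference, while breathing motion
    captured by the chest and back ROIs is retained."""
    vc = np.asarray(v_front) - np.asarray(v_back)
    return vc - vc[0]   # set the initial value to zero at activation
```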
A multiple camera system may also be beneficial to track and measure shoulder movement. For example, in some patients, tidal volume may be measured at least in part by monitoring the movement/displacement of the shoulders. A depth sensing camera oriented generally orthogonal to a patient's chest may be able to detect some shoulder movement for the purpose of measuring tidal volume. However, one or more additional cameras (e.g., above a patient, to the right or left of a patient, behind a patient) may be able to capture additional movement in the shoulders that can be used to measure tidal volume.
Multiple camera systems can also be advantageously used to remove non-clinically relevant data. For example, patients may move throughout a room or in bed in a way that would impact the measurements made by a single camera and make it difficult to measure tidal volume. By utilizing multiple cameras, the movement of the patient can be tracked. For example, if a patient moves toward one camera and away from another, the depth vector measurements from the two cameras will capture that movement data in opposite directions and cancel one another out, leaving the movement associated with breathing to be measured as tidal volume. In such an embodiment, the system may determine an ROI on the chest of the patient using data from the first camera and a second ROI on the back of the patient using data from the second camera. Systems using more than two cameras in a similar way may also be used, and may add further robustness to the system.
In order to use two or more cameras to assess the patient's movement, position, and volume changes, in an embodiment, the cameras are able to determine where they are positioned and oriented with respect to each other. For example, in order to combine the depth measurements from each camera, the system needs to know whether the two cameras are viewing in opposite directions, orthogonal directions, or at any other angle or orientation. Because the tidal volume calculations can be made based on vectors in x, y, and z axes, the system can identify a calibration point or points in the room to adequately define the axes, which may be particularly useful in embodiments where the cameras do not have lines of sight that are orthogonal to one another. The cameras can determine their relative orientation by viewing a common object or calibration point in the room. That is, in one embodiment, an object or point in the room is visible within the field of view of both (or all) cameras. A calibration point may be a point on the patient, such as the top of the head, or may be something in the room. The point identified in the room may be a specially configured device such as a sticker or sign with a bar code or other feature on it that is recognizable from data captured by a camera. By identifying the same point or points in the room and using depth sensing data to determine where the camera is relative to the known object, point, or points, the system can accurately determine how measurements from each depth sensing camera can be mapped into vectors on the x, y, and z axes. In other words, the point(s) in the room can be used to identify where the cameras are actually located, and where the cameras are located with respect to one another.
In some embodiments, the cameras may send communications that can be captured by one another in order to calibrate them. For example, a camera may flash a light or send another signal to indicate its position. In another example, a depth sensing camera may capture data indicative of another camera, so that the system can determine the location of a camera within another camera's field of view. This information can also be used to synchronize the data captured, i.e., to make sure movement captured by the cameras is mapped as vectors onto the same axes so that tidal volume can be accurately determined. A three-dimensional object in the room may also be identified and used to calibrate/locate the cameras. In other words, information about the object in the room can be used to figure out where the cameras are in relation to the object and therefore in relation to one another. If a camera moves or is adjusted in a way that affects its field of view, zoom, etc., that movement/adjustment can be tracked and accounted for when calibrating/locating the cameras and subsequently in tidal volume calculations.
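Given three or more non-collinear calibration points seen by both depth cameras, one generic way to recover their relative pose is a Kabsch-style rigid fit; this is a standard technique offered as a sketch, not necessarily the method of the disclosed system.

```python
import numpy as np


def relative_pose(points_cam1, points_cam2):
    """Fit rotation R and translation t so points_cam1 ≈ R @ points_cam2 + t,
    from matched (N, 3) point sets, N >= 3 and non-collinear."""
    c1 = points_cam1.mean(axis=0)
    c2 = points_cam2.mean(axis=0)
    h = (points_cam2 - c2).T @ (points_cam1 - c1)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = c1 - r @ c2
    return r, t
```

With R and t known, depth vectors from the second camera can be mapped into the first camera's x, y, z axes before the volume signals are combined.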
In some embodiments, multiple cameras may be able to see an entire room or more. The system may include logic to use or prioritize data from certain cameras that have a better view of a patient or ROI. In this way, more accurate measurements can be made. If multiple cameras are used to determine the ROI and/or tidal volume, some cameras may be determined to have a better view of the patient or to otherwise make more accurate measurements. In such cases, the system may assign the data from those cameras a higher weight or a higher confidence level, so that the data most likely to be accurate is prioritized when calculating tidal volume or another metric.
Similarly, various embodiments may also utilize full 3D reconstruction using multiple depth cameras. The real time reconstruction of a 3D volume based on multiple depth cameras can be used to track the overall volume of a patient in real time. In other words, rather than determining ROIs on the patient's body, the system may track the entire body of a patient. The tidal volume is a component of the patient's overall volume and may be extracted as a proportion of the total volume change. The motion (skeleton detection/tracking) data provided by the various embodiments disclosed herein can be used to mitigate against changes caused by patient motion.
In various embodiments, a multiple ROI method using a single camera may also be used. A larger ROI may be used as well as a smaller ROI (e.g., the chest-only ROI). The mean movement of the larger ROI may be used to filter out the global body motions from the chest ROI, hence leaving the respiratory signal intact. This may be done by using an adaptive filter to remove from the chest ROI signal the non-respiratory motions identified in the larger ROI signal. The larger ROI may or may not include the chest ROI. An example of this embodiment is shown schematically in
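As a simplified stand-in for the adaptive filter (a single least-squares coefficient rather than, say, a running LMS filter), the component of the chest-ROI signal that is linearly predictable from the larger ROI's mean movement can be removed:

```python
import numpy as np


def remove_global_motion(chest_signal, large_roi_signal):
    """Subtract the best linear estimate of global body motion (from the
    larger ROI) out of the chest-ROI signal, leaving respiration."""
    g = np.asarray(large_roi_signal, dtype=float)
    s = np.asarray(chest_signal, dtype=float)
    g = g - g.mean()
    s_mean = s.mean()
    s = s - s_mean
    beta = (g @ s) / (g @ g + 1e-12)   # least-squares coupling coefficient
    return s - beta * g + s_mean
```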
Other filtering/processing may be performed to exclude information that is not clinically relevant. For example, when patients are talking or eating, they may have unusual tidal volumes and respiration patterns that are harder to track and may not be clinically relevant. Accordingly, the systems, methods, and computer readable media disclosed herein may be configured to identify periods where a patient is talking, eating, or doing another activity for which data should be excluded. For example, data from a depth sensing camera may indicate that the patient is talking: movement of the mouth/lips, an irregular respiration rate, etc. Other sensors may be used in conjunction with the camera to determine that a patient is talking, such as an audio sensor. If an audio sensor picks up audio typical of the human voice and the respiration rate is abnormal, for example, the system may identify that the patient is talking and not use the data collected to monitor or calculate tidal volume. Other irregular situations may also be identified, such as while a patient is eating. Depth sensing camera data may be used to determine that the patient is eating, for example through movement of the jaw similar to chewing, neck movement indicating swallowing, hands moving periodically to the mouth to feed, the appearance of a straw-like shape in front of the patient's face, etc. By identifying instances where irregular breathing is likely, the system can filter out data collected during those periods so as not to affect tidal volume measurements, averages, or other calculations. Additionally, determinations of scenarios like eating and talking, where breathing is expected to be irregular, may also be beneficial for alarm conditions. For example, in a scenario where a patient is talking, any alarm related to a tidal volume measurement may be suppressed by the system.
Various embodiments may include filtering out non-physiological signals as disclosed herein. For example, an expected spectral bandwidth of breathing may be known and used to filter out non-respiratory signals from a volume signal. For example, a raw volume signal may be band-pass filtered between 0.10 and 0.66 Hz (corresponding to 10 second and 1.5 second breaths, or 6 and 40 breaths per minute). Movement that falls outside of this frequency range may be excluded because it is unlikely to be associated with respiration.
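A sketch of that band-pass stage using a zero-phase Butterworth filter (the fourth-order choice is an assumption of this sketch):

```python
import numpy as np
from scipy.signal import butter, filtfilt


def bandpass_volume(v, fs, lo_hz=0.10, hi_hz=0.66):
    """Band-pass the raw volume signal to the stated respiratory band
    (6 to 40 breaths per minute), with zero phase distortion."""
    b, a = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, np.asarray(v, dtype=float))
```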
In some embodiments, the systems, methods, and computer readable media disclosed herein may be used to measure volumetric CO2. For example, when used in conjunction with a nasal cannula or other capnography device, volumetric CO2 can be determined. In particular, a capnography device measures the percentage of carbon dioxide in the air being breathed out by a patient. With a tidal volume measurement as disclosed herein, the percentage of carbon dioxide in the air can be multiplied by the tidal volume to determine the volumetric CO2 of the patient (i.e., how much total volume of carbon dioxide the patient is breathing out).
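The per-breath arithmetic is then a single multiplication; treating the capnograph's reported CO2 percentage as the average exhaled fraction is an approximation made for this sketch.

```python
def volumetric_co2_ml(tidal_volume_ml, co2_percent):
    """Volume of CO2 exhaled in one breath: tidal volume times the CO2
    fraction measured by the capnography device."""
    return tidal_volume_ml * co2_percent / 100.0


# e.g., a 450 mL breath at 5% CO2 yields 22.5 mL of CO2
```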
Various other data processing and filtering processes may be used on data gathered using depth sensing cameras or other devices for monitoring a patient. For example, trends may be monitored in the data, and moving averages, weighted averages, and filtering to remove non-conforming data may all be utilized. Confidence levels may also be utilized to determine whether to include data. For example, a non-conforming behavior like talking may be identified to a predetermined threshold confidence level. If the non-conforming behavior is identified to that confidence level, then the data collected during that time can be excluded from trends, averages, and other data processing and/or gathering operations performed by the system. The system may also calculate confidence levels with respect to the tidal volume being measured. For example, if a robust ROI is determined, the system may have a higher confidence level with respect to the tidal volume calculated. If the patient is too obstructed or too far away, or if other factors known to cause issues with tidal volume measurement are present, the system may associate a low confidence level with the tidal volume measurement. If a confidence level falls below a particular threshold, the data collected during that time can be excluded from certain calculations with respect to the patient and their tidal volume. In some embodiments, confidence level thresholds may also be used to determine whether to propagate an alarm. For example, if a patient has left the room, the system will measure zero tidal volume. However, the system may recognize that it has not identified an ROI, giving a zero confidence level in that measurement. Accordingly, alarm conditions with respect to the zero tidal volume measurement will be suppressed. In more nuanced examples, the system may recognize when irregular situations are occurring and use confidence levels to determine whether collected data is valid or invalid (i.e., whether it should be used in various calculations and/or recordkeeping of the system). By determining whether certain data is valid or invalid, the system can determine whether to use that data to calculate tidal volume of a patient.
Disclosed herein are also various types of alerts that may be used in accordance with tidal volume monitoring systems, methods, and computer readable media. For example, an alert may be triggered when a hypoventilation as described herein is detected. An alert may also be triggered if a tidal volume falls below a predetermined threshold. An alert may be triggered if a minute volume falls below a predetermined threshold. An alert may be triggered if no breathing activity is detected, or if no breathing activity is detected for at least a certain duration of time.
A system may also distinguish certain types of movement. For example, a patient's breathing patterns may change while sleeping. Accordingly, the system may determine if a patient is sleeping, how long they sleep, whether and how much they wake up in the night, etc. The determination of certain types of movement may also be patient specific. That is, certain patients may move in different ways for different types of movement. For example, a sleeping patient A may move differently than a sleeping patient B. The system may be able to identify differences in sleep patterns between patients. The system may also be able to identify sleep and awake states of a patient, even if those states vary in movement signatures by patient. For example, the system may identify that a patient is awake based on breathing patterns, tidal volume, respiration rate, minute volume, and/or other factors. By monitoring those factors, the system may be able to detect a change in those factors indicating that a patient is likely asleep. The system can then study the sleeping times for trends to determine a signature of that particular patient while they are sleeping. The system can then watch for data or signals similar to that signature in the future to determine that the patient is asleep.
The systems and methods described herein may be provided in the form of tangible and non-transitory machine-readable medium or media (such as a hard disk drive, hardware memory, etc.) having instructions recorded thereon for execution by a processor or computer. The set of instructions may include various commands that instruct the computer or processor to perform specific operations, such as the methods and processes of the various embodiments described herein. The set of instructions may be in the form of a software program or application. The computer storage media may include volatile and non-volatile media, and removable and non-removable media, for storage of information such as computer-readable instructions, data structures, program modules or other data. The computer storage media may include, but are not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid-state memory technology, CD-ROM, DVD, or other optical storage, magnetic disk storage, or any other hardware medium which may be used to store desired information and that may be accessed by components of the system. Components of the system may communicate with each other via wired or wireless communication. The components may be separate from each other, or various combinations of components may be integrated together into a medical monitor or processor, or contained within a workstation with standard computer hardware (for example, processors, circuitry, logic circuits, memory, and the like). The system may include processing devices such as microprocessors, microcontrollers, integrated circuits, control units, storage media, and other hardware.
Although the present invention has been described and illustrated in respect to exemplary embodiments, it is to be understood that it is not to be so limited, since changes and modifications may be made therein which are within the full intended scope of this invention as hereinafter claimed.
The present application claims priority to U.S. Provisional Patent Application No. 62/614,763, filed Jan. 8, 2018, the disclosure of which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5107845 | Guern et al. | Apr 1992 | A |
5408998 | Mersch | Apr 1995 | A |
5704367 | Ishikawa et al. | Jan 1998 | A |
5800360 | Kisner et al. | Sep 1998 | A |
5995856 | Mannheimer et al. | Nov 1999 | A |
6668071 | Minkin et al. | Dec 2003 | B1 |
6920236 | Prokoski | Jul 2005 | B2 |
7431700 | Aoki et al. | Oct 2008 | B2 |
7558618 | Williams | Jul 2009 | B1 |
8149273 | Liu et al. | Apr 2012 | B2 |
8754772 | Horng et al. | Jun 2014 | B2 |
8792969 | Bernal et al. | Jul 2014 | B2 |
8971985 | Bernal et al. | Mar 2015 | B2 |
9226691 | Bernal et al. | Jan 2016 | B2 |
9282725 | Jensen-Jarolim et al. | Mar 2016 | B2 |
9301710 | Mestha et al. | Apr 2016 | B2 |
9402601 | Berger et al. | Aug 2016 | B1 |
9436984 | Xu et al. | Sep 2016 | B2 |
9443289 | Xu et al. | Sep 2016 | B2 |
9504426 | Kyal et al. | Nov 2016 | B2 |
9662022 | Kyal et al. | May 2017 | B2 |
9693693 | Farag et al. | Jul 2017 | B2 |
9693710 | Mestha et al. | Jul 2017 | B2 |
9697599 | Prasad et al. | Jul 2017 | B2 |
9750461 | Telfort | Sep 2017 | B1 |
9839756 | Klasek | Dec 2017 | B2 |
9943371 | Bresch et al. | Apr 2018 | B2 |
10278585 | Ferguson et al. | May 2019 | B2 |
10376147 | Wood et al. | Aug 2019 | B2 |
10398353 | Addison et al. | Sep 2019 | B2 |
10523852 | Tzvieli et al. | Dec 2019 | B2 |
10588779 | Vorhees et al. | Mar 2020 | B2 |
10650585 | Kiely | May 2020 | B2 |
10667723 | Jacquel et al. | Jun 2020 | B2 |
10702188 | Addison et al. | Jul 2020 | B2 |
10874331 | Kaiser et al. | Dec 2020 | B2 |
10939824 | Addison et al. | Mar 2021 | B2 |
10939834 | Khwaja et al. | Mar 2021 | B2 |
20020137464 | Dolgonos et al. | Sep 2002 | A1 |
20040001633 | Caviedes | Jan 2004 | A1 |
20040258285 | Hansen et al. | Dec 2004 | A1 |
20050203348 | Shihadeh et al. | Sep 2005 | A1 |
20070116328 | Sablak et al. | May 2007 | A1 |
20080001735 | Tran | Jan 2008 | A1 |
20080108880 | Young et al. | May 2008 | A1 |
20080279420 | Masticola et al. | Nov 2008 | A1 |
20080295837 | McCormick et al. | Dec 2008 | A1 |
20090024012 | Li et al. | Jan 2009 | A1 |
20090304280 | Aharoni et al. | Dec 2009 | A1 |
20100210924 | Parthasarathy et al. | Aug 2010 | A1 |
20100023655 | Jafari et al. | Sep 2010 | A1 |
20100236553 | Jafari et al. | Sep 2010 | A1 |
20100249630 | Droitcour et al. | Sep 2010 | A1 |
20100324437 | Freeman et al. | Dec 2010 | A1 |
20110144517 | Cervantes | Jun 2011 | A1 |
20110150274 | Patwardhan et al. | Jun 2011 | A1 |
20120075464 | Derenn | Mar 2012 | A1 |
20120065533 | Carrillo, Jr. et al. | May 2012 | A1 |
20120243797 | Di Venuto Dayer et al. | Sep 2012 | A1 |
20130267873 | Fuchs | Oct 2013 | A1 |
20130271591 | Van Leest et al. | Oct 2013 | A1 |
20130272393 | Kirenko et al. | Oct 2013 | A1 |
20130275873 | Shaw et al. | Oct 2013 | A1 |
20130324830 | Bernal et al. | Dec 2013 | A1 |
20130324876 | Bernal et al. | Dec 2013 | A1 |
20140023235 | Cennini et al. | Jan 2014 | A1 |
20140052006 | Lee et al. | Feb 2014 | A1 |
20140053840 | Liu | Feb 2014 | A1 |
20140139405 | Ribble et al. | May 2014 | A1 |
20140140592 | Lasenby et al. | May 2014 | A1 |
20140235976 | Bresch et al. | Aug 2014 | A1 |
20140267718 | Govro et al. | Sep 2014 | A1 |
20140272860 | Peterson et al. | Sep 2014 | A1 |
20140275832 | Muehlsteff et al. | Sep 2014 | A1 |
20140276104 | Tao et al. | Sep 2014 | A1 |
20140330336 | Errico et al. | Nov 2014 | A1 |
20140334697 | Kersten et al. | Nov 2014 | A1 |
20140358017 | Op Den Buijs et al. | Dec 2014 | A1 |
20140378810 | Davis et al. | Dec 2014 | A1 |
20140379369 | Kokovidis et al. | Dec 2014 | A1 |
20150003723 | Huang et al. | Jan 2015 | A1 |
20150094597 | Mestha et al. | Apr 2015 | A1 |
20150131880 | Wang et al. | May 2015 | A1 |
20150157269 | Lisogurski et al. | Jun 2015 | A1 |
20150223731 | Sahin | Aug 2015 | A1 |
20150238150 | Subramaniam | Aug 2015 | A1 |
20150265187 | Bernal et al. | Sep 2015 | A1 |
20150282724 | McDuff et al. | Oct 2015 | A1 |
20150301590 | Furst et al. | Oct 2015 | A1 |
20150317814 | Johnston et al. | Nov 2015 | A1 |
20160000335 | Khachaturian et al. | Jan 2016 | A1 |
20160049094 | Gupta et al. | Feb 2016 | A1 |
20160082222 | Garcia et al. | Mar 2016 | A1 |
20160140828 | Deforest | May 2016 | A1 |
20160143598 | Rusin et al. | May 2016 | A1 |
20160151022 | Berlin et al. | Jun 2016 | A1 |
20160156835 | Ogasawara et al. | Jun 2016 | A1 |
20160174887 | Kirenko et al. | Jun 2016 | A1 |
20160210747 | Hay et al. | Jul 2016 | A1 |
20160235344 | Auerbach | Aug 2016 | A1 |
20160310084 | Banerjee et al. | Oct 2016 | A1 |
20160317041 | Porges et al. | Nov 2016 | A1 |
20160345931 | Xu et al. | Dec 2016 | A1 |
20160367186 | Freeman et al. | Dec 2016 | A1 |
20170007342 | Kasai et al. | Jan 2017 | A1 |
20170007795 | Pedro et al. | Jan 2017 | A1 |
20170055877 | Niemeyer | Mar 2017 | A1 |
20170065484 | Addison et al. | Mar 2017 | A1 |
20170071516 | Bhagat et al. | Mar 2017 | A1 |
20170095215 | Watson et al. | Apr 2017 | A1 |
20170095217 | Hubert et al. | Apr 2017 | A1 |
20170119340 | Nakai et al. | May 2017 | A1 |
20170147772 | Meehan et al. | May 2017 | A1 |
20170164904 | Kirenko | Jun 2017 | A1 |
20170172434 | Amelard et al. | Jun 2017 | A1 |
20170173262 | Veltz | Jun 2017 | A1 |
20170238805 | Addison et al. | Aug 2017 | A1 |
20170238842 | Jacquel et al. | Aug 2017 | A1 |
20170311887 | Leussler et al. | Nov 2017 | A1 |
20170319114 | Kaestle | Nov 2017 | A1 |
20180042486 | Yoshizawa et al. | Feb 2018 | A1 |
20180042500 | Liao et al. | Feb 2018 | A1 |
20180049669 | Vu | Feb 2018 | A1 |
20180053392 | White et al. | Feb 2018 | A1 |
20180104426 | Oldfield et al. | Apr 2018 | A1 |
20180106897 | Shouldice et al. | Apr 2018 | A1 |
20180169361 | Dennis et al. | Jun 2018 | A1 |
20180217660 | Dayal et al. | Aug 2018 | A1 |
20180228381 | LeBoeuf et al. | Aug 2018 | A1 |
20180310844 | Tezuka et al. | Nov 2018 | A1 |
20180325420 | Gigi | Nov 2018 | A1 |
20180333050 | Greiner et al. | Nov 2018 | A1 |
20190050985 | Den Brinker et al. | Feb 2019 | A1 |
20190133499 | Auerbach | May 2019 | A1 |
20190142274 | Addison et al. | May 2019 | A1 |
20190199970 | Greiner et al. | Jun 2019 | A1 |
20190209046 | Addison et al. | Jul 2019 | A1 |
20190209083 | Wu et al. | Jul 2019 | A1 |
20190307365 | Addison et al. | Oct 2019 | A1 |
20190311101 | Nienhouse | Oct 2019 | A1 |
20190343480 | Shute et al. | Nov 2019 | A1 |
20190380599 | Addison et al. | Dec 2019 | A1 |
20190380807 | Addison et al. | Dec 2019 | A1 |
20200046302 | Jacquel et al. | Feb 2020 | A1 |
20200187827 | Addison et al. | Jun 2020 | A1 |
20200202154 | Wang et al. | Jun 2020 | A1 |
20200205734 | Mulligan et al. | Jul 2020 | A1 |
20200237225 | Addison et al. | Jul 2020 | A1 |
20200242790 | Addison et al. | Jul 2020 | A1 |
20200250406 | Wang et al. | Aug 2020 | A1 |
20200253560 | De Haan | Aug 2020 | A1 |
20200289024 | Addison et al. | Sep 2020 | A1 |
20200329976 | Chen et al. | Oct 2020 | A1 |
20210068670 | Redtel | Mar 2021 | A1 |
20210153746 | Addison et al. | May 2021 | A1 |
20210235992 | Addison | Aug 2021 | A1 |
Foreign Patent Documents
Number | Date | Country
---|---|---
106725410 | May 2017 | CN |
111728602 | Oct 2020 | CN |
112233813 | Jan 2021 | CN |
19741982 | Oct 1998 | DE |
2428162 | Mar 2012 | EP |
2772828 | Sep 2014 | EP |
2793189 | Oct 2014 | EP |
3207862 | Aug 2017 | EP |
3207863 | Aug 2017 | EP |
3384827 | Oct 2018 | EP |
2009544080 | Dec 2009 | JP |
2011130996 | Jul 2011 | JP |
101644843 | Aug 2016 | KR |
RS 20120373 | Apr 2014 | RU |
2004100067 | Nov 2004 | WO |
2010034107 | Apr 2010 | WO |
2010036653 | Apr 2010 | WO |
2015059700 | Apr 2015 | WO |
2015078735 | Jun 2015 | WO |
2015110859 | Jul 2015 | WO |
2016065411 | May 2016 | WO |
2016178141 | Nov 2016 | WO |
2016209491 | Dec 2016 | WO |
2017060463 | Apr 2017 | WO |
2017089139 | Jun 2017 | WO |
2017100188 | Jun 2017 | WO |
2017144934 | Aug 2017 | WO |
2018042376 | Mar 2018 | WO |
2019094893 | May 2019 | WO |
2019135877 | Jul 2019 | WO |
2019240991 | Dec 2019 | WO |
2020033613 | Feb 2020 | WO |
2021044240 | Mar 2021 | WO |
Other Publications
Entry |
---|
Yu MC, Liou JL, Kuo SW, Lee MS, Hung YP. Noncontact respiratory measurement of volume change using depth camera. In 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Aug. 28, 2012 (pp. 2371-2374). IEEE. |
Bartula M, Tigges T, Muehlsteff J. Camera-based system for contactless monitoring of respiration. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jul. 3, 2013 (pp. 2672-2675). IEEE. |
Reyes BA, Reljin N, Kong Y, Nam Y, Chon KH. Tidal volume and instantaneous respiration rate estimation using a volumetric surrogate signal acquired via a smartphone camera. IEEE Journal of Biomedical and Health Informatics. Feb. 25, 2016;21(3):764-77. |
Li MH, Yadollahi A, Taati B. A non-contact vision-based system for respiratory rate estimation. In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Aug. 26, 2014 (pp. 2119-2122). IEEE. |
Transue S, Nguyen P, Vu T, Choi MH. Real-time tidal volume estimation using iso-surface reconstruction. In 2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies (CHASE), Jun. 27, 2016 (pp. 209-218). IEEE. |
Harte JM, Golby CK, Acosta J, Nash EF, Kiraci E, Williams MA, Arvanitis TN, Naidu B. Chest wall motion analysis in healthy volunteers and adults with cystic fibrosis using a novel Kinect-based motion tracking system. Medical & biological engineering & computing. Nov. 2016;54(11):1631-40. |
Al-Naji A, Gibson K, Lee SH, Chahl J. Real time apnoea monitoring of children using the Microsoft Kinect sensor: a pilot study. Sensors. Feb. 3, 2017;17(2):286. |
International Search Report and Written Opinion for International Application No. PCT/US2018/065492, dated Mar. 8, 2019, 12 pages. |
Armanian, A. M., “Caffeine administration to prevent apnea in very premature infants”, Pediatrics & Neonatology, 57 (5), 2016, pp. 408-412, 5 pages. |
Di Fiore, J.M., et al., “Intermittent hypoxemia and oxidative stress in preterm infants”, Respiratory Physiology & Neurobiology, No. 266, 2019, pp. 121-129, 25 pages. |
Grimm, T., et al., “Sleep position classification from a depth camera using bed aligned maps”, 23rd International Conference on Pattern Recognition (ICPR), Dec. 2016, pp. 319-324, 6 pages. |
Liu, S., et al., “In-bed pose estimation: Deep learning with shallow dataset”, IEEE Journal of Translational Engineering in Health and Medicine, vol. 7, 2019, pp. 1-12, 12 pages. |
Wulbrand, H., et al., “Submental and diaphragmatic muscle activity during and at resolution of mixed and obstructive apneas and cardiorespiratory arousal in preterm infants”, Pediatric Research, No. 38(3), 1995, pp. 298-305, 9 pages. |
Aarts, Lonneke A.M. et al. “Non-contact heart rate monitoring utilizing camera photoplethysmography in the neonatal intensive care unit—A pilot study”, Early Human Development 89, 2013, pp. 943-948. |
Abbas A. K. et al., “Neonatal non-contact respiratory monitoring based on real-time infrared thermography,” Biomed. Eng. Online, vol. 10, No. 93, 2011, 17 pages. |
Addison, P. S. et al., “Video-based Heart Rate Monitoring across a Range of Skin Pigmentations during an Acute Hypoxic Challenge,” J Clin Monit Comput, Nov. 9, 2017, 10 pages. |
Addison, Paul S. PhD, “A Review of Signal Processing Used in the Implementation of the Pulse Oximetry Photoplethysmographic Fluid Responsiveness Parameter”, International Anesthesia Research Society, Dec. 2014, vol. 119, No. 6, pp. 1293-1306. |
Addison, Paul S., et al., “Developing an algorithm for pulse oximetry derived respiratory rate (RRoxi): a healthy volunteer study”, J Clin Monit Comput (2012) 26, pp. 45-51. |
Addison, Paul S., et al., “Pulse oximetry-derived respiratory rate in general care floor patients”, J Clin Monit Comput, 2015, 29, pp. 113-120. |
Bhattacharya, S. et al., “A Novel Classification Method for Predicting Acute Hypotensive Episodes in Critical Care,” 5th ACM Conference on Bioinformatics, Computational Biology and Health Informatics (ACM-BCB 2014), Newport Beach, USA, 2014, 10 pages. |
Bhattacharya, S. et al., “Unsupervised learning using Gaussian Mixture Copula models,” 21st International Conference on Computational Statistics (COMPSTAT 2014), Geneva, Switzerland, 2014, 8 pages. |
Bickler, Philip E. et al., “Factors Affecting the Performance of 5 Cerebral Oximeters During Hypoxia in Healthy Volunteers”, Society for Technology in Anesthesia, Oct. 2013, vol. 117, No. 4, pp. 813-823. |
Bousefsaf, Frederic, et al., “Continuous wavelet filtering on webcam photoplethysmographic signals to remotely assess the instantaneous heart rate”, Biomedical Signal Processing and Control 8, 2013, pp. 568-574. |
Bruser, C. et al., “Adaptive Beat-to-Beat Heart Rate Estimation in Ballistocardiograms,” IEEE Transactions on Information Technology in Biomedicine, vol. 15, No. 5, Sep. 2011, pp. 778-786. |
BSI Standards Publication, “Medical electrical equipment, Part 2-61: Particular requirements for basic safety and essential performance of pulse oximeter equipment”, BS EN ISO 80601-2-61:2011, 98 pages. |
Cennini, Giovanni, et al., “Heart rate monitoring via remote photoplethysmography with motion artifacts reduction”, Optics Express, Mar. 1, 2010, vol. 18, No. 5, pp. 4867-4875. |
Colantonio, S. “A smart mirror to promote a healthy lifestyle,” Biosystems Engineering, vol. 138, Oct. 2015, pp. 33-43, Innovations in Medicine and Healthcare. |
Cooley et al. “An Algorithm for the Machine Calculation of Complex Fourier Series,” Aug. 17, 1964, pp. 297-301. |
European Search Report; European Patent Application No. 17156334.9; Applicant: Covidien LP; dated Jul. 13, 2017, 10 pgs. |
European Search Report; European Patent Application No. 17156337.2; Applicant: Covidien LP; dated Jul. 13, 2017, 10 pgs. |
Fei J. et al., “Thermistor at a distance: unobtrusive measurement of breathing,” IEEE Transactions on Biomedical Engineering, vol. 57, No. 4, pp. 988-998, 2010. |
George et al., “Respiratory Rate Measurement From PPG Signal Using Smart Fusion Technique,” International Conference on Engineering Trends and Science & Humanities (ICETSH-2015), 5 pages, 2015. |
Goldman, L. J., “Nasal airflow and thoracoabdominal motion in children using infrared thermographic video processing,” Pediatric Pulmonology, vol. 47, No. 5, pp. 476-486, 2012. |
Guazzi, Alessandro R., et al., “Non-contact measurement of oxygen saturation with an RGB camera”, Biomedical Optics Express, Sep. 1, 2015, vol. 6, No. 9, pp. 3320-3338. |
Han, J. et al., “Visible and infrared image registration in man-made environments employing hybrid visual features,” Pattern Recognition Letters, vol. 34, No. 1, pp. 42-51, 2013. |
Huddar, V. et al., “Predicting Postoperative Acute Respiratory Failure in Critical Care using Nursing Notes and Physiological Signals,” 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE EMBC 2014), Chicago, 2014, pp. 2702-2705. |
Javadi M. et al., “Diagnosing Pneumonia in Rural Thailand: Digital Cameras versus Film Digitizers for Chest Radiograph Teleradiology,” International Journal of Infectious Disease, Mar. 2006;10(2), pp. 129-135. |
Jopling, Michael W., et al., “Issues in the Laboratory Evaluation of Pulse Oximeter Performance”, Anesth. Analg. 2002; 94, pp. S62-S68. |
Kastle, Siegfried W., et al., “Determining the Artifact Sensitivity of Recent Pulse Oximeters During Laboratory Benchmarking”, Journal of Clinical Monitoring and Computing, vol. 16, No. 7, 2000, pp. 509-522. |
Klaessens J. H. G. M. et al., “Non-invasive skin oxygenation imaging using a multi-spectral camera system: Effectiveness of various concentration algorithms applied on human skin,” Proc. of SPIE vol. 7174 717408-1, 2009, 14 pages. |
Kong, Lingqin, et al., “Non-contact detection of oxygen saturation based on visible light imaging device using ambient light”, Optics Express, Jul. 29, 2013, vol. 21, No. 15, pp. 17464-17471. |
Kortelainen, J. et al., “Sleep staging based on signals acquired through bed sensor,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, No. 3, pp. 776-785, May 2010. |
Kumar, M. et al., “Distance PPG: Robust non-contact vital signs monitoring using a camera,” Biomedical optics express 2015, 24 pages. |
Kwon, Sungjun, et al., “Validation of heart rate extraction using video imaging on a built-in camera system of a smartphone”, 34th Annual International Conference of the IEEE EMBS, San Diego, CA, USA, Aug. 28-Sep. 1, 2012, pp. 2174-2177. |
Lai, C. J. et al. “Heated humidified high-flow nasal oxygen prevents intraoperative body temperature decrease in non-intubated thoracoscopy.” Journal of Anesthesia. Oct. 15, 2018. 8 pages. |
Li et al., “A Non-Contact Vision-Based System for Respiratory Rate Estimation”, 978-1-4244-7929-0/14, 2014, 4 pages. |
Liu H. et al., “A Novel Method Based on Two Cameras For Accurate Estimation of Arterial Oxygen Saturation,” BioMedical Engineering OnLine, 2015, 17 pages. |
Liu, C. et al., “Motion magnification” ACM Transactions on Graphics (TOG), vol. 24, No. 3, pp. 519-526, 2005. |
Lv, et al., “Class Energy Image Analysis for Video Sensor-Based Gait Recognition: A Review”, Sensors 2015, 15, pp. 932-964. |
McDuff, Daniel J., et al., “A Survey of Remote Optical Photoplethysmographic Imaging Methods”, 978-1-4244-9270-1/15, IEEE, 2015, pp. 6398-6404. |
Mestha, L.K. et al., “Towards Continuous Monitoring of Pulse Rate in Neonatal Intensive Care Unit with a Webcam,” in Proc. of 36th Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, pp. 1-5, 2014. |
Pereira, C. et al. “Noncontact Monitoring of Respiratory Rate in Newborn Infants Using Thermal Imaging.” IEEE Transactions on Biomedical Engineering. Aug. 23, 2018. 10 pages. |
Poh et al., “Non-contact, automated cardiac pulse measurements using video imaging and blind source separation,” Opt. Express 18,10762-10774 (2010), 14 pages. |
Poh, et al., “Advancements in Noncontact, Multiparameter Physiological Measurements Using a Webcam”, IEEE Transactions on Biomedical Engineering, vol. 58, No. 1, Jan. 2011, 5 pages. |
Rajan, V. et al., “Clinical Decision Support for Stroke using Multiview Learning based Models for NIHSS Scores,” PAKDD 2016 Workshop: Predictive Analytics in Critical Care (PACC), Auckland, New Zealand, 10 pages. |
Rajan, V. et al., “Dependency Clustering of Mixed Data with Gaussian Mixture Copulas,” 25th International Joint Conference on Artificial Intelligence IJCAI 2016, New York, USA, 7 pages. |
Reisner, A. et al., “Utility of the Photoplethysmogram in Circulatory Monitoring,” American Society of Anesthesiologists, May 2008, pp. 950-958. |
Rougier, Caroline, et al., “Robust Video Surveillance for Fall Detection Based on Human Shape Deformation”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, No. 5, May 2011, pp. 611-622. |
Rubinstein, M., “Analysis and Visualization of Temporal Variations in Video”, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Feb. 2014, 118 pages. |
Scalise, Lorenzo, et al., “Heart rate measurement in neonatal patients using a web camera.”, 978-1-4673-0882-3/12, IEEE, 2012, 4 pages. |
Sengupta, A. et al., “A Statistical Model for Stroke Outcome Prediction and Treatment Planning,” 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE EMBC 2016), Orlando, USA, 2016, 4 pages. |
Shah, Nitin, et al., “Performance of three new-generation pulse oximeters during motion and low perfusion in volunteers”, Journal of Clinical Anesthesia, 2012, 24, pp. 385-391. |
Shao, Dangdang, et al., “Noncontact Monitoring Breathing Pattern, Exhalation Flow Rate and Pulse Transit Time”, IEEE Transactions on Biomedical Engineering, vol. 61, No. 11, Nov. 2014, pp. 2760-2767. |
Shrivastava, H. et al., “Classification with Imbalance: A Similarity-based Method for Predicting Respiratory Failure,” 2015 IEEE International Conference on Bioinformatics and Biomedicine (IEEE BIBM2015), Washington DC, USA, 8 pages. |
Sun, Yu, et al., “Noncontact imaging photoplethysmography to effectively access pulse rate variability”, Journal of Biomedical Optics, Jun. 2013, vol. 18(6), 10 pages. |
Tamura et al., “Wearable Photoplethysmographic Sensors—Past & Present,” Electronics, 2014, pp. 282-302. |
Tarassenko, L., et al., “Non-contact video-based vital sign monitoring using ambient light and auto-regressive models”, Institute of Physics and Engineering in Medicine, 2014, pp. 807-831. |
Teichmann, D. et al., “Non-contact monitoring techniques—Principles and applications,” in Proc. of IEEE International Conference of the Engineering in Medicine and Biology Society (EMBC), San Diego, CA, 2012, 4 pages. |
Verkruysse, Wim, et al., “Calibration of Contactless Pulse Oximetry”, Anesthesia & Analgesia, Jan. 2017, vol. 124, No. 1, pp. 136-145. |
Villarroel, Mauricio, et al., “Continuous non-contact vital sign monitoring in neonatal intensive care unit”, Healthcare Technology Letters, 2014, vol. 1, Issue 3, pp. 87-91. |
Wadhwa, N. et al., “Phase-Based Video Motion Processing,” MIT Computer Science and Artificial Intelligence Lab, Jul. 2013, 9 pages. |
Wadhwa, N. et al., “Riesz pyramids for fast phase-based video magnification.” in Proc. of IEEE International Conference on Computational Photography (ICCP), Santa Clara, CA, pp. 1-10, 2014. |
Wang, W. et al., “Exploiting spatial redundancy of image sensor for motion robust rPPG.” IEEE Transactions on Biomedical Engineering, vol. 62, No. 2, pp. 415-425, 2015. |
Wu, H.Y. et al., “Eulerian video magnification for revealing subtle changes in the world,” ACM Transactions on Graphics (TOG), vol. 31, No. 4, pp. 651-658, 2012. |
Yu Sun et al. “Motion-compensated noncontact imaging photoplethysmography to monitor cardiorespiratory status during exercise,” Journal of Biomedical Optics, vol. 16, No. 7, Jan. 1, 2011, 10 pages. |
Zhou, J. et al., “Maximum parsimony analysis of gene copy number changes in tumor phylogenetics,” 15th International Workshop on Algorithms in Bioinformatics WABI 2015, Atlanta, USA, 13 pages. |
Ni et al. “RGBD-Camera Based Get-Up Event Detection for Hospital Fall Prevention.” Acoustics, Speech and Signal Processing (ICASSP). 2012 IEEE International Conf., Mar. 2012: pp. 1405-1408. |
Amazon, “Dockem Koala Tablet Wall Mount Dock for iPad Air/Mini/Pro, Samsung Galaxy Tab/Note, Nexus 7/10, and More (Black Brackets, Screw-in Version)”, https://www.amazon.com/Tablet-Dockem-Samsung-Brackets-Version-dp/B00JV75FC6?th=1, first available Apr. 22, 2014, viewed on Nov. 16, 2021, 4 pages. |
Gsmarena, “Apple iPad Pro 11 (2018)”, https://www.gsmarena.com/apple_ipad_pro_11_(2018)-9386.pjp, viewed on Nov. 16, 2021, 1 page. |
Hyvarinen, A. et al., “Independent Component Analysis: Algorithms and Applications”, Neural Networks, vol. 13, No. 4, 2000, pp. 411-430, 31 pages. |
International Search Report and Written Opinion for International Application No. PCT/US19/035433, dated Nov. 11, 2019, 17 pages. |
International Application No. PCT/US2019/035433 Invitation to Pay Additional Fees and Partial International Search Report dated Sep. 13, 2019, 16 pages. |
International Application No. PCT/US2019/045600 International Search Report and Written Opinion dated Oct. 23, 2019, 19 pages. |
Feng, Litong, et al., “Dynamic ROI based on K-means for remote photoplethysmography”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr. 2015, pp. 1310-1314, 5 pages. |
Nguyen, et al., “3D shape, deformation and vibration measurements using infrared Kinect sensors and digital image correlation”, Applied Optics, vol. 56, No. 32, Nov. 8, 2017, 8 pages. |
Povsi, et al., “Real-Time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction”, Physiological Measurement, vol. 36, No. 7, May 28, 2015, pp. 1497-1516. |
Prochazka et al., “Microsoft Kinect Visual and Depth Sensors for Breathing and Heart Rate Analysis”, Sensors, vol. 16, No. 7, Jun. 28, 2016, 11 pages. |
Schaerer, et al., “Multi-dimensional respiratory motion tracking from markerless optical surface imaging based on deformable mesh registration”, Physics in Medicine and Biology, vol. 57, No. 2, Dec. 14, 2011, 18 pages. |
Zaunseder, et al. “Spatio-temporal analysis of blood perfusion by imaging photoplethysmography,” Progress in Biomedical Optics and Imaging, SPIE—International Society for Optical Engineering, vol. 10501, Feb. 20, 2018, 15 pages. |
Amelard, et al., “Non-contact transmittance photoplethysmographic imaging (PPGI) for long-distance cardiovascular monitoring,” ResearchGate, Mar. 23, 2015, pp. 1-13, XP055542534 [Retrieved online Jan. 15, 2019]. |
Nisar, et al. “Contactless heart rate monitor for multiple persons in a video”, IEEE International Conference on Consumer Electronics—Taiwan (ICCE-TW), May 27, 2016, pp. 1-2, XP032931229 [Retrieved on Jul. 25, 2016]. |
International Search Report and Written Opinion for International Application No. PCT/US2018/060648, dated Jan. 28, 2019, 17 pages. |
Barone, et al., “Computer-aided modelling of three-dimensional maxillofacial tissues through multi-modal imaging”, Journal of Engineering in Medicine, Part H, vol. 227, No. 2, Feb. 1, 2013, pp. 89-104. |
Barone, et al., “Creation of 3D Multi-body Orthodontic Models by Using Independent Imaging Sensors”, Sensors, MDPI AG, Switzerland, vol. 13, No. 2, Jan. 1, 2013, pp. 2033-2050. |
International Search Report and Written Opinion, International Application No. PCT/US2021/015669, dated Apr. 12, 2021, 15 pages. |
Fischer, et al., “ReMoteCare: Health Monitoring with Streaming Video”, ICMB '08, 7th International Conference on Mobile Business, IEEE, Piscataway, NJ, Jul. 7, 2008, pp. 280-286. |
Lawrence, E., et al., “Data Collection, Correlation and Dissemination of Medical Sensor information in a WSN”, IEEE 2009 Fifth International Conference on Networking and Services, 978-0-7695-3586-9/09, Apr. 20, 2009, pp. 402-408, 7 pages. |
Mukherjee, S., et al., “Patient health management system using e-health monitoring architecture”, IEEE, International Advance Computing Conference (IACC), 978-1-4799-2572-8/14, Feb. 21, 2014, pp. 400-405, 6 pages. |
Srinivas, J., et al., “A Mutual Authentication Framework for Wireless Medical Sensor Networks”, Journal of Medical Systems, 41:80, 2017, pp. 1-19, 19 pages. |
Related Publications
Number | Date | Country
---|---|---
20190209046 A1 | Jul 2019 | US |
Provisional Applications
Number | Date | Country
---|---|---
62614763 | Jan 2018 | US |