This patent is directed to a system and method used to detect or differentiate tissue or an artifact, such as a vessel, and in particular to a system and method used to detect or differentiate tissue or an artifact, where the system includes at least one light emitter and at least one light sensor disposed at a distal end of a shaft.
Systems and methods that identify artifacts, and in particular vessels, in the surgical field during a surgical procedure provide valuable information to the surgeon or surgical team. U.S. hospitals lose billions of dollars annually in unreimbursable costs because of inadvertent vascular damage during surgery. In addition, the involved patients face a mortality rate of up to 32%, and likely will require corrective procedures and remain in the hospital for an additional nine days, resulting in tens, if not hundreds, of thousands of dollars in added costs of care. Consequently, there is significant value to be obtained from methods and systems that permit accurate determination of the presence of vessels, such as blood vessels, in the surgical field, such that these costs may be reduced or avoided.
Systems and methods that provide information regarding the presence of blood vessels in the surgical field are particularly important during minimally invasive surgical procedures. Traditionally, surgeons have relied upon tactile sensation during surgical procedures both to identify blood vessels and to avoid inadvertent damage to these vessels. Because of the shift towards minimally invasive procedures, including laparoscopic and robotic surgeries, surgeons have lost the ability to use direct visualization and the sense of touch to make determinations as to the presence of blood vessels in the surgical field. Consequently, surgeons must make the determination whether blood vessels are present in the surgical field based primarily on convention and experience. Unfortunately, anatomical irregularities frequently occur because of congenital anomalies, scarring from prior surgeries, and body habitus (e.g., obesity). Systems and methods that would permit surgeons to determine the presence and/or the characteristics of vessels in the surgical field during surgery (potentially in real time or near real time) under such conditions would be a significant advantage.
On the other hand, while it would be advantageous to include systems and methods that provide information regarding the presence of blood vessels in the surgical field, the adoption of such systems and methods would be impeded if these systems and methods made the surgical procedure more complicated. Consequently, it is advantageous that the systems and methods do not involve significant thought on the part of the surgeon or surgical team in using the system or method for detecting or differentiating tissue or an artifact, like a vessel, or significant preparation of the surgical field or the patient for use of the system or method.
As set forth in more detail below, the present disclosure describes a user interface embodying advantageous alternatives to the existing systems and methods, which may provide for improved identification for avoidance or isolation of tissue or artifacts, such as vessels, without undue complication of the surgical instrument or surgical procedure.
According to an aspect of the present disclosure, a surgical system includes a surgical instrument, at least one light emitter disposed at a working end of the surgical instrument, an array of light sensors disposed at the working end of the surgical instrument, individual light sensors in the array of light sensors adapted to generate a signal comprising a non-pulsatile component, and a controller coupled to the array of light sensors. The controller includes an analyzer configured to determine a curve of the non-pulsatile components of the signals of each of the individual light sensors in the array of light sensors, smooth the curve to generate a smoothed curve, calculate a derivative of the smoothed curve, invert the smoothed curve to generate an inverted smoothed curve, calculate a derivative of the inverted smoothed curve, take a difference between the derivative of the inverted smoothed curve and the derivative of the smoothed curve to generate a resultant curve, smooth the resultant curve to generate a smoothed, resultant curve, estimate zero crossings of the smoothed, resultant curve, apply a signum function to points adjacent each zero crossing, if any, to generate a result, and identify a region of interest, if any, based on the result for points adjacent each zero crossing, if any.
The disclosure will be more fully understood from the following description taken in conjunction with the accompanying drawings. Some of the figures may have been simplified by the omission of selected elements for the purpose of more clearly showing other elements. Such omissions of elements in some figures are not necessarily indicative of the presence or absence of particular elements in any of the exemplary embodiments, except as may be explicitly delineated in the corresponding written description. None of the drawings is necessarily to scale.
The embodiments described herein provide a system and method for detecting or differentiating regions of interest in a curve, which system and method also involve detecting or differentiating tissue or artifacts, such as vessels, and which system and method can be used with or in surgical systems or instruments. Such a system may include at least one light emitter disposed at a working end of the surgical instrument and an array of light sensors disposed at the working end of the surgical instrument, individual light sensors in the array of light sensors adapted to generate a signal comprising a non-pulsatile component. The system may also include a controller coupled to the array of light sensors, the controller including an analyzer. The analyzer is configured to determine a curve of the non-pulsatile components of the signals of each of the individual light sensors in the array of light sensors, smooth the curve to generate a smoothed curve, calculate a derivative of the smoothed curve, invert the smoothed curve to generate an inverted smoothed curve, calculate a derivative of the inverted smoothed curve, take a difference between the derivative of the inverted smoothed curve and the derivative of the smoothed curve to generate a resultant curve, smooth the resultant curve to generate a smoothed, resultant curve, estimate zero crossings of the smoothed, resultant curve, apply a signum function to points adjacent each zero crossing, if any, to generate a result, and identify a region of interest, if any, based on the result for points adjacent each zero crossing, if any.
Turning first to
According to the illustrated embodiments, the working end 104 of the surgical instrument 106 is also a distal end of a shaft 108. Consequently, the working end and the distal end will be referred to as working end 104 or distal end 104. The shaft 108 also has a proximal end 110, and a grip or handle 112 (referred to herein interchangeably as grip 112) is disposed at the proximal end 110 of the shaft 108. The grip 112 is designed in accordance with the nature of the instrument 106; as to the thermal ligation device illustrated in
While the working or distal end 104 and the proximal end 110 with grip 112 are illustrated as disposed at opposite-most ends of the shaft 108, it will be recognized that certain surgical instruments have working ends (where a tool tip is attached, for example) disposed on the opposite-most ends of the shaft and a gripping region disposed intermediate to the opposite working ends. In accordance with the terms “distal” and “proximal” as used herein, the working ends of such an instrument are referred to herein as the distal ends and the gripping region as the proximal end. Relative to the illustrated embodiments, however, the distal and proximal ends are located at opposite-most (or simply opposite) ends of the shaft 108.
As mentioned above, according to the preferred embodiments illustrated, the surgical system 100 includes a sensor with at least one light emitter 120 (or simply the light emitter 120) and one or more light sensors or detectors 122 (or simply the light sensors 122). See
The light emitter 120 is disposed at the working end 104 of the surgical instrument 106. The light sensors 122 are also disposed at the working end 104 of the surgical instrument 106. In either case, the phrase “disposed at” may refer to the placement of the emitter 120 or sensor 122 physically at the working end 104, or placement of a fiber or other light guide at the working end 104, which light guide is coupled to the emitter 120 or sensor 122 which then may be placed elsewhere. The system 100 may operate according to a transmittance-based approach, such that the light sensors 122 are disposed opposite and facing the light emitter(s) 120, for example on opposite jaws of a surgical instrument 106 as illustrated in
The light emitter 120 may be configured to emit light of at least one wavelength. For example, the light emitter 120 may emit light having a wavelength of 660 nm. This may be achieved with a single element, or a plurality of elements (which elements may be arranged or configured into an array, for example, as explained in detail below). In a similar fashion, the light sensor 122 is configured to detect light at the at least one wavelength (e.g., 660 nm). According to the embodiments described herein, the light sensor 122 includes a plurality of elements, which elements are arranged or configured into an array.
According to certain embodiments, the light emitter 120 may be configured to emit light of at least two different wavelengths, and the light sensor 122 may be configured to detect light at the at least two different wavelengths. As one example, the light emitter 120 may emit and the light sensor 122 may detect light in the visible range and light in the near-infrared or infrared range. Specifically, the light emitter 120 may emit and the light sensor 122 may detect light at 660 nm and at 910 nm. Such an embodiment may be used, for example, to ensure optimal penetration of blood vessel V and the surrounding tissue T under in vivo conditions.
Depending upon the effect of changes in blood flow, light of a third wavelength may also be emitted and sensed. That is, if the method of detection is found to be sensitive to varying rates of blood flow in the vessel of interest, light at 810 nm (i.e., at the isobestic point) may be emitted and sensed to permit normalization of the results to limit or eliminate the effects of changes in blood flow rate. Additional wavelengths of light may also be used.
According to certain embodiments, each individual light sensor 122 is configured to generate a signal comprising a first pulsatile component and a second non-pulsatile component. It will be recognized that the first pulsatile component may be an alternating current (AC) component of the signal, while the second non-pulsatile component may be a direct current (DC) component. Where the light sensor 122 is in the form of an array, the pulsatile and non-pulsatile information may be generated for each element of the array, or at least for each element of the array that defines at least one row of the array.
As to the pulsatile component, it will be recognized that a blood vessel may be described as having a characteristic pulsation of approximately 60 pulses (or beats) per minute. While this may vary with the patient's age and condition, the range of pulsation is typically between 60 and 100 pulses (or beats) per minute. The light sensor 122 will produce a signal (that is passed to the controller 124) with a particular AC waveform that corresponds to the movement of the blood through the vessel. In particular, the AC waveform corresponds to the light absorbed or reflected by the pulsatile blood flow within the vessel. On the other hand, the DC component corresponds principally to light absorbed, reflected and/or scattered by the tissues, and thus the vessel may appear as a “shadow” or “dip” in the curve formed of the DC signals from each of the sensors in a sensor array.
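The separation of a sensor signal into its AC and DC components can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes a moving-average low-pass filter (with an arbitrarily chosen window length) estimates the slowly varying DC baseline, with the residual taken as the pulsatile AC component.

```python
import numpy as np

def split_components(signal, window=25):
    """Split a sensor signal into pulsatile (AC) and non-pulsatile (DC) parts.

    A moving-average low-pass filter estimates the DC baseline; the
    residual is taken as the AC (pulsatile) component. The window length
    is a hypothetical tuning parameter, not specified in the text.
    """
    signal = np.asarray(signal, dtype=float)
    kernel = np.ones(window) / window
    dc = np.convolve(signal, kernel, mode="same")  # slowly varying baseline
    ac = signal - dc                               # pulsatile residual
    return ac, dc
```

In a hardware realization, the same separation could instead be performed by analog filtering ahead of digitization; the splitter 126 described below may take either form.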
According to such embodiments, the controller 124 is coupled to the light sensor 122, and may include a splitter 126 to separate the first pulsatile component from the second non-pulsatile component for each element of the light sensor array 122. The controller 124 may also include an analyzer 128 to determine the presence of and/or characteristic(s) of the vessel V within the region 102 proximate to the working end 104 of the surgical instrument 106 based (at least in part) on the pulsatile component. According to the embodiments described herein, the analyzer may make that determination by first detecting or differentiating regions of interest in the DC signals from the light sensors 122 of the system 100.
Before discussing the details of the determination of the regions of interest, further details of the system may be discussed with reference, for example, to the transmittance-based embodiment of
As to those embodiments wherein the light emitter 120 is in the form of an array including one or more light emitting diodes, as is illustrated in
The light sensor 122 also may include one or more elements. Again, according to the embodiment illustrated in
In fact, where the array of light sensors 122 includes a row of light sensors (such as in
The system 100 may include hardware and software in addition to the emitter 120, sensor 122, and controller 124. For example, where more than one emitter 120 is used, a drive controller may be provided to control the switching of the individual emitter elements. In a similar fashion, a multiplexer may be provided where more than one sensor 122 is included, which multiplexer may be coupled to the sensors 122 and to an amplifier. Further, the controller 124 may include filters and analog-to-digital conversion as may be required.
According to certain embodiments, the splitter 126 and the analyzer 128 may be defined by one or more electrical circuit components. According to other embodiments, one or more processors (or simply, the processor) may be programmed to perform the actions of the splitter 126 and the analyzer 128. According to still further embodiments, the splitter 126 and the analyzer 128 may be defined in part by electrical circuit components and in part by a processor programmed to perform the actions of the splitter 126 and the analyzer 128.
For example, the splitter 126 may include or be defined by the processor programmed to separate the first pulsatile component from the second non-pulsatile component. Further, the analyzer 128 may include or be defined by the processor programmed to determine the presence of (or to quantify the size of, for example) the vessel V within the region 102 proximate to the working end 104 of the surgical instrument 106 based on the first pulsatile component. The instructions by which the processor is programmed may be stored on a memory associated with the processor, which memory may include one or more tangible non-transitory computer readable memories, having computer executable instructions stored thereon, which when executed by the processor, may cause the one or more processors to carry out one or more actions.
As to the operation of such a transmittance-based system to determine tissue and/or artifact (e.g., a vessel, such as a blood vessel or a ureter) characteristics, which characteristics may include position and dimension (e.g., length, width, diameter, etc.) by way of example and not by way of limitation, US Pub. Nos. 2015/0066000, 2017/0181701, 2018/0042522 and 2018/0098705 are each incorporated herein by reference in their entirety. As to associated structure and operation that may address issues related to the operation of such systems, PCT Application Nos. PCT/US16/55910, filed Oct. 7, 2016, and PCT/US17/48651, filed Aug. 25, 2017, are each incorporated herein by reference in their entirety.
As illustrated, the video camera 202 is directed at the region 102 proximate the working ends 104 of two surgical instruments 106. As illustrated, both of the surgical instruments 106 are part of an embodiment of a surgical system 100, such as illustrated in
The signal from the video camera 202 is passed to the display 206 via the video processor 204, so that the surgeon or other member of the surgical team may view the region 102 as well as the working ends 104 of the surgical instruments 106, which are typically inside the patient. Because of the proximity of the visual indicators 130 to the working ends 104, and thus the region 102, the visual indicators 130 are also visible on the display screen 208. As mentioned previously, this advantageously permits the surgeon to receive visual cues or alarms via the visual indicators 130 via the same display 206 and on the same display screen 208 as the region 102 and the working ends 104. This, in turn, limits the need of the surgeon to look elsewhere for the information conveyed via the visual indicators 130.
While the user interface 130 advantageously permits the surgeon or surgical team to view an output from the controller 124, it is possible to include other output devices with the user interface 130, as illustrated in
As mentioned above, the surgical system 100 may also include the surgical instrument 106 with the working end 104, to which the user interface 130 and the sensor (and in preferred embodiments, the light emitter 120 and the light sensor 122) are attached (in the alternative, removably/reversibly or permanently/irreversibly). The user interface 130 and sensor may instead be formed integrally (i.e., as one piece) with the surgical instrument 106. As also stated, it is possible that the user interface 130 and sensor be attached to a separate instrument or tool that is used in conjunction with a surgical instrument or tool 106.
As noted above, the surgical instrument 106 may be a thermal ligature device in one embodiment. In another embodiment, the surgical instrument 106 may simply be a grasper or grasping forceps having opposing jaws. According to still further embodiments, the surgical instrument may be other surgical instruments such as irrigators, surgical staplers, clip appliers, and robotic surgical systems, for example. According to still other embodiments, the surgical instrument may have no other function than to carry the user interface and sensor and to place them within a surgical field. The illustration of a single embodiment is not intended to preclude the use of the system 100 with other surgical instruments or tools 106.
In the context of one or more of the systems described above, it will be recognized that preliminary to characterization of an artifact (e.g., determining a diameter for a vessel) from data received from an array of light sensors, identification of a region of interest is important. Further, it is advantageous that the identification of the region of interest be relatively robust relative to environmental factors that may influence the identification of a region of interest. In addition, it is advantageous that the identification of the region of interest minimize the computational burden of making the identification. In conjunction with the latter, an identification system and method that minimizes the computational burden may also facilitate the use of the system and method in real time or near real time implementations.
As noted above, the systems 100 described include light emitters 120 and light sensors 122, and the signal from the light sensors may include a pulsatile (or AC) component and a non-pulsatile (or DC) component. The system and method for identification of a region of interest described herein utilizes the non-pulsatile, or DC, component. This DC data may be described in terms of a DC curve that includes DC data from a plurality of light sensors arranged in an array, for example a linear array, according to at least one embodiment described herein.
In general terms, the method illustrated in
As described with reference to
At block 306, the system 100 separates the signal(s) received from the light sensors 122 into pulsatile and non-pulsatile components. For example, the splitter 126 may be used to separate the different components of the signal(s). Further, it will be recognized that while the method 300 utilizes the non-pulsatile (or DC) component, the pulsatile (or AC) component may be utilized as well by the system 100 in a separate method, or in conjunction with the output of the method 300.
According to an embodiment of the method 300, the system 100 smoothes the DC curve at block 308, which curve may include the DC signals corresponding to each of the light sensors 122 along the array. In this regard, smoothing may include filtering, which filtering may include averaging as well. The smoothing of the DC curve may assist in focusing on those sensors generating signals with the most pronounced DC signal relative to other sensors.
At block 310, the system 100 determines a derivative of the smoothed DC curve generated at block 308. In particular, the system 100 uses a 3-point numerical differentiation to determine the derivative, although other n-point numerical differentiations could be used instead. The system 100 uses a numerical differentiation to determine the derivative to reduce the computational burden presented by the method 300.
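A 3-point numerical differentiation of the kind referenced at block 310 can be sketched as below. This is an illustrative implementation, assuming uniform sensor spacing: central differences at interior points and one-sided 3-point formulas at the endpoints.

```python
import numpy as np

def derivative_3pt(y, dx=1.0):
    """Three-point numerical differentiation of a sampled curve.

    Central differences are used for interior points and one-sided
    three-point formulas at the endpoints. dx is the (assumed uniform)
    sensor spacing; units are arbitrary here.
    """
    y = np.asarray(y, dtype=float)
    d = np.empty_like(y)
    d[1:-1] = (y[2:] - y[:-2]) / (2.0 * dx)                   # central difference
    d[0] = (-3.0 * y[0] + 4.0 * y[1] - y[2]) / (2.0 * dx)     # forward 3-point
    d[-1] = (3.0 * y[-1] - 4.0 * y[-2] + y[-3]) / (2.0 * dx)  # backward 3-point
    return d
```

These formulas are exact for quadratics and require only a handful of additions and multiplications per point, consistent with the goal of reducing the computational burden.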
At block 312, the system 100 inverts the smoothed signal determined at block 308. At block 314, the system 100 determines a derivative of the inverted smoothed DC curve. As was the case with block 310, the system may determine the derivative by using a 3-point numerical differentiation, for example. Again, using a numerical differentiation may reduce the computational burden of the method 300.
At block 316, the system 100 subtracts the derivative determined at block 314 from the derivative determined at block 310. The system 100 then smoothes the result of this subtraction at block 316, referred to as the resultant curve, to generate a smoothed, resultant curve at block 318. The system 100 then optionally interpolates the smoothed resultant curve at block 320 for use in the remainder of the method 300; alternatively, the smoothed resultant curve can be used in the remainder of the method 300.
At block 322, the system 100 estimates the zero crossings of the smoothed (and optionally interpolated) resultant curve. The system 100 then applies a signum function to points adjacent the estimated zero crossings of the resultant curve at block 324, and determines at block 326 whether the results of applying the signum function to the points adjacent each zero crossing present a specific pattern. In particular, the system 100 may analyze the results for a pattern of [1, −1, 1], [1, −1], or [−1, 1] for the points adjacent the zero crossing, suggesting a dip has occurred in the original signal curve. The system 100 then identifies this region as a region of interest at block 328.
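The sequence of blocks 308 through 328 can be condensed into a short sketch. This is a hypothetical rendering under stated assumptions: moving-average smoothing, numpy's gradient in place of the 3-point differentiation, inversion about the curve's range, the optional interpolation of block 320 omitted, and only the [1, −1] signum pattern checked; the determination of each region's full extent is elided.

```python
import numpy as np

def smooth(y, window=5):
    # Moving-average smoothing (one plausible choice; the text allows other filtering too).
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(y, dtype=float), kernel, mode="same")

def find_dip_centers(dc_curve, window=5):
    """Locate candidate 'dip' regions in a DC curve, per blocks 308-328."""
    s = smooth(dc_curve, window)           # block 308: smooth the DC curve
    d1 = np.gradient(s)                    # block 310: derivative of smoothed curve
    inv = s.max() + s.min() - s            # block 312: invert the smoothed curve
    d2 = np.gradient(inv)                  # block 314: derivative of inverted curve
    r = smooth(d2 - d1, window)            # blocks 316-318: difference, then smooth
    centers = []
    for i in range(len(r) - 1):            # block 322: estimate zero crossings
        a, b = np.sign(r[i]), np.sign(r[i + 1])  # block 324: signum of adjacent points
        if a == 1 and b == -1:             # block 326: [1, -1] pattern suggests a dip
            centers.append(i)              # block 328: mark a region of interest
    return centers
```

Applied to a flat curve carrying a single synthetic dip, the sketch flags a crossing near the dip's minimum, which is where the "shadow" cast by a vessel on the DC curve would be centered.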
As is reflected in
As is further reflected in the method 300 of
As one example of the actions that may be performed to select one or more regions of interest from a plurality of identified regions of interest, a method 350 is illustrated in
The method 350 begins at block 352 where a determination is made whether any of the regions of interest overlap. This determination may be made by checking whether a starting point of a region of interest lies between the starting and ending points of another region of interest and/or whether an ending point of a region of interest lies between the starting and ending points of another region of interest. The starting and ending points of a region of interest may be determined by comparing the results of the signum function, as explained above. If there are regions of interest that overlap, the method 350 continues to block 354; if there are no regions of interest that overlap, the method 350 continues to block 362.
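The overlap test of block 352 reduces to checking whether either endpoint of one region falls within the span of the other. A minimal sketch, with regions represented as hypothetical (start, end) pairs:

```python
def regions_overlap(region_a, region_b):
    """Block 352 overlap test: two regions overlap if a starting or ending
    point of one lies between the starting and ending points of the other."""
    start_a, end_a = region_a
    start_b, end_b = region_b
    return (start_b <= start_a <= end_b or start_b <= end_a <= end_b
            or start_a <= start_b <= end_a or start_a <= end_b <= end_a)
```

Checking containment in both directions also catches the case where one region entirely encloses the other, which the start/end checks alone on a single region would miss.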
Assuming that it is determined that at least two regions of interest overlap at block 352, an analysis of the closeness of the regions of interest is performed at block 354. In general terms, if the regions of interest are sufficiently close, such that it is unlikely that any two vessels would actually be that close in reality, the system 100 may treat the regions of interest as a single region of interest; otherwise, the regions may be treated as separate regions. According to one embodiment, the closeness determination may include a determination of 1) the closeness between ending points of adjacent regions; 2) the closeness of the starting point of one region and the ending point of the previous region; and 3) the closeness between starting points of adjacent regions. That is, where D is
with the first element in each row being the starting point and the second element in each row being the ending point, the above closeness determinations (or factors) may be expressed as the following:
|D[2:n,2]−D[1:n−1,2]|
D[2:n,1]−D[1:n−1,2]
|D[2:n,1]−D[1:n−1,1]|
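Reading D as an n × 2 array whose rows hold the [start, end] points of the regions in order along the sensor array (an assumption consistent with the slices above), the three closeness factors translate to vectorized differences. This is a sketch; the text's 1-indexed slices become 0-indexed here.

```python
import numpy as np

def closeness_factors(D):
    """Compute the three closeness factors between adjacent regions of interest.

    D is an n x 2 array; row i holds [start, end] of region i, ordered
    along the sensor array.
    """
    D = np.asarray(D, dtype=float)
    end_to_end = np.abs(D[1:, 1] - D[:-1, 1])      # |D[2:n,2] - D[1:n-1,2]|
    start_to_prev_end = D[1:, 0] - D[:-1, 1]       # D[2:n,1] - D[1:n-1,2]
    start_to_start = np.abs(D[1:, 0] - D[:-1, 0])  # |D[2:n,1] - D[1:n-1,1]|
    return end_to_end, start_to_prev_end, start_to_start
</```

Note that the middle factor is left signed: a negative gap between one region's start and the previous region's end itself indicates overlap, whereas the end-to-end and start-to-start factors are taken as magnitudes.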
Based on the results of the analysis at block 354, the method 350 may determine whether to identify the regions as a single region at 356, and either identify multiple regions (e.g., two regions) as a single region at block 358 or identify the regions as separate regions at block 360.
Having resolved the overlapping region issues, the method 350 continues with the determination of which of the remaining regions is more or less likely to be associated with an artifact, e.g., a vessel. The determination could answer either or both of these questions (i.e., more or less likely), but the purpose is to identify the more likely regions of interest for further processing. While the method 350 of
According to the method 350, a plurality of parameters is analyzed at block 362 to determine if each of the regions of interest is more likely (or less likely) to be associated with a vessel. Each of the parameters may be analyzed at block 364 to determine if the parameter is satisfied, or is not, by comparing the parameter against a threshold associated with the parameter, for example. After each of the parameters is compared against its respective threshold, the results of all of the comparisons may be analyzed at block 366 to determine if it is more likely that a vessel is present, for example by comparing the analysis performed at block 364 against a further criterion. The determination(s) made at block 366 is/are then provided as an output at block 368, for example for purposes of identifying regions of interest for further processing as part of the method 300 of
In the embodiment of
At block 364, the parameters determined at block 362 are analyzed to determine if each parameter indicates that it is more or less likely that a vessel is associated with the region of interest. For example, the width parameter may be compared with a minimum width, the minimum width representing an actual limit on the size of the vessels expected or of the vessels of interest. As noted above, the use of a threshold comparison is not the only analysis method that may be used to determine if the parameters are more or less suggestive of a vessel being present in the region of interest.
At block 366, a determination is made based on the analysis at block 364 of each of the parameters determined at block 362 whether a vessel is more or less likely to be associated with a region of interest. For example, according to the present embodiment, the determination whether it is more likely that a vessel is present requires the satisfaction of all of the parameters. According to other embodiments, it may be sufficient that a simple majority of the parameters are in excess of the associated thresholds, for example. Still other embodiments may use a weighted average of the results from the parameter comparisons. In any event, after the determination is made at block 366, the results are provided at block 368.
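The combination logic of blocks 364 and 366 can be sketched as a simple threshold vote. The parameter names and thresholds below are hypothetical; the described embodiment requires all parameters to be satisfied, while a simple majority is the mentioned alternative (a weighted average would follow the same shape).

```python
def vessel_likely(params, thresholds, rule="all"):
    """Decide whether a region is more likely to contain a vessel.

    Each parameter is compared against its threshold (block 364) and the
    per-parameter results are combined (block 366). params and thresholds
    are dicts keyed by parameter name; the >= direction is a simplifying
    assumption (some parameters might instead need to fall below a cap).
    """
    passed = {name: params[name] >= thresholds[name] for name in thresholds}
    if rule == "all":         # described embodiment: every parameter must pass
        return all(passed.values())
    if rule == "majority":    # mentioned alternative: a simple majority suffices
        return sum(passed.values()) > len(passed) / 2
    raise ValueError(f"unknown rule: {rule}")
```

For example, a region passing two of three hypothetical parameters would be rejected under the "all" rule but accepted under the "majority" rule.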
After the system 100 has determined which regions of interest to evaluate, such as by using a method like that illustrated in
In this regard, the action or actions carried out by the system 100 at block 332 to determine tissue and/or artifact (e.g., a vessel, such as a blood vessel or a ureter) characteristics, which characteristics may include position and dimension (e.g., length, width, diameter, etc.), may include by way of example and not by way of limitation, for a transmittance-based system, those described in US Pub. Nos. 2015/0066000, 2017/0181701, 2018/0042522 and 2018/0098705, each of which is incorporated herein by reference in its entirety.
In conclusion, although the preceding text sets forth a detailed description of different embodiments of the invention, it should be understood that the legal scope of the invention is defined by the words of the claims set forth at the end of this patent. The detailed description is to be construed as exemplary only and does not describe every possible embodiment of the invention since describing every possible embodiment would be impractical, if not impossible. Numerous alternative embodiments could be implemented, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims defining the invention.
It should also be understood that, unless a term is expressly defined in this patent using the sentence “As used herein, the term ‘______’ is hereby defined to mean . . . ” or a similar sentence, there is no intent to limit the meaning of that term, either expressly or by implication, beyond its plain or ordinary meaning, and such term should not be interpreted to be limited in scope based on any statement made in any section of this patent (other than the language of the claims). To the extent that any term recited in the claims at the end of this patent is referred to in this patent in a manner consistent with a single meaning, that is done for sake of clarity only so as to not confuse the reader, and it is not intended that such claim term be limited, by implication or otherwise, to that single meaning. Finally, unless a claim element is defined by reciting the word “means” and a function without the recital of any structure, it is not intended that the scope of any claim element be interpreted based on the application of 35 U.S.C. § 112(f).
The present application is a U.S. National Stage of PCT International Patent Application No. PCT/US2019/068841, filed Dec. 27, 2019, which claims the benefit of U.S. Provisional Patent Application No. 62/786,532, filed Dec. 30, 2018, both of which are hereby incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2019/068841 | 12/27/2019 | WO |

Publishing Document | Publishing Date | Country | Kind
---|---|---|---
WO2020/142394 | 7/9/2020 | WO | A
20120143182 | Ullrich et al. | Jun 2012 | A1 |
20120172842 | Sela et al. | Jul 2012 | A1 |
20120296205 | Chernov et al. | Nov 2012 | A1 |
20130102905 | Goldman et al. | Apr 2013 | A1 |
20130226013 | McEwen et al. | Aug 2013 | A1 |
20130267874 | Marcotte et al. | Oct 2013 | A1 |
20140086459 | Pan et al. | Mar 2014 | A1 |
20140100455 | Goldman et al. | Apr 2014 | A1 |
20140155753 | McGuire, Jr. et al. | Jun 2014 | A1 |
20140194751 | Goldman et al. | Jul 2014 | A1 |
20140236019 | Rahum | Aug 2014 | A1 |
20140276088 | Drucker | Sep 2014 | A1 |
20140313482 | Shahidi et al. | Oct 2014 | A1 |
20150011896 | Yelin et al. | Jan 2015 | A1 |
20150051460 | Saxena et al. | Feb 2015 | A1 |
20150066000 | An et al. | Mar 2015 | A1 |
20160235368 | Akkermans | Aug 2016 | A1 |
20170181701 | Fehrenbacher et al. | Jun 2017 | A1 |
20170311877 | Watanabe et al. | Nov 2017 | A1 |
20170367772 | Gunn et al. | Dec 2017 | A1 |
20180042522 | Subramanian et al. | Feb 2018 | A1 |
20180098705 | Chaturvedi et al. | Apr 2018 | A1 |
20180289315 | Chaturvedi et al. | Oct 2018 | A1 |
20180317999 | Park et al. | Nov 2018 | A1 |
20190038136 | Gunn et al. | Feb 2019 | A1 |
20190046220 | Chaturvedi et al. | Feb 2019 | A1 |
20190175158 | Chaturvedi et al. | Jun 2019 | A1 |
20200268311 | Shukair et al. | Aug 2020 | A1 |
20200337633 | Chaturvedi et al. | Oct 2020 | A1 |
20200345297 | Chaturvedi et al. | Nov 2020 | A1 |
20210068856 | Gunn et al. | Mar 2021 | A1 |
Number | Date | Country |
---|---|---|
2 353 534 | Aug 2011 | EP |
1 445 678 | Aug 1976 | GB |
H02-177706 | Jul 1990 | JP |
H10-005245 | Jan 1998 | JP |
H10-234715 | Sep 1998 | JP |
2002-000576 | Jan 2002 | JP |
2002-051983 | Feb 2002 | JP |
2003-019116 | Jan 2003 | JP |
2010-081972 | Apr 2010 | JP |
2016-531629 | Oct 2016 | JP |
2018-534054 | Nov 2018 | JP |
WO9827865 | Jul 1998 | WO |
WO2001060427 | Aug 2001 | WO |
WO2003039326 | May 2003 | WO |
WO2004030527 | Apr 2004 | WO |
WO2005091978 | Oct 2005 | WO |
WO2008082992 | Jul 2008 | WO |
WO2009144653 | Dec 2009 | WO |
WO2011013132 | Feb 2011 | WO |
WO2012158774 | Nov 2012 | WO |
WO2013134411 | Sep 2013 | WO |
WO2014194317 | Dec 2014 | WO |
WO2015148504 | Oct 2015 | WO |
WO2016117106 | Jul 2016 | WO |
WO2016134327 | Aug 2016 | WO |
WO2016134330 | Aug 2016 | WO |
WO2017062720 | Apr 2017 | WO |
WO2017139624 | Aug 2017 | WO |
WO2017139642 | Aug 2017 | WO |
WO2018044722 | Mar 2018 | WO |
WO2019050928 | Mar 2019 | WO |
WO2019126633 | Jun 2019 | WO |
WO2019143965 | Jul 2019 | WO |
WO2020041203 | Feb 2020 | WO |
Entry |
---|
Akl et al., Performance Assessment of an Opto-Fluidic Phantom Mimicking Porcine Liver Parenchyma, J. Bio. Optics, vol. 17(7) 077008-1 to 077008-9 (Jul. 2012). |
Comtois et al., A Comparative Evaluation of Adaptive Noise Cancellation Algorithms for Minimizing Motion Artifacts in a Forehead-Mounted Wearable Pulse Oximeter, Conf. Proc. IEEE Eng. Med. Biol. Soc., 1528-31 (2007). |
Figueiras et al., Self-Mixing Microprobe for Monitoring Microvascular Perfusion in Rat Brain, Med. Bio. Eng'r Computing 51:103-112 (Oct. 12, 2012). |
Hammer et al., A Simple Algorithm for In Vivo Ocular Fundus Oximetry Compensating for Non-Haemoglobin Absorption and Scattering, Phys. Med. Bio. vol. 47, N233-N238 (Aug. 21, 2002). |
Ibey et al., Processing of Pulse Oximeter Signals Using Adaptive Filtering and Autocorrelation to Isolate Perfusion and Oxygenation Components, Proc SPIE, vol. 5702, 54-60 (2005). |
Li et al., Pulsation-Resolved Deep Tissue Dynamics Measured with Diffusing-Wave Spectroscopy, Optics Express, vol. 14, No. 17, 7841-7851 (Aug. 21, 2006). |
Mendelson et al., In-vitro Evaluation of a Dual Oxygen Saturation/Hematocrit Intravascular Fiberoptic Catheter, Biomed. Instrum. Technol. 24(3):199-206 (May/Jun. 1990). |
Phelps et al., Rapid Ratiometric Determination of Hemoglobin Concentration using UV-VIS Diffuse Reflectance at Isobestic Wavelengths, Optics Express, vol. 18, No. 18, 18779-18792 (Aug. 30, 2010). |
Subramanian, Real Time Perfusion and Oxygenation Monitoring in an Implantable Optical Sensor, Thesis Texas A&M Univ. (Dec. 2004). |
Subramanian, Real-Time Separation of Perfusion and Oxygenation Signals for an Implantable Sensor Using Adaptive Filtering, IEEE Trans. Bio. Eng'g, vol. 52, No. 12, 2016-2023 (Dec. 2005). |
Subramanian, An Autocorrelation-Based Time Domain Analysis Technique for Monitoring Perfusion and Oxygenation in Transplanted Organs, IEEE Trans. Bio. Eng'g, vol. 52, No. 7, 1355-1358 (Jul. 2005). |
International Search Report and Written Opinion, counterpart PCT application PCT/US2019/068841, 15 pages (dated Mar. 16, 2020). |
Chaturvedi, Amal et al., “Blood vessel detection, localization and estimation using a smart laparoscopic grasper: a Monte Carlo study” vol. 9, No. 5 (Apr. 3, 2018) (14 pages). |
Pu, Dong-Mei et al., “First and second order full-differential in the edge detection of images”, 2013 International Conference on Machine Learning and Cybernetics, IEEE, vol. 4, pp. 1543-1547 (Jul. 14, 2013). |
Abramowitz, Milton et al., “Handbook of Mathematical Functions”, Sections 25.2 and 25.3.4, pp. 878, 883 (Jun. 1, 1964). |
Notice of Reasons for Refusal and English-language machine translation, counterpart Japanese application No. 2021-538310 (dated Sep. 5, 2023) (10 pages). |
Search Report and English-language machine translation, counterpart Japanese App. No. 2021-538310 (dated Aug. 4, 2023) (25 pages). |
Number | Date | Country | |
---|---|---|---|
20220287628 A1 | Sep 2022 | US |
Number | Date | Country | |
---|---|---|---|
62786532 | Dec 2018 | US |