This invention relates generally to the monitoring of patient medication adherence to a prescribed regimen, patient behavior, medical procedure or other patient or healthcare provider activity, and more particularly to obscuring the identity of a patient in a video recording while still allowing adherence to a medication protocol, or performance of another desired patient activity, to be confirmed.
Determination of patient adherence to a medication protocol is difficult. While direct observation may be employed, it may be expensive and inconvenient. Observing medication administration over a video conference may also be employed, but is likewise expensive and inconvenient in that both the patient and an administrator must be on a video or other conference call at the same time. Finally, the inventors of the present invention have determined that these conventional systems fail to protect the identification and privacy of patients and patient data, while still allowing for determination of patient activity.
Therefore, in accordance with one or more embodiments of the invention, video sequences of patients administering medication may be recorded. The video sequences are preferably de-identified in a manner that obscures patient identifying information while still allowing a reviewer (either computerized or human) to determine proper medication administration from the video sequence. Additional embodiments of the invention may apply to activities performed by a healthcare provider, or other assistant, thus allowing for confirmation of proper action by them, while maintaining the privacy of the patient identity.
Still other objects and advantages of the invention will in part be obvious and will in part be apparent from the specification and drawings.
The invention accordingly comprises the several steps and the relation of one or more of such steps with respect to each of the others, and the apparatus embodying features of construction, combinations of elements and arrangement of parts that are adapted to effect such steps, all as exemplified in the following detailed disclosure, and the scope of the invention will be indicated in the claims.
For a more complete understanding of the invention, reference is made to the following description and accompanying drawings, in which:
Storing a video sequence of a patient or other individual taking medication, to be used later to determine adherence to a particular medication protocol, may present a difficulty, as storage of such patient-related information may run afoul of HIPAA or other patient privacy laws.
Therefore, in accordance with one or more embodiments of the invention, one or more methods or apparatuses depicted in one or more of the above applications incorporated herein by reference may be employed in order to record one or more video medication administration sequences by a user. These sequences are preferably de-identified before, upon or after storage, or before, upon or after transmission. The method of de-identification in a first embodiment of the invention includes blurring of various facial features of the patient so that the patient identity cannot be determined from the de-identified video sequences, while maintaining an unblurred portion of the patient where the medication is to be administered. Other de-identification methods may be employed, such as rotoscoping, substitution of an avatar or other cartoon-like character, or the like. It may also be desirable to overlay actual portions of a video sequence, such as an inhaler, blood pressure cuff, injectable medication administration device, and the like. It is contemplated in accordance with one or more embodiments of the invention that maintaining or providing such an unblurred portion of the image only be provided for one or a small number of frames determined to be at a critical time for determining patient medication adherence. Furthermore, an indicator, such as a border, highlight or the like, may be provided to emphasize these unblurred portions. Other portions of the image that are not in the region of interest may also be blurred, i.e. background or other areas of the image not necessary to determine proper medication administration.
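For purposes of illustration only, such region-preserving blurring might be sketched as follows, assuming the OpenCV library ("cv2"); the function name, kernel size, and bounding-box convention are illustrative assumptions rather than part of any claimed method.

```python
# Illustrative sketch only: blur a frame while preserving one region of
# interest (e.g. the mouth/medication area), assuming OpenCV is available.
import cv2

def deidentify_frame(frame, roi=None, kernel=(51, 51)):
    """Blur the whole frame; optionally restore an unblurred ROI (x, y, w, h)."""
    blurred = cv2.GaussianBlur(frame, kernel, 0)  # kernel dims must be odd
    if roi is not None:
        x, y, w, h = roi
        # Copy original pixels back so only the ROI remains sharp.
        blurred[y:y + h, x:x + w] = frame[y:y + h, x:x + w]
        # Border indicator emphasizing the unblurred portion for reviewers.
        cv2.rectangle(blurred, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return blurred
```

In such a sketch, the region of interest would be supplied only for frames determined to be at a critical time, so that all other frames remain fully blurred.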
In the case of taking a pill orally, in order to better protect the identity of a patient, it may be preferable to unblur only the pill in the mouth region of the image, rather than the entire mouth region of the user. Alternatively, the patient's mouth and surrounding area may preferably be left unblurred, thus providing a wider viewing area of the image. Furthermore, as noted above, it is possible to implement this unblurring process in only key frames for determining medication administration, or on all frames as desired. Such unblurring may be implemented when, for example, an object of interest is recognized in a frame, such as when it has been determined that a pill is correctly positioned on the screen, or the pill has been determined to have been placed in the mouth of the user. Such de-identification may also employ facial averaging, or any other technique that may be employed to remove patient identifiable information while still allowing for a determination of proper medication administration. Any of the de-identification techniques noted in any of the above applications incorporated herein by reference may also be employed. The patient may be shown a de-identified version of their video images on a display of an image acquisition device, thus confirming to them that their privacy is being protected. Furthermore, the process may be applied to other procedures, such as surgery in a hospital, other hospital or healthcare provider scenarios, medical education, or the like.
Furthermore, one or more preliminary steps, such as identification of a medication pill or the like, may not be blurred, as long as doing so will not compromise the identity of the patient, such as if the pill is in front of the face of the patient. In this case, other views of the pill may be unblurred, thus allowing for identification while preserving the secrecy of the identity of the patient.
The blurred video sequences may further be employed in an automated activity recognition sequence employing computer vision to determine whether the patient has properly administered the medication, or can be reviewed manually by a user to determine adherence to the protocol. A combination of these two methods may also be employed.
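By way of a non-limiting sketch, such a combination of automated and manual review might route low-confidence determinations to a human reviewer; the classifier interface and threshold below are assumptions for illustration.

```python
# Illustrative only: automated review with manual fallback. `classify` is a
# hypothetical computer-vision recognizer returning (label, confidence).
def review_sequences(sequences, classify, threshold=0.9):
    auto_confirmed, manual_queue = [], []
    for seq in sequences:
        label, confidence = classify(seq)
        if confidence >= threshold:
            auto_confirmed.append((seq, label))   # adherence decided by machine
        else:
            manual_queue.append((seq, label))     # routed to a human reviewer
    return auto_confirmed, manual_queue
```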
Referring first to
Next, a number of relevant video frames may be selected for transmission at step 140. Thus, the frames needed for object and activity recognition may be selected for transmission, including a predetermined number of leading and following frames. Such selection may be employed to reduce the amount of data to be transmitted and possibly reviewed. Of course, the entire recorded video sequence may also be transmitted. Such selection may be performed in an automated manner, such as selecting a predetermined number of frames before and after some critical object or activity recognition, recognition of suspicious behavior, or predetermined frames in relation to one or more instruction prompts, UI guides, or the like. In such a case, selection of video frames to be reviewed may be performed upon receipt of transmitted frames at the remote location, or all of the frames may be stored at the remote location. It should also be understood that such transmission is intended to include transmission to a remote storage device, such as a remote computer network, database, cloud storage or the like, and may also comprise transmission to a storage device on a local area network, via the Internet, or to a locally attached storage device. Such transmission may also take place from a mobile device to any of the remote locations described above, and may employ any appropriate apparatus for acquiring and processing data, and then transmitting the processed data.
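One possible automated selection, keeping a predetermined number of leading and following frames around each recognition event, is sketched below; the window sizes and the source of the event indices are illustrative assumptions.

```python
# Illustrative only: keep windows of frames around detection events.
def select_frames(total_frames, event_indices, lead=15, follow=30):
    """Return sorted indices of frames worth transmitting."""
    keep = set()
    for event in event_indices:
        start = max(0, event - lead)
        stop = min(total_frames - 1, event + follow)
        keep.update(range(start, stop + 1))
    return sorted(keep)

# e.g. select_frames(900, [120, 450]) transmits two short windows of 46
# frames each instead of the full 900-frame sequence.
```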
Once received after transmission, image segmentation may be performed at step 150 to segment the background and other portions of the image from the patient and medication, and may be employed to separate the mouth and medication area from the rest of the image, if desired. This segmentation step may be skipped, if desired, or may only be applied to a subset of the transmitted images. Thus, such segmentation processing may only be applied to particular frames based upon proximity to item or gesture recognition, or in some other manner as noted above. The segmented patient images (or unsegmented images) may then be subject to de-identification processing at step 160, as described above, to remove the ability to identify the patient from the images, while retaining the ability to view the video sequence and confirm proper medication administration. Portions of one or more of the images may be left non-de-identified in accordance with the process described above. Patient faces may be tracked through the various video image sequences to continue to confirm identity, to link the face with other biometric identity confirmation, to be sure the patient does not leave the screen, and to confirm proximity of the patient to the screen, if desired, allowing for proper viewing of administration at a later time by either automated or manual review. As an alternative in all embodiments, tracking or object recognition of the mouth portion and medication pill portion of the image may be employed, allowing for the blurring of the remainder of the image determined to be other than the tracked mouth and/or medication pill, and such partial blurring may be applied only in selected frames, as described above.
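As one non-limiting example of the segmentation step described above, a background subtractor may separate the patient from the background; the sketch below assumes OpenCV's MOG2 subtractor and illustrative parameters, though any segmentation technique may be used.

```python
# Illustrative only: background/foreground segmentation with OpenCV MOG2.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

def segment_foreground(frame):
    mask = subtractor.apply(frame)                    # nonzero = foreground
    return cv2.bitwise_and(frame, frame, mask=mask)   # keep patient, drop background
```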
After such de-identification, a unique user identifier may be assigned to the de-identified video sequence at step 170 so that any user may view the video information without learning the identification of the patient, thus preferably complying with one or more desired privacy schemes. This identifier may be related to a particular patient, particular image capture device, or the like. The patient will therefore be un-identifiable from the de-identified video image sequence, but the sequence will be attributable and pinned to a particular patient record, thus allowing for proper correlation by one or more healthcare providers able to access such identification information.
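The assignment of such a unique identifier might, for example, employ a keyed hash, so that the de-identified sequence remains attributable to a patient record only by parties holding the key; this scheme and the names used below are illustrative assumptions, not the only possible realization.

```python
# Illustrative only: derive a stable pseudonymous identifier for a
# de-identified video sequence from a patient record ID and a secret key.
import hashlib
import hmac

def assign_sequence_id(patient_record_id: str, secret_key: bytes) -> str:
    digest = hmac.new(secret_key, patient_record_id.encode("utf-8"),
                      hashlib.sha256)
    return digest.hexdigest()  # viewable identifier that does not reveal identity
```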
Referring next to
Image segmentation may next be performed at step 240 to segment the background and other portions of the image from the patient and medication, and may be employed to separate the mouth and medication area from the rest of the image, if desired. This segmentation step may be skipped, if desired, or may only be applied to a subset of the transmitted images. Thus, such segmentation processing may only be applied to particular frames based upon proximity to item or gesture recognition, or in some other manner as noted above. Facial or other identification processing may be employed at this time to confirm the identity of the user. Other identification methods may also be used, either alone or in combination, including, but not limited to, fingerprint recognition, voice recognition, or a password. After determining such identity, the segmented (or unsegmented) patient images may then be subject to de-identification processing at step 250, as described above, to remove the ability to identify the patient from the images, while retaining the ability to view the video sequence and confirm proper medication administration and ingestion. Portions of one or more images may be left non-de-identified in accordance with the process described above. As an alternative in all embodiments, tracking or object recognition of the mouth portion and medication pill portion of the image may be employed, allowing for the blurring of the remainder of the image determined to be other than the tracked mouth and/or medication pill. Such partial blurring may also be applied in only one or more selected frames, in accordance with the process noted above. Tracking of other body portions may be employed for any particular desired test, such as tracking the arm of a user to be sure a blood pressure cuff is properly used, or to determine whether other testing (such as drug or urine testing) has been properly performed.
After such de-identification, a unique user identifier may be assigned to the de-identified video sequence at step 260 so that any user may view the video information without learning the identification of the patient, thus preferably complying with one or more desired privacy schemes. This identifier may be related to a particular patient, particular image capture device, or the like.
Next, a number of relevant video frames may be selected for transmission at step 270. Thus, the frames needed for object and activity recognition may be selected for transmission, including a predetermined number of leading and following frames. Such selection may be employed to reduce the amount of data to be transmitted and possibly reviewed. Of course, the entire recorded video sequence may also be transmitted. Such selection may be performed in an automated manner, such as selecting a predetermined number of frames before and after some critical object or activity recognition, recognition of suspicious behavior, or predetermined frames in relation to one or more instruction prompts, UI guides, or the like. In such a case, selection of video frames to be reviewed may be performed upon receipt of transmitted frames at the remote location, or all of the frames may be stored at the remote location. It should also be understood that such transmission is intended to include transmission to a remote storage device, such as a remote computer network, database, cloud storage or the like, and may also comprise transmission to a storage device on a local area network, via the Internet, or to a locally attached storage device. Such transmission may also take place from a mobile device to any of the remote locations described above, and may employ any appropriate apparatus for acquiring and processing data, and then transmitting the processed data. This frame selection step 270 may also be performed before the image segmentation and de-identification steps 240 and 250, if desired, in order to reduce computational power requirements.
Referring next to
Next, a number of relevant video frames may be selected for transmission at step 350; alternatively, this step may be performed before the image segmentation and de-identification steps 320 and 330, if desired, in order to reduce computational power requirements. Thus, the frames needed for object and activity recognition may be selected for transmission, including a predetermined number of leading and following frames. Such selection may be employed to reduce the amount of data to be transmitted and possibly reviewed. Of course, the entire recorded video sequence may also be transmitted. Such selection may be performed in an automated manner, such as selecting a predetermined number of frames before and after some critical object or activity recognition, recognition of suspicious behavior, or predetermined frames in relation to one or more instruction prompts, UI guides, or the like. In such a case, selection of video frames to be reviewed may be performed upon receipt of transmitted frames at the remote location, or all of the frames may be stored at the remote location. It should also be understood that such transmission is intended to include transmission to a remote storage device, such as a remote computer network, database, cloud storage or the like, and may also comprise transmission to a storage device on a local area network, via the Internet, or to a locally attached storage device. Such transmission may also take place from a mobile device to any of the remote locations described above, and may employ any appropriate apparatus for acquiring and processing data, and then transmitting the processed data.
Referring next to
After such de-identification, a unique user identifier may be assigned to the de-identified video sequence at step 430 so that any user may view the video information without learning the identification of the patient, thus preferably complying with one or more desired privacy schemes. This identifier may be related to a particular patient, particular image capture device, or the like. A unique identifier may also be provided for the medication being administered by the patient, if desired. Such a medication may be identified using gesture recognition, object detection or the like.
Each of the above-described embodiments of the invention allows rapid review and streaming of a video sequence acquired of a patient administering medication from a server, along with time, date, location, unique identifier, dose taken, medication name, and other patient activities that have been automatically logged, while maintaining the privacy of the patient identity and any other confidential or private patient information. Such rapid review may include the automated (or manual) designation of a portion of the total sequence of images, which may or may not include a non-de-identified portion, for review by a viewer. Thus, a predetermined number of frames before and after an event, such as before and after object detection of a medication, or gesture or activity recognition of ingestion or other medication administration, may be designated. As such, only these designated frames need be shown to the user, thus resulting in a substantially reduced set of images for review by a viewer. This may allow for a rapid review of a large number of medication administration sequences by the viewer. If the full video sequence is desired to be provided, it is contemplated in accordance with an alternative embodiment of the invention that the viewer be provided with the ability to skip or “fast forward” the video sequence to each of one or more groups of designated “important” frames, preferably displaying non-de-identified video portions, and preferably designated as noted above in accordance with object, activity or other recognition. A slider or other time or location selection device may also be provided to allow a viewer to quickly and easily review and select various portions of a video sequence. Annotation of the video sequence may also be provided to allow for notes to be saved for further review, for example.
In order to further aid the viewer, it is contemplated in accordance with one or more various embodiments of the invention that one or more portions of one or more of the non-de-identified frames, or of the de-identified frames, be highlighted, zoomed, or otherwise amplified in order to allow for a more precise review thereof by a viewer. This will allow the viewer to better determine suspicious behavior, proper administration and ingestion, or the like. Such highlighting may be performed in an automated manner based upon such recognition, or may be provided in a manual manner, such as by an operator indicating an item to be highlighted. Once highlighted, the highlighted area may be tracked through a sequence of frames so that the object, such as a medication, can be followed through a video sequence.
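Such zooming of a highlighted area may be as simple as cropping and enlarging the designated bounding box, as in the illustrative sketch below; the box convention and scale factor are assumptions for illustration.

```python
# Illustrative only: crop and enlarge a highlighted region for closer review.
import cv2

def zoom_region(frame, box, scale=2):
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    # cv2.resize takes the destination size as (width, height).
    return cv2.resize(crop, (w * scale, h * scale),
                      interpolation=cv2.INTER_CUBIC)
```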
Various embodiments of the present invention may further employ face tracking to track the user and eliminate other faces or other distracting or possibly identifying items in the video sequences. Such tracking may be employed in real time, near-real time, or after storage and/or transmission of the video sequence data. Upon use of such face tracking, it is contemplated in accordance with embodiments of the invention that if the face of the patient leaves the screen, the entire screen may be blurred or otherwise de-identified. Furthermore, if the face of the patient turns sideways or to another difficult-to-see angle, the entire screen may be blurred. Even where the face or other object is completely blurred, tracking technologies may still be employed to confirm the location of the patient's face, the object, or the like.
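One simple realization of this rule is sketched below, using OpenCV's bundled frontal-face Haar cascade as a stand-in for a production face tracker; the absence of any detected frontal face approximates the face having left the screen or turned away.

```python
# Illustrative only: blur the entire frame whenever no frontal face is found.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def guard_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        # Face left the screen or turned away: de-identify everything.
        return cv2.GaussianBlur(frame, (51, 51), 0)
    return frame
```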
Such processing may be applicable to medication administration, and any other patient procedures, including in hospital, doctor office, outpatient, home or other settings. Thus, in accordance with various embodiments of the invention, administration of any item may be determined, while maintaining privacy of the user, by blurring or otherwise de-identifying video image sequences, while maintaining non-blurred video portions including the administration procedure.
In accordance with various embodiments of the invention, referring next to
It is further contemplated that a plurality of such information capture apparatuses 500 may be coordinated to monitor a larger space than can be covered by a single such apparatus. Thus, the apparatuses can be made aware of the presence of the other apparatuses, and may operate by transmitting all information to one of the apparatuses 500, or these apparatuses may each independently communicate with a remote data and computing location, which is adapted to piece together the various information received from the plurality of devices 500. Each such apparatus 500 may comprise a mobile computing device, such as a smart phone or the like including a web camera, or a laptop computer or pad computing device, each including a web camera or the like. Therefore, in accordance with one or more embodiments of the invention, processing of video information may be performed locally, remotely, or partially locally and partially remotely. Thus, de-identification and object and/or activity recognition may proceed locally, while frame selection may proceed remotely. Any other combination of local and remote processing may also be provided, including encryption and decryption schemes. Furthermore, any local device, remote device, or combination of devices may each process a portion of the captured information, and may then transmit the processed information to a single location for assembly. Finally, encryption of video may be performed locally, and then, after decryption, all processing described above may be performed. Such remote location may comprise a cloud location, a remote computer or group of computers, a remote processor or multiple processors, and the like, and may further include one or more attached or further remote storage devices for storing computer, computer program and/or video information.
Referring next to
It has been further determined by the inventors of the present invention that when employing object and activity detection to determine, for example, the identity of a medication, and more particularly whether the medication pill has been placed in the mouth of a user, it may be difficult to determine the location of the medication pill, and thus to confirm proper medication ingestion. This is because, in the case of a light colored or white medication pill, for example, the teeth of the user may look similar to the pill. Therefore, in accordance with an embodiment of the invention, after (or instead of) a pill candidate being identified during a step determining medication ingestion, a next step of processing may be performed to confirm that a mouth color surrounds the pill candidate. If the pill candidate turns out to be a tooth of the user, then to the right and left of the tooth candidate will be other teeth, viewed as similarly colored by the video camera. Thus, by confirming that the areas to the right and left, top and bottom of a pill candidate are mouth colored, it is possible to determine that the pill candidate is a single entity, and is therefore in the mouth of the user. Thus, by using a recognition system for recognizing a pill candidate surrounded by the mouth of a user, rather than simply looking for a medication pill, robustness of the object and activity recognition can be enhanced. This process may also be applied to isolation of a tongue piercing, for example. If it is not possible to isolate such a piercing, the user may be asked to remove the piercing before using the system.
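A simplified version of this surround test is sketched below: pixels in a thin ring around the candidate's bounding box are required to fall predominantly within a reddish mouth-hue range. The HSV thresholds, margin, and fraction are illustrative assumptions, not claimed values.

```python
# Illustrative only: require that a pill candidate be surrounded by
# mouth-colored pixels, distinguishing a pill from a similarly white tooth.
import cv2
import numpy as np

def surrounded_by_mouth(frame_bgr, box, margin=5, min_fraction=0.8):
    x, y, w, h = box
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Sample thin strips above, below, left, and right of the candidate.
    ring = np.concatenate([
        hsv[max(0, y - margin):y, x:x + w].reshape(-1, 3),
        hsv[y + h:y + h + margin, x:x + w].reshape(-1, 3),
        hsv[y:y + h, max(0, x - margin):x].reshape(-1, 3),
        hsv[y:y + h, x + w:x + w + margin].reshape(-1, 3),
    ])
    if ring.size == 0:
        return False  # candidate touches the frame edge; cannot confirm
    hue, sat = ring[:, 0].astype(int), ring[:, 1].astype(int)
    # OpenCV hue wraps at 180; reds sit near both ends of the range.
    reddish = ((hue < 15) | (hue > 165)) & (sat > 60)
    return reddish.mean() >= min_fraction
```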
Similarly, when a user is moving a pill while holding it in their hand, it is possible to use the combination of the pill and fingertips of the user to differentiate the medication pill from other background items that may appear similar to a pill. Thus, through the tracking of motion of the medication pill through an expected path of travel, and segmentation of not only the medication pill, but also the fingertip/pill/fingertip combination, improved object and activity recognition may be achieved. In such a manner, in accordance with an additional embodiment of the invention, first, one or more objects may be detected and presented as pill candidates. Then, from this set of pill candidates, further segmentation may be performed to confirm whether the pill candidate is held between two finger tips to confirm and differentiate that the pill candidate is in fact a medication pill and not some other object. Of course, the determination may simply look for a pill/fingertip combination from the outset.
Furthermore, because individuals may each hold a pill differently, in accordance with yet another embodiment of the invention, two or more detection methods may preferably be used. Thus, in this particular embodiment, for example, one may search a first frame for an oblong medication pill being held horizontally between fingertips, and a second frame for the oblong medication pill being held vertically between fingertips. Of course, any number of different objects may be searched for, and in this example, orientation of the medication pill at 45 or 135 degrees may also be employed. Furthermore, these multiple methods may be employed on each frame, depending on processor speed. Employing these methods on different frames provides more nearly real-time analysis with a slower processor, for example. Furthermore, with the use of a multi-core processor, the different methods may be processed in parallel on the different cores without sacrificing the real-time effect.
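Such frame-by-frame alternation of detection methods might be organized as a simple round-robin, as sketched below; the detector functions are hypothetical stand-ins for, e.g., horizontal and vertical pill templates. On a multi-core processor, the same list of methods could instead be mapped over each frame in parallel, as noted above.

```python
# Illustrative only: alternate hypothetical detectors across successive
# frames so a slower processor still keeps up with the video stream.
def detect_horizontal_pill(frame):
    ...  # stand-in for a horizontally oriented pill detector

def detect_vertical_pill(frame):
    ...  # stand-in for a vertically oriented pill detector

DETECTORS = [detect_horizontal_pill, detect_vertical_pill]

def process_stream(frames):
    results = []
    for index, frame in enumerate(frames):
        detector = DETECTORS[index % len(DETECTORS)]  # round-robin per frame
        results.append(detector(frame))
    return results
```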
Instructional feedback may be provided to the user to aid in proper positioning. This real time feedback makes the user aware that the tracking system is in operation, and may reduce the likelihood that the user will try to trick the system. By searching for these various different types of images in sequential or different frames, the likelihood of properly identifying the medication pill, regardless of how it is handled, is greatly increased. Of course, rather than simply looking for differently oriented images, one could, for example, look for color in one or more frames, shape in one or more frames, markings in one or more frames, and any other type of identifier in similar groups of one or more frames. Thus, any of these recognition systems may be employed, essentially in parallel. Furthermore, the use of such a plurality of methods of detection may allow for increased confirmation of a correct medication pill being identified. Additionally, once the medication pill has been identified, it is possible to select the one attribute generating the highest confidence score, and then use this one attribute to continue to track the medication pill through subsequent frames of the video images.
As one goal of the present invention is to confirm proper medication adherence and ingestion of medication, small movements of the user, micro audio sounds suggesting swallowing, or the like may be employed in order to further confirm proper medication administration, reducing the likelihood of a user tricking the system. So, for example, identification of any swallowing motions in the neck or throat, gullet movement, jaw movement, or audio information related to swallowing, such as sounds related to the swallowing of water or the like, may further be employed in order to lend additional confirmation to the ingestion of a medication pill. Therefore, in accordance with one or more embodiments of the present invention, the system may be taught to recognize such micro movements in a manner similar to the teaching of the system to recognize the medication pill. These movements may be correlated in time to properly determine the sequence of administration, and may be compared to the first use of the system by the user in a controlled environment, such as a clinic. Additionally, all of the features of the invention noted above may be applied to such micro movements, including leaving these micro movements non-de-identified, when recognized, in all or a subset of the frames of captured video as described above. It may be desirable to de-identify any micro audio sounds, such as through sound synthesis, audio mixing, sampling or the like, to further protect the identity of the user.
Furthermore, magnification of these movements may be provided so that manual subsequent review of these portions of the video sequences may be easier. Additionally, during video capture, these portions of the user may be zoomed, to appear larger in the frame, and allowing for an easier automated and manual determination of such micro movements. The combination of automatic recognition and/or manual review of these micro movements may be used in conjunction with any of the methods for confirming medication administration and ingestion noted within this application in order to improve the confidence with which medication administration and ingestion may be confirmed.
It will thus be seen that the objects set forth above, among those made apparent from the preceding description, are efficiently attained and, because certain changes may be made in carrying out the above method and in the construction(s) set forth without departing from the spirit and scope of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
It is also to be understood that this description is intended to cover all of the generic and specific features of the invention herein described and all statements of the scope of the invention which, as a matter of language, might be said to fall there between.
This application is a continuation of U.S. patent application Ser. No. 15/595,441, filed May 15, 2017 to Hanina et al., titled “Identification and De-Identification Within A Video Sequence”, which is a continuation of U.S. patent application Ser. No. 14/990,389, filed Jan. 7, 2016 to Hanina et al., now U.S. Pat. No. 9,652,665, issued May 16, 2017, titled “Identification and De-Identification Within a Video Sequence”, which is a continuation of U.S. patent application Ser. No. 13/674,209, filed Nov. 12, 2012 to Hanina et al., now U.S. Pat. No. 9,256,776, issued Feb. 9, 2016, titled “Method and Apparatus for Identification”, which, in turn, claims the benefit of U.S. Provisional Patent Application Ser. No. 61/582,969, filed Jan. 4, 2012 to Hanina et al., titled “Method and Apparatus for Identification.” The contents of all of the prior applications are incorporated herein by reference in their entirety.

This application incorporates by reference the entire contents of the following applications, and any applications to which they claim priority or otherwise incorporate by reference:

Method and Apparatus for Verification of Medication Administration Adherence, Ser. No. 12/620,686, filed Nov. 18, 2009 to Hanina et al.;

Method and Apparatus for Verification of Clinical Trial Adherence, Ser. No. 12/646,383, filed Dec. 23, 2009 to Hanina et al.;

Method and Apparatus for Management of Clinical Trials, Ser. No. 12/646,603, filed Dec. 23, 2009 to Hanina et al.;

Apparatus and Method for Collection of Protocol Adherence Data, Ser. No. 12/728,721, filed Mar. 22, 2010 to Hanina et al.;

Apparatus and Method for Recognition of Patient Activities when Obtaining Protocol Adherence Data, Ser. No. 12/815,037, filed Jun. 14, 2010, which claims the benefit of Apparatus and Method for Recognition of Patient Activities When Obtaining Protocol Adherence Data, U.S. Provisional Patent Application Ser. No. 61/331,872, filed May 6, 2010 to Hanina et al.;

Apparatus and Method for Assisting Monitoring of Medication Adherence, Ser. No. 12/899,510, filed Oct. 6, 2010 to Hanina et al.;

Apparatus and Method for Object Confirmation and Tracking, Ser. No. 12/898,338, filed Oct. 5, 2010 to Hanina et al.; and

Method and Apparatus for Monitoring Medication Adherence, Ser. No. 13/189,518, filed Jul. 12, 2011 to Hanina et al., which claims the benefit of Method and Apparatus for Monitoring Medication Adherence, U.S. Provisional Patent Application Ser. No. 61/495,415, filed Jun. 10, 2011 to Hanina et al.
Number | Name | Date | Kind |
---|---|---|---|
3814845 | Hurlbrink et al. | Jun 1974 | A |
5065447 | Barnsley et al. | Nov 1991 | A |
5441047 | David et al. | Aug 1995 | A |
5544649 | David et al. | Aug 1996 | A |
5596994 | Bro | Jan 1997 | A |
5619991 | Sloane | Apr 1997 | A |
5646912 | Cousin | Jul 1997 | A |
5752621 | Passamante | May 1998 | A |
5764296 | Shin | Jun 1998 | A |
5810747 | Brundy et al. | Sep 1998 | A |
5911132 | Sloane | Jun 1999 | A |
5961446 | Beller et al. | Oct 1999 | A |
5963136 | Obrien | Oct 1999 | A |
6151521 | Guo et al. | Nov 2000 | A |
6233428 | Fryer | May 2001 | B1 |
6234343 | Papp | May 2001 | B1 |
6283761 | Joao | Sep 2001 | B1 |
6380858 | Yarin et al. | Apr 2002 | B1 |
6409661 | Murphy | Jun 2002 | B1 |
6421650 | Goetz et al. | Jul 2002 | B1 |
6483993 | Misumi et al. | Nov 2002 | B1 |
6484144 | Martin et al. | Nov 2002 | B2 |
6535637 | Wootton et al. | Mar 2003 | B1 |
6611206 | Eshelman et al. | Aug 2003 | B2 |
6628835 | Brill et al. | Sep 2003 | B1 |
6705991 | Bardy | Mar 2004 | B2 |
6879970 | Shiffman et al. | Apr 2005 | B2 |
6988075 | Hacker | Jan 2006 | B1 |
7184047 | Crampton | Feb 2007 | B1 |
7184075 | Reiffel | Feb 2007 | B2 |
7256708 | Rosenfeld et al. | Aug 2007 | B2 |
7277752 | Matos | Oct 2007 | B2 |
7304228 | Bryden et al. | Dec 2007 | B2 |
7307543 | Rosenfeld et al. | Dec 2007 | B2 |
7317967 | DiGianfilippo et al. | Jan 2008 | B2 |
7340077 | Gokturk | Mar 2008 | B2 |
7395214 | Shillingburg | Jul 2008 | B2 |
7415447 | Shiffman et al. | Aug 2008 | B2 |
7448544 | Louie et al. | Nov 2008 | B1 |
7562121 | Berisford et al. | Jul 2009 | B2 |
7627142 | Kurzweil et al. | Dec 2009 | B2 |
7657443 | Crass et al. | Feb 2010 | B2 |
7692625 | Morrison et al. | Apr 2010 | B2 |
7712288 | Ramasubramanian et al. | May 2010 | B2 |
7747454 | Bartfeld et al. | Jun 2010 | B2 |
7761311 | Clements et al. | Jul 2010 | B2 |
7769465 | Matos | Aug 2010 | B2 |
7774075 | Lin et al. | Aug 2010 | B2 |
7874984 | Elsayed et al. | Jan 2011 | B2 |
7881537 | Ma et al. | Feb 2011 | B2 |
7908155 | Fuerst et al. | Mar 2011 | B2 |
7912733 | Clements et al. | Mar 2011 | B2 |
7945450 | Strawder | May 2011 | B2 |
7956727 | Loncar | Jun 2011 | B2 |
7983933 | Karkanias et al. | Jul 2011 | B2 |
8065180 | Hufford et al. | Nov 2011 | B2 |
8321284 | Clements et al. | Nov 2012 | B2 |
8370262 | Blessing | Feb 2013 | B2 |
8606595 | Udani | Dec 2013 | B2 |
9256776 | Hanina et al. | Feb 2016 | B2 |
9652665 | Hanina et al. | May 2017 | B2 |
20010049673 | Dulong et al. | Dec 2001 | A1 |
20010056358 | Dulong et al. | Dec 2001 | A1 |
20020026330 | Klein | Feb 2002 | A1 |
20020093429 | Matsushita et al. | Jul 2002 | A1 |
20020143563 | Hufford et al. | Oct 2002 | A1 |
20030036683 | Kehr et al. | Feb 2003 | A1 |
20030058341 | Brodsky et al. | Mar 2003 | A1 |
20030164172 | Chumas et al. | Sep 2003 | A1 |
20030190076 | Delean | Oct 2003 | A1 |
20030225325 | Kagermeier et al. | Dec 2003 | A1 |
20040100572 | Kim | May 2004 | A1 |
20040107116 | Brown | Jun 2004 | A1 |
20040155780 | Rapchak | Aug 2004 | A1 |
20050144150 | Ramamurthy et al. | Jun 2005 | A1 |
20050149361 | Saus et al. | Jul 2005 | A1 |
20050180610 | Kato et al. | Aug 2005 | A1 |
20050182664 | Abraham-Fuchs et al. | Aug 2005 | A1 |
20050234381 | Niemetz et al. | Oct 2005 | A1 |
20050267356 | Ramasubramanian et al. | Dec 2005 | A1 |
20060066584 | Barkan | Mar 2006 | A1 |
20060218011 | Walker et al. | Sep 2006 | A1 |
20060238549 | Marks | Oct 2006 | A1 |
20060294108 | Adelson et al. | Dec 2006 | A1 |
20070008112 | Covannon et al. | Jan 2007 | A1 |
20070008113 | Spoonhower et al. | Jan 2007 | A1 |
20070030363 | Cheatle et al. | Feb 2007 | A1 |
20070118389 | Shipon | May 2007 | A1 |
20070194034 | Vasiadis | Aug 2007 | A1 |
20070233035 | Wehba et al. | Oct 2007 | A1 |
20070233049 | Wehba et al. | Oct 2007 | A1 |
20070233050 | Wehba et al. | Oct 2007 | A1 |
20070233281 | Wehba et al. | Oct 2007 | A1 |
20070233520 | Wehba et al. | Oct 2007 | A1 |
20070233521 | Wehba et al. | Oct 2007 | A1 |
20070273504 | Tran | Nov 2007 | A1 |
20070288266 | Sysko et al. | Dec 2007 | A1 |
20080000979 | Poisner | Jan 2008 | A1 |
20080093447 | Johnson et al. | Apr 2008 | A1 |
20080114226 | Music et al. | May 2008 | A1 |
20080114490 | Jean-Pierre | May 2008 | A1 |
20080119958 | Bear et al. | May 2008 | A1 |
20080138604 | Kenney et al. | Jun 2008 | A1 |
20080140444 | Karkanias et al. | Jun 2008 | A1 |
20080162192 | Vonk et al. | Jul 2008 | A1 |
20080172253 | Chung et al. | Jul 2008 | A1 |
20080178126 | Beeck et al. | Jul 2008 | A1 |
20080201174 | Ramasubramanian et al. | Aug 2008 | A1 |
20080219493 | Tadmor | Sep 2008 | A1 |
20080275738 | Shillingburg | Nov 2008 | A1 |
20080290168 | Sullivan et al. | Nov 2008 | A1 |
20080294012 | Kurtz et al. | Nov 2008 | A1 |
20080297589 | Kurtz et al. | Dec 2008 | A1 |
20080298571 | Kurtz et al. | Dec 2008 | A1 |
20080303638 | Nguyen et al. | Dec 2008 | A1 |
20090012818 | Rodgers | Jan 2009 | A1 |
20090018867 | Reiner | Jan 2009 | A1 |
20090043610 | Nadas et al. | Feb 2009 | A1 |
20090048871 | Skomra | Feb 2009 | A1 |
20090095837 | Lindgren | Apr 2009 | A1 |
20090128330 | Monroe | May 2009 | A1 |
20090159714 | Coyne, III et al. | Jun 2009 | A1 |
20090217194 | Martin et al. | Aug 2009 | A1 |
20090245655 | Matsuzaka | Oct 2009 | A1 |
20100042430 | Bartfield | Feb 2010 | A1 |
20100050134 | Clarkson | Feb 2010 | A1 |
20100057646 | Martin et al. | Mar 2010 | A1 |
20100092093 | Akatsuka et al. | Apr 2010 | A1 |
20100136509 | Mejer et al. | Jun 2010 | A1 |
20100138154 | Kon | Jun 2010 | A1 |
20100255598 | Melker | Oct 2010 | A1 |
20100262436 | Chen et al. | Oct 2010 | A1 |
20100316979 | Von Bismarck | Dec 2010 | A1 |
20110021952 | Vallone | Jan 2011 | A1 |
20110119073 | Hanina et al. | May 2011 | A1 |
20110153360 | Hanina et al. | Jun 2011 | A1 |
20110161109 | Pinsonneault et al. | Jun 2011 | A1 |
20110161999 | Klappert et al. | Jun 2011 | A1 |
20110195520 | Leider et al. | Aug 2011 | A1 |
20110275051 | Hanina et al. | Nov 2011 | A1 |
20120011575 | Cheswick et al. | Jan 2012 | A1 |
20120075464 | Derenne et al. | Mar 2012 | A1 |
20120081551 | Mizuno et al. | Apr 2012 | A1 |
20120140068 | Monroe et al. | Jun 2012 | A1 |
20120182380 | Ohmae et al. | Jul 2012 | A1 |
20120316897 | Hanina | Dec 2012 | A1 |
Entry |
---|
Ammouri et al., Face and Hands Detection and Tracking Applied to the Monitoring of Medication Intake, Computer and Robot Vision, 2008. CRV '08. Canadian Conference on, vol. No., pp. 147, 154, May 28-30, 2008. |
Batz, et al. A computer Vision System for Monitoring Medication Intake, in Proc. IEEE 2nd Canadian Conf. on Computer and Robot Vision, Victoria, BC, Canada, 2005, pp. 362-369. |
Bilodeau et al. Monitoring of Medication Intake Using a Camera System, Journal of Medical Systems 2011. [retrieved on Feb. 18, 2013] Retrieved from ProQuest Technology Collection. |
Chen, Pauline W., Texting as a Health Tool for Teenagers, The New York Times, Nov. 5, 2009, http://www.nytimes.com/2009/11/05/health/05chen.html?_r=1&emc=. |
Danya International, Inc., Pilot Study Using Cell Phones for Mobile Direct Observation Treatment to Monitor Medication Compliance of TB Patients, Mar. 20, 2009, www.danya.com/MDOT.asp. |
Final Office Action from PTO, Cited in AI-0001-U1 (U.S. Appl. No. 12/620,686), (dated May 8, 2012), 1-24. |
Final Office Action from PTO, Cited in AI-0001-U2 (U.S. Appl. No. 13/558,377), dated May 7, 2013, 1-29. |
Final Office Action from PTO, Cited in AI-0002-U1 (U.S. Appl. No. 12/646,383), (dated May 8, 2012), 1-31. |
Final Office Action from PTO, Cited in AI-0002-U2 (U.S. Appl. No. 13/588,380), (dated Mar. 1, 2013), 1-27. |
Final Office Action from PTO, Cited in AI-0003-U1 (U.S. Appl. No. 12/646,603), (dated Feb. 1, 2012), 1-17. |
Final Office Action from PTO, Cited in AI-0004-U1 (U.S. Appl. No. 12/728,721), (dated Apr. 12, 2012), 1-31. |
Final Office Action from PTO, Cited in AI-0005-U1 (U.S. Appl. No. 12/815,037), (dated Sep. 13, 2012), 1-15. |
Final Office Action from PTO, Cited in AI-0006-U1 (U.S. Appl. No. 12/899,510), (dated Aug. 28, 2013). |
Final Office Action from PTO, Cited in AI-0008-U1 (U.S. Appl. No. 12/898,338), dated Nov. 9, 2012), 1-12. |
Final Office Action from PTO, Cited in AI-0012-U1 (U.S. Appl. No. 13/189,518), (dated Jul. 23, 2013), 1-16. |
Fook et al. Smart Mote-Based Medical System for Monitoring and Handling Medication Among Persons with Dementia. ICOST 2007, LNCS 4541, pp. 54-62, 2007. |
Global Tuberculosis Control: A short update to the 2009 report, World Health Organization, (2009). |
Huynh et al., Real time detection, tracking and recognition of medication intake. World Academy of Science, Engineering and Technology 60 (2009), 280-287. |
International Preliminary Report on Patentability, cited in AI-0001-PCT1 (PCT/US2010/056935) (dated May 31, 2012), 1-8. |
International Preliminary Report on Patentability, cited in AI-0020-PCT1 (PCT/US2013/020026) dated May 5, 2015 (13 pages). |
Mintchell, Exploring the Limits of Machine Vision, Automation World, Oct. 1, 2011. |
Non-Final Office Action from PTO, Cited in AI-0001-111 (U.S. Appl. No. 12/620,686), (dated Dec. 21, 2011), 1-78. |
Non-Final Office Action from PTO, Cited in AI-0001-U2 (U.S. Appl. No. 13/558,377), (dated Oct. 22, 2012), 1-21. |
Non-Final Office Action from PTO, Cited in AI-0002-U1 (U.S. Appl. No. 12/646,383), (dated Dec. 22, 2011),1-78. |
Non-Final Office Action from PTO, Cited in AI-0002-U2 (U.S. Appl. No. 13/558,380), (dated Oct. 4, 2012), 1-20. |
Non-Final Office Action from PTO, Cited in AI-0003-U1 (U.S. Appl. No. 12/646,603), (dated Oct. 13, 2011),1-74. |
Non-Final Office Action from PTO, Cited in AI-0003-U1 (U.S. Appl. No. 12/646,603), (dated Jun. 13, 2013), 1-16. |
Non-Final Office Action from PTO, Cited in AI-0004-U1 (U.S. Appl. No. 12/728,721), (dated Jan. 6, 2012), 1-31. |
Non-Final Office Action from PTO, Cited in AI-0004-U1 (U.S. Appl. No. 12/728,721), (dated May 9, 2013), 1-25. |
Non-Final Office Action from PTO, Cited in AI-0005-U1 (U.S. Appl. No. 12/815,037), (dated Mar. 28, 2012),1-17. |
Non-Final Office Action from PTO, Cited in AI-0005-U1 (U.S. Appl. No. 12/815,037), (dated Jul. 18, 2013), 1-19. |
Non-Final Office Action from PTO, Cited in AI-0006-U1 (U.S. Appl. No. 12/899,510), (dated Jan. 23, 2013), 1-20. |
Non-Final Office Action from PTO, Cited in AI-0008-U1 (U.S. Appl. No. 12/898,338), (dated Jun. 19, 2012), 1-16. |
Non-Final Office Action from PTO, Cited in AI-0012-U1 (U.S. Appl. No. 13/189,518), (dated Dec. 21, 2012), 1-10. |
Non-Final Office Action from PTO, Cited in AI-0013-U1 (U.S. Appl. No. 13/235,387), dated Sep. 12, 2013), 1-16. |
Osterberg, Lars and Blaschke, Terrence, Adherence to Medication, New England Journal of Medicine 2005; 353:487-97, Aug. 4, 2005. |
PCT Search report and written opinion, Cited in AI-0001-PCT1 (PCT/US2010/56935, (dated Jan. 12, 2011), 1-9. |
PCT Search report and written opinion, Cited in AI-0005-PCT1 (PCT/US2011/35093, (dated Sep. 12, 2011), 1-8. |
PCT Search report and written opinion, Cited in AI-0006-PCT1 (PCT/US11/54666), (dated Feb. 28, 2012), 1-13. |
PCT Search report and written opinion, Cited in AI-0008-PCT1 (PCT/US11/54668), dated Feb. 28, 2012, 1-12. |
PCT Search report and written opinion, Cited in AI-0012-PCT1 (PCT/US12/41785), (dated Aug. 14, 2012), 1-10. |
PCT Search report and written opinion, Cited in AI-0013-PCT1 (PCT/US12/42843), (Aug. 31, 2012), 1-8. |
PCT Search report and written opinion, Cited in AI-0018-PCT1 (PCT/US2012/051554), (dated Oct. 19, 2012), 1-12. |
PCT Search report and written opinion, Cited in AI-0019-PCT (PCT/US12/59139), (dated Dec. 18, 2012), 1-15. |
PCT Search report and written Opinion, Cited in AI-0020-PCT1 (PCT/US13/20026), (dated Aug. 5, 2013), 1-14. |
PR Newswire. Pilot Study Using Video Cell Phones for Mobile Direct Observation (MDOT) to Monitor Medication Compliance of TB Patients, New York: Mar. 23, 2009. |
Super-Resolution, Wikipedia, (Oct. 5, 2010). |
University of Texas, GuideView, Mar. 15, 2007, http://www.sahs.uth.tmc.edu/MSriram/GuideView/. |
Valin, et al. Video Surveillance of Medication intake, Int. Conf. of the IEEE Engineering in Medicine and Biology Society, New York City, USA, Aug. 2006. |
Wang et al. Recent Developments in Human Motion Analysis. Pattern Recognition 36 (2003) 585-601 (Nov. 2001). |
Whitecup, Morris S., 2008 Patient Adherence Update: New Approaches for Success, www.guideline.com, The Trend Report Series, (Oct. 1, 2008). |
Number | Date | Country | |
---|---|---|---|
20190205615 A1 | Jul 2019 | US |
Number | Date | Country | |
---|---|---|---|
61582969 | Jan 2012 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 15595441 | May 2017 | US |
Child | 16188837 | US | |
Parent | 14990389 | Jan 2016 | US |
Child | 15595441 | US | |
Parent | 13674209 | Nov 2012 | US |
Child | 14990389 | US |