The present disclosure relates generally to capsule endoscopy procedures, and more specifically, to flexible systems, devices, apps, and methods for conducting capsule endoscopy procedures in a variety of manners and configurations.
Capsule endoscopy (CE) allows endoscopic examination of the entire gastrointestinal tract (GIT). There are capsule endoscopy systems and methods that are aimed at examining specific portions of the GIT, such as the small bowel or the colon. CE is a non-invasive procedure which does not require the patient to be admitted to a hospital, and the patient can continue most daily activities while the capsule is in the body. The patient may also continue taking regular medications.
In a typical CE procedure, the patient is referred for the procedure by a physician. The patient then arrives at a medical facility (e.g., a clinic or a hospital) to perform the procedure. The patient is admitted by a Health Care Provider (HCP), such as a nurse and/or a physician, who sets up the specific procedure and manages and supervises it. In some cases, the HCP may be the referring physician. The capsule, which is about the size of a multi-vitamin, is swallowed by the patient under the supervision of the HCP at the medical facility, and the patient is provided with a wearable device, e.g., a sensor belt and a recorder placed in a pouch with a strap to be worn around the patient's shoulder. The wearable device typically includes a storage device. The patient may be given guidance and/or instructions and then released to resume daily activities. The capsule captures images as it travels naturally through the GIT. Images and additional data (e.g., metadata) are then transmitted to the recorder that is worn by the patient. The capsule is disposable and passes naturally with a bowel movement. The procedure data (e.g., the captured images or a portion of them and additional metadata) is stored on the storage device of the wearable device.
The wearable device is typically returned by the patient to the medical facility with the procedure data stored thereon. The procedure data is then downloaded to a computing device, typically located at the medical facility, which has engine software stored thereon. The received procedure data is then processed by the engine into a compiled study. Typically, the number of images to be processed is on the order of tens of thousands, about 90,000 to 100,000 on average. A compiled study typically includes thousands of images (around 6,000 to 9,000). Since the patient is required to return the wearable device to the HCP or medical facility, and only then is the procedure data processed, a compiled study and a report usually cannot be generated on the same day as the procedure or shortly afterward.
A reader (which may be the procedure-supervising physician, a dedicated physician, or the referring physician) may access the compiled study via a reader application. The reader then reviews the compiled study, evaluates the procedure, and provides input via the reader application. Since the reader needs to review thousands of images, reading a compiled study usually takes between half an hour and an hour on average, and the reading task may be tiresome. A report is then generated by the reader application based on the compiled study and the reader's input. On average, it takes an hour to generate a report. The report may include, for example, images of interest (e.g., images which are identified as including pathologies), an evaluation or diagnosis of the patient's medical condition based on the procedure data, and/or recommendations for follow-up and/or treatment. The report may then be forwarded to the referring physician. The referring physician may decide on a required follow-up or treatment based on the report.
Some capsule procedures, specifically those aimed at the colon, may require patient preparation. For example, the colon and/or small bowel may be required to be emptied. To clean the bowel, a physician may determine a regimen, e.g., prescribe a diet and/or medication, such as a prep solution and/or laxatives, for the patient to ingest before the procedure. It is important that the patient follow all of the instructions and ingest all preparation medication to ensure the patient's GIT can be seen properly. In addition, the patient may also be required to follow a diet and/or take medication (e.g., laxatives) after the capsule is swallowed and during the procedure (herein referred to as “boosts”). The recorder may alert the patient if this step needs to be repeated to ensure a complete procedure. Typically, a physician (e.g., the referring physician or the physician supervising the procedure) decides on a preparation that suits the patient and the desired type of capsule procedure.
The present disclosure relates to systems, devices, apps, and methods for capsule endoscopy procedures. More particularly, the present disclosure relates to systems, devices, apps, and methods for coordinating, conducting, evaluating, and monitoring numerous capsule endoscopy procedures simultaneously. Networked systems and devices provide the capability for patients to conduct capsule endoscopy procedures partially or entirely outside a medical facility, if they wish, and for healthcare professionals to remotely access and evaluate data from the capsule endoscopy procedure during and/or after the procedure. The disclosed systems, devices, apps, and methods are flexible and permit capsule endoscopy procedures to be conducted in a variety of manners and configurations.
In accordance with aspects of the present disclosure, a swallowable capsule apparatus includes one or more processors and one or more memory storing instructions. The instructions, when executed by the one or more processors, cause the swallowable capsule apparatus at least to perform: capturing in-vivo images over time of at least a portion of a gastrointestinal tract (GIT) of a person; pruning, over time, at least a portion of the in-vivo images; and communicating images of the in-vivo images, which were not pruned, to a receiver device external to the person.
In various embodiments of the swallowable capsule apparatus, in the pruning, the instructions, when executed by the one or more processors, cause the swallowable capsule apparatus at least to perform: pruning at least a first portion of the in-vivo images at a first degree based on a first location indication specifying a first location in the GIT where the swallowable capsule apparatus captured the first portion of the in-vivo images; and pruning at least a second portion of the in-vivo images at a second degree based on a second location indication specifying a second location in the GIT where the swallowable capsule apparatus captured the second portion of the in-vivo images.
In various embodiments of the swallowable capsule apparatus, the instructions, when executed by the one or more processors, further cause the swallowable capsule apparatus at least to perform: generating the first location indication based on at least the first portion of the in-vivo images; and generating the second location indication based on at least the second portion of the in-vivo images.
In various embodiments of the swallowable capsule apparatus, the first location indication and the second location indication are generated by at least one of: a wearable device configured to be secured to the person, a mobile device carried by the person, or a remote computing system remote from the swallowable capsule apparatus.
In various embodiments of the swallowable capsule apparatus, the first location is a small bowel of the person and the second location is a colon of the person, and the first degree is greater than the second degree such that in-vivo images of the small bowel are pruned to a greater degree than in-vivo images of the colon.
In various embodiments of the swallowable capsule apparatus, the first location is a location of the GIT having no suspected pathology and the second location is a location of the GIT having a suspected pathology, and the first degree is greater than the second degree such that in-vivo images of the location of the GIT having no suspected pathology are pruned to a greater degree than in-vivo images of the location of the GIT having a suspected pathology.
In various embodiments of the swallowable capsule apparatus, the pruning the at least the first portion of the in-vivo images at the first degree is different from the pruning the at least the second portion of the in-vivo images at the second degree by at least one of: a rate of pruning images, where the first degree of pruning has a higher rate of pruning images than the second degree of pruning, a similarity threshold for pruning the in-vivo images, where the first degree of pruning has a lower similarity threshold for pruning the in-vivo images than the second degree of pruning, or a difference threshold for pruning the in-vivo images, where the first degree of pruning has a higher difference threshold for pruning the in-vivo images than the second degree of pruning.
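By way of a non-limiting illustration, the following sketch (in Python) shows one way two such degrees of pruning might be parameterized and applied. The names (e.g., PruningDegree, should_prune) and the threshold values are hypothetical assumptions for illustration only, not the actual on-capsule implementation.

    # Hypothetical sketch of degree-based pruning. Names and values are
    # illustrative assumptions, not the actual capsule firmware.
    from dataclasses import dataclass

    @dataclass
    class PruningDegree:
        prune_every_kth: int          # rate: lower k prunes more images
        similarity_threshold: float   # prune when similarity >= threshold
        difference_threshold: float   # prune when difference <= threshold

    # The first degree prunes more aggressively than the second degree:
    # higher rate (lower k), lower similarity threshold, and higher
    # difference threshold, consistent with the description above.
    FIRST_DEGREE = PruningDegree(prune_every_kth=2,
                                 similarity_threshold=0.80,
                                 difference_threshold=0.20)
    SECOND_DEGREE = PruningDegree(prune_every_kth=5,
                                  similarity_threshold=0.95,
                                  difference_threshold=0.05)

    def should_prune(index, similarity, difference, degree):
        """Return True if the image (1-based capture index, with similarity
        and difference measured against the preceding image) should be
        pruned under the given degree."""
        if index % degree.prune_every_kth == 0:           # rate-based
            return True
        if similarity >= degree.similarity_threshold:     # too similar
            return True
        return difference <= degree.difference_threshold  # too little change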
In accordance with aspects of the present disclosure, a processor-implemented method in a swallowable capsule apparatus includes: capturing, by the swallowable capsule apparatus, in-vivo images over time of at least a portion of a gastrointestinal tract (GIT) of a person; pruning, by the swallowable capsule apparatus over time, at least a portion of the in-vivo images; and communicating, by the swallowable capsule apparatus, images of the in-vivo images, which were not pruned, to a receiver device external to the person.
In various embodiments of the processor-implemented method, the pruning includes: pruning, by the swallowable capsule apparatus, at least a first portion of the in-vivo images at a first degree based on a first location indication specifying a first location in the GIT where the swallowable capsule apparatus captured the first portion of the in-vivo images; and pruning, by the swallowable capsule apparatus, at least a second portion of the in-vivo images at a second degree based on a second location indication specifying a second location in the GIT where the swallowable capsule apparatus captured the second portion of the in-vivo images.
In various embodiments of the processor-implemented method, the method further includes: generating, by the swallowable capsule apparatus, the first location indication based on at least a first portion of the in-vivo images; and generating, by the swallowable capsule apparatus, the second location indication based on at least a second portion of the in-vivo images.
In various embodiments of the processor-implemented method, the first location indication and the second location indication are generated by at least one of: a wearable device configured to be secured to the person, a mobile device carried by the person, or a remote computing system remote from the swallowable capsule apparatus.
In various embodiments of the processor-implemented method, the first location is a small bowel of the person and the second location is a colon of the person, and the first degree is greater than the second degree such that in-vivo images of the small bowel are pruned, by the swallowable capsule apparatus, to a greater degree than in-vivo images of the colon.
In various embodiments of the processor-implemented method, the first location is a location of the GIT having no suspected pathology and the second location is a location of the GIT having a suspected pathology, and the first degree is greater than the second degree such that in-vivo images of the location of the GIT having no suspected pathology are pruned, by the swallowable capsule apparatus, to a greater degree than in-vivo images of the location of the GIT having a suspected pathology.
In various embodiments of the processor-implemented method, the pruning the at least the first portion of the in-vivo images at the first degree is different from the pruning the at least the second portion of the in-vivo images at the second degree by at least one of: a rate of pruning images, wherein the first degree of pruning has a higher rate of pruning images than the second degree of pruning, a similarity threshold for pruning the in-vivo images, wherein the first degree of pruning has a lower similarity threshold for pruning the in-vivo images than the second degree of pruning, or a difference threshold for pruning the in-vivo images, wherein the first degree of pruning has a higher difference threshold for pruning the in-vivo images than the second degree of pruning.
In accordance with aspects of the present disclosure, a non-transitory processor-readable medium stores instructions which, when executed by one or more processors in a swallowable capsule apparatus, cause the swallowable capsule apparatus at least to perform: capturing in-vivo images over time of at least a portion of a gastrointestinal tract (GIT) of a person; pruning, over time, at least a portion of the in-vivo images; and communicating images of the in-vivo images, which were not pruned, to a receiver device external to the person.
In various embodiments of the non-transitory processor-readable medium, in the pruning, the instructions, when executed by the one or more processors, cause the swallowable capsule apparatus at least to perform: pruning, by the swallowable capsule apparatus, at least a first portion of the in-vivo images at a first degree based on a first location indication specifying a first location in the GIT where the swallowable capsule apparatus captured the first portion of the in-vivo images; and pruning, by the swallowable capsule apparatus, at least a second portion of the in-vivo images at a second degree based on a second location indication specifying a second location in the GIT where the swallowable capsule apparatus captured the second portion of the in-vivo images.
In various embodiments of the non-transitory processor-readable medium, the instructions, when executed by the one or more processors, further cause the swallowable capsule apparatus at least to perform: generating the first location indication based on at least the first portion of the in-vivo images; and generating the second location indication based on at least the second portion of the in-vivo images.
In various embodiments of the non-transitory processor-readable medium, the first location is a small bowel of the person and the second location is a colon of the person, and the first degree is greater than the second degree such that in-vivo images of the small bowel are pruned to a greater degree than in-vivo images of the colon.
In various embodiments of the non-transitory processor-readable medium, the first location is a location of the GIT having no suspected pathology and the second location is a location of the GIT having a suspected pathology, and the first degree is greater than the second degree such that in-vivo images of the location of the GIT having no suspected pathology are pruned to a greater degree than in-vivo images of the location of the GIT having a suspected pathology.
In various embodiments of the non-transitory processor-readable medium, the pruning the at least the first portion of the in-vivo images at the first degree is different from the pruning the at least the second portion of the in-vivo images at the second degree by at least one of: a rate of pruning images, wherein the first degree of pruning has a higher rate of pruning images than the second degree of pruning, a similarity threshold for pruning the in-vivo images, wherein the first degree of pruning has a lower similarity threshold for pruning the in-vivo images than the second degree of pruning, or a difference threshold for pruning the in-vivo images, wherein the first degree of pruning has a higher difference threshold for pruning the in-vivo images than the second degree of pruning.
The details of one or more embodiments of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description and drawings, and from the claims.
The above and other aspects and features of the disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings wherein like reference numerals identify similar or identical elements.
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions and/or aspect ratio of some of the elements can be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals can be repeated among the figures to indicate corresponding or analogous elements throughout the several views.
The present disclosure relates to systems, devices, apps, and methods for capsule endoscopy procedures. More particularly, the present disclosure relates to systems, devices, apps, and methods for coordinating, conducting, evaluating, and monitoring numerous capsule endoscopy procedures performed simultaneously. Networked systems and devices provide the capability for patients to conduct capsule endoscopy procedures partially or entirely outside a medical facility, if they wish, and for healthcare professionals to remotely monitor, access, and evaluate data from the capsule endoscopy procedure during and/or after the procedure from a networked device. The disclosed systems and methods are flexible and permit capsule endoscopy procedures to be conducted in a variety of manners and configurations. The disclosed systems, methods, devices, and apps are patient-friendly and may improve the ease of use for both the patient and the Health Care Provider, thereby allowing better performance and patient compliance. Furthermore, by reducing the read time of a capsule endoscopy compiled study, the disclosed systems, methods, devices, and apps allow for better diagnosis and treatment.
In the following detailed description, specific details are set forth in order to provide a thorough understanding of the disclosure. However, it will be understood by those skilled in the art that the disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present disclosure. Some features or elements described with respect to one system may be combined with features or elements described with respect to other systems. For the sake of clarity, discussion of same or similar features or elements may not be repeated.
Although the disclosure is not limited in this regard, discussions utilizing terms such as, for example, “processing,” “computing,” “calculating,” “determining,” “establishing,” “analyzing,” “checking,” or the like, may refer to operation(s) and/or process(es) of a computer, a computing platform, a computing system, or other electronic computing device, that manipulates and/or transforms data represented as physical (e.g., electronic) quantities within the computer's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories or other information non-transitory storage medium that may store instructions to perform operations and/or processes.
Although the disclosure is not limited in this regard, the terms “plurality” and “a plurality” as used herein may include, for example, “multiple” or “two or more.” The terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like. The term set, when used herein, may include one or more items. Unless explicitly stated, the methods described herein are not constrained to a particular order or sequence. Additionally, some of the described methods or elements thereof can occur or be performed simultaneously, at the same point in time, or concurrently.
The term “classify” may be used throughout the specification to indicate a decision that assigns one category among a set of categories to an image/frame.
The terms “image” or “frame” may be used interchangeably herein.
The term “gastrointestinal tract” (“GIT”), as used herein, may relate to and include the entire digestive system extending from the mouth to the anus, including the pharynx, esophagus, stomach and intestines, and any other portion. The terms “GIT portion” or “portion of a GIT” may refer to any portion of the GIT (anatomically distinct or not). Depending on context, the term GIT may refer to a portion of the entire digestive system but not the entire digestive system.
The term “location” and its derivatives, as referred to herein with respect to an image, may refer to the estimated location of the capsule along the GIT while capturing the image or to the estimated location of the portion of the GIT shown in the image along the GIT.
A type of CE procedure may be determined based on, inter alia, the portion of the GIT that is of interest and is to be imaged (e.g., the colon or the small bowel (“SB”)), or based on the specific use (e.g., for checking the status of a GI disease, such as Crohn's disease, or for colon cancer screening).
The terms “surrounding” or “adjacent,” as referred to herein with respect to images (e.g., images that surround another image(s), or that are adjacent to other image(s)), may relate to spatial and/or temporal characteristics unless specifically indicated otherwise. For example, images that surround or are adjacent to other image(s) may be images that are estimated to be located near the other image(s) along the GIT and/or images that were captured near the capture time of another image, within a certain threshold, e.g., within one or two centimeters, or within one, five, or ten seconds.
The term “Procedure Data” may refer to images and metadata stored on the wearable device and uploaded to the cloud or to a local computer for processing by engine software.
The term “Compiled Study” or “Study” may refer to and include at least a set of images selected from the images captured by a capsule endoscopy device during a single capsule endoscopy procedure performed with respect to a specific patient and at a specific time, and can optionally include information other than the images as well.
The term “Capsule Endoscopy Report” or “Report” may refer to and include a report generated based on the compiled study for a single capsule endoscopy procedure performed with respect to a specific patient and at a specific time and based on reader input, and may include images, text summarizing the findings and/or recommendation for follow-up based on the compiled study.
The terms “app” or “application” may be used interchangeably and refer to and include software or programs having machine-executable instructions which can be executed by one or more processors to perform various operations.
The term “online processing” may refer to operations which are performed during a CE procedure or prior to the upload of all of the procedure data. In contrast, the term “offline” may refer to operations which are performed after a CE procedure has been completed or after the upload of all of the procedure data.
Referring to
Different capsule devices 110 can be used for different types of CE procedures. For example, different capsule devices 110 may be designed for imaging the small bowel, imaging the colon, imaging the entire GIT, or imaging particular situations, such as imaging a GIT that has Crohn's disease. The terms “capsule” and “capsule device” may be used interchangeably herein. The capsules 110 may include processing capabilities that allow the capsules to prune or discard images, e.g., to prune or discard very similar images. As used herein, “pruning” an image refers to designating or otherwise treating the image as an image that will not be utilized. In various embodiments, pruning an image may include discarding the image, not storing the image, or deleting the image from memory or from a storage, among other possibilities. For example, if a capsule 110 captures images that are essentially identical, processing within the capsule 110 can detect such similarity and decide to communicate only one of the essentially identical images to the wearable device 120. Therefore, a capsule 110 may not communicate all of its images to the wearable device 120. In some embodiments such filtering of similar images may be performed alternatively or additionally in the wearable device 120 or in the mobile device 130. Further aspects of pruning and/or discarding images by the capsule 110 will be described in connection with
In some embodiments, a capsule may communicate images in a sparse manner, e.g., communicate only each xth captured image (e.g., every second captured image, every fifth, or every tenth captured image). A device receiving the communicated images may process the images to determine a measure of similarity or differentiation between the communicated images. According to some aspects, if two successively communicated images are determined to be different based on such a measure, an instruction may be communicated to the capsule 110 to communicate the images captured between the two images. The receiving device on which such processing may be performed may be, for example, the wearable device 120, the mobile device 130, or the remote computing device (e.g., a cloud system) 140. Such a similar-image filtering configuration may allow savings in resources and be more cost-effective, since it may lead to a reduction in communication and processing volumes. Savings in resources are especially significant for devices which are typically limited in resources, such as the capsule 110 and the wearable device 120.
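A minimal Python sketch of such a sparse-communication scheme follows; the function names, the difference measure, and the threshold are hypothetical assumptions used only to illustrate the backfill request described above.

    # Illustrative sketch of sparse communication with backfill requests.
    def sparse_indices(total_captured, x):
        """Capsule side: indices of every x-th captured image to transmit."""
        return list(range(0, total_captured, x))

    def backfill_requests(transmitted, difference, threshold=0.3):
        """Receiver side (e.g., wearable, mobile device, or cloud): request
        the skipped images between two successively communicated images
        whose difference measure exceeds the threshold (an assumed value)."""
        requests = []
        for i, j in zip(transmitted, transmitted[1:]):
            if difference(i, j) > threshold:
                requests.append((i + 1, j - 1))  # inclusive index range
        return requests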
The wearable device 120 can be a device that is designed to communicate with the capsule device 110 and to receive images of the GIT from the capsule device 110. In aspects of the present disclosure, the wearable device 120 is referred to as a “patch” based on a form factor and light weight similar to a medical patch that can be adhered to a patient's skin. The patch is smaller than, for example, a wearable device that must be secured to a patient using a belt. The patch can be a single unitary device (as opposed to a device with separate parts) that includes an adhesive configured to adhere to a patient's skin, such as to the abdomen. The patch/wearable device 120 can be a single-use disposable device. For example, the patch/wearable device 120 can be non-rechargeable and can have power sufficient for only a single capsule endoscopy procedure. The patch/wearable device 120 may then be removed and discarded, e.g., by the patient, at the end of the procedure. Although the wearable device 120 is illustrated in
With continuing reference to
The remote computing system 140 can be any system that performs computing and can be configured in various ways, including, without limitation, a cloud system/platform, a shared computing system, a server farm, a proprietary system, a networked Intranet system, a centralized system, or a distributed system, among others, or a combination of such systems. For convenience, the remote computing system 140 is illustrated in
The cloud system 140 receives and stores the procedure data 122. The cloud system 140 can process and analyze the procedure data 122 using, for example, cloud computing resources, to generate a compiled study 142. As mentioned above, the term “compiled study” may refer to and include at least a set of images selected from the images captured by a capsule endoscopy device during a single capsule endoscopy procedure performed with respect to a specific patient and at a specific time, and can optionally include information other than the images as well. The term “capsule endoscopy report” or “report” may refer to and include a report that is generated based on the compiled study for a single capsule endoscopy procedure performed with respect to a specific patient and at a specific time and based on reader input, and may include images, image indications, text summarizing the findings, and/or recommendations for follow-up based on the compiled study. In the cloud system 140, the software which processes the procedure data and generates the study may be referred to as the “AI engine.” The AI engine includes a bundle of algorithms and may include machine learning algorithms, such as deep learning algorithms, and also other types of algorithms. When the remote computing system 140 is not a cloud system, the remote computing system 140 can process and analyze the procedure data using centralized or distributed computing resources, which persons skilled in the art will understand.
A reader 160, typically a healthcare professional, can remotely access the compiled study 142 in the cloud system 140 using a client software application and/or using a browser. The reader 160 reviews and evaluates the compiled study 142 and may create a procedure report via a dedicated reading or viewing application while, e.g., selecting, adding, or revising information. A capsule endoscopy (CE) report 144 is generated based on the compiled study 142 and the reader's input via the reading application. The CE report 144 may then be transmitted to the medical facility 150 associated with the CE procedure and may be stored in the medical facility's data systems. In some embodiments, the CE report may become available to a health care provider in the medical facility or to the referring health care provider via a dedicated application. According to some aspects, the read time of a compiled study 142 may be reduced by generating compiled studies which include only a relatively small number of images (e.g., up to a hundred images per procedure, up to a few hundred images per procedure, or on the order of 1,000). This may be enabled, inter alia, by utilizing selection or decision-making methods which provide high sensitivity (e.g., a high probability of identifying the images of interest) together with high specificity (e.g., a high probability of identifying images which are not of interest) per procedure. According to some aspects, the compiled study generation may be performed by employing machine learning, or specifically deep learning.
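As a non-authoritative illustration of how a small study might be assembled under an image budget, consider the following Python sketch; interest_score stands in for a per-image model output, and the budget value, field name, and function name are assumptions.

    def compile_study(images, interest_score, budget=1000):
        """Select up to `budget` images ranked by a (hypothetical) per-image
        interest score, then restore capture order for review. In practice
        the selection balances sensitivity and specificity, as noted above."""
        ranked = sorted(images, key=interest_score, reverse=True)
        return sorted(ranked[:budget], key=lambda image: image["capture_time"])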
In the capsule endoscopy procedure phase, the patient ingests the capsule (215). If the patient is in a medical facility, the patient can either remain there or can be released to go home or go elsewhere (220). During the procedure, the capsule device captures images of the patient's GIT. The wearable device receives the data from the capsule device. Using the Internet connectivity provided by the mobile device or using its own cellular connectivity, the wearable device uploads procedure data to a remote computing system when the Internet connectivity is available (225). If there is no available connection, the procedure data can be stored in an internal storage of the wearable device.
In various embodiments, the wearable device can determine that the capsule endoscopy procedure is completed (230) by, for example, receiving no further data from the capsule, processing the procedure data to detect a completion, and/or other ways. In various embodiments, the remote computing system can determine that the capsule endoscopy procedure is completed (230), which will be discussed in more detail later herein. In various embodiments, a procedure may be “completed” when the capsule has left the GIT portion of interest for the CE procedure even though the capsule is still traversing the patient's GIT. In various embodiments, the procedure may be completed when the capsule has exited the patient's body. When the completion of the CE procedure is detected, the patient is alerted to remove the wearable device (235). In various embodiments, the alert can be provided by the wearable device or by the mobile device or by both. If procedure data on the wearable device was not fully uploaded to the remote computing system because an Internet connection was not available, or for any other reason, the patient can be notified to provide the wearable device to a medical facility where the procedure data can be uploaded from the wearable device to the remote computing system (240).
In the post-procedure phase, the remote computing system processes and analyzes the procedure data to generate a compiled study (245). The cloud system alerts one or more healthcare professionals that the compiled study is ready and available (250). The healthcare professional(s) may include a specialist, a referring physician, and/or other medical professionals. A reader reviews the compiled study and may select, add, or revise certain information (255). When the review is completed, the computing system generates a report based on the compiled study and the healthcare professional's input. The report is then communicated to and stored in the medical facility's data systems, such as in electronic health records (EHR) (255).
The embodiments of
Referring to
The computing system 340 then processes and analyzes the procedure data 322 and generates a compiled study 342. In the computing system 340, the software which processes the procedure data and generates the study may be referred to as the “AI engine,” as explained above. The AI engine includes a bundle of algorithms and may include machine learning algorithms, such as deep learning algorithms, and additional algorithms. The AI engine can be installed in the computing system 340 in various ways. In various embodiments, the AI engine can reside in a standalone computer or computing box and can be executed by computing resources of the standalone computer. A reader 360, such as a healthcare professional, can access the compiled study 342 in the computing system 340 using a client software application and/or using a browser. The reader 360 reviews and evaluates the compiled study 342 and may, e.g., select, add, or revise certain information. The computing system 340 generates a capsule endoscopy (CE) report 344 based on the compiled study 342 and the reader's input. The CE report 344 is then stored in the medical facility's data systems 346. Accordingly, the procedure data 322 is stored in and is processed by the medical facility's systems after the CE procedure is completed, and the compiled study 342 and CE report 344 are also stored in and processed by the medical facility's systems, without such information being transferred to a remote computing system.
The computing system 500 includes a processor or controller 505 that may be or include, for example, one or more central processing unit processor(s) (CPU), one or more Graphics Processing Unit(s) (GPU or GPGPU), and/or other types of processors, such as a microprocessor, digital signal processor, microcontroller, programmable logic device (PLD), field programmable gate array (FPGA), or any suitable computing or computational device. The computing system 500 also includes an operating system 515, a memory 520, a storage 530, input devices 535, output devices 540, and a communication device 522. The communication device 522 may include one or more transceivers which allow communications with remote or external devices and may implement communications standards and protocols, such as cellular communications (e.g., 3G, 4G, 5G, CDMA, GSM), Ethernet, Wi-Fi, Bluetooth, low energy Bluetooth, Zigbee, Internet-of-Things protocols (such as MQTT, e.g., the Mosquitto implementation), and/or USB, among others.
The operating system 515 may be or may include any code designed and/or configured to perform tasks involving coordination, scheduling, arbitration, supervising, controlling or otherwise managing operation of computing system 500, such as scheduling execution of programs. The memory 520 may be or may include, for example, one or more Random Access Memory (RAM), read-only memory (ROM), flash memory, volatile memory, non-volatile memory, cache memory, and/or other memory devices. The memory 520 may store, for example, executable instructions that carry out an operation (e.g., executable code 525) and/or data. Executable code 525 may be any executable code, e.g., an app/application, a program, a process, task or script. Executable code 525 may be executed by controller 505.
The storage 530 may be or may include, for example, one or more of a hard disk drive, a solid state drive, an optical disc drive (such as DVD or Blu-Ray), a USB drive or other removable storage device, and/or other types of storage devices. Data such as instructions, code, procedure data, and medical images, among other things, may be stored in storage 530 and may be loaded from storage 530 into memory 520 where it may be processed by controller 505. The input devices 535 may include, for example, a mouse, a keyboard, a touch screen or pad, or another type of input device. The output devices 540 may include one or more monitors, screens, displays, speakers and/or other types of output devices.
The illustrated components of
The description above described various systems and methods for capsule endoscopy procedures. Communication capabilities between various components of the described systems are described below in connection with
Referring to
In the capsule endoscopy kit 610, the capsule device 612 and the wearable device 614 can communicate with each other using radio frequency (RF) transceivers. Persons skilled in the art will understand how to implement RF transceivers and associated electronics for interfacing with RF transceivers. In various embodiments, the RF transceivers can be designed to use frequencies that experience less interference or no interference from common communications devices, such as cordless phones, for example. The wearable device 614 can include various communication capabilities, including Wi-Fi, Bluetooth low energy (BLE), and/or a USB connection. The term Wi-Fi includes Wireless LAN (WLAN), which is specified by the IEEE 802.11 family of standards. The Wi-Fi connection allows the wearable device 614 to upload procedure data to the cloud system 640. The wearable device 614 can connect to a Wi-Fi network in either a patient's network system 620 or a healthcare provider's network system 630, and the procedure data is then transferred to the cloud system 640 through the Internet infrastructure. The wearable device 614 is also equipped with a wired USB channel for transferring procedure data when a Wi-Fi connection is not available or when procedure data could not all be communicated using Wi-Fi. The BLE connection is used for control and messaging. Because the BLE connection uses relatively low power, BLE can be continuously on during the entire procedure and is suited for control messaging. Depending on the device and its BLE implementation, the BLE connection may support communication rates of about 250 Kbps-270 Kbps through about 1 Mbps. While some BLE implementations may support somewhat higher communication rates, a Wi-Fi connection is generally capable of providing much higher communication rates. Therefore, a Wi-Fi connection will generally be used for transferring procedure data to the cloud system 640, which may be transferred at transfer rates of 10 Mbps or higher, depending on the connection quality and amount of procedure data. In various embodiments, when the amount of procedure data to be transferred is suitable for the BLE connection transfer rate, the procedure data can be transferred using the BLE connection.
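The following Python sketch illustrates one possible channel-selection policy consistent with the description above (Wi-Fi preferred for bulk data, BLE for small payloads, USB as the wired fallback); the function name and the BLE payload budget are illustrative assumptions.

    def choose_upload_channel(data_bytes, wifi_up, ble_up, usb_up,
                              ble_budget_bytes=2_000_000):
        """Pick a transfer channel for procedure data. The BLE budget is an
        assumed cutoff below which the BLE rate is considered adequate."""
        if wifi_up:
            return "wifi"            # preferred for bulk procedure data
        if ble_up and data_bytes <= ble_budget_bytes:
            return "ble"             # adequate for small payloads
        if usb_up:
            return "usb"             # wired fallback
        return "store_locally"       # buffer on internal storage for now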
As shown in
With reference to
A patient software app can be used to set up the Wi-Fi connection 720 between the wearable device 614 and the mobile hotspot of the patient mobile device 622. The patient app will be described later herein. Using a mobile hotspot, the wearable device 614 can communicate directly to a given Internet address or, alternatively, can connect to a subnet client (e.g., default gateway address). An advantage of the direct connection is that the mobile device 622 transfers the procedure data transparently to the cloud system 640 and there is no need for internal buffers, but a potential disadvantage is that the data transfer speed between the wearable device 614 and the mobile device 622 may vary depending on the upstream Internet connection quality, such as cellular signal strength 710. When the wearable device 614 connects to a mobile local subnet (default gateway), the wearable device 614 transfers the procedure data to a local buffer of the mobile device 622, and upload of the procedure data from this buffer to the cloud system 640 is handled by another thread in parallel. In this case, the data transfer speed between the wearable device 614 and the mobile device 622 can advantageously utilize the full bandwidth of the Wi-Fi connection 720 regardless of the Internet connection quality 710, but a potential disadvantage is that the internal buffer of the mobile device 622 can expose the procedure data to security threats.
In the illustrated configuration, the connections between the wearable device 614 and the patient mobile device 622 include a BLE connection (CH1) 1030 for control and messaging and a Wi-Fi connection 1020 for data upload (client/hotspot). The connections between the wearable device 614 and the healthcare provider device 634 include a BLE connection (CH2) 1050 for the healthcare provider device 634 to control the “real-time view” functionality and a Wi-Fi connection 1040 for “real-time view” data transfer from AP to client. The wearable device 614 can ping the mobile device BLE connection (CH1) 1030 every sixty seconds (or another time interval) to verify that the mobile device 622 is active and in range. If the mobile device 622 is detected as being located too far away based on the ping of the BLE connection 1030, the wearable device 614 can provide an alert to the patient before the connection 1030 is lost (e.g., beep alerts).
Generally, the wearable device 614 operates as a Wi-Fi client to upload procedure data to the cloud system 640. The wearable device 614 can expose the BLE channel (CH2) 1050 constantly or regularly to check for a “real-time view” request. If such a request is received, the wearable device 614 can establish a TLS 1.2 (or higher) secured TCP/IP connection before data transmission. In various embodiments, the wearable device 614 may keep the Wi-Fi connection 1040 active for a period of time, such as sixty seconds, and then terminate the Wi-Fi connection 1040. The “real-time view” request may be re-established. However, the wearable device 614 also operates to ping the mobile hotspot Wi-Fi connection 1020 of the mobile device every sixty seconds (or another time interval) to keep the mobile hotspot Wi-Fi connection 1020 active, so that the mobile hotspot connection 1020 is not shut down due to inactivity. The wearable device 614 may not upload procedure data to the cloud system 640 while the “real-time view” request is ongoing, such that upload of procedure data by the wearable device 614 to the cloud system 640 is delayed until the “real-time view” request ends.
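A schematic Python sketch of the periodic pinging described above follows; the callback names (ping_ble_ch1, ping_hotspot, alert_patient, stop) are hypothetical placeholders for the wearable's actual firmware interfaces.

    import time

    PING_INTERVAL_S = 60  # the sixty-second interval described above

    def keepalive_loop(ping_ble_ch1, ping_hotspot, alert_patient, stop):
        """Periodically verify that the mobile device is in BLE range and
        keep the mobile hotspot from idling out; alert the patient if the
        BLE ping fails (device possibly out of range)."""
        while not stop():
            if not ping_ble_ch1():     # CH1 ping failed
                alert_patient("beep")  # warn before the connection is lost
            ping_hotspot()             # keep the hotspot Wi-Fi active
            time.sleep(PING_INTERVAL_S)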
Accordingly, the description above described, with reference to
As mentioned above, various software apps/applications can run on the devices.
The patient app 1210, the reader app 1240, and the healthcare provider app 1230 can communicate with the cloud system 640. In the illustrated configuration, such apps, 1210, 1230 and 1240, communicate with a portion of the cloud system 640 configured to receive and present data, which is designated as the HCP cloud 642. Another portion of the cloud system 640, designated as the AI cloud 644, is a data processing and machine learning sub-system that performs processing of procedure data and generates data to be presented by the HCP cloud 642. Thus, the AI cloud can perform machine learning but can also perform non-AI processing and tasks. The AI cloud 644 can perform operations that generate a compiled study. In the AI cloud 644, the software which processes the procedure data and generates the study may be referred to as the “AI engine.” The AI engine includes a bundle of algorithms and may include machine learning algorithms, such as deep learning algorithms, and algorithms of other types. The AI cloud 644 can apply various algorithms and automated decision systems, including deep learning or other machine learning operations and techniques. The separation of the cloud system 640 into two sub-systems provides isolation of the AI cloud 644, such that the AI cloud 644 may only be accessed by the HCP cloud 642 and there is no direct connection between any of the applications used by end-users and the AI cloud 644. Such a configuration may better protect the AI cloud from malicious actions or unauthorized access. However, the use of two sub-systems is illustrative and is not intended to limit the scope of the present disclosure. Other types and/or numbers of sub-systems in a cloud system 640 are within the scope of the present disclosure. Persons skilled in the art will recognize how to implement the cloud system 640, including by way of cloud services platforms.
As mentioned above, the term “online processing” may refer to processing performed on a remote computing system (e.g., AI cloud 644) during the procedure or prior to the upload of all of the procedure data (i.e., complete upload of procedure data) and with respect to only a portion of the procedure data. Based on such online processing, online detection of, e.g., pathologies of interest or anatomical structures may be provided. According to some aspects, the online detection may be performed with respect to batches of images uploaded to the cloud system 640. For example, fresh bleeding, strictures, capsule retention, or passage to another anatomical portion of the GIT may be detected online. A referring physician or a healthcare provider supervising the procedure may be notified in real-time of suspected findings such as fresh bleeding or a stricture, which may require immediate treatment. Identification of anatomical structures, portions, or landmarks (e.g., the cecum or the pyloric valve) may be used, for example, for localization of the capsule. According to some aspects, the uploaded images may be processed online to determine a prediction, e.g., with respect to the capsule transfer time, velocity, or motility. Such a prediction, for example, may be used to change the capsule capture frame rate.
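For illustration, a minimal Python sketch of such online batch processing is given below; the detector interface, the finding labels, and the frame-rate rule are hypothetical assumptions, not the actual AI engine.

    def process_batch_online(batch, detect, notify_hcp, set_frame_rate):
        """Process one uploaded batch of images: notify on urgent findings
        and adapt the capture frame rate on an anatomical transition."""
        findings = detect(batch)  # assumed detector output: dicts with a "label"
        urgent = [f for f in findings
                  if f["label"] in ("fresh_bleeding", "stricture")]
        if urgent:
            notify_hcp(urgent)    # real-time notification to the physician/HCP
        if any(f["label"] == "entered_colon" for f in findings):
            set_frame_rate("colon")  # e.g., change the capture frame rate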
Each app/application will now be described below.
The patient app is configured to provide display screens to guide a patient through preparing for a capsule endoscopy procedure and through undergoing the procedure. In addition, the patient app provides patient information to the cloud system and also allows the patient to set up uploading of procedure data from the wearable device to the cloud system. The patient app may be installed on a mobile device carried by a patient before the CE procedure commences. In various embodiments, the mobile device may be a dedicated device provided to the patient by a medical facility for the CE procedure or may be a mobile device owned by the patient, such as the personal mobile phone of the patient.
Referring to
In accordance with aspects of the present disclosure, a regimen may be identified in a QR code. A healthcare professional can select a regimen for a CE procedure for a patient, and the regimen can be identified in the QR code that is provided/printed in the patient instructions provided to the patient, as mentioned above. The QR code can be generated based on the regimen selected by a physician and based on other information, and the QR code can be printed in the patient instructions. An example of a regimen is shown in the patient app screen of
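As an illustration only, the regimen identified in the QR code could be carried as a small serialized payload, as in the following Python sketch; the field names and the JSON encoding are assumptions, not the actual QR format.

    import json

    def encode_regimen_payload(regimen):
        """Serialize a physician-selected regimen into the text embedded in
        the printed QR code (field names are hypothetical)."""
        return json.dumps({"procedure_type": regimen["procedure_type"],
                           "prep_steps": regimen["prep_steps"],
                           "boosts": regimen["boosts"]})

    def decode_regimen_payload(qr_text):
        """Patient app side: recover the regimen after scanning the QR code."""
        return json.loads(qr_text)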
The aspects and embodiments described in connection with
If procedure data in the wearable device is not fully uploaded to the cloud system, the patient may be instructed to provide the wearable device to a medical facility for manual transfer of the procedure data from the wearable device. According to some aspects, a healthcare provider may be notified when a procedure is completed, e.g., via the healthcare provider application, which is described below in connection with
The HCP application may facilitate the handling and/or managing of the CE procedures, including check-in processes and pairing processes between different devices or components of the disclosed systems. The HCP application may conveniently allow the HCP to review online the progress and status of the CE procedures (e.g., by displaying a dashboard of procedures in progress), to access procedure data, and to connect with data systems of the medical facility. In the illustrated embodiment, the HCP app 2010 allows a medical facility and healthcare providers to obtain information relating to CE procedures 2020 which are ready to start 2022, CE procedures which are ongoing 2024, CE procedures which have a compiled study ready for review 2026, and CE studies which have a completed report 2028. A healthcare professional can interact with the HCP app 2010 to obtain a listing of such procedures 2030 and to select a particular procedure 2040 to access. When a healthcare provider selects a particular procedure 2040, information relating to the procedure can be shown on the display screen, such as type of CE procedure 2042, status of the procedure 2044, duration of the procedure 2046, and latest image received from a wearable device or from the cloud system 2048. The displayed information also includes interim findings history 2050, which will be described in connection with
The display screen of
According to some aspects, the online processing of images by the cloud system (e.g., the AI cloud sub-system) may provide online identification of polyps (e.g., via the interim findings) and may allow for a same-day colonoscopy. If an identified polyp needs to be removed, a physician provided with the interim findings may suggest that the patient have a colonoscopy on the same day to remove the polyp. A same-day colonoscopy may be more convenient and less difficult for the patient because the patient has already completed the pre-procedure preparation.
The description above described aspects of a patient app and an HCP app. The following will describe the options for a remote view, a real-time view, and a near real-time view. While the real-time view requires a separate app, the near real-time view and the remote view may be provided as features of the HCP application.
In contrast to the remote view feature or app, the real-time view app (1220,
The near real-time view provides a timing of image access that is between the timing provided by the remote view and the real-time view, and utilizes the connectivity shown in
The following will describe the reader app 1240 of
A procedure study may include images selected from the procedure data (i.e., the procedure data received by the computing system according to the disclosed systems and methods). The images of a study may be, for example, images selected to represent the procedure data or the GIT portion of interest, to include or represent one or more event indicators of interest, or a combination of such, depending on the goal of the CE procedure. According to some aspects, the study may include additional data, such as the estimated location of the images along the GIT, an indication of an event indicator identified (at some level of certainty) in the image, and the size of such an event indicator. The images may be processed and analyzed by the computing system (e.g., the AI cloud of a cloud system according to the disclosed systems and methods) to select the images to be included in the study and to obtain additional data. In some embodiments, the images of a study may include two levels of images selected at two stages. At a first stage, first-level images may be selected as disclosed above. At a second stage, second-level images may be selected to provide additional information for images of the first level. According to some aspects, first-level images will be displayed to the viewer by default while second-level images will be displayed only upon a user's action or request. The first- and second-level images may be displayed as exemplified and described with respect to
According to some aspects, a subset of images of a captured stream of in-vivo images (i.e., images of procedure data) may be automatically selected from the stream of in-vivo images according to a first selection method. For each image of at least a portion of the subset of images, one or more corresponding additional images from the stream of in-vivo images may be selected according to a second selection method. The subset of selected images (i.e., first-level images) may be displayed for a user's review. Upon receiving user input (e.g., a mouse click, activating a GUI control, etc.), one or more additional images (i.e., second-level images) corresponding to a currently displayed image of the subset of images (i.e., a first-level image) may be displayed. The second selection method may be based on a relation between images of the stream of in-vivo images and the currently displayed image. Such a relation between the first- and second-level images may be: the images are identified to include at least a portion of the same feature or the same event or event indicator, the images are identified to include at least a portion of the same type of feature or event or event indicator, the images were captured in time proximity, the images are localized adjacently along the at least a portion of the subject's GIT, and combinations thereof. The subset of images and the one or more images corresponding to the subset of images may be automatically selected by the computing device (e.g., the AI cloud). According to some aspects, the selection may involve the application of machine learning and specifically deep learning.
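A minimal Python sketch of this two-level selection follows; first_select and related stand in for the first and second selection methods, and the "id" field is a hypothetical image identifier.

    def select_study_levels(stream, first_select, related):
        """Two-stage selection: `first_select` picks the first-level subset;
        `related` returns the additional (second-level) images for a given
        first-level image, e.g., images of the same event or captured in
        time proximity. Both callables are illustrative assumptions."""
        first_level = first_select(stream)
        second_level = {image["id"]: related(image, stream)
                        for image in first_level}
        return first_level, second_level

    # Display flow: show first-level images by default; upon user input,
    # show the second-level images mapped to the currently displayed image.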
With reference also to
The illustrated cloud system is a multi-user system that is able to support a vast number of procedures performed in parallel, even when resource load may dramatically change at different times (e.g., peak hours versus ‘low activity’ hours or hours with no activity). With respect to the system uptime, the cloud system is dynamically scalable, which allows stable updates and changes to the cloud platform without affecting the system uptime.
The AI cloud sub-system 2620 is responsible for processing data and may perform resource-intensive computations such as machine learning and specifically deep-learning. The AI cloud can also perform non-AI processing and tasks. Some machine learning algorithms are complex and require heavy computation resources. These resources require scaling out when usage load increases, in order to support multiple accounts/users simultaneously during peak levels and to maintain an expected service level. In order to meet ever-growing needs for high performance with strong computation capabilities in scalable platforms, software infrastructure also should effectively exploit the cloud resources to provide both performance and efficiency.
As persons skilled in the art will recognize, a difference between different software architectures is the level of granularity. Generally, a more granular architecture provides more flexibility. A software system is “monolithic” if it has an architecture in which functionally distinguishable aspects (for example data input and output, data processing, error handling, and the user interface) are interwoven rather than being contained in architecturally separate components. In the illustrated cloud system, the software architecture of the cloud system breaks a big monolithic flow into small pieces of a structured pipeline that can be managed and scaled more easily by using microservices technology. Microservices, or microservice architecture, is an approach to application development in which a large application is built as a suite of modular components or services. When operations are divided into microservices, each microservice is not dependent on most of the other microservices and generally can work independently.
Such a software architecture allows scalability of the system, as services may be added or removed on-demand. Each microservice is packaged in a container, and optionally may be packaged in a Docker container. A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container is a lightweight, standalone, executable package of software that includes everything needed to run an application, such as code, runtime, system tools, system libraries, and settings. The Docker container is a kind of virtual environment and holds, for example, an operating system and all the elements needed to run the microservice (e.g., an application). The cloud system of
An orchestrator application, such as Kubernetes, can be used for container management. The container-management application may add or remove containers. For a group of machines and containerized applications (e.g., Dockerized applications), the orchestrator can manage those applications across those machines. The use of an orchestrator may improve the performance of the system.
In the cloud system of
A cloud system architecture as described above provides a flexible and efficient cloud platform, simplifies upgrades in the cloud system, and allows scalability and compatibility for the specific needs of the system clients. It supports and facilitates a multi-user system which services numerous end-users simultaneously. It also allows better handling of malfunctions because the services are mostly independent. At any point in time, the health of the system, e.g., the load level and exceptions in a specific microservice, may be monitored and handled immediately. Such an architecture for the cloud system can be sufficient to meet the requirements of the disclosed systems and methods, including heavy computational tasks involving complex algorithms, such as deep learning algorithms and the processing of large amounts of data.
The aspects described above are exemplary and variations are contemplated to be within the scope of the present disclosure. For example, the architecture described above may also be applicable to an on-premises computing system, such as the system of
Various operations will now be described in connection with
As an example of the operation of
As mentioned above in connection with
In accordance with aspects of the present disclosure, the operation of pruning images may involve pruning images at a particular rate, i.e., prune every kth image, where k>1. For example, without limitation, a pruning operation may prune every 2nd image (k=2), or prune every 3rd image (k=3), or prune images at another rate. A higher rate of pruning images (i.e., lower k) utilizes fewer resources (such as storage resources and/or communication resources) but obtains a less comprehensive capture of the GIT, while a lower rate of pruning images (i.e., higher k) utilizes more resources but obtains a more comprehensive capture of the GIT. As described in more detail later herein, different pruning rates may be applied depending on the location or segment in the GIT where the capsule captured the images. Images which are not pruned by the capsule 110 may be communicated to a receiver device external to the person, such as communicated to a wearable device 120 worn by or adhered to the person, a mobile device 130 carried by the person, and/or another device (e.g., router or other network device), among other possibilities.
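As a non-limiting sketch of rate-based pruning (the function below is an illustrative assumption, not a description of any particular capsule firmware):

    def prune_every_kth(images, k):
        # Rate-based pruning: prune every kth image, where k > 1.
        if k <= 1:
            raise ValueError("k must be greater than 1")
        # Indices are 1-based so that "every 2nd image" means images
        # 2, 4, 6, ... are pruned.
        return [img for i, img in enumerate(images, start=1) if i % k != 0]

    # Example: with k=2, half of the images are pruned; with k=3, a third.
    kept = prune_every_kth(list(range(1, 10)), k=2)  # -> [1, 3, 5, 7, 9]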
In accordance with aspects of the present disclosure, the operation of pruning images may involve pruning images based on a similarity threshold. In various embodiments, the capsule 110 may include processing capabilities (e.g., processor(s), memory, instructions, etc.) that compare two images to determine overall image similarity. In various embodiments, the capsule 110 may include processing capabilities that compare two images and/or image regions to determine pathology similarity (e.g., similarity of occurrences of a pathology, such as a polyp, shown in the two images).
In various embodiments, the processing capabilities may implement a pixel-based comparison that compares pixel values to determine their overall image similarity or determine pathology similarity. Persons skilled in the art will understand how to implement pixel-based comparisons. In various embodiments, the processing capabilities may implement a machine learning model (such as, without limitation, a convolutional neural network) that receives two images or receives image portions as inputs and outputs an indication of similarity between the two images or image portions. Such a machine learning model may be a classification model or may be a regression model. An example of a machine learning model that compares two images to determine whether they contain the same pathology or contain different pathologies is described in International Publication No. WO2022/049577A1, which is hereby incorporated by reference herein in its entirety. Other approaches for a capsule to compare two images and/or image portions to determine overall image similarity and/or determine pathology similarity are contemplated to be within the scope of the present disclosure.
A similarity threshold may be used to determine whether two images or pathologies are sufficiently similar. The similarity threshold may be applied to an output value of the comparison or to an intermediate value of the comparison. If the value is below the similarity threshold, the two images or pathologies can be determined to be sufficiently different, such that neither image is pruned. If the value is above the similarity threshold, the two images or pathologies can be determined to be sufficiently similar, such that one of the two images may be pruned. As described in more detail later herein, different similarity thresholds may be applied depending on the location or segment in the GIT where the capsule captured the images. Images which are not pruned by the capsule 110 may be communicated to a receiver device external to the person, such as communicated to a wearable device 120 worn by or adhered to the person, a mobile device 130 carried by the person, and/or another device (e.g., router or other network device), among other possibilities.
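The following sketch illustrates similarity-threshold pruning using a simple pixel-based comparison as a stand-in for any of the comparisons described above (including a learned model); the image representation and threshold handling are illustrative assumptions only:

    def pixel_similarity(img_a, img_b):
        # Mean-absolute-difference similarity for images represented as
        # equal-length sequences of 0..255 pixel values; 1.0 = identical.
        diffs = [abs(a - b) for a, b in zip(img_a, img_b)]
        return 1.0 - (sum(diffs) / len(diffs)) / 255.0

    def prune_by_similarity(images, threshold):
        if not images:
            return []
        kept = [images[0]]
        for img in images[1:]:
            # Above the threshold: sufficiently similar, prune this image.
            # Below the threshold: sufficiently different, keep both.
            if pixel_similarity(kept[-1], img) < threshold:
                kept.append(img)
        return kept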
In accordance with aspects of the present disclosure, the operation of pruning images may involve pruning images based on a difference threshold. A difference threshold, in contrast to a similarity threshold, is used to determine whether two images or pathologies are sufficiently different. The same techniques described in connection with the similarity threshold may be used in connection with the difference threshold, such as the pixel-based techniques or machine-learning based techniques described above. An example of a machine learning model that compares two images to determine whether they contain the same pathology or contain different pathologies is described in International Publication No. WO2022/049577A1, which was incorporated by reference above. Other approaches for a capsule to compare two images and/or image portions to determine overall image difference and/or determine pathology difference are contemplated to be within the scope of the present disclosure.
A difference threshold may be used to determine whether two images or pathologies are sufficiently different. The difference threshold may be applied to an output value of the comparison or to an intermediate value of the comparison. If the value is above the difference threshold, the two images or pathologies can be determined to be sufficiently different, such that one of the two images may be pruned. If the value is below the difference threshold, the two images or pathologies can be determined to be sufficiently similar, such that neither of the two images is pruned. As described in more detail later herein, different difference thresholds may be applied depending on the location or segment in the GIT where the capsule captured the images. Images which are not pruned by the capsule 110 may be communicated to a receiver device external to the person, such as communicated to a wearable device 120 worn by or adhered to the person, a mobile device 130 carried by the person, and/or another device (e.g., router or other network device), among other possibilities.
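A complementary sketch for the difference-threshold decision, again using an illustrative pixel-based measure as an assumed stand-in for any comparison described above:

    def pixel_difference(img_a, img_b):
        # Mean absolute pixel difference, normalized to 0..1.
        diffs = [abs(a - b) for a, b in zip(img_a, img_b)]
        return (sum(diffs) / len(diffs)) / 255.0

    def may_prune_one(img_a, img_b, difference_threshold):
        # Below the threshold: the images are sufficiently similar, so
        # one of the two may be pruned; above it, both are kept.
        return pixel_difference(img_a, img_b) < difference_threshold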
At block 3120, the operation involves pruning, over time, at least a portion of the in-vivo images. In various embodiments, the pruning operation of block 3120 may be performed while the capturing operation of block 3110 occurs. In various embodiments, the capturing operation of block 3110 and the pruning operation of block 3120 may be performed successively, one after the other. As mentioned above, the pruning operation may apply one or more rates of pruning, may apply one or more similarity thresholds, and/or may apply one or more difference thresholds.
At block 3130, the operation involves communicating images of the in-vivo images, which were not pruned, to a receiver device external to the person. The receiver device may be a wearable device worn by or adhered to the person, a mobile device carried by the person, and/or another device (e.g., router or other network device), among other possibilities.
The operation of
Aspects of the present disclosure involve detecting or approximating the location in a gastrointestinal tract (GIT) where a capsule device (e.g., 110,
An approach for estimating images that reflect a transition between adjacent segments of a GIT is described in U.S. Patent Application Publication No. 2023/0148834, by Given Imaging Ltd., which is hereby incorporated by reference herein in its entirety. An approach for classifying images to segments of a GIT is described in U.S. Pat. No. 11,934,491, by Given Imaging Ltd., which is hereby incorporated by reference herein in its entirety. In various embodiments, an approach for detecting that a capsule is in a designated GIT region of interest (such as in a GIT region having a suspected pathology) is disclosed in U.S. Pat. No. 11,918,343, by Given Imaging Ltd., which is hereby incorporated by reference herein in its entirety. These approaches may be utilized to detect or approximate the location(s) in the GIT where a capsule device captured particular images and to provide location indications (e.g., small bowel, colon, or colon segment, etc.) specifying the locations in the GIT where the capsule captured the images. The examples above are merely illustrative, and other approaches for detecting or indicating locations within a GIT are contemplated to be within the scope of the present disclosure.
Where a capsule device 3310 implements the approaches for providing location indications, the capsule device 3310 may process in-vivo images to provide the location indications. Where another device 3320-3350 implements the approaches for providing location indications, such other devices 3320-3350 may receive captured images from the capsule device 3310 and process the received images to provide location indications. The images received by the other devices 3320-3350 include images that were not pruned by the capsule device 3310. The other devices 3320-3350 may then communicate the location indications back to the capsule device to inform the capsule device of the approximate location in the GIT where the images were captured.
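This round trip may be sketched as follows; classify_segment is a hypothetical stand-in for any of the location-detection approaches cited above and is not part of this disclosure:

    SEGMENTS = ("stomach", "small_bowel", "colon")

    def classify_segment(image):
        # Hypothetical classifier; a real system would run one of the
        # cited approaches (e.g., a deep-learning segment classifier).
        return SEGMENTS[image.get("segment_hint", 0)]

    def location_indications(received_images):
        # Map each received (unpruned) image to a location indication
        # that the receiver device can communicate back to the capsule.
        return [(img["id"], classify_segment(img)) for img in received_images]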
At block 3220, the operation involves pruning at least a second portion of the in-vivo images at a second degree based on a second location indication specifying a second location in the GIT where the swallowable capsule apparatus captured the second portion of the in-vivo images. As mentioned above, the pruning may be performed based on, e.g., a rate of pruning, a similarity threshold, and/or a difference threshold, among other possibilities. The second location indication may be provided by a capsule device, a wearable device, a mobile device, a healthcare provider device, and/or a cloud system, among other possibilities. The degree of pruning that the capsule device performs based on the second location indication may be based on a predetermined rate of pruning, a predetermined similarity threshold, and/or a predetermined difference threshold for the location specified by the second location indication.
In various embodiments, the first degree of pruning for the first location may be greater than the second degree of pruning for the second location, such that in-vivo images of the first location are pruned to a greater degree than in-vivo images of the second location. In cases where the first location is a small bowel and the second location is a colon, the degree of pruning images captured in the small bowel may be greater than the degree of pruning images captured in the colon. The higher degree of pruning in the small bowel may be beneficial, for example, because the small bowel is narrower, so a capsule device may travel through the small bowel more slowly and capture many similar images. Therefore, pruning images in the small bowel to a higher degree may reduce the occurrences of redundant images.
In cases where the first location is a location having no suspected pathology and the second location is a location having a suspected pathology, the degree of pruning images captured in the location having no suspected pathology may be greater than the degree of pruning images captured in the location having the suspected pathology. The higher degree of pruning in the location having no suspected pathology may be beneficial, for example, because there is some interest (but less interest) in images of a location having no suspected pathology, and keeping fewer images of such a location saves on capsule device resources.
Other scenarios are contemplated to be within the scope of the present disclosure.
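One way to organize such scenarios, sketched here with purely illustrative segment names and values, is a predetermined per-location profile of pruning parameters (a lower k and a lower similarity threshold both correspond to a greater degree of pruning, consistent with the small bowel versus colon example above; None indicates the corresponding pruning mechanism is not applied):

    PRUNING_PROFILES = {
        "small_bowel":         {"k": 2, "similarity_threshold": 0.80},
        "colon":               {"k": 4, "similarity_threshold": 0.95},
        "suspected_pathology": {"k": None, "similarity_threshold": None},
    }

    def profile_for(location_indication):
        # Fall back to a moderate default for unrecognized locations.
        return PRUNING_PROFILES.get(
            location_indication, {"k": 3, "similarity_threshold": 0.90})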
Accordingly, systems, devices, methods, and applications for capsule endoscopy procedures have been described herein. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of aspects of the disclosed technology. However, it is apparent to one skilled in the art that the disclosed technology can be practiced without using every aspect presented herein.
Unless specifically stated otherwise, as apparent from the preceding discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing,” “computing,” “storing,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
Different aspects are disclosed herein. Features of certain aspects can be combined with features of other aspects; thus certain aspects can be combinations of features of multiple aspects.
The embodiments disclosed herein are examples of the disclosure and may be embodied in various forms. For instance, although certain embodiments herein are described as separate embodiments, each of the embodiments herein may be combined with one or more of the other embodiments herein. Specific structural and functional details disclosed herein are not to be interpreted as limiting, but as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure. Like reference numerals may refer to similar or identical elements throughout the description of the figures.
The phrases “in an embodiment,” “in embodiments,” “in various embodiments,” “in some embodiments,” or “in other embodiments” may each refer to one or more of the same or different embodiments in accordance with the present disclosure. A phrase in the form “A or B” means “(A), (B), or (A and B).” A phrase in the form “at least one of A, B, or C” means “(A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).”
The systems, devices, and/or servers described herein may utilize one or more processors to receive various information and transform the received information to generate an output. The processors may include any type of computing device, computational circuit, or any type of controller or processing circuit capable of executing a series of instructions that are stored in a memory. The processor may include multiple processors and/or multicore central processing units (CPUs) and may include any type of device, such as a microprocessor, graphics processing unit (GPU), digital signal processor, microcontroller, programmable logic device (PLD), field programmable gate array (FPGA), or the like. The processor may also include a memory to store data and/or instructions that, when executed by the one or more processors, cause the one or more processors to perform one or more methods and/or algorithms.
Any of the herein described methods, programs, algorithms or codes may be converted to, or expressed in, a programming language or computer program. The terms "programming language" and "computer program," as used herein, each include any language used to specify instructions to a computer, and include (but are not limited to) the following languages and their derivatives: Assembler, Basic, Batch files, BCPL, C, C+, C++, Delphi, Fortran, Java, JavaScript, machine code, operating system command languages, Pascal, Perl, PL1, Python, scripting languages, Visual Basic, metalanguages which themselves specify programs, and all first, second, third, fourth, fifth, or further generation computer languages. Also included are database and other data schemas, and any other meta-languages. No distinction is made between languages which are interpreted, compiled, or use both compiled and interpreted approaches. No distinction is made between compiled and source versions of a program. Thus, reference to a program, where the programming language could exist in more than one state (such as source, compiled, object, or linked) is a reference to any and all such states. Reference to a program may encompass the actual instructions and/or the intent of those instructions.
While several embodiments of the disclosure have been described herein and/or shown in the drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as exemplifications of particular embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the claims appended hereto.
The present application is a continuation in part of U.S. patent application Ser. No. 17/611,363, filed Nov. 15, 2021, which is a U.S. National Stage Application filed under 35 U.S.C. § 371 (a) of International Patent Application No. PCT/US2020/033341, filed May 17, 2020, which claims the benefit of and priority to U.S. Provisional Application No. 62/849,508, filed May 17, 2019, and to U.S. Provisional Application No. 62/867,050, filed Jun. 26, 2019. The entire contents of each of the foregoing applications are hereby incorporated by reference herein.
Provisional Applications:

Number   | Date     | Country
62867050 | Jun 2019 | US
62849508 | May 2019 | US

Parent Case Data:

Relation | Number   | Date     | Country
Parent   | 17611363 | Nov 2021 | US
Child    | 18882079 |          | US