Technology for promoting a user's health is known. In particular, connected health and consumer-level diagnostic devices have been used among consumers to improve their long-term health care. For example, biometric devices may be used to ensure users are walking around and moving enough to prevent long-term musculoskeletal problems and other health conditions.
Comprehensive consumer-level biometric and diagnostic devices for oral hygiene, however, are not commonly known or available. Consumer-level oral diagnostic devices have been introduced, but these devices often lack functionality for improving oral health, such as gingivitis detection and health product recommendations. Further, none of the current devices effectively utilizes a consumer-grade camera and machine learning techniques to perform such diagnoses. Thus, an intraoral device using a consumer-grade camera for determining oral cavity conditions, such as gingivitis, is desired.
The present disclosure may be directed, in one aspect, to a method, device, and/or system for determining a gingivitis condition within an oral cavity. For example, a system may include an intraoral device and/or one or more processors. The intraoral device may include a light source configured to emit light within the oral cavity and a camera configured to capture an image of one or more objects within the oral cavity. The one or more processors may be configured to receive the image of the one or more objects within the oral cavity, wherein the one or more objects comprise at least one tooth and gums surrounding the at least one tooth; differentiate the at least one tooth and the gums surrounding the at least one tooth as a tooth segment and a gums segment; input the gums segment into a machine learning model; and determine, via the machine learning model, the gingivitis condition based on the gums segment input into the machine learning model. The camera may be a consumer-grade camera. The machine learning model may determine the gingivitis condition via a deep learning technique.
The present disclosure will become more fully understood from the detailed description and the accompanying drawings.
The following description of the preferred embodiment(s) is merely exemplary in nature and is in no way intended to limit the invention or inventions. The description of illustrative embodiments is intended to be read in connection with the accompanying drawings, which are to be considered part of the entire written description. In the description of the exemplary embodiments disclosed herein, any reference to direction or orientation is merely intended for convenience of description and is not intended in any way to limit the scope of the present inventions. Relative terms such as “lower,” “upper,” “horizontal,” “vertical,” “above,” “below,” “up,” “down,” “left,” “right,” “top,” “bottom,” “front” and “rear” as well as derivatives thereof (e.g., “horizontally,” “downwardly,” “upwardly,” etc.) should be construed to refer to the orientation as then described or as shown in the drawing under discussion. These relative terms are for convenience of description only and do not require a particular orientation unless explicitly indicated as such.
Terms such as “attached,” “affixed,” “connected,” “coupled,” “interconnected,” “secured” and other similar terms refer to a relationship wherein structures are secured or attached to one another either directly or indirectly through intervening structures, as well as both movable and rigid attachments or relationships, unless expressly described otherwise. The discussion herein describes and illustrates some possible non-limiting combinations of features that may exist alone or in other combinations of features. Furthermore, as used herein, the term “or” is to be interpreted as a logical operator that results in true whenever one or more of its operands are true. Furthermore, as used herein, the phrase “based on” is to be interpreted as meaning “based at least in part on,” and therefore is not limited to an interpretation of “based entirely on.”
As used throughout, ranges are used as shorthand for describing each and every value that is within the range. Any value within the range can be selected as the terminus of the range. In addition, all references cited herein are hereby incorporated by reference in their entireties. In the event of a conflict between a definition in the present disclosure and that of a cited reference, the present disclosure controls.
Features of the present inventions may be implemented in software, hardware, firmware, or combinations thereof. The computer programs described herein are not limited to any particular embodiment, and may be implemented in an operating system, application program, foreground or background processes, driver, or any combination thereof. The computer programs may be executed on a single computer or server processor or multiple computer or server processors.
Processors described herein may be any central processing unit (CPU), microprocessor, micro-controller, computational, or programmable device or circuit configured for executing computer program instructions (e.g., code). Various processors may be embodied in computer and/or server hardware of any suitable type (e.g., desktop, laptop, notebook, tablets, cellular phones, etc.) and may include all the usual ancillary components necessary to form a functional data processing device including without limitation a bus, software and data storage such as volatile and non-volatile memory, input/output devices, graphical user interfaces (GUIs), removable data storage, and wired and/or wireless communication interface devices including Wi-Fi, Bluetooth, LAN, etc.
Computer-executable instructions or programs (e.g., software or code) and data described herein may be programmed into and tangibly embodied in a non-transitory computer-readable medium that is accessible to and retrievable by a respective processor as described herein, which configures and directs the processor to perform the desired functions and processes by executing the instructions encoded in the medium. A device embodying a programmable processor configured to execute such non-transitory computer-executable instructions or programs may be referred to as a “programmable device”, or “device”, and multiple programmable devices in mutual communication may be referred to as a “programmable system.” It should be noted that non-transitory “computer-readable medium” as described herein may include, without limitation, any suitable volatile or non-volatile memory including random access memory (RAM) and various types thereof, read-only memory (ROM) and various types thereof, USB flash memory, and magnetic or optical data storage devices (e.g., internal/external hard disks, floppy discs, magnetic tape, CD-ROM, DVD-ROM, optical disk, ZIP™ drive, Blu-ray disk, and others), which may be written to and/or read by a processor operably connected to the medium.
In certain examples, the present inventions may be embodied in the form of computer-implemented processes and apparatuses such as processor-based data processing and communication systems or computer systems for practicing those processes. The present inventions may also be embodied in the form of software or computer program code embodied in a non-transitory computer-readable storage medium which, when loaded into and executed by the data processing and communication systems or computer systems, configures the processor to create specific logic circuits for implementing the processes.
Health and/or diagnostic devices (e.g., connected health and/or diagnostic devices) may be used by consumers concerned with their short- and/or long-term health care. One or more oral diagnostic devices (e.g., consumer-level oral diagnostic devices) are described herein. These devices may provide detection of an oral care condition (e.g., gingivitis) and/or provide health and/or product recommendations for improving oral health.
One or more intraoral cameras (e.g., cost-effective intraoral cameras, such as consumer grade cameras) may be coupled to oral diagnostic devices and/or may connect (e.g., wirelessly connect) to smart devices. Smart devices may include mobile phones, tablets, laptops, and the like. The intraoral cameras may allow users to capture (e.g., efficiently capture) color images and/or video of the oral cavity of the user. The images may be RGB color images or images relating to one or more other color spaces. As described herein, the oral diagnostic device may detect and/or determine conditions of the oral cavity, such as gingivitis, based on the captured images and/or video. In other examples, the images and/or video may be sent to one or more persons (such as an oral care professional) to diagnose oral tissue health and hygiene.
The oral diagnostic device may be capable of real-time detection and/or diagnosis of one or more conditions within the oral cavity. Although the disclosure focuses on gingivitis detection, such conditions may include, for example, gingivitis, plaque, teeth whiteness measurements, over brushing determinations, receding gums, periodontitis, tonsillitis, and the like. The detection and/or diagnosis of the conditions within the oral cavity may be performed via one or more artificial intelligence techniques, such as machine learning techniques. In particular, the detection and/or diagnosis of the conditions within the oral cavity may be performed via deep learning techniques, as described herein.
Diagnostic results may be provided (e.g., displayed) to a user. For example, results relating to a gingivitis diagnosis of a user may be displayed via the oral diagnostic device, one or more external devices (such as a mobile device), and the like. Other information may be displayed to the user, such as potential health implications of the diagnosis, dental visit suggestions, awards and/or recognitions for improved or excellent oral health, and/or hygiene product recommendations. Diagnostics may be saved to a smart device and/or sent (e.g., sent directly) to the user of the oral care device and/or oral care professionals allowing for monitoring of oral care health and/or hygiene progress. In examples where a smart device is used with an application (such as a smart-phone App), a virtual assistant may be activated to provide advice on improving oral health. For example, a virtual assistant may suggest brushing techniques, usage instructions of oral care products, and the like, to remedy the diagnosed oral care condition.
Referring now to the figures, visual inspection and probing techniques have traditionally been used for detection and diagnosis of conditions within the oral cavity (such as gingivitis) in patients. Although visual inspections may be accurate, such techniques may be subjective due to differences in training, experience, and location of the oral care professionals, thereby creating errors in early diagnosis of oral cavity conditions (such as gingivitis). In contrast, artificial intelligence (e.g., machine learning and deep learning) techniques may provide effective, automated, and accurate diagnosis of several diseases (e.g., gingivitis), as described herein.
Device 100 may contain light sources 102, which may be light emitting diodes (LEDs) (e.g., standard LEDs, organic LEDs, etc.), superluminescent diodes (SLEDs), lasers, arc lamps, a combination of the aforementioned radiation sources, other compact light sources for illuminating the oral cavity, etc. The light sources 102 may be located on one or more portions of device 100, such as on one or more portions of the handpiece or neck of device 100. For example, the light sources 102 may be located on a top, bottom, and/or side(s) of the neck of device 100.
Device 100 may include one or more cameras, such as camera 104. Camera 104 may be a consumer-level camera that may be used to capture images of objects within the oral cavity, such as gums, teeth, etc. Camera 104 (e.g., consumer-level camera) may incorporate one or more RGB cameras for imaging tissues. Camera 104 may provide qualitative information about tissue health of an oral cavity. For example, inflammation in tissue (such as gum tissue) may be captured and/or visually diagnosed as likely unhealthy. Camera 104 and light sources 102 may work in concert, such as being directed in a substantially same direction so that camera 104 may capture an image that is well illuminated. For example, camera 104 may capture an image of at least one tooth within the oral cavity of a user and/or gums adjacent to the at least one tooth. The at least one tooth and/or gums may be well illuminated. Two or more captured images may be stitched together to form a complete view of the oral cavity (e.g., the entire oral cavity). One or more single images may be provided to a machine learning model and/or one or more stitched images may be provided to a machine learning model, as described herein.
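As a non-limiting illustration of such stitching, the sketch below combines several captured frames with OpenCV's high-level Stitcher. The file names are hypothetical, and this is only one of many ways a mosaic of the oral cavity could be produced.

```python
# Minimal sketch: stitch several intraoral frames into a wider mosaic.
# File names are hypothetical placeholders.
import cv2

frames = [cv2.imread(p) for p in ["frame_01.png", "frame_02.png", "frame_03.png"]]

# SCANS mode suits flat-ish, mostly translational captures better than
# PANORAMA mode, which assumes camera rotation about a fixed point.
stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
status, mosaic = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("oral_cavity_mosaic.png", mosaic)
else:
    print(f"Stitching failed (status {status}); single frames may be used instead")
```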
Camera 104 may be actuated via camera button 106. Button 106 may be used to freeze an image prior to capture and to cause the camera to capture the image. In examples, a first touch of button 106 may be used to freeze the image within camera 104, and a second (e.g., consecutive) touch of button 106 may be used to capture the image. In examples, differing pressures upon button 106 may cause the image to be frozen and/or captured by camera 104.
Device 100 may include a power button 108, which may be used to turn the device 100 on or turn the device 100 off. Although physical buttons may be used to power on/off device 100, capture an image, emit a light, etc., it should be understood that such actions may be actuated in one or more various ways, such as via a voice command, a tap, received digital signals, and the like. In examples, device 100 may include a hand piece holder 110 that may include a power on/off function (e.g., automatic on/off function). Device 100 may include a magnetic sensor 116, which may be used to determine if device 100 is connected to a holder. In examples, upon the device 100 being connected to the holder, the device may transmit information relating to images captured within an oral cavity, as described herein.
Device 100 may include a processor 112 configured to perform one or more calculations and/or to cause one or more actions. For example, processor 112 may process data (e.g., image data), such as image data captured within an oral cavity. Example processors may be electronic circuits, systems, modules, subsystems, sub-modules, devices, and combinations thereof, such as Central Processing Units (CPUs), microprocessors, microcontrollers, processing units, control units, tangible media for recording, and/or a combination thereof. Storage device 118 may be configured to store data derived from the processor 112.
Device 100 may be wirelessly connected to a mobile phone 210A and/or laptop 210B (collectively referred to as mobile device 210). Mobile device 210 may include one or more other devices, such as a tablet, watch, etc. Device 100 may be wirelessly connected to one or more other devices, such as an external server 220 (e.g., the Internet/Cloud). In some examples, device 100 may be self-contained. In examples in which device 100 is self-contained, the machine learning (e.g., deep learning) model may be implemented directly on device 100. In other examples, pre-processing may be performed on device 100 and/or the pre-processed data may be sent to an external processor (e.g., on server 220) to perform machine learning techniques. In the self-contained example, the device 100 may be connected (e.g., wirelessly connected) to a smart user device or an external server (e.g., the Internet/Cloud) for updating the machine learning algorithms, updating a virtual assistant product and recommendation database, and/or sending data to a third party.
As described herein, machine learning may be used to diagnose and/or evaluate tissue health and oral hygiene. For example, the cameras (e.g., consumer-grade intraoral cameras) may capture images within the oral cavity, and the captured images may be presented to a machine learning model. The machine learning model may be a deep learning model, although it should be understood that one or more other machine learning models may be used.
As known by those of skill in the art, machine learning may work with hand-crafted features and relatively simple trainable classifiers. With typical machine learning, feature extraction may be time consuming for training of the model and inference. Deep learning is a part of machine learning based on artificial neural networks with representation learning. Deep learning has a built-in automatic multi-stage feature learning process that learns rich hierarchical representations (e.g., features). With typical deep learning models, model training may be time consuming and require high computation power. Prediction, however, can be very fast and does not necessarily require high computation power. Prediction can be performed at the edge, on smartphones, on embedded devices, etc. Deep learning may be performed on the images as described herein, as it may be difficult to apply typical machine learning due to the difficulty of extracting inflammation features of tissue (e.g., gums) within an oral cavity. To apply deep learning techniques, gum detection and segmentation may be performed to remove tooth features of the image. Data augmentation may be performed, for example, on the training data so that additional features of the oral cavity may be evaluated by the deep learning model.
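For illustration only, the sketch below shows one plausible deep learning classifier for gum-segment images. The disclosure does not fix a network architecture or class scheme, so the compact convolutional topology and the three-class severity scheme (none/mild/severe) are assumptions.

```python
# Minimal sketch of a deep learning classifier for gum-segment images.
# The architecture and three-class severity scheme are illustrative only.
import torch
import torch.nn as nn

class GumConditionNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Stacked convolutional blocks learn hierarchical features
        # automatically, replacing hand-crafted feature extraction.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = GumConditionNet()
logits = model(torch.randn(1, 3, 224, 224))  # one RGB gum-segment image
```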
At 302, images may be identified and/or received that depict oral care conditions (e.g., gingivitis) and that may not depict oral care conditions. For example, images may be provided that show signs of gingivitis and images may be provided that do not show signs of gingivitis. In images depicting oral care conditions, differing degrees of the oral care condition may be shown. For example, images may be provided that show severe cases of gingivitis, mild cases of gingivitis, no gingivitis, and the like. Each of the images may be labeled with an indication of the severity (or absence) of gingivitis. The images and associated indications of the severity of the oral care conditions may be labeled by dental professionals, such as dentists. The images and associated indications of the presence and/or degrees of oral care conditions may be stored on a database, may be provided to the training model via wired and/or wireless methods, and the like.
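One way such labeled data might be organized is a simple manifest pairing each image with a professional-assigned grade. The file name, column names, and numeric grading scale below are illustrative assumptions, not part of the disclosure.

```python
# Hypothetical label manifest: each row pairs an image path with a
# dentist-assigned severity grade (0 = none, 1 = mild, 2 = severe).
import csv

def load_manifest(path: str = "gingivitis_labels.csv"):
    samples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: image_path, severity
            samples.append((row["image_path"], int(row["severity"])))
    return samples

samples = load_manifest()  # list of (path, label) pairs for training
```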
Objects within the images of the oral cavities may be segmented. For example, the oral cavities within the images may be segmented into a tooth portion and a tissue (e.g., gums) portion. Segmenting the images may assist in the training of the deep learning model, as the training will be based on relevant portions of the oral cavity. For example, gingivitis detection is based on gum conditions, and not on conditions of teeth within an oral cavity. To obtain optimal image training (e.g., classification) results, gum regions and teeth regions may be segmented. The gum regions (e.g., only the gum regions) may be presented to the model so that tooth features do not impact the trained image classifier.
For gum detection and segmentation, the image may be converted from the RGB color space to the YCrCb color space. The image may be converted from the RGB color space to the YCrCb color space because many (e.g., most) gum regions may have a Min-Max YCrCb color range. For example, many gum regions may have a YCrCb color range from (0, 138, 77) to (255, 173, 127). It should be understood, however, that such ranges are non-limiting and are for illustration purposes only. The YCrCb values may vary depending on the intraoral cameras used. Because gum regions may have outliers, such as light reflection, post-processing outlier removal and/or region-joining operations may be used to obtain a gum region (e.g., an entire gum region). The post-processing outlier removal may be used in addition, or alternatively, to using color ranges to perform gum image thresholding.
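A minimal sketch of this thresholding approach is shown below, using OpenCV and the illustrative Min-Max range from above. The morphological operations stand in for one possible form of outlier removal and region joining; the input file name is hypothetical.

```python
# Sketch of gum segmentation by YCrCb thresholding. The Min-Max bounds
# are the illustrative values from the text; actual bounds vary by camera.
import cv2
import numpy as np

def segment_gums(bgr: np.ndarray) -> np.ndarray:
    # OpenCV loads images as BGR, so convert BGR -> YCrCb here.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 138, 77), (255, 173, 127))
    # Morphological open/close as one form of outlier removal and region
    # joining (e.g., suppressing specular-reflection speckle in the mask).
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return cv2.bitwise_and(bgr, bgr, mask=mask)  # keep gum pixels only

gums = segment_gums(cv2.imread("oral_cavity.png"))  # hypothetical file
```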
At 304, the images depicting oral care conditions and the associated indication of the oral care condition may be provided to a model (e.g., deep learning model) for training. The images (e.g., data) provided to the model for training may be augmented so that additional data may be provided to the training model. For example, images (e.g., original images) may be provided to the training model, and the images may be flipped (e.g., horizontally or vertically flipped) to provide additional image data to the training model. In other examples, the image may be cropped (e.g., randomly cropped, such as by cropping the image 10% or 20%) to provide additional image data to the training model. Such augmentation techniques are intended as examples only. One or more other techniques for increasing the images/data may be used. Increasing the images (e.g., image data) may increase the accuracy of the oral care condition predictions, as the training data may have additional data for the model to process and learn from. The model (e.g., machine learning model) may be housed on the device 100 and/or on one or more other devices, such as an external server, a mobile device, and the like.
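The flip and crop augmentations described above might be expressed as follows; the torchvision library, the 224-pixel output size, and the exact probabilities are assumptions for illustration.

```python
# Sketch of the described augmentations: horizontal/vertical flips plus
# random crops of roughly 10-20%, applied to a PIL image of a gum segment.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    # Keeping 80-90% of the area approximates a 10% or 20% crop.
    transforms.RandomResizedCrop(size=224, scale=(0.8, 0.9)),
    transforms.ToTensor(),
])
```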
The images provided to the training model may be segmented and/or unsegmented. For example, segmented gum portions may be provided to the training model. In the training phase, the model may be trained to identify common features of data in one or more data sets (e.g., classified data sets). For example, the model may be trained to identify common features of the images showing the presence or absence of gingivitis, the degrees of gingivitis (if present), and the like. The model may be trained by associating properties of provided images with the associated identifiers. For example, the model may be trained by associating oral care conditions (such as gingivitis), and the degrees of the oral care conditions, with the associated indications provided to the training model. As provided herein, the associated indications of the oral care conditions may be provided by one or more sources, such as a dentist and the like. Validation accuracy may be tracked during training of the model. The training of the model may end when validation accuracy and validation loss converge.
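A training loop consistent with that description might look like the following sketch, where GumConditionNet and the data loaders are carried over from the earlier sketches and the patience-based stopping rule is an assumed proxy for convergence of validation accuracy and validation loss.

```python
# Minimal training-loop sketch that tracks validation accuracy and stops
# once validation loss stops improving (a simple convergence proxy).
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs=50, patience=5):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    best_loss, stale = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # Validation pass: accuracy and loss together signal convergence.
        model.eval()
        correct, total, val_loss = 0, 0, 0.0
        with torch.no_grad():
            for x, y in val_loader:
                out = model(x)
                val_loss += loss_fn(out, y).item()
                correct += (out.argmax(1) == y).sum().item()
                total += y.numel()
        print(f"epoch {epoch}: val_acc={correct/total:.3f} val_loss={val_loss:.3f}")
        if val_loss < best_loss - 1e-4:
            best_loss, stale = val_loss, 0
        else:
            stale += 1
            if stale >= patience:  # no further improvement: stop training
                break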
At 306, an image of an oral cavity may be presented to the trained model. The image may include an oral cavity in which an oral care condition may be present or not present. If an oral care condition, such as gingivitis, is present, the oral care condition may be exhibited in degrees (e.g., severe, mild, etc.). The image may be segmented or unsegmented. For example, the image may include the teeth and gums, or the image may contain only the gums. The quality of the image may be determined before the model determines an oral care condition of the oral cavity. For example, the focus of the image may be determined via the Tenengrad technique. The Tenengrad technique convolves an image with Sobel operators and sums the squares of the gradient magnitudes greater than a threshold. Equation 1.0 below shows the Tenengrad technique:

$$TEN = \sum_{x}\sum_{y}\left[G_x(x,y)^2 + G_y(x,y)^2\right] \quad \text{for } \sqrt{G_x(x,y)^2 + G_y(x,y)^2} > \tau, \tag{1.0}$$

where $G_x = S * I$ and $G_y = S' * I$ are the gradient images of image $I$, $\tau$ is the threshold, and $S$ and $S'$ are Sobel's kernel and its corresponding transpose, respectively:

$$S = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix}, \qquad S' = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix}$$
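The focus check of Equation 1.0 might be implemented as below; the threshold values are illustrative assumptions.

```python
# Sketch of the Tenengrad focus measure from Equation 1.0, using OpenCV's
# Sobel operator. The gradient threshold and focus cutoff are assumptions.
import cv2
import numpy as np

def tenengrad(gray: np.ndarray, tau: float = 50.0) -> float:
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # vertical gradient
    mag_sq = gx ** 2 + gy ** 2
    # Sum squared magnitudes only where the magnitude exceeds the threshold.
    return float(mag_sq[np.sqrt(mag_sq) > tau].sum())

img = cv2.imread("oral_cavity.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
print("in focus" if tenengrad(img) > 1e6 else "retake image")  # assumed cutoff
```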
At 308, the image may be presented to the model (e.g., the trained or learned model). The model may reside on device 100 or may reside external to device 100. At 310, the model may determine (e.g., predict) an oral care condition of the oral cavity within the presented image. The oral care condition may include the presence of an oral care condition (e.g., gingivitis), the absence of the oral care condition (e.g., the oral cavity is healthy), a location of the oral care condition, the degree of the oral care condition present within the oral cavity, and the like. For example, the predicted oral care condition may include a prediction of whether the gum in the image is inflamed, where the condition (e.g., inflammation) may be found, remediation possibilities for the oral care condition, etc. The model may determine that the oral cavity within the image includes a mild case of gingivitis at teeth numbered 29 and 30. The model may indicate that the user of the device should treat the oral care condition with, for example, a special toothpaste. In another example, the model may predict a severe case of gingivitis and indicate that the user of the device should immediately be seen by a dentist to treat the gingivitis.
The device (e.g., device 100) may include a smart device application that may include a virtual assistant to improve oral health, device usage, and/or ease of understanding results. The virtual assistant may inform users of the meaning of results of diagnostics of the oral care condition and/or positional information of where problem areas may be located in the oral cavity. The assistant may provide brushing and/or product recommendations to improve overall health based on the individual's diagnostics. Diagnostic information may be relayed to the user through a display device (such as on a display of mobile device 210) and/or may be sent to a third party (e.g., a dentist) via an external server.
At 604, an image of one or more objects (e.g., portions) within the oral cavity may be captured. The image may be captured via a consumer-grade camera. The camera may be coupled and/or housed upon a device, such as device 100. The camera may include one or more lenses, such as one or more super-wide lenses that may focus (e.g., automatically focus). The image may include the gums and/or teeth within an oral cavity.
At 606, the image of the oral cavity (e.g., data indicative of the oral cavity) may be received by one or more processors. The processor may reside on the device 100, on an external device (such as a mobile device or external server), or a combination thereof. The image may be used to train a model (e.g., deep learning model) or to determine an oral care condition, as described herein. In examples in which the image is used to train the model, an associated indication of an oral care condition may be provided to the model. For example, when an image is used to train the model, an indication of an oral care condition (such as gingivitis), a degree of the condition, or an absence of the condition may be provided. The associated indication may be provided by a dental professional, such as a dentist. Images (e.g., data relating to the images) may be augmented by, for example, rotating the images, cropping the images, etc. In examples in which the image is provided to the model (e.g., trained model) to determine (e.g., predict) an oral care condition, an associated indication of the oral care condition may not be provided. The images may be checked and/or filtered to determine whether the image is of a sufficient quality (e.g., proper focus) to train the model and/or to be used by the model to predict an oral care condition.
At 608, portions of the oral cavity within the image may be segmented. For example, the teeth and gums of the image may be segmented, as described herein. The teeth may be removed from the image if the teeth do not provide data relating to an oral care condition, such as a gingivitis condition. The gums may be extracted from the image so that the gums may be provided to the model for training of the model or determination of an oral care condition using the model. At 610, the segmented gums may be input into the model (e.g., deep learning model) for determination of an oral care condition. For example, the segmented gums may be input into the model (e.g., deep learning model) for determination of whether gingivitis is present on the gums, absent on the gums, and/or to what degree of severity the gingivitis is present on the gums.
At 612, the machine learning model (e.g., deep learning model) may determine an oral care condition that may be present within the oral cavity captured in the image. The machine learning model may determine the presence of the oral care condition, the severity of the condition, the absence of the condition, and the like. Information relating to the oral care condition may be provided to the user or another entity (e.g., dentist, virtual assistant, etc.). The device may display the information relating to the determined oral care condition on the device (e.g., device 100) or on an external device (such as a mobile device). The device may provide an indication of which area of the oral cavity may be affected by the oral care condition, what remediation steps may be performed to treat the oral care condition, the urgency of remediating the oral care condition, etc. Information relating to the oral care condition may be displayed, audibly provided, textually provided, and the like. The information relating to the oral care condition may be provided to the user of the device and/or to one or more others, such as a dentist of the user, an online marketplace for ordering of remediation products, and the like. One or more processors may perform one or more of the actions described in process 600.
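Tying the pieces of process 600 together, an end-to-end inference pass might resemble the sketch below. It reuses tenengrad, segment_gums, and GumConditionNet from the earlier sketches; the severity names, the focus cutoff, and the file name are illustrative assumptions, and an untrained model is instantiated here only to exercise the pipeline.

```python
# End-to-end inference sketch: check focus, segment gums, classify, report.
# tenengrad, segment_gums, and GumConditionNet come from earlier sketches.
import cv2
import torch

SEVERITY = {0: "no gingivitis", 1: "mild gingivitis", 2: "severe gingivitis"}

def assess(image_path: str, model) -> str:
    bgr = cv2.imread(image_path)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    if tenengrad(gray) < 1e6:  # assumed focus cutoff
        return "Image out of focus; please retake."
    gums = segment_gums(bgr)  # remove tooth features before classification
    x = torch.from_numpy(cv2.resize(gums, (224, 224))).permute(2, 0, 1)
    x = x.float().unsqueeze(0) / 255.0  # (1, 3, 224, 224), scaled to [0, 1]
    model.eval()
    with torch.no_grad():
        pred = model(x).argmax(1).item()
    return f"Determined condition: {SEVERITY[pred]}"

print(assess("oral_cavity.png", GumConditionNet()))  # hypothetical inputs
```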
While the inventions have been described with respect to specific examples including presently preferred modes of carrying out the inventions, those skilled in the art will appreciate that there are numerous variations and permutations of the above described systems and techniques. It is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the present inventions. Thus, the spirit and scope of the inventions should be construed broadly as set forth in the appended claims.
The present application claims priority to U.S. Provisional Patent Application Ser. No. 63/224,020, filed Jul. 21, 2021, the entirety of which is incorporated herein by reference.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2022/036479 | 7/8/2022 | WO |
Number | Date | Country
---|---|---
63224020 | Jul 2021 | US