The present disclosure relates to a method and a device for processing a scan image from a three-dimensional scanner. Specifically, the disclosure relates to a method and a device for recording a process of constructing a three-dimensional image model based on information acquired from a three-dimensional oral scanner.
An oral scanner is an optical device inserted into a patient's oral cavity to acquire a three-dimensional image of the oral cavity by scanning teeth.
The oral scanner captures the inner shape of the oral cavity as multiple two-dimensional images, which are converted into a three-dimensional image model through a predetermined computational process. The scanned images or the converted three-dimensional images may be displayed on a display of an electronic device coupled to the oral scanner so that a user (e.g., a dentist or a dental hygienist) operating the oral scanner can view the images in real time.
The acquired 3D image model may be provided to a dental laboratory for fabrication of an artificial structure, an artificial tooth, or the like required for treatment. In this case, to understand how the three-dimensional image was formed through a scanning process, it may be necessary to record the user's scanning process, i.e., the process of forming the three-dimensional image, and provide the recorded scanning process to the dental laboratory. In addition, the scanning process shown on the display may be provided to an oral scanner manufacturer, and used as a reference for improving the performance of the oral scanner.
However, capturing and recording a screen displayed on the display of an electronic device may be a significant burden on a processor of the electronic device, and thus recording an unnecessary screen region may adversely affect the computational operation of the electronic device that converts scanned images into a three-dimensional image.
In addition, when an unnecessary screen region is recorded, a video file in which the recorded image is stored may also be large, causing inconvenience when delivering the video file to the dental laboratory.
Conversely, when the screen region to be recorded is too narrow, necessary information may not be properly recorded.
The present disclosure is directed to solving the above-described problems of the prior art by enabling configuration of the screen region to be recorded when recording a scanning process of a three-dimensional scanner displayed on a display screen.
As one aspect of the present disclosure, an electronic device for processing a scan image of a three-dimensional scanner may be provided. An electronic device according to an aspect of the present disclosure comprises: a communication circuit communicatively connected to a three-dimensional scanner; an input device; a display; and at least one processor, wherein the at least one processor is configured to: generate a three-dimensional image of a shape of an oral cavity, based on an image acquired from the three-dimensional scanner via the communication circuit; display, on the display, a screen comprising a first region in which at least the three-dimensional image is displayed; receive a user input for selectively recording a predetermined region of the screen; and generate a video image by recording a region of the screen corresponding to the user input in response to the user input, wherein the recorded region comprises a portion of the first region.
As one aspect of the present disclosure, a method for processing a scan image of a three-dimensional scanner may be provided. A method according to an aspect of the present disclosure comprises: generating, by at least one processor, a three-dimensional image of a shape of an oral cavity based on an image acquired from a three-dimensional scanner; displaying, by the at least one processor, a screen comprising a first region on a display, at least the three-dimensional image being displayed in the first region; receiving, by the at least one processor, a user input for selectively recording a predetermined region of the screen; and generating, by the at least one processor, a video image by recording a region of the screen corresponding to the user input in response to the user input, wherein the recorded region comprises a portion of the first region.
As one aspect of the present disclosure, a non-transitory computer-readable recording medium recording commands for processing a scan image of a three-dimensional scanner may be provided. In a non-transitory computer-readable recording medium according to the present disclosure, the commands, when executed by at least one processor, cause the at least one processor to: generate a three-dimensional image of a shape of an oral cavity based on an image acquired from a three-dimensional scanner; display, on a display, a screen comprising a first region in which at least the three-dimensional image is displayed; receive a user input for selectively recording a predetermined region of the screen; and generate a video image by recording a region of the screen corresponding to the user input in response to the user input, wherein the recorded region comprises a portion of the first region.
As one aspect of the present disclosure, a system for three-dimensional scanning may be provided. A system according to the present disclosure comprises: a three-dimensional scanner comprising an input device and configured to scan a shape of an oral cavity; and an electronic device communicably coupled to the three-dimensional scanner, wherein the electronic device comprises: a communication circuit communicatively connected to the three-dimensional scanner; an input device; a display; and at least one processor, wherein the at least one processor is configured to: generate a three-dimensional image of the shape of the oral cavity based on an image acquired from the three-dimensional scanner via the communication circuit; display, on the display, a screen comprising a first region in which at least the three-dimensional image is displayed; receive a user input for selectively recording a predetermined region of the screen; and generate a video image by recording a region of the screen corresponding to the user input in response to the user input, wherein the recorded region comprises a portion of the first region.
The method and the device for processing a scanned image, according to the present disclosure, may reduce the burden on a processor of an electronic device to enable effective scanning and recording operations, and may also reduce the size of a recording file to enable convenient delivery of the recording file to a dental laboratory or a manufacturer of a three-dimensional scanner, thereby increasing user convenience.
Embodiments of the present disclosure are illustrated for describing the technical idea of the present disclosure. The scope of the claims according to the present disclosure is not limited to the embodiments described below or to the detailed descriptions of these embodiments.
All technical or scientific terms used in the present disclosure have meanings that are generally understood by a person having ordinary knowledge in the art to which the present disclosure pertains, unless otherwise specified. The terms used in the present disclosure are selected for the purpose of clearer explanation of the present disclosure, and are not intended to limit the scope of claims according to the present disclosure.
The expressions “include,” “provided with,” “have” and the like used in the present disclosure should be understood as open-ended terms connoting the possibility of inclusion of other embodiments, unless otherwise mentioned in a phrase or sentence including the expressions.
A singular expression used in the present disclosure can include meanings of plurality, unless otherwise mentioned, and the same applies to a singular expression stated in the claims. The terms “first,” “second,” etc. used in the present disclosure are used to distinguish a plurality of elements from one another, and are not intended to limit the order or importance of the relevant elements.
The term “unit” used in the present disclosure means a software element or hardware element, such as a field-programmable gate array (FPGA) or an application specific integrated circuit (ASIC). However, a “unit” is not limited to software and hardware. A “unit” may be configured to be stored in an addressable storage medium or may be configured to run on one or more processors. Therefore, for example, a “unit” may include elements, such as software elements, object-oriented software elements, class elements, and task elements, as well as processors, functions, attributes, procedures, subroutines, segments of program codes, drivers, firmware, micro-codes, circuits, data, databases, data structures, tables, arrays, and variables. Functions provided in elements and “unit” may be combined into a smaller number of elements and “units” or further subdivided into additional elements and “units.”
The expression “based on” used in the present disclosure is used to describe one or more factors that influence a decision, an action of determination, or an operation described in a phrase or sentence including the relevant expression, and this expression does not exclude an additional factor influencing the decision, the action of determination, or the operation.
In the present disclosure, when a certain element is described as being “coupled to” or “connected to” another element, it should be understood that the certain element may be connected or coupled directly to the other element or that the certain element may be connected or coupled to the other element via a new intervening element.
In the present disclosure, artificial intelligence (AI) may refer to a technology that imitates human learning ability, reasoning ability, and perception ability and implements the same with a computer, and may include concepts of machine learning and symbolic logic. Machine learning (ML) may be an algorithmic technique that automatically classifies or learns the characteristics of input data. Artificial intelligence technology may use machine learning algorithms to analyze input data, learn the results of the analysis, and make determinations or predictions based on the results of the learning. Further, technologies that mimic the cognitive and decision-making functions of the human brain by using machine learning algorithms may also be considered to fall within the realm of artificial intelligence. For example, technical fields of language understanding, visual comprehension, reasoning and prediction, knowledge representation, and action control may be included.
In the present disclosure, machine learning may refer to a process of training a neural network model by using experience in processing data. Machine learning may imply that computer software improves its ability to process data on its own. A neural network model is constructed by modeling correlations between data, wherein the correlations may be represented by multiple parameters. The neural network model extracts features from given data and analyzes the features to derive correlations between data, and repeating this process to optimize the parameters of the neural network model may be referred to as machine learning. For example, a neural network model can learn the mapping (correlation) between inputs and outputs with respect to data provided as input-output pairs. Alternatively, even when only input data is provided, a neural network model can learn the relationships between the provided data by deriving regularities in the data.
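Purely as an illustration (and not as part of the disclosed embodiments), the following minimal Python sketch shows the kind of parameter optimization described above: a simple model is fitted to input-output pairs by repeatedly reducing its prediction error. All values and names are illustrative.

```python
import numpy as np

# Minimal illustration: learn parameters w, b of a linear model y = w*x + b
# from input-output pairs by repeatedly reducing the prediction error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)  # data with a known correlation

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * x + b) - y              # prediction error on the training data
    w -= lr * 2 * (err * x).mean()     # gradient step for w
    b -= lr * 2 * err.mean()           # gradient step for b

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches the underlying correlation (3.0, 0.5)
```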
In the present disclosure, an artificial intelligence learning model, a machine learning model, or a neural network model may be designed to implement a human brain structure on a computer and may include multiple network nodes that mimic neurons in a human neural network and have weights. Multiple network nodes may have connections to one another by mimicking the synaptic activity of neurons sending and receiving signals through synapses. In an artificial intelligence learning model, multiple network nodes can be located on layers having different depths to send and receive data based on convolutional connections. The artificial intelligence learning model may be, for example, an artificial neural network, a convolutional neural network, etc.
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. In the accompanying drawings, identical or corresponding elements are indicated by identical reference numerals. In the following description of embodiments, repeated descriptions of the identical or corresponding elements will be omitted. However, even when a description of an element is omitted, such an element is not intended to be excluded in an embodiment.
The oral scanner 200 according to various embodiments may be inserted into the oral cavity and scan the interior of the oral cavity in a non-contact manner, thereby acquiring an image of the oral cavity. The image of the oral cavity may include at least one tooth, gingiva, and artificial structures insertable into the oral cavity (e.g., orthodontic devices including brackets and wires, implants, dentures, and orthodontic aids inserted into the oral cavity). The oral scanner 200 may use a light source (or a projector) to emit light to the oral cavity of the subject 20 (e.g., at least one tooth or gingiva of the subject 20), and may receive light reflected from the oral cavity of the subject 20 through a camera (or at least one image sensor). According to another embodiment, the oral scanner 200 may scan a diagnostic model of the oral cavity to acquire an image of the diagnostic model of the oral cavity. When the diagnostic model of the oral cavity is a diagnostic model that mimics the shape of the oral cavity of subject 20, an image of the diagnostic model of the oral cavity may be an image of the oral cavity of the subject. For ease of explanation, the following description assumes, but is not limited to, acquiring an image of the oral cavity by scanning the inside of the oral cavity of the subject 20.
The oral scanner 200 according to various embodiments may acquire a surface image of the oral cavity of the subject 20 as a two-dimensional image based on information received through a camera. The surface image of the oral cavity of the subject 20 may include at least one of at least one tooth, gingiva, an artificial structure, a cheek, a tongue, or a lip of the subject 20.
The two-dimensional image of the oral cavity acquired by the oral scanner 200 according to various embodiments may be transmitted to an electronic device 100 which is connected via a wired or wireless communication network. The electronic device 100 may be a computer device or a portable communication device. The electronic device 100 may generate a three-dimensional image (or, three-dimensional oral cavity image or a three-dimensional oral model) of the oral cavity, which is a three-dimensional representation of the oral cavity, based on the two-dimensional images of the oral cavity received from the oral scanner 200. The electronic device 100 may generate the three-dimensional image of the oral cavity by modeling the inner structure of the oral cavity in three dimensions, based on the received two-dimensional images of the oral cavity.
The oral scanner 200 according to another embodiment may scan the oral cavity of the subject 20 to acquire two-dimensional images of the oral cavity, generate a three-dimensional image of the oral cavity based on the acquired two-dimensional images of the oral cavity, and transmit the generated three-dimensional image of the oral cavity to the electronic device 100.
The electronic device 100 according to various embodiments may be communicatively connected to a cloud server (not shown). In this case, the electronic device 100 may transmit two-dimensional images of the oral cavity of the subject 20 or a three-dimensional image of the oral cavity to the cloud server, and the cloud server may store the two-dimensional images of the oral cavity of the subject 20 or the three-dimensional image of the oral cavity, which has been received from the electronic device 100.
According to another embodiment, in addition to a handheld scanner that is inserted into the oral cavity of subject 20, a table scanner (not shown) which is fixed and used in a specific location may be used as the oral scanner. The table scanner can generate a three-dimensional image of a diagnostic model of the oral cavity by scanning the diagnostic model of the oral cavity. In the above case, a light source (or a projector) and a camera of the table scanner are fixed, allowing the user to scan the diagnostic model of the oral cavity while moving the diagnostic model of the oral cavity.
The oral scanner 200 according to various embodiments may include a processor 201, a memory 202, a communication circuit 203, a light source 204, a camera 205, an input device 206, and/or a sensor module 207. At least one of the elements included in the oral scanner 200 may be omitted, or other elements may be added to the oral scanner 200. Additionally or alternatively, some elements may be integrated, or implemented as a single or multiple entities. At least some elements in the oral scanner 200 may be connected to each other via a bus, a general purpose input/output (GPIO), a serial peripheral interface (SPI), or a mobile industry processor interface (MIPI) so as to transmit and/or receive data and/or signals.
The processor 201 of the oral scanner 200 according to various embodiments is an element capable of performing operations or data processing related to control and/or communication of the elements of the oral scanner 200, and may be operatively coupled to the elements of the oral scanner 200. The processor 201 may load commands or data received from other elements of the oral scanner 200 into the memory 202, may process the commands or data stored in the memory 202, and may store resulting data. According to various embodiments, the memory 202 of the oral scanner 200 may store instructions for the above-described operations of the processor 201.
According to various embodiments, the communication circuit 203 of the oral scanner 200 may establish a wired or wireless communication channel with an external device (e.g., the electronic device 100), and may transmit and receive various types of data to and from the external device. According to an embodiment, for wired communication with the external device, the communication circuit 203 may include at least one port to be connected to the external device with a wired cable. In the above case, the communication circuit 203 may communicate with the external device that is connected to a wire via the at least one port. According to an embodiment, the communication circuit 203 may be configured to include a cellular communication module so as to be connected to a cellular network (e.g., 3G, LTE, 5G, Wibro, or Wimax). According to various embodiments, the communication circuit 203 may include a short-range communication module to transmit and receive data to and from external devices by using short-range communication (e.g., Wi-Fi, Bluetooth, Bluetooth Low Energy (BLE), UWB), but the present disclosure is not limited thereto. According to an embodiment, the communication circuit 203 may include a contactless communication module for contactless communication. The contactless communication may include at least one contactless proximity communication technology, for example, near-field communication (NFC) communication, radio frequency identification (RFID) communication, or magnetic secure transmission (MST) communication.
According to various embodiments, the light source 204 of the oral scanner 200 may emit light toward the oral cavity of the subject 20. For example, the light emitted from the light source 204 may be structured light having a predetermined pattern (e.g., a stripe pattern in which differently colored straight lines are continuously shown). The pattern of the structured light may be generated, for example, using a pattern mask or a digital micro-mirror device (DMD), but the present disclosure is not limited thereto. The camera 205 of the oral scanner 200 according to various embodiments may acquire an image of the oral cavity of the subject 20 by receiving light reflected by the oral cavity of the subject 20. The camera 205 may include, for example, a left camera corresponding to the left field of view and a right camera corresponding to the right field of view in order to construct a three-dimensional image by using optical triangulation. The camera 205 may include at least one image sensor, such as a CCD sensor or a CMOS sensor.
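As a non-limiting illustration of the optical triangulation mentioned above, the following Python sketch uses OpenCV block matching on a rectified left/right image pair to estimate disparity and convert it to depth. The focal length and baseline are assumed calibration values, and the file names are placeholders; a real scanner would obtain calibrated values and live frames instead.

```python
import cv2
import numpy as np

# Illustrative optical-triangulation step: recover depth from a rectified
# left/right image pair captured by the left and right cameras.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

focal_px, baseline_mm = 800.0, 12.0   # assumed calibration values
valid = disparity > 0
depth_mm = np.zeros_like(disparity)
depth_mm[valid] = focal_px * baseline_mm / disparity[valid]  # triangulation: Z = f*B/d
```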
The input device 206 of the oral scanner 200 according to various embodiments may receive a user input for controlling the oral scanner 200. The input device 206 may include buttons for receiving push manipulation from the user 10, a touch panel for detecting touch from the user 10, and a speech recognition device including a microphone. For example, the user 10 may use the input device 206 to control the start or stop of scanning.
The sensor module 207 of the three-dimensional scanner 200 according to various embodiments may detect an operational state of the three-dimensional scanner 200 or an external environmental state (e.g., a user's motion) and generate an electrical signal corresponding to the detected state. The sensor module 207 may include, for example, at least one of a gyro sensor, an accelerometer, a gesture sensor, a proximity sensor, or an infrared sensor. The user 10 may use the sensor module 207 to control the start or stop of scanning. For example, when the user 10 moves while holding the three-dimensional scanner 200 in hand, the processor 201 may be controlled to start a scanning operation when an angular velocity measured by the sensor module 207 exceeds a predetermined threshold.
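For illustration only, a minimal sketch of the threshold-based start condition described above is shown below; the threshold value and the gyro readings are hypothetical, and a real device would read the sensor module 207 through its own driver interface.

```python
ANGULAR_VELOCITY_THRESHOLD = 30.0  # deg/s; illustrative value only

def should_start_scan(angular_velocity_deg_s: float) -> bool:
    """Start scanning once motion of the scanner exceeds the threshold."""
    return angular_velocity_deg_s >= ANGULAR_VELOCITY_THRESHOLD

# Example: hypothetical successive readings from the gyro sensor.
readings = [2.1, 4.7, 12.0, 41.3]
print([should_start_scan(v) for v in readings])  # [False, False, False, True]
```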
According to an embodiment, the oral scanner 200 may start scanning by receiving a user input for starting scanning through the input device 206 of the oral scanner 200 or the input device 109 of the electronic device 100, or in response to processing in the processor 201 of the oral scanner 200 or the processor 101 of the electronic device 100. When the user 10 scans the inside of the oral cavity of the subject 20 by using the oral scanner 200, the oral scanner 200 may generate a two-dimensional image of the oral cavity of the subject 20, and may transmit the two-dimensional image of the oral cavity of the subject 20 to the electronic device 100 in real time. The electronic device 100 may display the received two-dimensional image of the oral cavity of the subject 20 on a display. Further, the electronic device 100 may generate (construct) a three-dimensional image of the oral cavity of the subject 20, based on the two-dimensional images of the oral cavity of the subject 20, and may display the three-dimensional image of the oral cavity on the display. The electronic device 100 may display the three-dimensional image, which is being generated, on the display in real time.
The electronic device 100 according to various embodiments may include at least one processor 101, at least one memory 103, a communication circuit 105, a display 107, and/or an input device 109. At least one of the elements included in the electronic device 100 may be omitted, or other elements may be added to the electronic device 100. Additionally or alternatively, some elements may be integrated, or implemented as a single or multiple entities. At least some elements in the electronic device 100 may be connected to each other via a bus, a general purpose input/output (GPIO), a serial peripheral interface (SPI), or a mobile industry processor interface (MIPI) so as to exchange data and/or signals.
According to various embodiments, the at least one processor 101 of the electronic device 100 may be an element capable of performing operations or data processing related to control and/or communication of the elements of the electronic device 100 (e.g., memory 103). The at least one processor 101 may be operatively coupled to the elements of the electronic device 100, for example. The at least one processor 101 may load commands or data received from other elements of the electronic device 100 into the at least one memory 103, may process the commands or data stored in the at least one memory 103, and store the resulting data.
According to various embodiments, the at least one memory 103 of the electronic device 100 may store instructions for operations of the at least one processor 101. The at least one memory 103 may store correlation models constructed based on a machine learning algorithm. The at least one memory 103 may store data (e.g., a two-dimensional image of the oral cavity acquired through oral scanning) received from the oral scanner 200.
According to various embodiments, the communication circuit 105 of the electronic device 100 may establish a wired or wireless communication channel with an external device (e.g., the oral scanner 200 or the cloud server), and may transmit or receive various types of data to or from the external device. According to an embodiment, for wired communication with the external device, the communication circuit 105 may include at least one port so as to be connected to the external device through a wired cable. In the above case, the communication circuit 105 may communicate with the external device that is connected to a wire via the at least one port. According to an embodiment, the communication circuit 105 may be configured to include a cellular communication module so as to be connected to a cellular network (e.g., 3G, LTE, 5G, Wibro, or Wimax). According to various embodiments, the communication circuit 105 may include a short-range communication module to transmit and receive data to and from external devices by using short-range communication (e.g., Wi-Fi, Bluetooth, Bluetooth Low Energy (BLE), or UWB), but the present disclosure is not limited thereto. According to an embodiment, the communication circuit 105 may include a contactless communication module for contactless communication. The contactless communication may include at least one contactless proximity communication technology, for example, near-field communication (NFC) communication, radio frequency identification (RFID) communication, or magnetic secure transmission (MST) communication.
The display 107 of the electronic device 100 according to various embodiments may display various screens based on control of the processor 101. The processor 101 may display, on the display 107, a two-dimensional image of the oral cavity of subject 20 received from the oral scanner 200 and/or a three-dimensional image of the oral cavity in which the inner structure of the oral cavity is modeled. For example, the two-dimensional image and/or the three-dimensional image of the oral cavity may be displayed through a particular application. In the above case, the user 10 may edit, save, and delete the two-dimensional image and/or the three-dimensional image of the oral cavity.
The input device 109 of the electronic device 100 according to various embodiments may receive commands or data, which are to be used by an element (e.g., the at least one processor 101) of the electronic device 100, from an external source (e.g., a user) of the electronic device 100. The input device 109 may include, for example, a microphone, a mouse, or a keyboard. According to an embodiment, the input device 109 may be implemented in the form of a touch sensor panel that may be coupled to the display 107 to recognize contact or proximity of various external objects.
According to various embodiments, the electronic device 100 may convert each of the multiple two-dimensional images 310 of the oral cavity of the subject 20 into a set of multiple points having three-dimensional coordinate values. For example, the electronic device 100 may convert each of the multiple two-dimensional images 310 into a point cloud, which is a set of data points having three-dimensional coordinate values. For example, a set of point clouds, which are three-dimensional coordinate values based on the multiple two-dimensional images 310, may be stored as raw data for the oral cavity of subject 20. By aligning the point clouds, each of which is a set of data points having three-dimensional coordinate values, the electronic device 100 may complete a full dental model.
According to various embodiments, the electronic device 100 may reconfigure (reconstruct) a three-dimensional image of the oral cavity. For example, the electronic device 100 may reconstruct the three-dimensional image 320 of the oral cavity of the subject 20 by merging the set of point clouds stored as raw data, reconfiguring multiple points by using a Poisson algorithm, and transforming the points into a closed three-dimensional surface.
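A minimal sketch of such an align-and-reconstruct pipeline is shown below, assuming the Open3D library is available. The disclosure only states that point clouds are aligned and reconstructed with a Poisson algorithm; the use of ICP for alignment, and the correspondence-distance and Poisson-depth parameters, are illustrative assumptions.

```python
import open3d as o3d

def reconstruct(point_clouds):
    """Align per-frame point clouds and fuse them into a closed surface."""
    merged = point_clouds[0]
    for pcd in point_clouds[1:]:
        # ICP is one possible alignment method; parameters are illustrative.
        reg = o3d.pipelines.registration.registration_icp(
            pcd, merged, max_correspondence_distance=1.0,
            estimation_method=o3d.pipelines.registration
                .TransformationEstimationPointToPoint())
        merged += pcd.transform(reg.transformation)  # align, then accumulate raw data
    merged.estimate_normals()  # Poisson reconstruction requires oriented normals
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        merged, depth=9)
    return mesh  # closed three-dimensional surface
```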
When the data acquisition program is executed, the electronic device 100 may display images scanned by the oral scanner 200 and may provide user interfaces for processing and manipulating the images. In an embodiment, the data acquisition program screen 400 includes user interfaces including a data display region 410, a live view region 420, a model view region 430, a function box region 440, a function option region 450, and an icon display region 460. The user interfaces are illustrative, and the data acquisition program screen 400 may include any additional user interfaces as necessary.
The data display region 410 is a region for displaying images received from the oral scanner and a three-dimensional image generated based thereon. In an embodiment, the data display region 410 includes the live view region 420 and the model view region 430. In an embodiment, the data display region 410 includes the function option region 450.
The live view region 420 displays images received from the oral scanner 200. In an embodiment, the electronic device 100 may display two-dimensional images of the oral cavity, which is currently being scanned by the oral scanner 200, in the live view region 420 in real time. The live view region 420 is movable and resizable by the user and is separable from the data acquisition program screen 400. In an embodiment, the user can configure the live view region 420 so as not to be displayed.
The electronic device 100 may display, in the model view region 430, a three-dimensional image model generated from the two-dimensional images acquired from the oral scanner 200.
The function box region 440 includes a user interface for providing functions of modifying/editing or analyzing the displayed three-dimensional image model, and a user interface for displaying a device state. In an embodiment, the function box region 440 includes a trimming function interface 442 for selecting and deleting unnecessary portions of data acquired during a scanning process, a function tool interface 444 for editing or storing a generated three-dimensional image, a treatment information interface 446 for displaying an image regarding treatment information of each tooth, and a device state display interface 448 for displaying the device state of the oral scanner 200. For example, the function tool interface 444 includes a playback control interface 444-1 for playing back a recorded screen, which will be described later.
In response to a user input of selecting one function interface in the function box region 440, the electronic device 100 may display detailed options thereof in the function option region 450. In an embodiment, when the user selects the playback control interface 444-1 of the function tool interface 444 in the function box region 440, the electronic device 100 may display, in the function option region 450, options for playback of the recorded screen, such as a play/stop button, an interface for controlling a playback position/speed, etc. When a function that does not require detailed options is selected in the function box region 440, the function option region 450 may not be displayed.
The icon display region 460 is a region for providing screen recording and capturing functions, and may include a recording region configuration icon 462 and a recording start/end icon 464. The recording region configuration icon 462 provides an interface for configuring a screen region to be recorded, which will be described later.
The electronic device 100 may convert multiple images received from the oral scanner 200 into a three-dimensional image 520, for example, by using the method described above.
In an embodiment, the size of the generated three-dimensional image 520 may change as various regions of the oral cavity are scanned at various angles. In an embodiment, the model view region 430 may be enlarged/reduced based on the size of the generated three-dimensional image 520.
The process of change in the three-dimensional image 520 that is generated as scanning progresses may be recorded by recording at least a partial region of the data acquisition program screen 400. In an embodiment, the electronic device 100 may start or end screen recording in response to a user input of selecting the recording start/end icon 464 in the icon display region 460 of the data acquisition program screen 400.
In an embodiment, when a user presses (initiates an input through) the input device 206 of the oral scanner 200 or a capture/recording button (not shown), the electronic device 100 may start or end screen recording in response thereto. Additionally or alternatively, the screen recording may be started/ended in response to a user input, such as a double-click or long-click on the input device 206 of the oral scanner 200. According to the embodiment, when it is necessary to start/end recording while operating the oral scanner 200, the user does not need to separately manipulate the input device 109 of the electronic device 100 coupled to the oral scanner 200, thereby increasing user convenience and also contributing to the hygiene of the user operating the oral scanner.
In an embodiment, the electronic device 100 may automatically start screen recording when a preconfigured condition is met. For example, during scanning of the oral cavity using the oral scanner 200, the electronic device 100 may detect that an image received from the oral scanner 200 or a generated three-dimensional image includes teeth. The electronic device 100 may automatically start screen recording in response to the detection of the teeth. In an embodiment, the electronic device 100 may determine whether the image received from the oral scanner 200 or the generated three-dimensional image includes teeth, by using a machine learning model which has been trained using images of teeth as training data. In this embodiment, the electronic device 100 may stop screen recording when a predetermined condition is met, such as when no tooth is detected in the scanned image or three-dimensional image for a predetermined time. According to the embodiment, the user does not need to manipulate the input device 109 of the electronic device 100, which is coupled to the oral scanner 200, in order to start/end recording, thereby increasing user convenience and contributing to user hygiene. Also, since recording is not performed while the oral cavity is not being scanned, it is possible to prevent unnecessary portions from being recorded.
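For illustration, the following sketch shows one way such automatic start/stop behavior could be structured; detect_teeth (a trained classifier returning whether a frame shows teeth) and recorder are hypothetical interfaces not defined by the disclosure, and the timeout value is illustrative.

```python
import time

NO_TOOTH_TIMEOUT_S = 5.0  # stop after this long without a detected tooth; illustrative

def auto_record(frames, detect_teeth, recorder):
    """detect_teeth: hypothetical trained classifier, frame -> bool.
    recorder: hypothetical object with start()/stop()/write(frame)."""
    recording, last_tooth_time = False, 0.0
    for frame in frames:
        now = time.monotonic()
        if detect_teeth(frame):
            last_tooth_time = now
            if not recording:
                recorder.start()          # teeth detected: begin recording
                recording = True
        elif recording and now - last_tooth_time > NO_TOOTH_TIMEOUT_S:
            recorder.stop()               # no teeth for a while: end recording
            recording = False
        if recording:
            recorder.write(frame)
```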
When the screen recording ends, the electronic device 100 may store the recording as a video image file in the memory 103 of the electronic device 100. In an embodiment, the video image file may be stored in a remote storage, such as a cloud server. In an embodiment, the electronic device 100 may adjust the resolution of the recorded video based on a user input.
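A minimal sketch of storing recorded frames as a video file with a user-adjustable resolution, assuming OpenCV is available; the container, codec, frame rate, and size values are illustrative, and choosing a smaller size reduces the resulting file size.

```python
import cv2

def save_recording(frames, path="scan_recording.mp4", size=(1280, 720), fps=30.0):
    """Write captured screen frames (BGR uint8 arrays) to a video file."""
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    for frame in frames:
        writer.write(cv2.resize(frame, size))  # conform every frame to the file size
    writer.release()
```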
According to embodiments of the present disclosure, the electronic device 100 may determine, based on a user input from the user of the oral scanner 200, a target region, which is to be recorded when recording the data acquisition program screen 400. In an embodiment, the user may directly or indirectly input, into the electronic device 100, a user input for selecting a desired recording region from among preconfigured recording regions, as described later.
In an embodiment, the electronic device 100 may configure a recording region by using a screen coordinate system.
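For illustration, a recording region may be represented in screen coordinates as a (left, top, width, height) rectangle. The sketch below, assuming the mss screen-capture library, captures only such a configured rectangle; all coordinate values are placeholders rather than actual layout values of the data acquisition program screen 400.

```python
import numpy as np
import mss

# Recording regions in screen coordinates; the values are placeholders for
# regions such as the data display region 410 or the model view region 430.
REGIONS = {
    "full_screen":  {"left": 0,   "top": 0,  "width": 1920, "height": 1080},
    "data_display": {"left": 0,   "top": 80, "width": 1440, "height": 1000},
    "model_view":   {"left": 480, "top": 80, "width": 960,  "height": 1000},
}

def grab_region(name: str) -> np.ndarray:
    with mss.mss() as sct:
        shot = sct.grab(REGIONS[name])   # capture only the configured rectangle
        return np.array(shot)[:, :, :3]  # BGRA -> BGR frame
```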
In an embodiment, the entire region of the data acquisition program screen 400 may be configured as the region to be recorded.
According to another embodiment, only the data display region 410 of the data acquisition program screen 400 may be recorded.
According to yet another embodiment, only the model view region 430 of the data acquisition program screen 400 may be recorded.
In an embodiment, the electronic device 100 tracks and records a region of the display corresponding to the location of a user input received via the input device 109 of the electronic device 100 or the input device 206 of the oral scanner 200.
For example, the electronic device 100 may record a region 640 of a predetermined size that is positioned based on a pointer 645 corresponding to the location of the user input.
In an embodiment, the electronic device 100 may record a region 650 that includes all of additional regions along with the region 640 corresponding to the user input location. In another embodiment, the electronic device 100 may record a region (not shown) that includes the entire region from the additional regions to the pointer 645 corresponding to the user input location. The additional regions may include the model view region 430. However, this is only an example, and any other regions may be recorded along with the region corresponding to the user input location. In an embodiment, the additional regions may also include the user-designated region described later.
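A minimal sketch of how the recorded rectangle could be derived each frame from the pointer location and an additional region follows, using rectangles in screen coordinates; the tracked-region size and all coordinate values are illustrative assumptions.

```python
# Rectangles as (left, top, width, height) in screen coordinates.
TRACK_W, TRACK_H = 400, 300  # size of the region around the pointer; illustrative

def union(a, b):
    """Smallest rectangle containing both a and b."""
    left, top = min(a[0], b[0]), min(a[1], b[1])
    right = max(a[0] + a[2], b[0] + b[2])
    bottom = max(a[1] + a[3], b[1] + b[3])
    return (left, top, right - left, bottom - top)

def recording_rect(pointer_xy, model_view_rect, screen_w, screen_h):
    px, py = pointer_xy
    # Region corresponding to the pointer location, clamped to the screen.
    left = max(0, min(px - TRACK_W // 2, screen_w - TRACK_W))
    top = max(0, min(py - TRACK_H // 2, screen_h - TRACK_H))
    # Record the pointer region together with the additional (model view) region.
    return union((left, top, TRACK_W, TRACK_H), model_view_rect)
```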
When the region 650 that includes both the region 640 corresponding to the location of the user input and the additional regions is recorded, the size of the entire recording region 650 may change as the location of the user input changes. In this case, the electronic device 100 may resize a recorded image to fit the size of a video image file in which the recorded image is stored.
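For illustration, the following sketch conforms a variable-size recorded frame to a fixed video frame size by scaling and padding, assuming OpenCV; the output size is illustrative.

```python
import cv2

def fit_frame(frame, out_w=1280, out_h=720):
    """Resize a variable-size recorded frame to a fixed video size,
    padding with black borders so the content is not distorted."""
    h, w = frame.shape[:2]
    scale = min(out_w / w, out_h / h)
    new_w, new_h = int(w * scale), int(h * scale)
    resized = cv2.resize(frame, (new_w, new_h), interpolation=cv2.INTER_AREA)
    pad_x, pad_y = out_w - new_w, out_h - new_h
    return cv2.copyMakeBorder(resized, pad_y // 2, pad_y - pad_y // 2,
                              pad_x // 2, pad_x - pad_x // 2,
                              cv2.BORDER_CONSTANT, value=(0, 0, 0))
```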
Recording the region 650, which includes the region 640 corresponding to the user input location and any necessary additional regions, makes it possible to record the information the user needs without recording unnecessary portions of the screen.
In an embodiment, the electronic device 100 may selectively record only a portion of the region of the data acquisition program screen 400 based on a user input for arbitrarily configuring a region to be recorded.
In an embodiment, a user uses an input device (e.g., a mouse, a touch-sensitive display, etc.) of the electronic device 100 to configure a desired recording region. The user may configure the recording region through a drag input. The user may configure a necessary recording region in consideration of the information to be stored in a recording file. A recording region may be configured before or during scanning. In an embodiment, the user may configure a desired recording region through an input device of the oral scanner 200 and/or movement of the oral scanner 200. For example, a recording region may be configured based on the movement of the oral scanner detected through the sensor module 207 of the oral scanner 200. In an embodiment, the region 710 including the live view region 420 and the model view region 430 may be configured as a recording region. Alternatively, only a portion 720 of the model view region 430 may be configured as a recording region. Without being limited thereto, the recording region may be arbitrarily configured.
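A minimal sketch of normalizing such a drag input into a recording region, independent of the drag direction, is shown below; the coordinate values are illustrative.

```python
def rect_from_drag(start, end):
    """Normalize a drag gesture (press point -> release point) into a
    (left, top, width, height) recording region, regardless of drag direction."""
    (x0, y0), (x1, y1) = start, end
    return (min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0))

# Example: dragging from lower-right to upper-left yields the same region.
assert rect_from_drag((900, 700), (300, 200)) == (300, 200, 600, 500)
```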
In an embodiment, the electronic device 100 may provide an interface for selecting a region to be recorded from among options including a full screen recording 810, recording of the data display region or the model view region, a user-designated region recording 840, and a pointer tracking recording 850.
When the user selects the full screen recording 810, the entire region of the data acquisition program screen 400 is configured as the region to be recorded.
When the user-designated region recording 840 is selected, the user may configure the location and size of the recording region, for example, through a drag input.
When the pointer tracking recording 850 is selected, a region including a region corresponding to the location of the user input is configured as a region to be recorded.
In step 910, intraoral scanning using the oral scanner 200 is started. The oral scanner 200 may start scanning in response to an input using the input device 206 of the oral scanner 200, or in response to an input using a scan start interface 451 of the function option region 450 of the data acquisition program screen 400. The oral scanner 200 may transmit an image of a shape of a subject's oral cavity to the electronic device 100.
In step 920, the electronic device 100 receives the image of the shape of the oral cavity from the oral scanner 200 via the communication circuit 105 connected to the oral scanner 200. The received image may be a two-dimensional image.
In step 930, the electronic device 100 generates a three-dimensional image of the shape of the oral cavity based on the image of the shape of the oral cavity received from the oral scanner 200.
In step 940, the electronic device 100 displays the three-dimensional image generated based on the images received from the oral scanner 200 on a screen of the display 107. In an embodiment, the images regarding the shape of the oral cavity may be displayed on the data acquisition program screen 400. The data acquisition program screen may include a first screen region (e.g., the model view region 430) for displaying a three-dimensional image of the shape of the oral cavity, a second screen region (e.g., the live view region 420) for displaying a two-dimensional image of the shape of the oral cavity received from the oral scanner 200, and a third screen region (e.g., the function box region 440, the function option region 450, or the icon display region 460) for displaying an interface that provides functions for controlling the three-dimensional image of the shape of the oral cavity.
In step 950, the electronic device 100 receives a user input for selectively recording a predetermined region of the display screen, such as the data acquisition program screen 400. In the present embodiment, step 950 is illustrated as being performed subsequent to step 940. However, step 950 may be performed at any point in the oral cavity image recording method 900.
In step 960, in response to the user input for selectively recording a screen region, the electronic device 100 records a region corresponding to the user input to generate a video image. In an embodiment, the generated video image file may be stored in the memory 103 of the electronic device 100 and/or in a storage of a remote server.
In an embodiment, when the user input instructs selective recording of the first screen region, the electronic device 100 records a region that includes the first screen region but does not include the second screen region and the third screen region. In another embodiment, when the user input instructs selective recording of the first screen region and the second screen region, the electronic device 100 records a region that includes the first screen region and the second screen region but does not include the third screen region. In another embodiment, when the user input instructs recording of a screen region that includes at least one of the first, second, and third screen regions, the electronic device 100 records a region corresponding to the user input. When the user input instructs recording of the entire screen region, the electronic device 100 may record the entire screen region including the first, second, and third regions.
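For illustration, the selective-recording logic of this step can be viewed as mapping the user input to a union of screen-region rectangles. In the sketch below, the region coordinates and selection keys are hypothetical stand-ins for the first, second, and third screen regions.

```python
from functools import reduce

# Screen regions as (left, top, width, height); values are illustrative.
FIRST = (480, 80, 960, 1000)   # e.g., model view region 430
SECOND = (0, 80, 480, 360)     # e.g., live view region 420
THIRD = (1440, 80, 480, 1000)  # e.g., function box/option/icon regions

def union(a, b):
    left, top = min(a[0], b[0]), min(a[1], b[1])
    right = max(a[0] + a[2], b[0] + b[2])
    bottom = max(a[1] + a[3], b[1] + b[3])
    return (left, top, right - left, bottom - top)

def region_for(selection):
    """Map a user selection to the rectangle that is actually recorded."""
    included = {
        "first_only": [FIRST],                    # excludes second and third
        "first_and_second": [FIRST, SECOND],      # excludes third
        "full_screen": [FIRST, SECOND, THIRD],
    }[selection]
    return reduce(union, included)
```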
In step 1010, a procedure of configuring a recording region of the data acquisition program screen 400 is started. In an embodiment, the electronic device 100 may start the recording region configuration procedure in response to an input of selecting the recording region configuration icon 462 on the data acquisition program screen 400.
In step 1020, the electronic device 100 configures a recording region in response to an input of selecting one of predetermined recording regions, a user-designated recording region, or a pointer tracking recording region. The predetermined recording regions may include the entire region of the data acquisition program screen 400, the data display region 410, and the model view region 430.
When the user-designated recording region is selected, the electronic device 100 provides, in step 1030, a user interface for configuring the recording region. In an embodiment, the user may configure the recording region through a drag input, as described above.
In step 1040, the electronic device 100 stores the configured recording region in, for example, the memory 103.
Various embodiments of the present disclosure may be implemented as software recorded in a machine-readable recording medium. The software may be software for implementing the above-mentioned various embodiments of the present disclosure. The software may be inferred from various embodiments of the present disclosure by programmers in a technical field to which the present disclosure belongs. For example, the software may be a machine-readable command (e.g., code or a code segment) or program. A machine may be a device capable of operating according to an instruction called from the recording medium, and may be, for example, a computer. In an embodiment, the machine may be the electronic device 100 according to embodiments of the present disclosure. In an embodiment, a processor of the machine may execute a called command to cause elements of the machine to perform a function corresponding to the command. In an embodiment, the processor may be the at least one processor 101 according to embodiments of the present disclosure. The recording medium may refer to any type of recording medium which stores data capable of being read by the machine. The recording medium may include, for example, ROM, RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like. In an embodiment, the recording medium may be the at least one memory 103. In an embodiment, the recording medium may be distributed to computer systems which are connected to each other through a network. The software may be distributed, stored, and executed in the computer systems. The recording medium may be a non-transitory recording medium. The non-transitory recording medium refers to a tangible medium that exists irrespective of whether data is stored semi-permanently or temporarily, and does not include a transitorily transmitted signal.
Although the technical idea of the present disclosure has been described by the examples described in some embodiments and illustrated in the accompanying drawings, it should be noted that various substitutions, modifications, and changes can be made without departing from the technical scope of the present disclosure which can be understood by those skilled in the art to which the present disclosure pertains. In addition, it should be noted that such substitutions, modifications, and changes are intended to fall within the scope of the appended claims.
Number | Date | Country | Kind
---|---|---|---
10-2021-0069828 | May 2021 | KR | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/KR2022/007420 | 5/25/2022 | WO |