This application claims priority to Korean Patent Application No. 10-2022-0174721, filed on Dec. 14, 2022, Korean Patent Application No. 10-2022-0177285, filed on Dec. 16, 2022, Korean Patent Application No. 10-2022-0177282, filed on Dec. 16, 2022, and Korean Patent Application No. 10-2022-0177280, filed on Dec. 16, 2022, in the Korean Intellectual Property Office. The entire disclosures of all these applications are hereby incorporated by reference.
The present disclosure relates to a method and a system for AR object tracking. More specifically, the present disclosure relates to a method and a system for obtaining 3D information within a single image and tracking an augmented reality (AR) object based on the obtained 3D information.
Augmented Reality (AR) refers to a computer graphics technique that synthesizes virtual objects or information with the real environment so that the virtual objects appear to exist together with the physical objects of the original environment.
Creating and exploring virtual annotations in the real environment through devices such as mobile phones is a typical mobile augmented reality application.
Here, annotation (AR annotation) in the augmented reality environment means virtual information registered to an object in the real environment.
For example, augmented reality technology based on AR annotation may include a method wherein virtual information is overlaid and displayed on an actual image captured by a user through the display of a mobile electronic device.
The augmented reality technology has the advantage of providing the user with realistic content that combines real objects and virtual information.
The point here is that, to produce or play augmented reality content, real objects and virtual information must be placed with predetermined poses at predetermined positions within a predetermined 3D space.
Conventionally, a GPS orientation sensor, a position estimation and navigation system, and/or various computer vision methods are used to place virtual information at an accurate location and/or posture with respect to a real object.
A typical example employed for accurate positioning of virtual information is the Simultaneous Localization and Mapping (SLAM) method.
Specifically, according to the conventional SLAM method, an actual object input through a captured image is matched to a learned 3D space in a database, and virtual information is augmented and displayed on the 3D space based on the position and/or posture information of an input means (e.g., a camera).
However, the conventional method above has a problem in that virtual information may not be placed accurately at positions outside the learned 3D space.
Moreover, when no depth camera is involved, the conventional method requires a plurality of images captured from different viewpoints to obtain 3D information and introduces the inconvenience of having to perform the additional tasks of analyzing the plurality of captured images and reconstructing 3D information from them.
However, technology for easily integrating a depth camera into the mobile environment is still at an early stage of development.
Therefore, there is a need to develop new methods that may solve the problems above while efficiently implementing AR annotations.
An object of the present disclosure is to provide a method and a system for AR object tracking which track an augmented reality object based on 3D information obtained from a single image.
Specifically, the present disclosure according to one embodiment provides a method and a system for AR object tracking which reconstruct 3D information within a single image based on a 2D or 3D model with a preconfigured shape and track an augmented reality object based on the reconstructed 3D information.
Also, the present disclosure according to one embodiment provides a method and a system for AR object tracking which store and share an AR object tracking model based on reconstructed 3D information.
Technical objects to be achieved by the present disclosure and embodiments according to the present disclosure are not limited to the technical objects described above, and other technical objects may also be addressed.
A method for AR object tracking according to an embodiment of the present disclosure, by which a tracking application executed by at least one processor of a terminal performs AR object tracking, comprises obtaining first image data; providing primitive models each having a preconfigured 2D or 3D shape; determining at least one of the provided primitive models as a primitive application model; displaying the primitive application model on the first image data; performing alignment between the primitive application model and a target object within the first image data; setting attribute values specifying the shape of the primitive application model; obtaining 3D depth data including each descriptor of the target object and a distance value corresponding to the descriptor based on the set attribute values; and performing AR object tracking based on the 3D depth data.
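For illustration only, the sequence of operations recited above may be summarized as the following non-limiting Python sketch; every function, value, and data layout in the sketch is a hypothetical placeholder rather than part of the claimed method.

```python
# Non-limiting outline of the recited steps. Every function below is a
# hypothetical stub standing in for the corresponding operation; the detailed
# embodiments describe how each operation may actually be carried out.
import numpy as np

def obtain_first_image():
    # Obtaining first image data (here: a blank frame standing in for a capture).
    return np.zeros((480, 640, 3), np.uint8)

def provide_primitive_models():
    # Providing primitive models with preconfigured 2D or 3D shapes.
    return ["rectangle", "cube", "cylinder"]

def align(model, image):
    # Performing alignment between the primitive application model and the
    # target object (e.g., matching model edges to target-object edges).
    return model

def set_attributes(model, values):
    # Setting attribute values (scale, diameter, radius, ...) specifying the shape.
    return {"model": model, **values}

def compute_depth_data(model_with_attributes):
    # Obtaining 3D depth data: descriptors plus a distance value per descriptor.
    return {"descriptors": [(0, 0), (1, 0), (1, 1), (0, 1)],
            "distances": [0.5, 0.5, 0.5, 0.5]}

image = obtain_first_image()
primitive_application_model = provide_primitive_models()[1]      # user selection
aligned = align(primitive_application_model, image)
depth_data = compute_depth_data(set_attributes(aligned, {"scale": 0.3}))
print(depth_data)   # input to the AR object tracking step
```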
At this time, the performing of alignment includes matching edges of the primitive application model to the edges of the target object.
Also, the setting of the attribute values includes setting attribute values of the primitive application model based on actual measurements of the shape of the target object.
Also, the performing of the AR object tracking based on the 3D depth data includes generating a 3D definition model based on the 3D depth data, wherein the 3D definition model is a model trained to track changes in the 6 degrees of freedom (DoF) parameters of a predetermined object.
Also, the performing of the AR object tracking based on the 3D depth data further comprises determining a target virtual object to be augmented and displayed based on the target object, generating an AR environment model by anchoring the target virtual object and the 3D definition model, and performing the AR object tracking based on the AR environment model.
Also, the performing of the AR object tracking based on the AR environment model includes obtaining second image data, detecting a target object within the second image data based on the AR environment model, and augmenting and displaying the target virtual object on the second image data based on the AR environment model.
Also, the augmenting and displaying of the target virtual object on the second image data includes tracking changes in the 6 DoF parameters of a target object within the second image data based on the AR environment model, tracking changes in the 6 DoF parameters of the target virtual object according to the changes in the 6 DoF parameters of the target object based on the AR environment model, and augmenting and displaying the target virtual object on the second image data based on the changes in the 6 DoF parameters of the tracked target virtual object.
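As a purely illustrative aid rather than the claimed implementation, the following Python sketch shows how a target virtual object anchored to a target object may follow the changes in the target object's 6 DoF parameters between two frames; the 4x4 homogeneous-transform representation and all names are assumptions introduced for this example.

```python
# Illustrative numpy sketch (not the claimed implementation): poses are 4x4
# homogeneous transforms in camera coordinates; the anchoring semantics and all
# variable names are assumptions introduced for this example.
import numpy as np

def follow_target(target_pose_prev, target_pose_curr, virtual_pose_prev):
    # Change in the target object's 6 DoF parameters between two frames.
    delta = target_pose_curr @ np.linalg.inv(target_pose_prev)
    # Apply the same change to the anchored target virtual object so that its
    # pose tracks the target object's rotation (R) and translation (T) changes.
    return delta @ virtual_pose_prev

# Example: the target object translates 0.1 along X; the virtual object follows.
prev_pose = np.eye(4)
curr_pose = np.eye(4)
curr_pose[0, 3] = 0.1
virtual_prev = np.eye(4)
virtual_prev[2, 3] = 0.5          # virtual object anchored 0.5 in front of the target
print(follow_target(prev_pose, curr_pose, virtual_prev)[:3, 3])   # [0.1 0.  0.5]
```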
Also, the method further comprises configuring a group member having rights for sharing at least one of the 3D definition model and the AR environment model.
Meanwhile, a system for AR object tracking according to an embodiment of the present disclosure comprises at least one memory storing a tracking application; and at least one processor performing AR object tracking by reading the tracking application stored in the memory, wherein commands of the tracking application include commands for performing obtaining first image data, providing primitive models each having a preconfigured 2D or 3D shape, determining at least one of the provided primitive models as a primitive application model, displaying the primitive application model on the first image data, performing alignment between the primitive application model and a target object within the first image data, setting attribute values specifying the shape of the primitive application model, obtaining 3D depth data including each descriptor of the target object and a distance value corresponding to the descriptor based on the set attribute values, and performing AR object tracking based on the 3D depth data.
At this time, the commands of the tracking application further include commands for performing generating a 3D definition model based on the 3D depth data, determining a target virtual object to be augmented and displayed based on the target object, generating an AR environment model by anchoring the target virtual object and the 3D definition model, and performing the AR object tracking based on the AR environment model.
A method and a system for AR object tracking according to an embodiment of the present disclosure may perform object tracking in the 3D space through a single image without involving a separate depth camera by tracking an augmented reality object based on the 3D information obtained from the single image.
Also, a method and a system for AR object tracking according to an embodiment of the present disclosure may improve data processing efficiency and accuracy during the 3D information reconstruction process by reconstructing 3D information within a single image and tracking an augmented reality object based on a 2D or 3D model with a preconfigured shape.
Also, a method and a system for AR object tracking according to an embodiment of the present disclosure may minimize unnecessary consumption of resources, for example by preventing redundant efforts for tracking a predetermined object, and simultaneously improve the quality of an object tracking service by storing and sharing an AR object tracking model based on the reconstructed 3D information.
However, it should be noted that the technical effects of the present disclosure are not limited to the technical effects described above, and other technical effects not mentioned herein may be clearly understood by those skilled in the art to which the present disclosure belongs from the description below.
Since the present disclosure may be modified in various ways and may provide various embodiments, specific embodiments will be depicted in the appended drawings and described in detail with reference to the drawings. The effects and characteristics of the present disclosure and a method for achieving them will be clearly understood by referring to the embodiments described later in detail together with the appended drawings. However, it should be noted that the present disclosure is not limited to the embodiments disclosed below but may be implemented in various forms. In the following embodiments, the terms such as “first” and “second” are introduced to distinguish one element from the others, and thus the technical scope of the present disclosure should not be limited by those terms. Also, a singular expression should be understood to include a plural expression unless the context clearly indicates otherwise. The term “include” or “have” indicates the existence of a described feature or constituting element in the present specification and should not be understood to preclude the possibility of adding one or more other features or constituting elements. Also, constituting elements in the figures may be exaggerated or shrunk for the convenience of description. For example, since the size and thickness of each element in the figures have been arbitrarily modified for the convenience of description, the present disclosure is not necessarily limited to what is shown in the figures.
In what follows, embodiments of the present disclosure will be described in detail with reference to appended drawings. Throughout the specification, the same or corresponding constituting element is assigned the same reference number, and repeated descriptions thereof will be omitted.
Referring to
In the embodiment, the AR object providing system 1000 that implements the AR object providing service may include a terminal 100, an AR object providing server 200, and a network 300.
At this time, the terminal 100 and the AR object providing server 200 may be connected to each other through the network 300.
Here, the network 300 according to the embodiment refers to a connection structure that allows information exchange between individual nodes, such as the terminal 100 and/or the AR object providing server 200.
Examples of the network 300 include the 3rd Generation Partnership Project (3GPP) network, Long Term Evolution (LTE) network, Worldwide Interoperability for Microwave Access (WiMAX) network, Internet, Local Area Network (LAN), Wireless Local Area Network (WLAN), Wide Area Network (WAN), Personal Area Network (PAN), Bluetooth network, satellite broadcasting network, analog broadcasting network, and/or Digital Multimedia Broadcasting (DMB) network. However, the network according to the present disclosure is not limited to the examples above.
Hereinafter, the terminal 100 and the AR object providing server 200 that implement the AR object providing system 1000 will be described in detail with reference to the appended drawings.
The terminal 100 according to an embodiment of the present disclosure may be a predetermined computing device equipped with a tracking application (in what follows, an application) providing an AR object providing service.
Specifically, from a hardware point of view, the terminal 100 may include a mobile type computing device 100-1 and/or a desktop type computing device 100-2 equipped with an application.
Here, the mobile type computing device 100-1 may be a mobile device equipped with an application.
For example, the mobile type computing device 100-1 may include a smartphone, a mobile phone, a digital broadcasting device, a personal digital assistant (PDA), a portable multimedia player (PMP), and/or a tablet PC.
Also, the desktop type computing device 100-2 may be a wired/wireless communication-based device equipped with an application.
For example, the desktop type computing device 100-2 may include a stationary desktop PC, a laptop computer, and/or a personal computer such as an ultrabook.
Depending on the embodiment, the terminal 100 may further include a predetermined server computing device that provides an AR object providing service environment.
Meanwhile, referring to
Specifically, the memory 110 may store an application 111.
At this time, the application 111 may include one or more of various application programs, data, and commands for providing an AR object providing service environment.
In other words, the memory 110 may store commands and data used to create an AR object providing service environment.
Also, the memory 110 may include a program area and a data area.
Here, the program area according to the embodiment may be linked to an operating system (OS) that boots the terminal 100 and to functional elements.
Also, the data area according to the embodiment may store data generated according to the use of the terminal 100.
Also, the memory 110 may include at least one or more non-transitory computer-readable storage media and transitory computer-readable storage media.
For example, the memory 110 may be implemented using various storage devices such as a ROM, an EPROM, a flash drive, and a hard drive and may include a web storage that performs the storage function of the memory 110 on the Internet.
The processor assembly 120 may include at least one or more processors capable of executing instructions of the application 111 stored in the memory 110 to perform various tasks for creating an AR object providing service environment.
In the embodiment, the processor assembly 120 may control the overall operation of the constituting elements through the application 111 of the memory 110 to provide an AR object providing service.
Specifically, the processor assembly 120 may be a system-on-chip (SOC) suitable for the terminal 100 that includes a central processing unit (CPU) and/or a graphics processing unit (GPU).
Also, the processor assembly 120 may execute the operating system (OS) and/or application programs stored in the memory 110.
Also, the processor assembly 120 may control each constituting element mounted on the terminal 100.
Also, the processor assembly 120 may communicate internally with each constituting element via a system bus and may include one or more predetermined bus structures, including a local bus.
Also, the processor assembly 120 may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, and/or electrical units for performing other functions.
The communication processor 130 may include one or more devices for communicating with external devices. The communication processor 130 may communicate with external devices through a wireless network.
Specifically, the communication processor 130 may communicate with the terminal 100 that stores a content source for implementing an AR object providing service environment.
Also, the communication processor 130 may communicate with various user input components, such as a controller that receives user input.
In the embodiment, the communication processor 130 may transmit and receive various data related to the AR object providing service to and from another terminal 100 and/or an external server.
The communication processor 130 may transmit and receive data wirelessly to and from a base station, an external terminal 100, and an arbitrary server on a mobile communication network constructed through communication devices conforming to technical standards or communication schemes for mobile communication (e.g., Long Term Evolution (LTE), Long Term Evolution-Advanced (LTE-A), 5G New Radio (NR), and Wi-Fi) or for short-range communication.
Also, the communication processor 130 may further include at least one short-range communication module among a Near Field Communication (NFC) chip, a Bluetooth chip, an RFID reader, and a Zigbee chip for short-range communication.
The communication processor 130 may receive data including a link for receiving an AR library, which is a data set that provides an AR environment, through the short-range communication module.
The sensor system 160 may include various sensors such as an image sensor 161, a position sensor (IMU) 163, an audio sensor 165, a distance sensor, a proximity sensor, and a touch sensor.
Here, the image sensor 161 may capture images (images and/or videos) of the physical space around the terminal 100.
Specifically, the image sensor 161 may capture a predetermined physical space through a camera disposed toward the outside of the terminal 100.
In the embodiment, the image sensor 161 may be placed on the front or/and back of the terminal 100 and capture the physical space in the direction along which the image sensor 161 is disposed.
In the embodiment, the image sensor 161 may capture and acquire various images (e.g., captured videos of an identification code) related to the AR object providing service.
The image sensor 161 may include an image sensor device and an image processing module.
Specifically, the image sensor 161 may process still images or moving images obtained by an image sensor device (e.g., CMOS or CCD).
Also, the image sensor 161 may use an image processing module to process still images or moving images obtained through the image sensor device, extract necessary information, and transmit the extracted information to the processor.
The image sensor 161 may be a camera assembly including at least one or more cameras.
Here, the camera assembly may include a general-purpose camera that captures images in the visible light band and may further include a special camera such as an infrared camera or a stereo camera.
Also, depending on the embodiments, the image sensor 161 as described above may operate by being included in the terminal 100 or may be included in an external device (e.g., an external server) to operate in conjunction with the communication processor 130 and the interface unit 140.
The position sensor (IMU) 163 may detect at least one or more of the movement and acceleration of the terminal 100. For example, the position sensor 163 may be built from a combination of various position sensors such as accelerometers, gyroscopes, and/or magnetometers.
Also, the position sensor (IMU) 163 may recognize spatial information on the physical space around the terminal 100 in conjunction with the communication processor 130, such as a GPS module of the communication processor 130.
The audio sensor 165 may recognize sounds around the terminal 100.
Specifically, the audio sensor 165 may include a microphone capable of detecting a voice input from a user using the terminal 100.
In the embodiment, the audio sensor 165 may receive voice data required for the AR object providing service from the user.
The interface unit 140 may connect the terminal 100 to one or more other devices to allow communication between them.
Specifically, the interface unit 140 may include a wired and/or wireless communication device compatible with one or more different communication protocols.
Through this interface unit 140, the terminal 100 may be connected to various input and output devices.
For example, the interface unit 140 may be connected to an audio output device such as a headset port or a speaker to output audio signals.
In the example, it is assumed that the audio output device is connected through the interface unit 140; however, embodiments in which the audio output device is installed inside the terminal 100 are equally supported.
Also, for example, the interface unit 140 may be connected to an input device such as a keyboard and/or a mouse to obtain user input.
The interface unit 140 may be implemented using at least one of a wired/wireless headset port, an external charger port, a wired/wireless data port, a memory card port, a port for connecting a device equipped with an identification module, an audio Input/Output (I/O) port, a video I/O port, an earphone port, a power amplifier, an RF circuit, a transceiver, and other communication circuits.
The input system 150 may detect user input (e.g., a gesture, a voice command, a button operation, or other types of input) related to the AR object providing service.
Specifically, the input system 150 may include a predetermined button, a touch sensor, and/or an image sensor 161 that receives a user motion input.
Also, by being connected to an external controller through the interface unit 140, the input system 150 may receive user input.
The display system 170 may output various information related to the AR object providing service as a graphic image.
In the embodiment, the display system 170 may display various user interfaces for the AR object providing service, captured videos of an identification code, guide objects, augmented reality web environment access links, an augmented reality (web) environment, object shooting guides, additional object shooting guides, captured videos, primitive models, 3D definition models, AR environment models, and/or virtual objects.
The display system 170 may be built using at least one of, but is not limited to, a liquid crystal display (LCD), thin film transistor-liquid crystal display (TFT LCD), organic light-emitting diode (OLED), flexible display, 3D display, and/or e-ink display.
Additionally, depending on the embodiment, the display system 170 may include a display 171 that outputs an image and a touch sensor 173 that detects a user's touch input.
For example, the display 171 may implement a touch screen by forming a mutual layer structure with the touch sensor 173 or by being formed integrally with the touch sensor 173.
The touch screen may provide an input interface between the terminal 100 and the user and, at the same time, an output interface between the terminal 100 and the user.
Meanwhile, the terminal 100 according to an embodiment of the present disclosure may perform deep learning related to an object tracking service in conjunction with a predetermined deep learning neural network.
Here, the deep learning neural network according to the embodiment may include, but is not limited to, the Convolutional Neural Network (CNN), Deep Plane Sweep Network (DPSNet), Attention Guided Network (AGN), Regions with CNN features (R-CNN), Fast R-CNN, Faster R-CNN, Mask R-CNN, and/or U-Net network.
Specifically, in the embodiment, the terminal 100 may perform monocular depth estimation (MDE) in conjunction with a predetermined deep learning neural network (e.g., CNN).
For reference, monocular depth estimation (MDE) is a deep learning technique that uses single image data as input and outputs 3D depth data for the single input image data.
Also, in the embodiment, the terminal 100 may perform semantic segmentation (SS) in conjunction with a predetermined deep learning neural network (e.g., CNN).
For reference, semantic segmentation (SS) may refer to a deep learning technique that segments and recognizes each object included in a predetermined image in physically meaningful units.
At this time, depending on the embodiments, the terminal 100 may perform monocular depth estimation (MDE) and semantic segmentation (SS) in parallel. Meanwhile, depending on the embodiments, the terminal 100 may further perform at least part of the functional operations performed by the AR object providing server 200, which will be described later.
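As a non-limiting illustration of executing monocular depth estimation and semantic segmentation in parallel, the following Python sketch uses placeholder functions in place of trained networks; the function names, image size, and thread-based parallelism are assumptions made only for the example.

```python
# Placeholder sketch of running MDE and semantic segmentation in parallel.
# The two functions below are stand-ins; a real embodiment would invoke
# trained deep learning networks (e.g., CNN-based models) instead.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def estimate_depth(image: np.ndarray) -> np.ndarray:
    # Monocular depth estimation stand-in: one depth value per pixel of the
    # single input image data.
    return np.full(image.shape[:2], 1.0, dtype=np.float32)

def segment(image: np.ndarray) -> np.ndarray:
    # Semantic segmentation stand-in: one class label per pixel.
    return np.zeros(image.shape[:2], dtype=np.int32)

single_image_data = np.zeros((480, 640, 3), dtype=np.uint8)
with ThreadPoolExecutor(max_workers=2) as pool:
    depth_future = pool.submit(estimate_depth, single_image_data)
    mask_future = pool.submit(segment, single_image_data)
depth_map, segmentation_mask = depth_future.result(), mask_future.result()
print(depth_map.shape, segmentation_mask.shape)   # (480, 640) (480, 640)
```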
Meanwhile, the AR object providing server 200 according to an embodiment of the present disclosure may perform a series of processes for providing an AR object providing service.
Specifically, the AR object providing server 200 according to the embodiment may provide an AR object providing service by exchanging data required to operate an identification code-based AR object providing process in an external device, such as the terminal 100, with the external device.
More specifically, the AR object providing server 200 according to the embodiment may provide an environment in which an application 111 operates in an external device (in the embodiment, the mobile type computing device 100-1 and/or desktop type computing device 100-2).
For this purpose, the AR object providing server 200 may include an application program, data, and/or commands for operating the application 111 and may transmit and receive various data based thereon to and from the external device.
Also, in the embodiment, the AR object providing server 200 may create an AR project.
Here, the AR project according to the embodiment may mean an environment that produces a data set (in the embodiment, an AR library) for providing a predetermined augmented reality environment based on a target object.
Also, in the embodiment, the AR object providing server 200 may generate at least one AR library based on the created AR project.
At this time, in the embodiment, the AR library may include a target object including a target identification code, a target virtual object, anchoring information, augmented reality environment setting information, an augmented reality web environment access link matched to the target identification code and/or an augmented reality web environment that matches the target identification code.
Also, in the embodiment, the AR object providing server 200 may build an AR library database based on at least one AR library generated.
Also, in the embodiment, the AR object providing server 200 may recognize a predetermined target identification code.
Here, the target identification code according to the embodiment may mean a target object that provides an augmented reality environment access link connected to a predetermined augmented reality environment.
Also, in the embodiment, the AR object providing server 200 may provide a predetermined augmented reality web environment access link based on the recognized target identification code.
Here, the augmented reality web environment access link according to the embodiment may mean a Uniform Resource Locator (URL) directing to a predetermined augmented reality environment (in the embodiment, augmented reality web environment) implemented based on the web environment and/or an image including a URL (hereinafter, a URL image).
Also, in the embodiment, the AR object providing server 200 may provide a predetermined augmented reality web environment based on the provided augmented reality web environment access link.
Also, in the embodiment, the AR object providing server 200 may recognize a predetermined target object in the provided augmented reality web environment.
Here, the target object according to the embodiment may mean an object that provides a criterion for tracking a virtual object in a predetermined augmented reality environment and/or an object that provides a criterion for tracking changes in the 6 DoF and scale parameters of a virtual object displayed on a predetermined augmented reality environment.
Also, in the embodiment, the AR object providing server 200 may determine a target criterion object.
Here, the target criterion object according to the embodiment may mean a 3D definition model for a target object for which tracking is to be performed.
Also, in the embodiment, the AR object providing server 200 may determine the target virtual object.
Here, the target virtual object according to the embodiment may mean a 3D virtual object for augmented display in conjunction with the target criterion object.
Also, in the embodiment, the AR object providing server 200 may provide an AR object providing service that augments the target virtual object on a recognized target object.
Also, in the embodiment, the AR object providing server 200 may perform deep learning required for an object tracking service in conjunction with a predetermined deep-learning neural network.
In the embodiment, the AR object providing server 200 may perform monocular depth estimation (MDE) and semantic segmentation (SS) in parallel in conjunction with a predetermined deep learning neural network (e.g., CNN).
Specifically, in the embodiment, the AR object providing server 200 may read a predetermined deep neural network driving program built to perform the deep learning from the memory module 230.
Also, the AR object providing server 200 may perform deep learning required for the following object tracking service according to the predetermined deep neural network driving program.
Here, the deep learning neural network according to the embodiment may include, but is not limited to, the Convolutional Neural Network (CNN), Deep Plane Sweep Network (DPSNet), Attention Guided Network (AGN), Regions with CNN features (R-CNN), Fast R-CNN, Faster R-CNN, Mask R-CNN, and/or U-Net network.
At this time, depending on the embodiments, the deep learning neural network may be directly included in the AR object providing server 200 or may be implemented as a separate device and/or a server from the AR object providing server 200.
In the following description, it is assumed that the deep learning neural network is described as being included in the AR object providing server 200, but the present disclosure is not limited to the specific assumption.
Also, in the embodiment, the AR object providing server 200 may store and manage various application programs, commands, and/or data for implementing the AR object providing service.
In the embodiment, the AR object providing server 200 may store and manage at least one or more AR projects, an AR library, a target object including a target identification code and a target criterion object, a target virtual object, a primitive model, a primitive application model, primitive model attribute values, a guide object, an augmented reality web environment access link, an augmented reality web environment, user account information, group member information, an AR environment library, an AR environment model, a 3D definition model, an object shooting guide, an additional object shooting guide, captured videos, key frame images, learning data, 3D depth data, deep learning algorithms, and/or a user interface.
However, the functional operations that the AR object providing server 200 according to the embodiment of the present disclosure may perform are not limited to the above, and other functional operations may be further performed.
Meanwhile, referring further to
Here, the memory module 230 may store one or more of the operating system (OS), various application programs, data, and commands for providing the AR object providing service.
Also, the memory module 230 may include a program area and a data area.
At this time, the program area according to the embodiment may be linked to an operating system (OS) that boots the server and to functional elements.
Also, the data area according to the embodiment may store data generated according to the use of the server.
Also, the memory module 230 may be implemented using various storage devices such as a ROM, a RAM, an EPROM, a flash drive, and a hard drive and may be implemented using a web storage that performs the storage function of the memory module on the Internet.
Also, the memory module 230 may be a recording module removable from the server.
Meanwhile, the processor module 210 may control the overall operation of the individual units described above to implement the AR object providing service.
Specifically, the processor module 210 may be a system-on-chip (SOC) suitable for the server that includes a central processing unit (CPU) and/or a graphics processing unit (GPU).
Also, the processor module 210 may execute the operating system (OS) and/or application programs stored in the memory module 230.
Also, the processor module 210 may control individual constituting elements installed in the server.
Also, the processor module 210 may communicate internally with each constituting element via a system bus and may include one or more predetermined bus structures, including a local bus.
Also, the processor module 210 may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, and/or electrical units for performing other functions.
In the description above, it was assumed that the AR object providing server 200 according to an embodiment of the present disclosure performs the functional operations described above; however, depending on the embodiments, an external device (e.g., the terminal 100) may perform at least part of the functional operations performed by the AR object providing server 200, or the AR object providing server 200 may further perform at least part of the functional operations performed by the external device, where various embodiments may be implemented in a similar manner.
In what follows, a method for providing an AR object tracking service by an application 111 executed by at least one or more processors of the terminal 100 according to an embodiment of the present disclosure will be described in detail with reference to
At least one or more processors of the terminal 100 according to an embodiment of the present disclosure may execute at least one or more applications 111 stored in at least one or more memories 110 or make the applications operate in the background.
In what follows, the process in which at least one or more processors of the terminal 100 execute the commands of the application 111 to perform the method for providing an AR object tracking service will be described by assuming that the application 111 performs the process.
Referring to
Specifically, the application 111 according to the embodiment may provide a membership subscription process that registers user account information on the platform providing an object tracking service (in what follows, a service platform).
More specifically, in the embodiment, the application 111 may provide a user interface through which user account information may be entered (in what follows, a membership subscription interface).
For example, the user account information may include a user ID, password, name, age, gender, and/or email address.
Also, in the embodiment, the application 111 may register the user account information obtained through the membership subscription interface to the service platform in conjunction with the AR object providing server 200.
For example, the application 111 may transmit the user account information obtained based on the membership subscription interface to the AR object providing server 200.
At this time, the AR object providing server 200 which has received the user account information may store and manage the received user account information on the memory module 230.
Therefore, the application 111 may implement the membership subscription process which registers the user account information on the service platform.
Also, in the embodiment, the application 111 may grant use rights for the object tracking service to a user whose user account information has been registered with the service platform.
Also, in the embodiment, the application 111 may configure group members of an AR environment library S103.
Here, the AR environment library according to the embodiment may mean a library that provides at least one AR environment model.
At this time, the AR environment model according to the embodiment may mean a predetermined 3D definition model and a model including a predetermined virtual object anchored to the 3D definition model.
Here, the 3D definition model according to the embodiment may mean a model trained to track the changes in the 6 DoF parameters of a predetermined object.
Specifically, the application 111 according to the embodiment may configure group members with the rights to share the AR environment library (including a track library, which will be described later).
At this time, a group member may be at least one other user who has registered an account on the service platform.
More specifically, in the embodiment, when the application 111 obtains use rights for the object tracking service through the membership subscription service, the application 111 may provide a user interface (in what follows, a member configuration interface) through which a group member may be configured.
Then the application 111 may configure at least one other user as a group member based on the user input obtained from the provided member configuration interface.
Through the operation above, the application 111 may subsequently provide a function of sharing various data (in the embodiment, the AR environment model and/or 3D definition model) among group members based on the service platform.
Also, in the embodiment, the application 111 may determine a target criterion object S105.
Here, a target criterion object according to the embodiment may mean a 3D definition model for the target object for which tracking is to be performed.
In other words, the target criterion object CO may be a model trained to track the changes in the 6 DoF parameters of the target object for which tracking is to be performed.
For reference, referring to
Specifically, 6 DoF parameters may include rotation data (R values) that include measurements of roll (rotation around the X-axis), pitch (rotation around the Y-axis), and yaw (rotation around the Z-axis) in the 3D orthogonal coordinate system.
Further, 6 DoF parameters may include translational data (T values) that include measurements of forward/backward, left/right, and up/down translational motions in the 3D orthogonal coordinate system.
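For reference only, the following Python sketch composes such rotation data (R values) and translation data (T values) into a single 4x4 homogeneous transform; the axis conventions, composition order, and function name are illustrative assumptions and do not limit the embodiment.

```python
# Composes rotation data (R values: roll, pitch, yaw) and translation data
# (T values: tx, ty, tz) into a single 4x4 homogeneous transform. The axis
# conventions and composition order are assumptions for this example.
import numpy as np

def pose_6dof(roll, pitch, yaw, tx, ty, tz):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll: rotation about X
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch: rotation about Y
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw: rotation about Z
    pose = np.eye(4)
    pose[:3, :3] = rz @ ry @ rx                             # R values
    pose[:3, 3] = [tx, ty, tz]                              # T values
    return pose

print(pose_6dof(0.0, 0.0, np.pi / 2, 0.1, 0.0, 0.5).round(3))
```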
Returning to the disclosure, the target criterion object according to the embodiment may include descriptors of the object and distance information corresponding to each descriptor (in what follows, 3D depth data).
The target criterion object may be a model trained to track the changes in the 6 DoF parameters of the object based on the 3D depth data.
More specifically, the application 111 according to the embodiment may determine the target criterion object CO based on 1) a predetermined 3D definition model within a track library.
At this time, the track library according to the embodiment may mean a library that provides at least one 3D definition model.
For example, the preconfigured 3D definition models may include a 2D rectangular model, a 3D cube model, and a 3D cylinder model.
Also, in the embodiment, the application 111 may obtain user input that selects at least one from among 3D definition models within the track library.
Also, in the embodiment, the application 111 may read and download a 3D definition model selected according to the user input from the track library.
In this way, the application 111 may determine the 3D definition model according to the user's selection as a target criterion object.
Meanwhile, in the embodiment, the application 111 may determine a target criterion object based on 2) the object shape.
In the embodiment, the object may mean an object contained in a real-time image obtained by capturing the 3D space through the image sensor 161.
Referring to
Specifically, the application according to the embodiment may provide an object capture guide describing how to capture an object for which tracking is to be performed.
In the embodiment, the object capture guide may include information guiding to capture a target object at least one or more times from at least one or more viewpoints (i.e., camera viewpoints).
Also, in the embodiment, the application 111 may obtain learning data based on the image data captured according to the object capture guide S203.
Here, the learning data according to the embodiment may mean the base data intended for obtaining a target criterion object (3D definition model).
Specifically, in the embodiment, the application 111 may obtain at least one image data of an object captured from at least one viewpoint.
At this time, when one image data is obtained, the application 111 may obtain learning data including the single image data.
On the other hand, when a plurality of image data are obtained, the application 111 may obtain learning data including the plurality of image data and 6 DoF parameters describing the relationships among a plurality of viewpoints from which the plurality of image data are captured.
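By way of illustration only, the learning data described above may be represented as in the following Python sketch; the class name, fields, and the 4x4 relative-pose representation of the 6 DoF parameters are assumptions introduced for the example.

```python
# Illustrative container for the learning data described above: one or more
# captured image data and, when several viewpoints are used, 6 DoF parameters
# (here: 4x4 relative transforms) describing the relationships among viewpoints.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class LearningData:
    images: List[np.ndarray]                                        # captured image data
    relative_poses: List[np.ndarray] = field(default_factory=list)  # viewpoint-to-viewpoint 6 DoF

    def is_single_view(self) -> bool:
        # With a single image, 3D depth data is reconstructed via the primitive model.
        return len(self.images) == 1

single_view = LearningData(images=[np.zeros((480, 640, 3), np.uint8)])
multi_view = LearningData(images=[np.zeros((480, 640, 3), np.uint8)] * 2,
                          relative_poses=[np.eye(4)])   # pose of view 2 relative to view 1
print(single_view.is_single_view(), multi_view.is_single_view())   # True False
```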
Also, in the embodiment, the application 111 may calculate the 3D depth data based on the obtained learning data S205.
Here, in other words, the 3D depth data according to the embodiment may mean information that includes individual descriptors of an object and distance values corresponding to the individual descriptors.
In other words, the 3D depth data may be image data to which a ray casting technique is applied.
Specifically, referring to
Here, referring to
In the embodiment, the primitive model 10 may be implemented using a predetermined 2D rectangular model 10-1, 3D cube model 10-2, or 3D cylinder model 10-3.
At this time, in the embodiment, the primitive model 10 may include a plurality of descriptors specifying the model shape and distance information corresponding to each of the plurality of descriptors.
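The following non-limiting Python sketch illustrates one way such primitive models could be represented as descriptor sets with a distance value per descriptor; the specific coordinates, sampling counts, and distance convention are assumptions made for illustration.

```python
# Hypothetical construction of the three primitive models as descriptor sets,
# each descriptor carrying a distance value. Coordinates, sampling counts, and
# the distance-from-origin convention are assumptions for illustration only.
import numpy as np

def rectangle_descriptors(width=1.0, height=1.0):
    # 2D rectangular model 10-1: four corner descriptors in the Z = 0 plane.
    return np.array([[0, 0, 0], [width, 0, 0], [width, height, 0], [0, height, 0]], float)

def cube_descriptors(edge=1.0):
    # 3D cube model 10-2: eight corner descriptors.
    return np.array([[x, y, z] for x in (0, edge) for y in (0, edge) for z in (0, edge)], float)

def cylinder_descriptors(radius=0.5, height=1.0, samples=8):
    # 3D cylinder model 10-3: descriptors sampled on the bottom and top rims.
    angles = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    rim = np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)
    return np.vstack([np.column_stack([rim, np.zeros(samples)]),
                      np.column_stack([rim, np.full(samples, height)])])

for descriptors in (rectangle_descriptors(), cube_descriptors(), cylinder_descriptors()):
    distances = np.linalg.norm(descriptors, axis=1)   # one distance value per descriptor
    print(len(descriptors), distances.round(2))
```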
Specifically, in the embodiment, the application 111 may provide a plurality of primitive models 10 according to a predetermined method (e.g., list datatype).
Also, in the embodiment, the application 111 may determine at least one of the provided primitive models 10 as a primitive application model S303.
Here, the primitive application model according to the embodiment may mean the primitive model 10 to be overlaid and displayed on single image data for the purpose of calculating 3D depth data.
Specifically, in the embodiment, the application 111 may provide a user interface (in what follows, a primitive model 10 selection interface) through which at least one of a plurality of primitive models 10 may be selected.
Also, the application 111 may determine the primitive model 10 selected according to the user input based on the primitive model 10 selection interface as a primitive application model.
In other words, in the embodiment, the application 111 may calculate 3D depth data using the primitive model 10 determined to have the most similar shape to the object according to the user's cognitive judgment.
Through the operation above, the application 111 may improve data processing efficiency and user convenience in the 3D depth data calculation process.
In another embodiment, the application 111 may perform semantic segmentation on a target object within single image data in conjunction with a predetermined deep learning neural network.
Then the application 111 may detect the edge of the target object through the semantic segmentation performed.
Also, the application 111 may compare the edge shape of a detected target object with the edge shape of each of the plurality of primitive models 10.
Also, the application 111 may select a primitive model 10 having a similarity higher than a predetermined threshold (e.g., a similarity higher than a preset ratio (%)) with the edge shape of a target object from a comparison result.
Then the application 111 may provide a user interface (in what follows, a recommendation model selection interface) through which one of the selected primitive models (in what follows, primitive recommendation models) may be selected as a primitive application model.
Also, the application 111 may determine the primitive recommendation model selected according to the user input based on the recommendation model selection interface as a primitive application model.
In this way, the application 111 may automatically detect and provide a primitive model 10 that has the most similar shape to the target object among the plurality of primitive models 10.
Accordingly, the application 111 may support calculating 3D depth data using the primitive model 10 determined based on objective data analysis.
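A hedged sketch of this recommendation step is given below using OpenCV (assuming OpenCV 4.x): the edge contour of the segmented target object is compared with the edge contour of each primitive model silhouette, and candidates whose similarity exceeds a preset threshold are recommended; the synthetic masks, threshold value, and similarity mapping are illustrative assumptions.

```python
# The contour of the segmented target object is compared with the contour of
# each primitive-model silhouette; candidates whose similarity exceeds a preset
# threshold become primitive recommendation models (illustrative assumptions).
import cv2
import numpy as np

def outer_contour(mask):
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

# Target-object mask as it might come from semantic segmentation (here: a box).
target_mask = np.zeros((240, 320), np.uint8)
cv2.rectangle(target_mask, (80, 60), (240, 180), 255, -1)

# 2D silhouettes standing in for the edge shapes of two primitive models.
primitive_masks = {"rectangle": np.zeros((240, 320), np.uint8),
                   "cylinder": np.zeros((240, 320), np.uint8)}
cv2.rectangle(primitive_masks["rectangle"], (100, 80), (220, 160), 255, -1)
cv2.circle(primitive_masks["cylinder"], (160, 120), 70, 255, -1)

target_contour = outer_contour(target_mask)
threshold = 0.9                                      # preset similarity ratio
for name, mask in primitive_masks.items():
    dissimilarity = cv2.matchShapes(target_contour, outer_contour(mask),
                                    cv2.CONTOURS_MATCH_I1, 0.0)
    similarity = 1.0 / (1.0 + dissimilarity)         # map dissimilarity to (0, 1]
    print(f"{name}: similarity={similarity:.2f}, recommend={similarity >= threshold}")
```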
Also, in the embodiment, the application 111 may perform alignment between the primitive application model and the target object S305.
Specifically, referring to
More specifically, in the embodiment, the application 111 may display the primitive application model 20: 20-1, 20-2, 20-3 by overlaying the primitive application model at a predetermined position within single image data (SID).
In the embodiment, the application 111 may overlay and display the primitive application model 20 at a position within a predetermined radius from a target object within the single image data (SID).
Also, the application 111 may place each descriptor of the overlaid primitive application model 20 at each predetermined point on the target object.
At this time, in the embodiment, when the position of a descriptor of the primitive application model 20 displayed on the single image data (SID) is changed, the shape of the primitive application model 20 may change as its edges are updated in accordance with the repositioned descriptor.
In other words, the shape of the primitive application model 20 may be adjusted to have a shape similar to that of the target object by shape deformation according to a position change of each descriptor.
Returning to the description of the embodiment, in the embodiment, the application 111 may place each descriptor of the primitive application model 20 at each predetermined point on the target object based on user input.
Specifically, the application 111 may provide a user interface (in what follows, align interface) that may change the position coordinates of descriptors of the primitive application model 20 displayed on single image data (SID).
Also, the application 111 may position each descriptor included in the primitive application model 20 at each predetermined point on the target object according to user input based on the align interface.
In other words, the application 111 may support the user to freely place each descriptor of the primitive application model 20 at each predetermined point on the target object deemed to correspond to the descriptor.
Accordingly, the application 111 may perform alignment to ensure that the edge shape of the primitive application model 20 and the edge shape of the target object have a similarity greater than a predetermined threshold.
In another embodiment, the application 111 may automatically place each descriptor of the primitive application model 20 at each predetermined point on the target object.
At this time, the application 111 may automatically place each descriptor of the primitive application model 20 at each predetermined point on the target object so that the primitive application model 20 is aligned with the target object.
Specifically, the application 111 may derive the position coordinates of each descriptor on the target object according to a predetermined algorithm so that the primitive application model 20 is aligned with the target object.
The embodiment of the present disclosure does not specify or limit the algorithm itself for deriving the position coordinates of each descriptor.
Also, the application 111 may change the position of each descriptor of the primitive application model 20 according to the derived position coordinates of each descriptor.
Therefore, the application 111 may perform alignment between the primitive application model 20 and the target object.
Accordingly, the application 111 may more easily and quickly perform alignment that relates the shapes of the primitive application model 20 to those of the target object.
At this time, in the embodiment, the application 111 may determine the area occupied by the primitive application model 20 aligned with the target object as a target object area.
Then the application 111 may calculate 3D depth data based on the determined target object area.
Also, in the embodiment, the application 111 may set attribute values for the primitive application model 20 for which alignment is performed S307.
Here, the attribute values according to the embodiment may be information that sets various parameter values that specify the shape of a predetermined object.
In the embodiment, the attribute values may be information that sets values such as scale, diameter, and/or radius for each edge included in a predetermined object.
Specifically, referring to
In other words, the application 111 may set the attribute values of the primitive application model 20 based on the attribute values measured for the actual object.
More specifically, the application according to the embodiment may provide a user interface (in what follows, a model attribute interface) through which the attribute values of the primitive application model 20 may be set.
Additionally, the application 111 may set attribute values of the primitive application model 20 based on user input based on the model attribute interface.
At this time, in a preferred embodiment, the user input for setting the attribute values is performed based on accurate measurements of attribute values for the actual object.
In other words, in the embodiment, the user may measure attribute values such as scale, diameter, and/or radius for each predetermined edge of a real object and apply user input that sets the attribute values of the primitive application model 20 based on the measured attribute values.
Also, in the embodiment, the application 111 may calculate 3D depth data based on set attribute values S309.
In other words, referring to
Specifically, in the embodiment, the application 111 may read, from the memory 110, a plurality of descriptors initially set for the primitive application model 20 and distance information for each of the plurality of descriptors (in what follows, initial attribute value information).
Also, the application 111 may calculate 3D depth data through comparison between the read initial attribute value information and the current attribute value information.
For example, the application 111 may obtain the initial distance value for the first edge of the primitive application model 20 based on the initial attribute value information.
Also, in the embodiment, the application 111 may obtain the current length value (i.e., scale value) for the first edge of the primitive application model 20 based on current attribute value information.
Also, in the embodiment, the application 111 may perform a comparison between the obtained initial distance value and the current length value.
Also, in the embodiment, the application 111 may estimate the distance value according to the current length value in comparison to the initial distance value.
Therefore, in the embodiment, the application 111 may calculate 3D depth data based on the estimated current distance value.
In this way, the application 111 according to the embodiment may accurately and efficiently estimate and reconstruct 3D information (in the embodiment, 3D depth data) for tracking a target object from single image data.
On the other hand, 2) when learning data includes a plurality of image data (i.e., when 3D depth data are calculated based on a plurality of image data), the application 111 according to the embodiment may calculate 3D depth data for each of the plurality of image data in the same way as the process described above.
In other words, the application 111 may obtain a plurality of 3D depth data by calculating 3D depth data corresponding to each of the plurality of image data.
Returning to
Here, referring again to
In other words, in the embodiment, the application 111 may generate a 3D definition model trained to track the changes in the 6 DoF parameters of a target object for which tracking is to be performed by generating a 3D definition model based on 3D depth data.
Specifically, in the embodiment, the application 111, in conjunction with a predetermined deep learning neural network, may perform deep learning (in what follows, the first 3D information reconstruction deep learning) by using 3D depth data (i.e., descriptors for a target object and distance values corresponding to the respective descriptors) as input data and by using a 3D definition model based on the 3D depth data as output data.
At this time, the embodiment of the present disclosure does not specify or limit the deep learning algorithm itself, which performs 3D information reconstruction; the application 111 may perform functional operations for 3D information reconstruction deep learning based on various well-known deep learning algorithms (e.g., a deep plane sweep network (DPSNet) and/or an attention guided network (AGN)).
Therefore, in the embodiment, the application 111 may generate a 3D definition model according to 3D depth data.
At this time, in the embodiment, when a plurality of 3D depth data exist (i.e., when a plurality of 3D depth data are calculated using learning data that include a plurality of image data), the application 111 may generate each 3D definition model based on the corresponding 3D depth data in the same manner as described above.
In other words, the application 111 may generate a plurality of 3D definition models based on a plurality of 3D depth data.
Also, the application 111 may combine a plurality of 3D definition models into one 3D definition model according to a preconfigured method.
In what follows, for the purpose of effective description, a plurality of 3D definition models are limited to a first 3D definition model and a second 3D definition model; however, the present disclosure is not limited to the specific example.
In the embodiment, the application 111 may detect descriptors having mutually corresponding position coordinates (in what follows, common descriptors) among a plurality of descriptors within the first 3D definition model and a plurality of descriptors within the second 3D definition model.
Also, the application 111 may detect a distance value corresponding to a common descriptor within the first 3D definition model (in what follows, a first distance value).
Also, the application 111 may detect a distance value corresponding to a common descriptor within the second 3D definition model (in what follows, a second distance value).
Also, the application 111 may obtain an integrated distance value obtained by combining the detected first and second distance values into a single value according to a preconfigured method (e.g., averaging operation).
Also, the application may set the obtained integrated distance value as a distance value of the common descriptor.
Also, in the embodiment, the application 111 may detect and obtain the remaining descriptors excluding the common descriptor (in what follows, specialized descriptors) from among a plurality of descriptors within the first 3D definition model and a plurality of descriptors within the second 3D definition model.
Also, in the embodiment, the application 111 may generate a 3D integrated definition model which includes both the obtained common descriptors and the obtained specialized descriptors.
Therefore, the application 111 may combine the first 3D definition model and the second 3D definition model into one 3D definition model.
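The combination procedure above may be sketched in Python as follows; purely for illustration, each 3D definition model is represented as a dictionary mapping a descriptor's position coordinates to its distance value, and averaging is used as the preconfigured combination method.

    # Merge two 3D definition models: average the distance values of common
    # descriptors (same position coordinates) and carry over specialized
    # descriptors from either model unchanged.
    def merge_definition_models(model_a, model_b):
        merged = {}
        common = model_a.keys() & model_b.keys()
        for key in common:                     # common descriptors
            merged[key] = (model_a[key] + model_b[key]) / 2.0
        for key in model_a.keys() - common:    # specialized descriptors of model A
            merged[key] = model_a[key]
        for key in model_b.keys() - common:    # specialized descriptors of model B
            merged[key] = model_b[key]
        return merged

    # Example: the shared descriptor (0.1, 0.2, 0.3) receives the averaged
    # distance 1.6, while the remaining descriptors are carried over as-is.
    first = {(0.1, 0.2, 0.3): 1.5, (0.4, 0.5, 0.6): 2.0}
    second = {(0.1, 0.2, 0.3): 1.7, (0.7, 0.8, 0.9): 3.1}
    integrated = merge_definition_models(first, second)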
However, the embodiment described above is only an example, and the embodiment of the present disclosure does not specify or limit the method itself, which combines a plurality of 3D definition models into one 3D definition model.
In another embodiment, when a plurality of 3D depth data exist (i.e., when a plurality of 3D depth data are calculated using learning data that include a plurality of image data), the application 111 may perform deep learning (in what follows, the second 3D information reconstruction deep learning) in conjunction with a predetermined deep learning neural network by using the plurality of 3D depth data as input data and by using a single 3D definition model based on the plurality of 3D depth data as output data.
Thus, in the embodiment, the application 111 may generate one 3D definition model according to a plurality of 3D depth data.
In this way, the application 111 may expand the area for precise tracking of a target object by creating a 3D definition model that reflects a plurality of 3D depth data according to a plurality of image data.
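In contrast with the per-image training and merging illustrated earlier, this second variant may be sketched as a single training pass over the stacked depth data; the sketch below reuses the hypothetical DepthToDefinitionNet and train() stand-ins from the earlier sketch.

    import torch

    # Second 3D information reconstruction variant (sketch): all depth data sets
    # are stacked and regressed into one 3D definition model in a single pass,
    # instead of training one model per image and merging afterwards.
    def train_single_definition_model(depth_data_set, model, train_fn):
        descriptors = torch.cat([d["descriptors"] for d in depth_data_set], dim=0)
        distances = torch.cat([d["distances"] for d in depth_data_set], dim=0)
        return train_fn(model, descriptors, distances)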
At this time, depending on the embodiments, the application 111 may register (store) and manage the generated 3D definition model on the AR project and/or AR library.
Accordingly, the application 111 may enable the user to utilize not only the built-in 3D definition models provided on a service platform but also the 3D definition models newly created by the user on the service platform in various ways.
Also, in the embodiment, the application 111 may determine the generated 3D definition model as a target criterion object S209.
In other words, based on the 3D definition model generated as described above, the application 111 may determine a target criterion object that includes each descriptor for a target object within a real-time captured image (here, an object) and distance value information corresponding to the descriptor.
Returning again to
Here, a target virtual object according to the embodiment may mean a 3D virtual object to be augmented and displayed in conjunction with the target criterion object.
At this time, the virtual object according to the embodiment may include 3D coordinate information that specifies the virtual object's 6 DoF parameters in 3D space.
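For illustration, a target virtual object carrying such 3D coordinate information might be represented as follows; the field names and the Euler-angle rotation representation are assumptions, not structures defined by the present disclosure.

    from dataclasses import dataclass, field
    import numpy as np

    # Illustrative container for a virtual object's 6 DoF pose in 3D space.
    @dataclass
    class VirtualObject:
        name: str
        position: np.ndarray = field(default_factory=lambda: np.zeros(3))  # x, y, z
        rotation: np.ndarray = field(default_factory=lambda: np.zeros(3))  # roll, pitch, yaw (radians)
        asset_path: str = ""  # reference to the underlying 3D asset (hypothetical field)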
Specifically, in the embodiment, the application 111 may provide a library (in what follows, a virtual object library) that provides at least one virtual object.
Also, the application 111 may obtain user input for selecting at least one of the virtual objects included in the virtual object library.
Accordingly, the application 111 may determine the virtual object selected according to the user input as the target virtual object.
In another embodiment, the application 111 may provide a user interface (in what follows, a virtual object upload interface) through which a user may upload at least one virtual object onto the service platform.
Also, the application 111 may determine the virtual object uploaded to the service platform based on user input through the virtual object upload interface as a target virtual object.
At this time, depending on the embodiments, the application 111 may determine whether a virtual object uploaded through the virtual object upload interface meets preconfigured specifications.
Also, the application 111 may upload a virtual object determined to meet preconfigured specifications onto the service platform.
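The disclosure does not detail the preconfigured specifications; as one hedged example, such a check might look like the following, where the allowed formats and size limit are purely assumed values.

    import os

    ALLOWED_FORMATS = {".glb", ".gltf", ".obj"}  # assumed format whitelist
    MAX_FILE_SIZE_MB = 50                        # assumed size limit

    # Hypothetical specification check applied before uploading a virtual object.
    def meets_specifications(file_path):
        extension = os.path.splitext(file_path)[1].lower()
        size_mb = os.path.getsize(file_path) / (1024 * 1024)
        return extension in ALLOWED_FORMATS and size_mb <= MAX_FILE_SIZE_MB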
Also, in the embodiment, the application 111 may generate an AR environment model based on the target criterion object and the target virtual object S109.
Here, referring to
Specifically, the application 111 according to the embodiment may perform anchoring between the target criterion object and the target virtual object.
Here, for reference, anchoring according to the embodiment may mean a functional operation for registering a target criterion object to a target virtual object so that the changes in the 6 DoF parameters of the target criterion object are reflected in the changes in the 6 DoF parameters of the target virtual object.
More specifically, the application 111 may perform anchoring between the target criterion object and the target virtual object based on the 3D depth data of the target criterion object and the 3D coordinate information of the target virtual object.
At this time, the application 111 according to the embodiment may perform an anchoring process based on various well-known algorithms, where the embodiment of the present disclosure does not specify or limit the algorithm itself for performing the anchoring process.
Therefore, in the embodiment, the application 111 may generate an AR environment model EM including a target criterion object and a target virtual object anchored with respect to the target criterion object.
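One simple way to realize such anchoring, shown only as a sketch, is to store the virtual object's pose relative to the target criterion object so that later changes in the criterion object's 6 DoF pose can be propagated to the virtual object; 4x4 homogeneous transforms are an assumed pose representation, and the model structure below is illustrative.

    import numpy as np

    # Anchoring sketch: record the transform from the target criterion object's
    # pose to the target virtual object's pose.
    def anchor(criterion_pose, virtual_pose):
        return np.linalg.inv(criterion_pose) @ virtual_pose

    # Minimal AR environment model EM bundling the criterion object, the virtual
    # object, and the anchoring information (structure is an assumption).
    def build_ar_environment_model(criterion_object, virtual_object, relative_pose):
        return {
            "criterion": criterion_object,  # e.g., the 3D definition model
            "virtual": virtual_object,      # e.g., a VirtualObject instance
            "anchor": relative_pose,        # anchoring information
        }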
Also, in the embodiment, the application 111 may register (store) and manage the created AR environment model EM on the AR environment library.
In other words, the application 111 may enable the user to utilize the AR environment model EM generated through the user's terminal 100 on the service platform in various ways (e.g., object tracking, virtual object augmentation, and/or production of a new AR environment model EM).
Also, in the embodiment, the application 111 may perform AR object tracking based on the AR environment model EM S111.
Here, referring to
Specifically, the application 111 according to the embodiment may provide an AR environment library that provides at least one AR environment model EM.
Also, the application 111 may provide a user interface (in what follows, an AR environment setting interface) through which the user may select at least one of at least one AR environment model EM provided through the AR environment library.
Also, the application 111 may read and download an AR environment model selected according to user input (in what follows, a first AR environment model) based on the AR environment setting interface from the AR environment library.
Therefore, the application 111 may build an AR object tracking environment based on the first AR environment model.
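Purely as an illustration, reading a selected AR environment model from a locally stored AR environment library could look like the following; the JSON storage layout and file naming are assumptions.

    import json
    import os

    # Hypothetical loader for a user-selected AR environment model.
    def load_ar_environment_model(library_dir, model_name):
        path = os.path.join(library_dir, f"{model_name}.json")
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)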
To continue the description, in the embodiment, the application 111 may obtain a new captured image NI capturing a predetermined 3D space from a predetermined viewpoint in conjunction with the image sensor 161.
Also, in the embodiment, the application 111 may detect a target object (in what follows, a first tracking object) within the new captured image NI based on the first AR environment model.
At this time, the application 111 may detect an object corresponding to a target criterion object of the first AR environment model (in what follows, a first target criterion object) among at least one object included in the new captured image NI as a first tracking object.
Also, in the embodiment, the application 111 may augment and display a predetermined virtual object VO on the new captured image NI based on the first AR environment model.
Specifically, the application 111 may augment and display the target virtual object (in what follows, the first target virtual object) of the first AR environment model on the new captured image NI.
At this time, the application 111 may augment and display the first target virtual object on the new captured image NI based on the anchoring information between the first target criterion object and the first target virtual object of the first AR environment model.
Specifically, according to the anchoring information between the first target criterion object and the first target virtual object of the first AR environment model, the application 111 may augment and display the first target virtual object at a predetermined position based on the first tracking object within the new captured image NI.
In other words, the application 111 may augment and display the first target virtual object at a position within the new captured image NI such that the anchoring between the first tracking object and the first target virtual object is implemented in the same manner as the anchoring information between the first target criterion object and the first target virtual object of the first AR environment model.
Therefore, provided that the user constructs an AR environment model EM for a desired target object in the user's working environment, the application 111 may detect the target object within a specific captured image, track changes in the 6 DoF parameters of the detected target object TO and each virtual object anchored to the target object according to a preconfigured method, and display the target object and the virtual object using a shape corresponding to the tracked changes in the 6 DoF parameters.
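The per-frame tracking and augmentation flow above may be summarized by the following sketch, which composes the tracked object's 6 DoF pose with the stored anchoring transform; detect_pose() and render() are hypothetical stand-ins for the detection and display steps, and poses are assumed to be 4x4 matrices as in the earlier anchoring sketch.

    # Per-frame AR object tracking sketch.
    def track_and_augment(frames, env_model, detect_pose, render):
        for frame in frames:
            tracked_pose = detect_pose(frame, env_model["criterion"])  # 4x4 pose or None
            if tracked_pose is None:
                continue  # tracking object not visible in this frame
            virtual_pose = tracked_pose @ env_model["anchor"]  # apply anchoring information
            render(frame, env_model["virtual"], virtual_pose)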
Meanwhile, in the embodiment, the application 111 may share an AR environment library (including a track library) in conjunction with the terminal 100 of a group member.
Specifically, the application 111 may share the AR environment library with at least one group member through the service platform.
Here, a group member according to the embodiment may mean another user who, among the users who have registered an account on the service platform, has the right to share the AR environment library (including a track library).
At this time, depending on the embodiments, the application 111 may set whether to allow sharing of each AR environment model EM within the AR environment library among group members.
In the embodiment, the application 111 may provide a user interface (in what follows, a group sharing setting interface) through which the user may set whether to allow sharing of a predetermined AR environment model EM among group members.
Also, the application 111 may set whether to enable or disable group sharing of a predetermined AR environment model EM according to user input through the group sharing setting interface.
Also, the application 111 may share the AR environment model EM configured for group sharing with at least one group member.
At this time, in the embodiment, the AR environment model EM for which group sharing is allowed may be automatically synchronized and shared within a group in real-time through a group-shared AR environment library on the service platform.
Also, in the embodiment, the group-shared AR environment model EM may be read and downloaded from the group-shared AR environment library based on input from the user (i.e., another user) of the group member's terminal 100.
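As a hedged sketch of the per-model sharing flag described above (the in-memory library structure and field names are assumptions):

    # Enable or disable group sharing for one AR environment model in the library.
    def set_group_sharing(library, model_name, enabled):
        library[model_name]["group_sharing"] = bool(enabled)
        return library

    # Models visible to group members through the group-shared AR environment library.
    def shared_models(library):
        return {name: model for name, model in library.items() if model.get("group_sharing")}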
As described above, the application 111 may implement AR object tracking for a target object desired by the user using a pre-generated AR environment model EM.
Through the operation above, the application 111 may more efficiently and accurately track changes in the 6 DoF parameters of a virtual object augmented based on a target object within predetermined image data.
Accordingly, the application 111 may augment and display the virtual object on the image data at an accurate position and posture with relatively little data processing.
As described above, the method and the system for providing an AR object tracking service based on deep learning according to an embodiment of the present disclosure may easily and quickly perform object tracking in the 3D space through a single image without involving a separate depth camera by tracking an augmented reality object by obtaining 3D information from the single image based on deep learning.
Also, a method and a system for providing an AR object tracking service based on deep learning according to an embodiment of the present disclosure may improve the efficiency, accuracy, and convenience of data processing during the 3D information reconstruction process by reconstructing 3D information within a single image and tracking an augmented reality object based on a predetermined deep learning neural network.
Also, a method and a system for providing an AR object tracking service based on deep learning according to an embodiment of the present disclosure may minimize unnecessary consumption of resources, for example by preventing redundant efforts for tracking a predetermined object, while simultaneously improving the quality of the object tracking service by storing and sharing an AR object tracking model based on the reconstructed 3D information.
Meanwhile, the embodiments of the present disclosure described above may be implemented in the form of program commands which may be executed through various constituting elements of a computer and recorded in a computer-readable recording medium. The computer-readable recording medium may include program commands, data files, and data structures separately or in combination thereof. The program commands recorded in the computer-readable recording medium may be those designed and configured specifically for the present disclosure or may be those commonly available for those skilled in the field of computer software. Examples of a computer-readable recording medium may include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially designed to store and execute program commands such as ROM, RAM, and flash memory. Examples of program commands include not only machine codes such as those generated by a compiler but also high-level language codes which may be executed by a computer through an interpreter and the like. The hardware device may be configured to be operated by one or more software modules to perform the operations of the present disclosure, and vice versa.
Specific implementations of the present disclosure are embodiments, which do not limit the technical scope of the present disclosure in any way. For the clarity of the specification, descriptions of conventional electronic structures, control systems, software, and other functional aspects of the systems may be omitted. Also, connections of lines between constituting elements shown in the figures or connecting members illustrate functional connections and/or physical or circuit connections, which may be replaceable in an actual device or represented by additional, various functional, physical, or circuit connections. Also, if not explicitly stated otherwise, “essential” or “important” elements may not necessarily refer to constituting elements needed for application of the present disclosure.
Also, although detailed descriptions of the present disclosure have been given with reference to preferred embodiments of the present disclosure, it should be understood by those skilled in the corresponding technical field or by those having common knowledge in the corresponding technical field that the present disclosure may be modified and changed in various ways without departing from the technical principles and scope specified in the appended claims. Therefore, the technical scope of the present disclosure is not limited to the specifications provided in the detailed descriptions of this document but has to be defined by the appended claims.
Number             Date        Country    Kind
10-2022-0174721    Dec 2022    KR         national
10-2022-0177280    Dec 2022    KR         national
10-2022-0177282    Dec 2022    KR         national
10-2022-0177285    Dec 2022    KR         national