This application claims priority to and the benefit of Korean Patent Application No. 2023-0146466, filed on Oct. 30, 2023, the disclosure of which is incorporated herein by reference in its entirety.
The present disclosure relates to the metaverse, and more specifically, to a method and apparatus for generating identification information for a virtual space that can be used to search for the virtual space.
“Metaverse” is a portmanteau of “meta,” which connotes virtuality and transcendence, and “universe,” which refers to the world, and the word refers to a digital, transcendent world in which virtuality and reality are fused together. Recently, with the development of 5G technology and virtual technologies such as augmented reality (AR) and virtual reality (VR), the metaverse has emerged as a virtual convergence space in which people engage in leisure activities and economic activities.
With the development of metaverse technology, services for various virtual spaces are being provided, and the number of virtual spaces for which services are provided is expected to increase further in the future. As that number increases, there is a growing demand for a search method that allows a user to find a desired virtual space from among the many virtual spaces available.
The present disclosure is directed to providing a method and apparatus for generating identification information for a virtual space that are capable of improving user accessibility to a virtual space and supporting users in searching for a virtual space.
The technical objectives of the present disclosure are not limited to the above, and may be variously expanded without departing from the technical concept and field of the present disclosure.
According to an aspect of the present disclosure, there is provided a method of generating identification information for a virtual space, which includes: setting a plurality of crawling points for each crawling area in a three-dimensional virtual grid shape for a virtual space; collecting at least one of a street view image and an aerial view image in a preset direction at the crawling point; and generating a spatial descriptor for each of the crawling points from the collected image.
According to an aspect of the present disclosure, there is provided a method of generating identification information for a virtual space, which includes: setting a street view crawling point and an aerial view crawling point for a virtual space; collecting a street view image in a preset direction at the street view crawling point and an aerial view image in a preset direction at the aerial view crawling point; and generating a spatial descriptor for each of the street view crawling point and the aerial view crawling point from the street view image and the aerial view image, respectively.
According to an aspect of the present disclosure, there is provided an apparatus for generating identification information for a virtual space, which includes: a memory; and at least one processor electrically connected to the memory, wherein the processor collects at least one of a street view image and an aerial view image in a preset direction at each of a plurality of crawling points that are set for each crawling area in a three-dimensional virtual grid shape for a virtual space, and generates a spatial descriptor for each of the crawling points from the collected image.
The above and other objects, features and advantages of the present disclosure will become more apparent to those of ordinary skill in the art by describing exemplary embodiments thereof in detail with reference to the accompanying drawings, in which:
While embodiments according to the concept of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the embodiments according to the concept of the present disclosure to the particular forms disclosed; on the contrary, the embodiments according to the concept of the present disclosure are to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure. Like numbers refer to like elements throughout the description of the drawings.
Hereinafter, embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings.
Referring to the accompanying drawing, a virtual space search system according to an embodiment of the present disclosure may include a virtual space search server 110, an identification information generation apparatus 120, an identification information storage 130, and a terminal 140.
The terminal 140 may provide the virtual space search server 110 with an image of a target space, and request a search for the target space. The virtual space search server 110 may search for the target space in a virtual space according to the search request for the target space received from the terminal 140 and provide the found target space to the terminal 140. Here, the virtual space may be a virtual space for which service is provided from various metaverse platforms.
In an embodiment, the virtual space search server 110 may perform a search using identification information for the virtual space. The identification information for the virtual space may be generated by the identification information generation apparatus 120 and stored in the identification information storage 130, and according to an embodiment, the virtual space search server 110 may generate the identification information for the virtual space. The identification information for the virtual space may be generated and assigned for each crawling area in a three-dimensional virtual grid shape and for each crawling point set within the crawling area.
The virtual space search server 110 extracts identification information from the image of the target space and searches for identification information for a virtual space corresponding to the extracted identification information in the identification information storage 130. A space for a crawling point corresponding to the found identification information may be provided to the user as a search result.
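By way of illustration only, this search step can be pictured with the short Python sketch below. The helper function extract_descriptor, the histogram-based descriptor, and the keys used for the descriptor store are assumptions introduced here for clarity and are not the disclosed implementation of the virtual space search server 110.

```python
import numpy as np

def extract_descriptor(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the descriptor generator described later.
    Here it simply returns a normalized intensity histogram."""
    hist, _ = np.histogram(image, bins=64, range=(0, 255))
    return hist / (hist.sum() + 1e-9)

def search_target_space(query_image, descriptor_store):
    """Compare the query descriptor against stored identification
    information and return the best-matching crawling point."""
    query = extract_descriptor(query_image)
    best_point, best_dist = None, float("inf")
    for point_id, descriptor in descriptor_store.items():
        dist = np.linalg.norm(query - descriptor)
        if dist < best_dist:
            best_point, best_dist = point_id, dist
    return best_point, best_dist

# Example: a toy store with two crawling points and a random query image.
store = {
    "area3/aerial_point_7": np.random.rand(64),
    "area0/street_point_2": np.random.rand(64),
}
query = (np.random.rand(128, 256) * 255).astype(np.uint8)
print(search_target_space(query, store))
```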
The method of generating identification information according to the embodiment of the present disclosure may be performed by a computing device including a memory and a processor electrically connected to the memory, and the above-described identification information generation apparatus is an example of such a computing device.
The computing device according to the embodiment of the present disclosure sets crawling points for a virtual space (S210). The computing device may set a plurality of crawling points for each crawling area in a three-dimensional virtual grid shape for a virtual space, and according to an embodiment, may set crawling points without setting separate crawling areas. The crawling point may include an aerial view crawling point for collecting aerial view images and a street view crawling point for collecting street view images.
The computing device collects at least one of a street view image and an aerial view image in a preset direction at the crawling point (S220). A street view image may be collected at the street view crawling point, and an aerial view image may be collected at the aerial view crawling point. The street view image may be an image taken from the ground in the virtual space, for example, an image taken by a virtual mobile robot. The aerial view image may be an image taken from the airspace in the virtual space, for example, an image taken by a virtual drone.
The computing device generates identification information, i.e., a spatial descriptor, for each crawling point from the image collected in operation S220 (S230). The computing device may, as an example, generate the spatial descriptor directly from the collected image, or may convert the collected image and generate the spatial descriptor from the converted image. For example, the computing device may convert the collected image into a depth image and generate a spatial descriptor for each crawling point from the depth image.
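A minimal Python sketch of the overall flow of operations S210 to S230 is given below. All helper functions (set_crawling_points, capture_panorama, to_depth_image, make_descriptor) are hypothetical stubs standing in for the operations described above, not the actual implementation.

```python
import numpy as np

# Hypothetical stubs standing in for operations S210-S230.
def set_crawling_points(virtual_space):
    # S210: street view points near the ground, aerial view points above it.
    return {"street": [(0.0, 0.0, 1.5)], "aerial": [(0.0, 0.0, 50.0)]}

def capture_panorama(point):
    # S220: placeholder 360-degree capture; returns a random RGB panorama.
    return (np.random.rand(256, 1024, 3) * 255).astype(np.uint8)

def to_depth_image(panorama):
    # Placeholder depth conversion (a real system would use a depth estimator).
    return panorama.mean(axis=2)

def make_descriptor(depth_image):
    # S230: a normalized depth histogram as a stand-in spatial descriptor.
    hist, _ = np.histogram(depth_image, bins=64)
    return hist / (hist.sum() + 1e-9)

def generate_identification_information(virtual_space):
    points = set_crawling_points(virtual_space)                # S210
    descriptors = {}
    for kind, point_list in points.items():
        for point in point_list:
            panorama = capture_panorama(point)                 # S220
            depth = to_depth_image(panorama)
            descriptors[(kind, point)] = make_descriptor(depth)  # S230
    return descriptors

print(len(generate_identification_information(virtual_space=None)))
```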
The three-dimensional virtual grids may, as an example, be grids having an octree structure, and the size of the crawling area, i.e., the size of the virtual grid, may be set proportional to the altitude of the crawling area. In other words, the size of a crawling area that is relatively close to the ground may be set to be smaller than the size of a crawling area at a high altitude relatively far from the ground.
The lowest crawling area 310 among the crawling areas is an area that includes the ground of the virtual space. The street view crawling points may be set in the lowest crawling area 310, and the aerial view crawling points may be set for all of the crawling areas.
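The following sketch illustrates one possible way to model such altitude-dependent crawling areas, assuming a simple rule in which each layer's edge length grows by a fixed factor with altitude and only the ground layer holds street view crawling points; the concrete octree construction is not specified here and the numeric values are arbitrary.

```python
from dataclasses import dataclass, field

@dataclass
class CrawlingArea:
    """One cell of the three-dimensional virtual grid (hypothetical model)."""
    min_altitude: float
    max_altitude: float
    edge_length: float
    holds_street_points: bool = False
    aerial_points: list = field(default_factory=list)

def build_crawling_areas(num_layers=4, base_edge=10.0, growth=2.0):
    """Stack grid layers whose cell size grows with altitude.

    Assumption: each layer's edge length is `growth` times the one below,
    mirroring an octree-like coarsening away from the ground.
    """
    areas, altitude = [], 0.0
    for layer in range(num_layers):
        edge = base_edge * (growth ** layer)
        areas.append(CrawlingArea(
            min_altitude=altitude,
            max_altitude=altitude + edge,
            edge_length=edge,
            holds_street_points=(layer == 0),  # only the ground layer
        ))
        altitude += edge
    return areas

for area in build_crawling_areas():
    print(area.edge_length, area.holds_street_points)
```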
The aerial view crawling points 410 may be set at one or more preset altitudes in the crawling area, and may be set to be spaced a preset interval from each other as shown in the accompanying drawings.
In addition, the number of the aerial view crawling points 410 may be set to be proportional to the size of the crawling area and inversely proportional to the altitude of the aerial view crawling points 410. In other words, the higher the set position of the aerial view crawling point, the fewer aerial view crawling points there may be.
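As a purely illustrative sketch of this relationship, the code below places aerial view crawling points on a horizontal plane, assuming an interval that is scaled with altitude so that a larger crawling area yields more points and a higher altitude yields fewer; the scaling constants are assumptions, not disclosed values.

```python
import numpy as np

def aerial_crawling_points(area_edge, altitude, base_interval=5.0):
    """Place aerial view crawling points on a horizontal plane.

    Assumption: the spacing interval grows with altitude, so higher planes
    hold fewer points, while a larger area edge yields more points.
    """
    interval = base_interval * max(altitude / 10.0, 1.0)
    coords = np.arange(0.0, area_edge, interval)
    return [(x, y, altitude) for x in coords for y in coords]

low = aerial_crawling_points(area_edge=40.0, altitude=10.0)
high = aerial_crawling_points(area_edge=40.0, altitude=80.0)
print(len(low), len(high))  # the higher plane gets fewer points
```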
Street view crawling points 510 may be set on a virtual road and at a virtual intersection that are included in the lowest crawling area. The street view crawling points set on the virtual road may be set to be spaced a preset interval from each other.
The interval of the street view crawling points set on the virtual road may be variously determined depending on the embodiment, and may be adjusted according to a street environment. The street environment may be, as an example, the density of virtual buildings around the virtual road. For example, when the density of virtual buildings around the virtual road is high, the interval of the street view crawling points set on the virtual road may decrease, and when the density of virtual buildings around the virtual road is low, the interval of the street view crawling points set on the virtual road may increase.
The computing device may detect a road and an intersection from an aerial view image of the ground, i.e., the street, and may set street view crawling points on the detected road and intersection.
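The sketch below illustrates one possible density-adjusted spacing rule for street view crawling points along a detected road. The road and intersection detection itself is abstracted away, and the notion of "building density" used here is a hypothetical scalar measure introduced only for illustration.

```python
import numpy as np

def street_view_crawling_points(road_length, building_density,
                                base_interval=10.0):
    """Place street view crawling points along one detected virtual road.

    Assumption: the interval shrinks as the surrounding building density
    (a hypothetical scalar, e.g. buildings per unit road length) grows.
    """
    interval = base_interval / (1.0 + building_density)
    return list(np.arange(0.0, road_length, interval))

sparse = street_view_crawling_points(road_length=100.0, building_density=0.2)
dense = street_view_crawling_points(road_length=100.0, building_density=3.0)
print(len(sparse), len(dense))  # denser surroundings -> more crawling points
```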
According to an embodiment of the present disclosure, spatial descriptors are generated not only from images of the ground but also from images of the airspace, thereby enabling the generation of precise identification information for a virtual space.
As described above, the computing device may use a virtual drone to collect aerial view images and a virtual mobile robot to collect street view images. As shown in the accompanying drawings, the virtual drone captures images covering 360 degrees in all directions at an aerial view crawling point and connects the captured images to generate an aerial view image in the form of a panoramic image.
The computing device rotates the virtual drone according to a camera field of view (FOV) of the virtual drone to capture images covering 360 degrees in all directions. The number of captured images is determined according to the camera FOV of the virtual drone.
The virtual mobile robot also captures images covering 360 degrees in all directions at a street view crawling point, and connects the captured images to generate a street view image in the form of a panoramic image.
The computing device rotates the virtual mobile robot according to a camera FOV of the virtual mobile robot to capture the images covering 360 degrees in all directions. The number of the captured images is determined according to the camera FOV of the virtual mobile robot.
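This capture procedure can be pictured as follows: the number of views is 360 degrees divided by the camera FOV, and the views are concatenated into a panorama. The render_view function below is a hypothetical stand-in for rendering inside the virtual space, and no blending or stitching refinement is modeled.

```python
import numpy as np

def render_view(yaw_degrees, height=256, width=256):
    """Hypothetical stand-in for rendering one camera view in the virtual
    space; the yaw is ignored and a random RGB image is returned."""
    return (np.random.rand(height, width, 3) * 255).astype(np.uint8)

def capture_panorama(camera_fov_degrees):
    """Rotate the virtual camera in FOV-sized steps and concatenate the
    views into a simple 360-degree panorama (no blending)."""
    num_images = int(round(360.0 / camera_fov_degrees))
    views = [render_view(i * camera_fov_degrees) for i in range(num_images)]
    return np.concatenate(views, axis=1)

panorama = capture_panorama(camera_fov_degrees=90.0)  # 4 views
print(panorama.shape)  # (256, 1024, 3)
```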
According to an embodiment of the present disclosure, by generating spatial descriptors from a 360-degree image in all directions, it is possible to generate precise identification information for the virtual space.
The computing device converts each of the panoramic aerial view image and the panoramic street view image into a depth image, and generates a spatial descriptor from the depth image.
The aerial view image and the street view image are two-dimensional (2D) images; the computing device may generate a depth image using any of various algorithms that estimate depth values from a 2D image, and may generate spatial descriptors using any of various image descriptor generation tools.
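Purely as an example of this step, the sketch below converts a panoramic image into a placeholder depth image and builds a descriptor by histogramming the depth values of horizontal sectors. Both the depth conversion and the sector-histogram descriptor are assumptions chosen for illustration; a real system would use a learned depth estimator and an established descriptor generation tool.

```python
import numpy as np

def panorama_to_depth(panorama):
    """Placeholder monocular depth estimation: darker pixels are treated as
    farther away. A real system would use a learned depth estimator."""
    gray = panorama.mean(axis=2)
    return 1.0 - gray / 255.0

def spatial_descriptor(depth, num_sectors=8, num_bins=16):
    """Histogram the depth values of each horizontal sector of the panorama
    and concatenate them into one fixed-length vector."""
    sectors = np.array_split(depth, num_sectors, axis=1)
    parts = []
    for sector in sectors:
        hist, _ = np.histogram(sector, bins=num_bins, range=(0.0, 1.0))
        parts.append(hist / (hist.sum() + 1e-9))
    return np.concatenate(parts)

panorama = (np.random.rand(256, 1024, 3) * 255).astype(np.uint8)
descriptor = spatial_descriptor(panorama_to_depth(panorama))
print(descriptor.shape)  # (128,) = 8 sectors x 16 bins
```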
According to an embodiment of the present disclosure, by generating identification information based on features extracted from the images rather than on the images themselves, the storage space required for the identification information may be reduced.
The technical content described above may be implemented in the form of program instructions executable by various computer means and may be recorded on computer-readable media. The computer-readable media may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the computer-readable media may be specially designed and constructed for the purposes of the present disclosure or may be well known and available to those skilled in the art of computer software. The computer-readable storage media include hardware devices configured to store and execute program instructions. For example, the computer-readable storage media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as a CD-ROM and a digital video disk (DVD); magneto-optical media such as floptical disks; and a ROM, a RAM, a flash memory, and the like. The program instructions include not only machine language code produced by a compiler but also high-level language code that can be executed by a computer using an interpreter or the like. A hardware device may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.
As is apparent from the above, according to an embodiment of the present disclosure, spatial descriptors are generated not only from images of the ground but also from images of the airspace, and are also generated from 360-degree images in all directions, thereby enabling generation of precise identification information for a virtual space.
In addition, according to an embodiment of the present disclosure, identification information is generated based on features extracted from the images rather than on the images themselves, thereby reducing the storage space required for the identification information.
While the present disclosure has been shown and described with respect to particulars such as specific components, embodiments, and drawings, these embodiments are provided to aid in the understanding of the present disclosure rather than to limit it, and those skilled in the art should appreciate that various changes and modifications are possible without departing from the spirit and scope of the disclosure. Therefore, the spirit of the present disclosure should not be limited to the embodiments described, and the scope of the present disclosure covers not only the following claims but also all modifications and equivalents derived from the claims.