This application claims the benefit under 35 USC § 119 (a) of Korean Patent Application No. 10-2023-0156519, filed on Nov. 13, 2023, in the Korean Intellectual Property Office, the entire disclosure of which is incorporated herein by reference for all purposes.
The following description relates to a method and apparatus with three-dimensional object perception.
Technical automation of a perception process has been implemented through a neural network model executed, for example, by a processor configured with a specialized computing structure, which provides intuitive mapping for computation between an input pattern and an output pattern after considerable training. Such a trained capability of generating the mapping may be referred to as a learning ability of the neural network model. Furthermore, a neural network model specialized through such training has, for example, a generalization ability to provide a relatively accurate output with respect to an untrained input pattern.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In one general aspect, a method of detecting a three-dimensional (3D) object includes: receiving an input image with respect to a 3D space, an input point cloud with respect to the 3D space, and an input language with respect to a target object in the 3D space; using an encoding model to generate candidate image features of partial areas of the input image, a point cloud feature of the input point cloud, and a linguistic feature of the input language; selecting a target image feature corresponding to the linguistic feature from among the candidate image features based on similarity scores of similarities between the candidate image features and the linguistic feature; generating a decoding output by executing a multi-modal decoding model based on the target image feature and the point cloud feature; and detecting a 3D bounding box corresponding to the target object by executing an object detection model based on the decoding output.
The generating of the candidate image features, the point cloud feature, and the linguistic feature may include: generating the linguistic feature corresponding to the input language by a language encoding model performing inference on the input language; generating candidate image features corresponding to partial areas of the input image by executing an image encoding model and a region proposal model based on the input image; and generating a point cloud feature corresponding to the input point cloud by executing a point cloud encoding model based on the input point cloud.
The method may further include generating extended expressions each including (i) a position field indicating a geometric characteristic of the target object based on the input language and (ii) a class field indicating a class of the target object, and wherein the linguistic feature is generated based on the extended expressions.
Objects of a same class and with different geometric characteristics may be distinguished from each other based on the position fields of the extended expressions.
The position field may be learned through training.
The generating of the decoding output may include: generating image tokens by segmenting the target image feature; generating point cloud tokens by segmenting the point cloud feature; generating first position information indicating relative positions of the respective image tokens; generating second position information indicating relative positions of the respective point cloud tokens; and executing the multi-modal decoding model with key data and value data based on the image tokens, the point cloud tokens, the first position information, and the second position information.
The generating of the decoding output may further include executing the multi-modal decoding model with query data based on detection guide information indicating detection position candidates with a possibility of detecting the target object in the 3D space.
The detection position candidates may be distributed non-uniformly.
The multi-modal decoding model may generate the decoding output by extracting a correlation from the target image feature, the point cloud feature, and the detection guide information.
A non-transitory computer-readable storage medium may store instructions that, when executed by a processor, cause the processor to perform any of the methods.
In another general aspect, an electronic device includes: one or more processors; and a memory storing instructions configured to cause the one or more processors to: receive an input image with respect to a three-dimensional (3D) space, an input point cloud with respect to the 3D space, and an input language with respect to a target object in the 3D space; use an encoding model to generate candidate image features of partial areas of the input image, a point cloud feature of the input point cloud, and a linguistic feature of the input language; select a target image feature corresponding to the linguistic feature from the candidate image features based on similarity scores of similarities between the candidate image features and the linguistic feature; generate a decoding output by executing a multi-modal decoding model based on the target image feature and the point cloud feature; and detect a 3D bounding box corresponding to the target object by executing an object detection model based on the decoding output.
The instructions may be further configured to cause the one or more processors to: generate a linguistic feature corresponding to the input language by a language encoding model performing inference on the input language, generate candidate image features corresponding to partial areas of the input image by executing an image encoding model and a region proposal model based on the input image, and generate a point cloud feature corresponding to the input point cloud by executing a point cloud encoding model based on the input point cloud.
The instructions may be further configured to cause the one or more processors to generate extended expressions each including (i) a position field indicating a geometric characteristic of the target object based on the input language and (ii) a class field indicating a class of the target object, and wherein the linguistic feature is generated based on the extended expressions.
Objects of a same class with different geometric characteristics may be distinguished from each other based on the position field.
The position field may be learned through training.
The instructions may be further configured to cause the one or more processors to: generate image tokens by segmenting the target image feature, generate point cloud tokens by segmenting the point cloud feature, generate first position information indicating relative positions of the respective image tokens, generate second position information indicating relative positions of the respective point cloud tokens, and execute the multi-modal decoding model with key data and value data based on the image tokens, the point cloud tokens, the first position information, and the second position information.
The instructions may be further configured to cause the one or more processors to execute the multi-modal decoding model with query data based on detection guide information indicating detection position candidates with a possibility of detecting the target object in the 3D space.
The detection position candidates may be non-uniformly distributed.
The multi-modal decoding model may be configured to generate the decoding output by extracting a correlation from the target image feature, the point cloud feature, and the detection guide information.
In another general aspect, a vehicle includes: a camera configured to generate an input image with respect to a three-dimensional (3D) space; a light detection and ranging (lidar) sensor configured to generate an input point cloud with respect to the 3D space; one or more processors configured to: receive the input image with respect to the 3D space, the input point cloud with respect to the 3D space, and an input language with respect to a target object in the 3D space; use an encoding model to generate candidate image features of partial areas of the input image, a point cloud feature of the input point cloud, and a linguistic feature of the input language; select a target image feature corresponding to the linguistic feature from the candidate image features based on similarity scores of similarities between the candidate image features and the linguistic feature; generate a decoding output by executing a multi-modal decoding model based on the target image feature and the point cloud feature; and detect a 3D bounding box corresponding to the target object by executing an object detection model based on the decoding output; and a control system configured to control the vehicle based on the 3D bounding box.
Other features and aspects will be apparent from the following detailed description, the drawings, and the claims.
Throughout the drawings and the detailed description, unless otherwise described or provided, the same or like drawing reference numerals will be understood to refer to the same or like elements, features, and structures. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.
The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known after an understanding of the disclosure of this application may be omitted for increased clarity and conciseness.
The features described herein may be embodied in different forms and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application.
The terminology used herein is for describing various examples only and is not to be used to limit the disclosure. The articles “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items. As non-limiting examples, terms “comprise” or “comprises,” “include” or “includes,” and “have” or “has” specify the presence of stated features, numbers, operations, members, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, members, elements, and/or combinations thereof.
Throughout the specification, when a component or element is described as being “connected to,” “coupled to,” or “joined to” another component or element, it may be directly “connected to,” “coupled to,” or “joined to” the other component or element, or there may reasonably be one or more other components or elements intervening therebetween. When a component or element is described as being “directly connected to,” “directly coupled to,” or “directly joined to” another component or element, there can be no other elements intervening therebetween. Likewise, expressions, for example, “between” and “immediately between” and “adjacent to” and “immediately adjacent to” may also be construed as described in the foregoing.
Although terms such as “first,” “second,” and “third”, or A, B, (a), (b), and the like may be used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Each of these terminologies is not used to define an essence, order, or sequence of corresponding members, components, regions, layers, or sections, for example, but used merely to distinguish the corresponding members, components, regions, layers, or sections from other members, components, regions, layers, or sections. Thus, a first member, component, region, layer, or section referred to in the examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples.
Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains and based on an understanding of the disclosure of the present application. Terms, such as those defined in commonly used dictionaries, are to be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the disclosure of the present application and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein. The use of the term “may” herein with respect to an example or embodiment, e.g., as to what an example or embodiment may include or implement, means that at least one example or embodiment exists where such a feature is included or implemented, while all examples are not limited thereto.
The 3D object perception model 100 may receive an input image 101 with respect to a 3D space, an input language 102 with respect to a target object in the 3D space, and an input point cloud 103 with respect to the 3D space. The input language 102 may be words or phrases in a human language. The target object may be an object that is a target of perception. The 3D object perception model 100 may detect a 3D bounding box 141 corresponding to the target object using the vision-language model 110, the point cloud encoding model 120, the multi-modal decoding model 130, and the object detection model 140.
The 3D space may represent an actual/physical space, e.g., one near a vehicle. The input image 101 may be a result of a camera capturing at least a portion of the 3D space. For example, the input image 101 may be a color image, such as an RGB image, an infrared image, a multi-band image, or the like. The camera may include sub-cameras with different views (positions and directions). The input image 101 may include sub-images respectively corresponding to the different views captured by the sub-cameras. For example, sub-images of views in six directions may be generated through sub-cameras of views in six directions. The input point cloud 103 may be a result of sensing at least a portion of the 3D space by a light detection and ranging (lidar) sensor (or a RADAR, or any suitable sensor that may produce a 3D point cloud). A scene shown in the input image 101 and a scene shown in the input point cloud 103 may at least partially overlap. For example, the camera (or the sub-cameras) and the lidar sensor may be installed in a vehicle and may capture the surroundings of the vehicle at a 360-degree angle (full circumferential coverage is not required). The target object may be included in an area where sensing regions of the camera(s) and lidar overlap. A virtual space corresponding to the 3D space may be defined by the input image 101 and the input point cloud 103. The 3D bounding box 141 may be formed in the virtual space.
The vision-language model 110 may learn a relationship between vision information (e.g., image information) and language information (e.g., text information or speech information) and may solve a problem based on the relationship, for example, discerning a 3D bounding box of the target object. The vision-language model 110 may learn various objects through the vision information and the language information and may be used for open vocabulary object detection (VOD) according to a learning characteristic of the vision-language model 110. "Open VOD" generally refers to an ability to detect objects that have been subjected to vocabulary learning (training) as well as objects that have not been subjected to vocabulary learning. To elaborate, open VOD models may transfer the multi-modal (e.g., image mode and point cloud mode) capabilities of pre-trained vision-language models to object detection. Open VOD generally expands traditional object detection to open categories and removes the requirement of burdensome annotations, which often must be produced manually.
The 3D object perception model 100 may generate candidate image features of partial areas (regions) of the input image 101, a linguistic feature of the input language 102, and a point cloud feature 121 of the input point cloud 103 using respectively corresponding encoding models. The encoding models may include an image encoding model of the vision-language model 110, a language encoding model of the vision-language model 110, and the point cloud encoding model 120. The image encoding model, the language encoding model, and the point cloud encoding model 120 may be respective neural network models that are able to generate a feature related to the corresponding input data. For example, an encoding model may be a convolutional neural network (CNN) or a transformer encoder.
The vision-language model 110 may include a language encoding model (e.g., language encoding model 220), an image encoding model (e.g., image encoding model 240), and a region proposal model (e.g., region proposal model 250). The image encoding model may be executed based on the input image 101 and may generate (infer) an image feature of the input image 101, and the region proposal model may determine candidate image features corresponding to partial areas of the input image from the image feature. The language encoding model may be executed based on (applied to) the input language 102 and may generate/infer a linguistic feature corresponding to the input language 102. The vision-language model 110 may determine similarity scores of similarity between candidate image features and the linguistic feature and may select a target image feature 111 corresponding to the linguistic feature from the candidate image features, with the selecting based on the similarity scores. The target image feature 111 may be referred to as an attended image feature because the target image feature 111 is selected through a linguistic feature. The point cloud encoding model 120 may be executed based on (applied to) the input point cloud 103 and may generate/infer the point cloud feature 121 corresponding to the input point cloud 103.
The multi-modal decoding model 130 may be executed based on the target image feature 111 (a first mode) and the point cloud feature 121 (a second mode) and may generate a decoding output. The multi-modal decoding model 130 may analyze a correlation between the target image feature 111 and the point cloud feature 121, which have different modalities. The multi-modal decoding model 130 may determine a shape of the target object from the target image feature 111 corresponding to two-dimensional (2D) image information, may identify points corresponding to the shape of the target object from the point cloud feature 121, and may select a detection position corresponding to a position of the target object from among detection position candidates of detection guide information 104, the selection being based on positions of the points.
The multi-modal decoding model 130 may be used to detect various objects based on open VOD of the vision-language model 110. A typical prior object perception model may determine and learn a class of a target object in advance (e.g., by training), and addition of a class not previously learned may require full re-learning of the model. That is, typical object perception models require full retraining when a new class is to be learned. The open VOD of the vision-language model 110 may generate the target image features 111 of various objects, including objects of a new class, without requiring a new learning process (e.g., training). The multi-modal decoding model 130 may identify shapes of various objects, including an object of a new class, with the target image feature 111 and may generate a decoding output to determine the 3D bounding boxes 141 of the objects.
The multi-modal decoding model 130 may be a transformer decoder. The multi-modal decoding model 130 may perform decoding by extracting a correlation from query data, key data, and value data. The key data and the value data may be determined based on the target image feature 111 and the point cloud feature 121, and the query data may be determined based on the detection guide information 104. The detection guide information 104 may indicate detection position candidates having a possibility of detecting a target object in the 3D space.
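For instance, a minimal Python (PyTorch) sketch of such a transformer-decoder arrangement is shown below; the tensor shapes, the number of layers, and the module names are assumptions made only for illustration and do not reflect a particular implementation of the examples described herein.

```python
import torch
import torch.nn as nn

d_model, n_heads, n_queries = 256, 8, 100
decoder_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)

# Key/value source: fused image and point cloud tokens (one possible
# construction is sketched further below with the token position information).
memory = torch.randn(1, 500, d_model)
# Query data derived from the detection guide information.
queries = torch.randn(1, n_queries, d_model)

decoding_output = decoder(tgt=queries, memory=memory)   # shape: (1, 100, 256)
```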
The object detection model 140 may be executed based on (applied to) the decoding output to detect the 3D bounding box 141 corresponding to the target object. The object detection model 140 may be a neural network model, such as a multi-layer perceptron (MLP).
The language encoding model 220 may generate a linguistic feature 261 based on an input language 210. For example, the input language 210 may include text information (text or representation thereof) and/or utterance information. Extended expressions 211 may be generated based on the input language 210. The extended expressions 211 may each include (i) a position field 2111 indicating a geometric characteristic of the corresponding target object and (ii) a class field 2112 indicating a class of the corresponding target object. The class field 2112 may have a different value for a different class (e.g., a vehicle, a person, a traffic signal, and a lane). That is, in some instances there may be multiple extended expressions 211 with a same position field 2111 but with different class fields 2112.
The geometric characteristic of the position field 2111 may include a geometric position, a geometric shape, and/or the like. The position field 2111 may have a different value for a different position (e.g., a short range, a long range, a left side of an image, a right side of an image, an upper side of an image, a lower side of an image, an upper left side of an image, a lower left side of an image, an upper right side of an image, a lower right side of an image). Objects having the same class with different geometric characteristics may be distinguished based on the position field 2111. For example, an object of class “vehicle” in the upper side of an image may be distinguished from an object of class “vehicle” in the lower side of the image based on the position field 2111. In another example, if an “occlusion state” is included as a part/sub-field of the position field 2111, a vehicle whose position field 2111 indicates that the vehicle is in a complete state, i.e., “without occlusion”, may be distinguished from a vehicle “with occlusion” in its position field 2111, even if the two are at the same position.
The position field 2111 may have a learnable characteristic (may be trainable). Although a detailed description is provided below, briefly, a value of the position field 2111 may be determined in a training process of the multi-modal decoding model. The input language 210 may correspond to a prompt (a supplemental piece of input information that may guide an inference on a primary input). Through contextual optimization in the training process, a value of the position field 2111 may be optimized to distinguish and detect various geometric characteristics of each object (such as occlusion/non-occlusion). Objects of the same class but with various geometric characteristics may appear, and as the multi-modal decoding model learns to distinguish such objects, its detection accuracy for objects of the corresponding class may be improved.
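As a non-limiting illustration, one way to realize such an extended expression may be sketched in Python (PyTorch) as follows, where the token counts, the embedding dimension, and the class embedding are hypothetical values chosen only for illustration.

```python
import torch
import torch.nn as nn

class ExtendedExpression(nn.Module):
    def __init__(self, class_embedding: torch.Tensor, n_pos_tokens: int = 4, dim: int = 512):
        super().__init__()
        # Position field: learnable token embeddings, tuned during training of
        # the multi-modal decoding model (contextual optimization).
        self.position_field = nn.Parameter(torch.randn(n_pos_tokens, dim) * 0.02)
        # Class field: a frozen embedding of the class name (e.g., "vehicle").
        self.register_buffer("class_field", class_embedding)

    def forward(self) -> torch.Tensor:
        # The extended expression is the position field followed by the class
        # field; it is fed to the language encoding model as a prompt.
        return torch.cat([self.position_field, self.class_field], dim=0)

# Hypothetical class embedding with two tokens of dimension 512.
expression = ExtendedExpression(class_embedding=torch.randn(2, 512))
prompt_tokens = expression()   # shape: (4 + 2, 512)
```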
The image encoding model 240 may generate an image feature corresponding to an input image 230. The region proposal model 250 may determine candidate image features 262 corresponding to partial areas (regions) of the input image 230 based on the image feature. The partial areas may be areas where an object is highly likely to exist.
A score table 260 may include similarity scores 263 between the linguistic feature 261 and the candidate image features 262. For example, the similarity scores 263 may be determined based on the Euclidean distance between a linguistic feature and a candidate image feature.
Sub-features SF_1, SF_2, and SF_3 of the linguistic feature 261 may correspond to the extended expressions 211, respectively. For example, the first sub-feature SF_1 may correspond to a first extended expression of the extended expressions 211, the second sub-feature SF_2 may correspond to a second extended expression of the extended expressions 211, and the third sub-feature SF_3 may correspond to a third extended expression of the extended expressions 211.
When a target object is determined/detected, the extended expressions 211 having the target object as the class field 2112 may be configured. For example, when a vehicle is determined to be a target object, the extended expressions 211 at various positions having the vehicle as a class may be configured. The extended expressions 211 of each class may be determined during the training process of the multi-modal decoding model. When the input language 210 designates multiple classes, the extended expressions 211 and the linguistic feature 261 may be configured for each class.
From among the candidate image features 262, at least some of the candidate image features having the highest similarity scores to the sub-features SF_1, SF_2, and SF_3 of the linguistic feature 261 may be selected as target image features. For example, when the third candidate image feature CIF_3 has the highest similarity score (SS_31) to the first sub-feature SF_1, the third candidate image feature CIF_3 may be selected as the target image feature for the first sub-feature SF_1. Similarly, when the fifth candidate image feature CIF_5 has the highest similarity score to the second sub-feature SF_2, the fifth candidate image feature CIF_5 may be selected as the target image feature for the second sub-feature SF_2. When the first candidate image feature CIF_1 has the highest similarity score to the third sub-feature SF_3, the first candidate image feature CIF_1 may be selected as the target image feature for the third sub-feature SF_3. As described above, the target image feature of highest similarity for each sub-feature may be selected, and a 3D bounding box corresponding to each target image feature may be determined.
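A minimal sketch of such a score table and selection, assuming the Euclidean-distance scoring mentioned above and arbitrary feature dimensions, may be expressed in Python (PyTorch) as follows.

```python
import torch

sub_features = torch.randn(3, 256)        # SF_1..SF_3, one per extended expression
candidate_features = torch.randn(5, 256)  # CIF_1..CIF_5, one per proposed image region

# Score table: Euclidean distances, negated so that a larger score means a
# higher similarity between a sub-feature and a candidate image feature.
score_table = -torch.cdist(sub_features, candidate_features)   # shape: (3, 5)

# For each sub-feature, select the candidate with the highest similarity score.
best_indices = score_table.argmax(dim=1)                  # one index per sub-feature
target_image_features = candidate_features[best_indices]  # shape: (3, 256)
```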
Key data and value data (K, V) may be determined based on the image tokens 402, the point cloud tokens 404, the first position information of the position information 405, and the second position information of the position information 405. The key data may be the same as the value data. For example, a matching pair according to the relative positions of the image tokens 402 and the point cloud tokens 404 may be combined (e.g., by concatenation), and the key data and the value data may be sequentially configured by adding the first position information and/or the second position information to the combined matching pair.
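For illustration only, and assuming hypothetical token counts and feature dimensions, the formation of such key data and value data may be sketched as follows.

```python
import torch

image_tokens = torch.randn(1, 200, 256)   # image tokens 402 (segmented target image feature)
point_tokens = torch.randn(1, 300, 256)   # point cloud tokens 404 (segmented point cloud feature)
pos_image = torch.randn(1, 200, 256)      # first position information (relative token positions)
pos_points = torch.randn(1, 300, 256)     # second position information (relative token positions)

# Combine the token sets (e.g., by concatenation along the sequence axis) and
# add the position information to form the key data; the value data may be
# the same as the key data.
tokens = torch.cat([image_tokens, point_tokens], dim=1)   # (1, 500, 256)
positions = torch.cat([pos_image, pos_points], dim=1)     # (1, 500, 256)
key_data = tokens + positions
value_data = key_data
```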
Detection guide information 406 may indicate detection position candidates having a possibility of detecting a target object in the 3D space. The detection guide information 406 may configure the query data (Q). Although a detailed description is provided below, briefly, the detection position candidates may be optimized by a detection guide model in a training process of a multi-modal decoding model 410, and the detection position candidates may indicate non-uniform positions as a result of the optimization.
The multi-modal decoding model 410 may correspond to a transformer decoder. The multi-modal decoding model 410 may be executed based on (i.e., applied to) the query data, the key data, and the value data, and may generate/infer a decoding output. The multi-modal decoding model 410 may generate a decoding output by extracting a correlation from the target image feature 401, the point cloud feature 403, and the detection guide information 406 based on the query data, the key data, and the value data. The object detection model 420 may be executed based on (applied to) the decoding output and may determine a 3D bounding box 421. The 3D bounding box 421 may correspond to one of the detection position candidates.
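As a non-limiting example, an MLP-based detection head of the kind that may serve as the object detection model 420 may be sketched as follows; the box parameterization (center, size, yaw) and the layer sizes are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class BoxHead(nn.Module):
    """Maps each query of the decoding output to 3D bounding box parameters."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, 7),   # assumed layout: (x, y, z, width, length, height, yaw)
        )

    def forward(self, decoding_output: torch.Tensor) -> torch.Tensor:
        # decoding_output: (batch, n_queries, dim) -> (batch, n_queries, 7)
        return self.mlp(decoding_output)

boxes = BoxHead()(torch.randn(1, 100, 256))   # one candidate box per query
```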
The language encoding model 520 may generate linguistic features 561 based on input languages 510 (e.g., words/phrases in the form of text data; as used herein “word” includes a short phrase). The input languages 510 may specify/name respective classes. The input languages 510 may not include position information like the position field 2111 of
The image encoding model 540 may generate an image feature corresponding to the input image 530. The region proposal model 550 may determine candidate image features 562 corresponding to partial areas (regions, possibly non-overlapping) of the input image 530 based on the image feature. The partial areas may be areas where an object is highly likely to exist. An area position loss 571 may represent a difference between ground truth (GT) object areas and object areas proposed by the region proposal model 550. The region proposal model 550 may gain an ability to propose areas where an object is highly likely to exist according to training based on the area position loss 571.
The linguistic features 561 may respectively correspond to the input languages 510. For example, a first linguistic feature LF_1 may correspond to a first input language of the input languages 510, a second linguistic feature LF_2 may correspond to a second input language of the input languages 510, and a third linguistic feature LF_3 may correspond to a third input language of the input languages 510.
The candidate image features 562 may be arranged to match the linguistic features. For example, an image feature corresponding to a class of the first linguistic feature may be determined to be a first candidate image feature CIF_1, an image feature corresponding to a class of the second linguistic feature may be determined to be a second candidate image feature CIF_2, and an image feature corresponding to a class of the third linguistic feature may be determined to be a third candidate image feature CIF_3.
A score table 560 may include similarity scores 563 of similarities between the linguistic features 561 and the candidate image features 562. For example, the similarity scores 563 may be determined based on a Euclidean distance. Based on an alignment loss 572, the similarity scores 563 may be trained such that diagonal elements have large values and off-diagonal elements have small values. For example, in the score table 560, SS_11, SS_22, and SS_33 may correspond to diagonal elements and SS_12, SS_13, SS_21, SS_23, SS_31, and SS_32 may correspond to off-diagonal elements. Training according to the alignment loss 572 may increase the similarity score values of the diagonal elements and may decrease the similarity score values of the off-diagonal elements. As a result of training, the score table 560 may be, or may be close to, a diagonal matrix.
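As one non-limiting illustration, an alignment loss of this kind may be sketched as a contrastive (e.g., CLIP-style) objective in Python (PyTorch) as follows; the use of cosine similarity with a temperature, rather than the Euclidean-distance scores mentioned above, is an assumption made only for illustration.

```python
import torch
import torch.nn.functional as F

def alignment_loss(linguistic_feats, candidate_image_feats, temperature=0.07):
    # Row i of each input is assumed to describe the same object, so training
    # should drive the score table toward a (near-)diagonal matrix.
    lang = F.normalize(linguistic_feats, dim=-1)
    img = F.normalize(candidate_image_feats, dim=-1)
    score_table = lang @ img.t() / temperature          # (N, N) similarity scores
    targets = torch.arange(score_table.size(0))
    # Symmetric cross-entropy: raises diagonal scores, lowers off-diagonal scores.
    return 0.5 * (F.cross_entropy(score_table, targets) +
                  F.cross_entropy(score_table.t(), targets))

# Stand-in features; in training these would come from the two encoders.
loss = alignment_loss(torch.randn(3, 256, requires_grad=True),
                      torch.randn(3, 256, requires_grad=True))
loss.backward()
```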
Based on the training that is based on the alignment loss 572, the language encoding model 520 and the image encoding model 540 may be trained to generate the linguistic features 561 and the candidate image features 562 such that pairs of linguistic features and candidate image features associated with the same object (e.g., a pair of LF_1 and CIF_1, a pair of LF_2 and CIF_2, and a pair of LF_3 and CIF_3) have similar feature values. With such training, the vision-language model 500 may gain an ability to solve the open VOD problem.
The vision-language model 610 may generate/infer a target image feature 611 based on the input image 601 and the input language 602. The point cloud encoding model 620 may generate/infer a point cloud feature 621 based on the input point cloud 603. The detection guide model 640 may generate/infer detection guide information from the guide coordinate information 604. The multi-modal decoding model 630 may generate/infer a decoding output based on receiving, as input, the target image feature 611, the point cloud feature 621, and the detection guide information. The object detection model 650 may determine the 3D bounding box 651 based on the decoding output.
The 3D object perception model 600 may be trained based on a box position loss 661 of the 3D bounding box 651. The box position loss 661 may correspond to a difference between the 3D bounding box 651 and a GT bounding box. In this case, when the vision-language model 610 (e.g., a language encoding model, an image encoding model, and a region proposal model), a class field 6022, and the point cloud encoding model 620 are frozen, a position field 6021, the multi-modal decoding model 630, the detection guide model 640, and the object detection model 650 may be trained. For example, the vision-language model 610 may be installed in the 3D object perception model 600 in a state in which the vision-language model 610 is pre-trained based on the manner described with reference to
The multi-modal decoding model 630 may be used to detect various objects based on open VOD of the vision-language model 610. A typical previous object perception model may determine and learn a class of a target object in advance, and the addition of a new class may require fully retraining the model (including training for the classes already learned). The open VOD of the vision-language model 610 may generate the target image feature 611 of various objects, including objects of a new class, without such a new learning process. The multi-modal decoding model 630 may identify shapes of various objects, including an object of a new class, with the target image feature 611 and may generate a decoding output to determine the 3D bounding box 651 of the objects.
The input language 602 may be extended to expressions including respective position fields 6021 and class fields 6022. The position field 6021 may have a learnable characteristic and the class field 6022 may be frozen (not learned). For example, the position field 6021 may include learnable parameters having a predetermined size and the parameters may be trained to decrease the box position loss 661 during the training process of the multi-modal decoding model 630.
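For illustration, such selective freezing and training may be sketched in Python (PyTorch) as follows, using placeholder modules that merely stand in for the models described above.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the models named above (illustration only).
vision_language_model = nn.Linear(256, 256)
point_cloud_encoding_model = nn.Linear(256, 256)
multi_modal_decoding_model = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(256, 8, batch_first=True), num_layers=2)
detection_guide_model = nn.Linear(3, 3)
object_detection_model = nn.Linear(256, 7)
position_field = nn.Parameter(torch.randn(4, 256) * 0.02)   # learnable
class_field = torch.randn(2, 256)                           # frozen (no gradient)

def freeze(module: nn.Module) -> None:
    for p in module.parameters():
        p.requires_grad_(False)

freeze(vision_language_model)        # frozen in this training stage
freeze(point_cloud_encoding_model)   # frozen in this training stage

trainable_params = [position_field,
                    *multi_modal_decoding_model.parameters(),
                    *detection_guide_model.parameters(),
                    *object_detection_model.parameters()]
optimizer = torch.optim.AdamW(trainable_params, lr=1e-4)   # driven by the box position loss
```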
The position field 6021 may be initialized to an arbitrary value and may be contextually optimized using the box position loss 661 to contain spatial information. As a result of learning, the position field 6021 and the class field 6022 may be paired, and a spatial identity may be assigned to each of the objects of the same class having different geometric characteristics. The training method may not encode spatial information in an extrinsic manner but may instead cause the 3D object perception model to learn spatial information in an intrinsic manner using a 3D label; thereby, many queries may be initialized at statistically significant positions by the detection guide model 640, compared to a conventional method in which anchors are distributed at an equal/regular interval.
The detection guide model 640 may generate detection guide information based on the guide coordinate information 604. The detection guide information may indicate detection position candidates that have a possibility of detecting a target object of the input language 602 in the 3D space. The guide coordinate information 604 may indicate uniform positions with respect to the 3D space and the detection guide information may indicate non-uniform positions with respect to the 3D space (by the detection guide model 640 adjusting the uniform positions). Based on the training of the detection guide model 640, the uniform positions of the guide coordinate information 604 may be adjusted to the non-uniform positions of the detection guide information. The detection guide model 640 may adjust the uniform positions to the non-uniform positions such that it provides (via influence on the multi-modal decoding model 630) a high possibility of detecting the target object based on a training data set.
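A minimal sketch of such a detection guide model, assuming an offset-prediction formulation and an arbitrary detection range chosen only for illustration, is shown below.

```python
import torch
import torch.nn as nn

class DetectionGuideModel(nn.Module):
    """Maps uniformly spaced guide coordinates to non-uniform position candidates."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, uniform_xyz: torch.Tensor) -> torch.Tensor:
        # Predict a per-candidate offset; training moves the candidates toward
        # positions where target objects are statistically likely to appear.
        return uniform_xyz + self.net(uniform_xyz)

# Guide coordinate information: a uniform grid over an assumed detection range.
xs = torch.linspace(-50.0, 50.0, steps=10)
grid = torch.meshgrid(xs, xs, torch.zeros(1), indexing="ij")
uniform_positions = torch.stack(grid, dim=-1).reshape(-1, 3)      # (100, 3)
detection_position_candidates = DetectionGuideModel()(uniform_positions)
```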
The detection guide model 640 may be a neural network model, such as a 1*1
Operation 720 may include (i) an operation of generating/inferring the linguistic feature corresponding to the input language by executing a language encoding model based on (applying the model to) the input language, (ii) an operation of generating the candidate image features corresponding to the partial areas of the input image by executing an image encoding model and a region proposal model based on the input image, and (iii) an operation of generating a point cloud feature corresponding to the input point cloud by executing a point cloud encoding model based on the input point cloud.
Extended expressions including respective position fields (indicating a geometric characteristic of the target object based on the input language) and respective class fields indicating a class of the target object may be generated, and the linguistic feature may be generated based on the extended expressions. Objects of the same class but with different geometric characteristics may be distinguished based on the position field. The position field may have a learnable characteristic (may be learnable).
Operation 740 may include (i) an operation of generating image tokens by segmenting the target image feature, (ii) an operation of generating point cloud tokens by segmenting the point cloud feature, (iii) an operation of generating first position information indicating relative positions of the image tokens, (iv) an operation of generating second position information indicating relative positions of the point cloud tokens, and (v) an operation of executing the multi-modal decoding model with key data and value data based on inputs thereto such as the image tokens, the point cloud tokens, the first position information, and the second position information.
Operation 740 may include an operation of executing the multi-modal decoding model with query data based on detection guide information which indicates detection position candidates with a possibility of detecting the target object in the 3D space. The detection position candidates may indicate non-uniform positions (position candidates non-uniformly distributed). The multi-modal decoding model may generate a decoding output by extracting a correlation from the target image feature, the point cloud feature, and the detection guide information.
The memory 820 may be connected to the processor 810 and may store instructions executable by the processor 810, data to be computed by the processor 810, data processed by the processor 810, or a combination thereof. In practice, the processor 810 may be a combination of processors, which are mentioned below. The memory 820 may include a non-transitory computer-readable medium (for example, a high-speed random access memory) and/or a non-volatile computer-readable medium (e.g., a disk storage device, a flash memory device, or other non-volatile solid-state memory devices).
The processor 810 may execute the instructions to perform the operations described above with reference to
The camera 910 may generate an input image with respect to a 3D space, and the lidar sensor 920 may generate an input point cloud with respect to the 3D space. The input language may be set by a user or may be set in advance. The processor 930 may execute instructions to perform operations described above with reference to
The computing apparatuses, the vehicles, the electronic devices, the processors, the memories, the image sensors, the vehicle/operation function hardware, the ADAS/AD systems, the displays, the information output system and hardware, the storage devices, and other apparatuses, devices, units, modules, and components described herein with respect to
The methods illustrated in
Instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above may be written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the one or more processors or computers to operate as a machine or special-purpose computer to perform the operations that are performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the one or more processors or computers, such as machine code produced by a compiler. In another example, the instructions or software include higher-level code that is executed by the one or more processors or computers using an interpreter. The instructions or software may be written using any programming language based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions herein, which disclose algorithms for performing the operations that are performed by the hardware components and the methods as described above.
The instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, may be recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access programmable read only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RW, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, blue-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), flash memory, a card type memory such as multimedia card micro or a card (for example, secure digital (SD) or extreme digital (XD)), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to one or more processors or computers so that the one or more processors or computers can execute the instructions. In one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers.
While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents.
Therefore, in addition to the above disclosure, the scope of the disclosure may also be defined by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.