This application is based on and claims priority under 35 U.S.C. 119 to Chinese Patent application No. 202010023167.5, filed on Jan. 9, 2020, in the China National Intellectual Property Administration. The entire disclosure of the above application is incorporated herein by reference.
The present disclosure relates to the field of image processing, and in particular to a method and device for segmenting an image, and a storage medium.
With the continuous development of mobile Internet technology, more and more functions can be provided by a mobile equipment, and therefore, the importance of the mobile equipment in production and daily life is also increasing. Among the multiple functions provided by the mobile equipment, the shooting function has become a basic function that users use daily.
The present disclosure provides a method and device for segmenting an image, and a storage medium.
According to an embodiment of the disclosure, a method for segmenting an image is provided. The method includes: acquiring the image; obtaining an initial segmentation result by segmenting a target ground region of the image; and determining a ground segmentation result of the image based on the initial segmentation result and a current accumulation map, where the current accumulation map includes a ground segmentation result of a previous image.
According to an embodiment of the disclosure, a device for segmenting an image is provided. The device includes: a memory and at least one processor, where the at least one processor is configured to read and execute instructions stored in the memory to: acquire the image; obtain an initial segmentation result by segmenting a target ground region of the image; and determine a ground segmentation result of the image based on the initial segmentation result and a current accumulation map, where the current accumulation map includes a ground segmentation result of a previous image.
According to an embodiment of the disclosure, a non-transitory computer readable storage medium is provided. When instructions in the storage medium are executed by a processor of an electronic equipment, the electronic equipment executes the method for segmenting an image.
The accompanying drawings here are incorporated into the specification and constitute a part of the specification. They show the embodiments that comply with the present disclosure, are used to explain the principle of the present disclosure together with the specification, and do not constitute an improper limitation to the present disclosure.
In order to enable those of ordinary skill in the art to understand the solutions of the present disclosure, the solutions in the embodiments of the present disclosure will be described in conjunction with the accompanying drawings.
It should be noted that the terms “first”, “second” and the like in the specification, claims and accompanying drawings of the present disclosure are used to distinguish similar objects, and are not used to describe a particular order or sequence. It should be understood that the data used in this way may be interchanged where appropriate, so that the embodiments of the present disclosure described herein can be implemented in a sequence other than those illustrated or described herein. The implementation manners described in the following embodiments do not represent all implementation manners consistent with the present disclosure. Rather, they are merely examples of devices and methods consistent with some aspects of the present disclosure as described in detail in the appended claims.
At present, in order to improve user experience and increase the fun and vividness of shooting, some developers of shooting-based clients combine the AR (Augmented Reality) technology with the shooting function of the mobile equipment, so that the user can add an item in the virtual world to the shot real world when shooting an image or video.
In the related art, the item in the virtual world is usually added to a certain region in the real world that is shot, and therefore, it is necessary to perform image region segmentation on the shot image first to obtain a region segmentation result. The region segmentation result is the region to which the item in the virtual world is to be added, and then AR rendering is performed on the region to obtain the image after the item in the virtual world is added.
For example, as shown in
In the related art, a ground region in an image is segmented based on a preset ground segmentation algorithm. Due to differences in the real-world content included in acquired frames, the ground segmentation result obtained by using the preset ground segmentation algorithm will have jitter at the junction between the segmented ground region to which items in a virtual world are to be added and other regions, so that the effect of the obtained ground segmentation result is poor.
In some embodiments, the electronic equipment may be an image collection equipment for collecting one or more images. For example, a mobile phone camera or a video camera collects an image of the real world, and the image is used as an image to be segmented; and then the method for segmenting an image provided by the embodiments of the present disclosure is executed on this image. The electronic equipment may also be a server for providing services for the image collection equipment. After collecting an image, the image collection equipment may send the image to a corresponding server, and then, the server executes the method for segmenting an image provided by the embodiments of the present disclosure on the image and feeds the obtained ground segmentation result back to the image collection equipment. This is not limited by the embodiments of the present disclosure, and the equipment executing the method is hereinafter referred to as an electronic equipment.
As shown in
In step S21, an image is acquired by an image collection equipment.
In step S22, an initial segmentation result is obtained by segmenting a target ground region of the image by a preset ground segmentation algorithm.
In step S23, a ground segmentation result of the image is determined based on the initial segmentation result and a current accumulation map.
Wherein the current accumulation map includes a ground segmentation result of a previous image of the image.
The image collection equipment collects an image, and obtains an initial segmentation result by segmenting a target ground region in the image based on the preset ground segmentation algorithm. The initial segmentation result is not the ground segmentation result adopted in subsequent AR rendering; instead, the adopted result is the ground segmentation result of the image obtained by optimizing the initial segmentation result with a ground segmentation result of a previous image.
Thus, when determining the ground segmentation result of the image, reference is made to the ground segmentation result of the previous image, so that a smooth transition can be achieved between respective ground segmentation results of images. Thereby, the jitter at the junction between the target ground region to which the items in the virtual world are to be added and other regions in the obtained ground segmentation result is reduced, so that the effect of the obtained ground segmentation result is improved.
For the step S21 above, the image is acquired by the image collection equipment.
In some embodiments, an electronic equipment may acquire the image in many ways.
In some embodiments, when the electronic equipment is the image collection equipment itself, the images collected by the electronic equipment during image or video shooting processes are images to be segmented.
In some embodiments, when the electronic equipment is a server for providing services for the image collection equipment, the electronic equipment receives the image sent by the image collection equipment. For example, during the image or video shooting processes, the image collection equipment collects one or more images, and sends the one or more images to the electronic equipment, so that the electronic equipment can acquire the one or more images collected by the image collection equipment.
For the step S22 above, the initial segmentation result is obtained by segmenting the target ground region of the image by the preset ground segmentation algorithm.
In some embodiments, the target ground region may be all ground regions in the image, and may also be a certain type of ground region in the image. For example, when the ground regions in the image include a plastic runway and an asphalt road, the target ground region may be the asphalt road region.
In some embodiments, for different target ground regions, the adopted preset ground segmentation algorithm may also be adjusted accordingly. For example, when the target ground region is all the ground regions in the image, the adopted preset ground segmentation algorithm is a ground segmentation algorithm for segmenting all the ground regions in the image. When the target ground region is a certain type of ground region in the image, the adopted preset ground segmentation algorithm is a ground segmentation algorithm for segmenting this type of ground region in the image, and so on.
In some embodiments, the initial segmentation result is obtained by calculating the target ground region based on the preset ground segmentation algorithm. The initial segmentation result is a two-dimensional bitmap. A value of each pixel in the two-dimensional bitmap represents a probability that the content in the image corresponding to the position of the pixel is the target ground region. The range of the value of each pixel is [0, 1].
In some embodiments, the ground segmentation algorithm may adopt a semantic segmentation model based on deep neural networks, where the semantic segmentation model can segment the ground of indoor and outdoor scenes finely and has good robustness for different scenes. The model structure of the semantic segmentation model may be constructed based on the U-Net framework (a U-shaped convolutional encoder-decoder network for image segmentation). Moreover, the input of the semantic segmentation model may be an RGB image, and the output of the semantic segmentation model may be a mask image of a target ground region of the RGB image. Wherein each pixel in the mask image represents the probability that a pixel at the corresponding position in the input image is the ground, and the range of the probability may be [0, 1]. Wherein in the RGB image, R represents Red, G represents Green, and B represents Blue.
In some embodiments, during the model training process of the above semantic segmentation model, the adopted loss function may be a cross-entropy loss, which is used to update the parameters of the semantic segmentation model, and the optimizer may be an Adam (Adaptive moment estimation) optimizer. Moreover, when the IOU (Intersection over Union) between the ground prediction result of the semantic segmentation model obtained by training and the real ground is the largest, it is determined that the training is completed.
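For illustration only, and not as a limitation, the cross-entropy loss and the IOU described above may be sketched as follows, assuming NumPy arrays and a hypothetical binarization threshold:

import numpy as np

def cross_entropy_loss(pred, target, eps=1e-7):
    # pred: predicted ground probabilities in [0, 1] (the two-dimensional bitmap);
    # target: ground-truth mask, 1 for ground pixels and 0 elsewhere.
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

def iou(pred, target, threshold=0.5):
    # Binarize the predicted probabilities and compare them with the real ground region;
    # training may be regarded as completed when this value stops increasing.
    pred_mask = pred >= threshold
    gt_mask = target >= 0.5
    union = np.logical_or(pred_mask, gt_mask).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred_mask, gt_mask).sum() / union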
For the step S23 above, the ground segmentation result of the image is determined based on the initial segmentation result and the current accumulation map.
In some embodiments, after the initial segmentation result of the image is obtained, the initial segmentation result is optimized based on the initial segmentation result and the ground segmentation result of the previous image to obtain the ground segmentation result of the image, thereby improving the stability of the obtained ground segmentation result.
In some embodiments, when the ground segmentation result of the image has not been obtained, the image collection equipment displays the image obtained by performing AR rendering on the ground segmentation result of the previous image, and the ground segmentation result of the previous image can be called the current accumulation map.
In some embodiments, the step S23 may include: the ground segmentation result is determined by a 3D (3 Dimensions) rendering engine and a 3D virtual scene corresponding to the 3D rendering engine.
In some embodiments, a 3D rendering engine can be pre-installed in the electronic equipment, and a 3D virtual scene corresponding to the installed 3D rendering engine can be preset. The electronic equipment can call the 3D rendering engine. In this way, the 3D rendering engine can map the initial segmentation result and the current accumulation map to the 3D virtual scene; thus, the scene in which the image collection equipment collects the image in the real world is simulated by using the 3D virtual scene, and the ground segmentation result of the image is determined based on the simulated result.
In some embodiments, the 3D rendering engine is software that renders a virtual 3D scene into a 2D (2 Dimensions) image. The virtual 3D scene includes 3D object models and their surface material descriptions, virtual cameras, lights and other elements. When executing rendering, the 3D rendering engine may project a 3D scene into a 2D space through a viewing angle of a specified virtual camera, just like the effect of using real camera equipment to shoot a real-world scene. The surface material of a 3D model describes how to calculate the color of each pixel in the covered region after the model is projected into the 2D space. The user of the 3D rendering engine can specify which virtual camera is used for rendering and to which image the rendering result is output.
In some embodiments, as shown in
In some embodiments, the first plane model 330 and the second plane model 340 are horizontally placed in the 3D virtual scene.
In some embodiments, the first plane model 330 and the second plane model 340 may be regarded as square plane three-dimensional models with a certain area that are horizontally placed in the 3D virtual scene. The first plane model 330 and the second plane model 340 are different.
A first camera coordinate system corresponding to the master camera 310 is synchronized with a third camera coordinate system of the image collection equipment. The master camera is configured to render the second plane model.
In some embodiments, the position and orientation of the master camera 310 in the 3D virtual scene are synchronized with the position and orientation of the camera of the image collection equipment in the real world. That is, assuming that the 3D virtual scene is taken as the real world, the master camera 310 plays the role of the camera of the image collection equipment for collecting the image. Further, the first camera coordinate system corresponding to the master camera 310 is synchronized with the third camera coordinate system of the image collection equipment. Therefore, the master camera 310 can use a projection matrix of the image collection equipment.
In this way, the content of each image obtained by the simulated shooting of the master camera 310 is aligned with the content of each image obtained by the shooting of the image collection equipment. The master camera 310 may render the second plane model 340. In some embodiments, the master camera 310 may be an AR camera.
In some embodiments, the projection matrix is: a matrix for representing a coordinate conversion relationship among the third camera coordinate system, the image coordinate system and the world coordinate system of the image collection equipment. For a specified image collection equipment, the projection matrix may be preset and stored in this image collection equipment. For example, when the image collection equipment is a mobile phone, an API (Application Programming Interface) of an operating system of the mobile phone may be used to query and obtain the projection matrix of the mobile phone directly.
The map camera 320 is a camera whose position is synchronized with that of the master camera and which faces the horizontal plane of the 3D virtual scene. The map camera 320 is configured to render the first plane model.
In some embodiments, the position of the map camera 320 in the 3D virtual scene is synchronized with the position of the camera of the image collection equipment in the real world, but the map camera 320 faces the horizontal plane of the 3D virtual scene. That is, assuming that the 3D virtual scene is taken as the real world, the map camera 320 plays the role of the image collection equipment, and the map camera 320 faces directly downward and is configured to vertically shoot the ground of the real world. The right direction of the second camera coordinate system corresponding to the map camera 320 is parallel to the X axis of the world coordinate system. In some embodiments, the plane on which the X axis and the Z axis of the world coordinate system are located is the horizontal plane of the real world, and the Y axis of the world coordinate system is perpendicular to the horizontal plane of the real world.
In some embodiments, the projection matrix of the map camera 320 is an orthogonal matrix, and the map camera 320 is configured to render the first plane model 330. The length-to-width ratio of the obtained projected image is 1:1, that is, when an image in other coordinate systems is projected into the second camera coordinate system corresponding to the map camera 320, the length-to-width ratio of the obtained image is 1:1.
As shown in
In some embodiments, on the basis of the 3D virtual scene shown in
In step S41: the first plane model, the second plane model and the map camera are moved to align with the master camera vertically.
The first plane model, the second plane model and the map camera are moved to the positions vertically aligned with the master camera. The “vertically aligned” means that: in the 3D virtual scene, the X-axis coordinate components of the first plane model, the second plane model and the map camera after the movement are the same as the X-axis coordinate component of the master camera, and the Z-axis coordinate components of the first plane model, the second plane model and the map camera after the movement are the same as the Z-axis coordinate component of the master camera.
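For illustration only, assuming hypothetical scene objects that expose a position with x, y and z components, the movement of the step S41 may be sketched as:

def align_vertically(master_camera, first_plane_model, second_plane_model, map_camera):
    # Copy the master camera's X and Z coordinate components to the other elements,
    # leaving their Y (height) components unchanged.
    for element in (first_plane_model, second_plane_model, map_camera):
        element.position.x = master_camera.position.x
        element.position.z = master_camera.position.z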
In step S42: a first top view map of the current accumulation map in a second camera coordinate system corresponding to the map camera is determined.
Since the current accumulation map is the ground segmentation result of the previous image and the current accumulation map is presented on a display screen of the image collection equipment, the ground segmentation result is located in the third camera coordinate system corresponding to the image collection equipment. Based on this, when determining the ground segmentation result of the image, the 3D rendering engine needs to convert the current accumulation map into the 3D virtual scene corresponding to the 3D rendering engine.
In some embodiments, since the projection matrix used by the master camera is the projection matrix of the image collection equipment, when the map camera is moved to the position vertically aligned with the master camera in the step S41, a positional relationship between the map camera and the master camera can be established; and since the first camera coordinate system corresponding to the master camera is synchronized with the third camera coordinate system of the image collection equipment, a corresponding relationship between the map camera and the image collection equipment can be established. In this way, according to the corresponding relationship between the map camera and the image collection equipment and the projection matrix of the master camera, the first top view map of the current accumulation map in the second camera coordinate system corresponding to the map camera is determined through coordinate conversion.
In some embodiments, the length-to-width ratio of the first top view map is 1:1. The first top view map can be used to represent the span of the first top view map on the X axis and the Z axis in the second camera coordinate system corresponding to the map camera, i.e., (x1_min, x1_max, z1_min, z1_max), that is, the area covered on the horizontal plane when the current accumulation map is projected onto the horizontal plane of the second camera coordinate system corresponding to the map camera.
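As an illustrative sketch only, assuming the corner points of the current accumulation map have already been projected onto the XZ plane of the second camera coordinate system, such a span may be recorded as:

def xz_span(projected_corners):
    # projected_corners: (x, z) points of the current accumulation map on the
    # horizontal plane of the second camera coordinate system.
    xs = [x for x, _ in projected_corners]
    zs = [z for _, z in projected_corners]
    return min(xs), max(xs), min(zs), max(zs)  # (x1_min, x1_max, z1_min, z1_max)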
In step S43: a second top view map of the initial segmentation result is obtained by rendering the first plane model based on the initial segmentation result and the first top view map.
In some embodiments, since the initial segmentation result is obtained by performing target ground segmentation on the image by using the preset ground segmentation algorithm, the initial segmentation result is also located in the third camera coordinate system corresponding to the image collection equipment. Based on this, when determining the ground segmentation result of the image, the 3D rendering engine also needs to convert the initial segmentation result into the 3D virtual scene corresponding to the 3D rendering engine.
In this way, after the first top view map is obtained, the first plane model can be rendered based on the initial segmentation result and the first top view map to obtain the second top view map of the initial segmentation result in the second camera coordinate system.
In some embodiments, based on the initial segmentation result obtained in the step S22 and the first top view map, according to the corresponding relationship between the master camera and the image collection equipment obtained by position movement in the step S41 and the projection matrix of the master camera, the second top view map of the initial segmentation result in the second camera coordinate system corresponding to the map camera is obtained through coordinate conversion.
Similar to the first top view map, the length-to-width ratio of the second top view map is 1:1, and the second top view map represents the span of the second top view map on the X axis and the Z axis in the second camera coordinate system corresponding to the map camera, i.e., (x2_min, x2_max, z2_min, z2_max), that is, the area covered on the horizontal plane when the image corresponding to the initial segmentation result is projected onto the horizontal plane of the second camera coordinate system corresponding to the map camera.
In some embodiments, the step S43 may be implemented by the 3D rendering engine executing a preset ground projection accumulation program (map_projection). The input of the ground projection accumulation program is: the above initial segmentation result, the first top view map and the projection matrix of the master camera, and the output is: the above second top view map.
In some embodiments, the ground projection accumulation program is configured to project the initial segmentation result located in the third camera coordinate system corresponding to the image collection equipment onto the horizontal plane in the world coordinate system, that is, onto the XZ plane of the world coordinate system, as shown in
In step S44: the ground segmentation result of the image is obtained by rendering the second plane model based on the initial segmentation result and the second top view map.
The ground segmentation result of the image is presented on the display screen of the image collection equipment, that is, the ground segmentation result of the image is located in the third camera coordinate system corresponding to the image collection equipment. The second top view map of the initial segmentation result in the second camera coordinate system is located in the second camera coordinate system corresponding to the map camera, that is, the second top view map is located in the 3D virtual scene corresponding to the 3D rendering engine. Therefore, when determining the ground segmentation result of the image, the 3D rendering engine needs to reconvert the second top view map from the 3D virtual scene corresponding to the 3D rendering engine into the above third camera coordinate system.
In this way, after the second top view map of the initial segmentation result in the second camera coordinate system is obtained, the second plane model can be rendered based on the initial segmentation result and the second top view map to obtain the ground segmentation result of the image.
In some embodiments, based on the initial segmentation result and the second top view map, according to the corresponding relationship between the master camera and the image collection equipment obtained by position movement in the step S41 above and the projection matrix of the master camera, the ground segmentation result of the image is obtained through coordinate conversion.
In some embodiments, the step S44 may be implemented by the 3D rendering engine executing a preset screen space reprojection program (ss_mask). The input of the screen space reprojection program is: the above initial segmentation result, the second top view map and the projection matrix of the master camera, and the output is: the ground segmentation result of the image.
In some embodiments, as shown in
In step S61, universal coordinates of the initial segmentation result in a preset world coordinate system are acquired.
Since the initial segmentation result is obtained by performing target ground segmentation on the image by using the preset ground segmentation algorithm, the initial segmentation result is also located in the third camera coordinate system corresponding to the image collection equipment for collecting the image. Based on this, the initial segmentation result can be projected into the preset world coordinate system through matrix transformation by using the projection matrix of the image collection equipment for collecting the image, thereby determining the universal coordinates of the initial segmentation result in the world coordinate system.
In step S62, first coordinates of the universal coordinates in the first top view map and second coordinates of the universal coordinates in the first camera coordinate system corresponding to the master camera are calculated. In some embodiments, the first coordinates may be top view region coordinates, and the second coordinates may be master camera space coordinates.
After the universal coordinates of the initial segmentation result in the world coordinate system are acquired, the universal coordinates can be mapped into the 3D virtual scene corresponding to the 3D rendering engine to obtain the coordinates related to the map camera and the master camera in the 3D virtual scene.
In some embodiments, the current accumulation map is the ground segmentation result of the previous image, and the image region corresponding to the initial segmentation result is a region in the image to be segmented, that is, the image regions corresponding to the current accumulation map and the initial segmentation result are regions included in continuous images. In this way, during the continuous shooting, the changes between every two adjacent frames of image collected are small, and therefore, the same region may exist in the regions corresponding to the current accumulation map and the initial segmentation result. Furthermore, since the first top view map obtained above is the first top view map of the current accumulation map in the second camera coordinate system corresponding to the map camera, the second top view map of the initial segmentation result in the second camera coordinate system may also share the same region with the first top view map.
Based on this, according to the corresponding relationship between the master camera and the image collection equipment for collecting the image obtained by position movement in the step S41 above and the projection matrix of the master camera, the first coordinates of the above universal coordinates in the first top view map can be calculated through coordinate conversion.
In some embodiments, since the projection matrix of the master camera is the projection matrix of the image collection equipment for collecting the image and the above universal coordinates are the universal coordinates of the initial segmentation result in the preset world coordinate system, the second coordinates of the universal coordinates in the first camera coordinate system corresponding to the master camera can be calculated by using the projection matrix of the master camera.
In step S63, a first probability and countdown of a first pixel corresponding to the first coordinates are determined by sampling the current accumulation map.
Wherein in the current accumulation map, a countdown of each of the first pixels is configured to characterize a number of frames for which the first probability of the first pixel continues to be retained. In some embodiments, the countdown may be the number of frames for which the first pixel retains the first probability.
The above first top view map is obtained by projecting the current accumulation map into the second camera coordinate system corresponding to the map camera, and therefore, each pixel in the first top view map corresponds to a pixel in the current accumulation map. Furthermore, the first probability and the countdown of the pixel can be obtained by sampling the current accumulation map.
Based on this, the first coordinates calculated above are part of the region in the first top view map, and therefore, each pixel in the first coordinates also corresponds to a pixel in the current accumulation map. Furthermore, the first probability and the countdown of the pixel can be obtained by sampling the current accumulation map.
When the first probability and the countdown cannot be obtained by sampling a pixel corresponding to the first coordinates in the current accumulation map, the first probability and the countdown of the pixel are set to 0.
In the current accumulation map, a countdown of a pixel represents the number of frames for which the first probability of the pixel continues to be retained. For each pixel in the current accumulation map, when the pixel is located in the viewable region of the master camera in the 3D rendering scene, the countdown of the pixel can be reset to a predetermined value, that is, to the maximum value of countdowns. When the pixel is located outside the viewable region of the master camera in the 3D rendering scene, 1 can be subtracted from the countdown of the pixel. When the countdown of a pixel is reduced to 0, the first probability of the pixel is reset to 0. The countdown of each pixel in the viewable region of the master camera in the 3D rendering scene will remain at the maximum value, and the first probability thereof will remain unchanged.
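For illustration only, assuming a hypothetical per-pixel record with probability and countdown fields and an assumed maximum countdown value, this countdown rule may be sketched as:

MAX_COUNTDOWN = 30  # assumed value; the actual maximum is implementation-specific

def update_countdown(pixel, in_master_camera_view):
    # Pixels visible to the master camera keep their first probability and have the
    # countdown reset to the maximum value; other pixels count down and are cleared
    # once the countdown reaches zero.
    if in_master_camera_view:
        pixel.countdown = MAX_COUNTDOWN
    else:
        pixel.countdown = max(0, pixel.countdown - 1)
        if pixel.countdown == 0:
            pixel.probability = 0.0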
In step S64, a second probability of a second pixel corresponding to the second coordinates is determined based on the initial segmentation result.
The above second coordinates are obtained by projecting the initial segmentation result into the first camera coordinate system corresponding to the master camera, and therefore, each pixel in the second coordinates corresponds to a pixel in the initial segmentation result. Furthermore, the initial segmentation result is a two-dimensional bitmap, and the value of each pixel in the two-dimensional bitmap represents the probability that the content in the image corresponding to the pixel position is the target region. Based on this, the second probability of the pixel is obtained by sampling the initial segmentation result.
In step S65, a target probability is determined based on the first probability and the second probability. The target probability is the maximum of the first probability and the second probability in response to the countdown being greater than zero.
In some embodiments, for each pixel corresponding to the universal coordinates, the maximum of the first probability and the second probability corresponding to the pixel is determined as a target probability of the pixel in response to the countdown of the pixel being greater than zero.
In step S66, the target probability is the second probability in response to the countdown being equal to zero.
In some embodiments, for each pixel corresponding to the universal coordinates, the second probability corresponding to the pixel is determined as the target probability of the pixel in response to the countdown of the pixel being equal to zero.
In step S67, the second top view map of the initial segmentation result in the second camera coordinate system is obtained.
That is, the second top view map of the initial segmentation result in the second camera coordinate system is determined based on the target probability of each pixel corresponding to the universal coordinates.
The first coordinates are coordinates of the universal coordinates in the first top view map, and the second coordinates are coordinates of the universal coordinates in the first camera coordinate system corresponding to the master camera. Each pixel in the first coordinates may be in one-to-one correspondence to each pixel in the second coordinates.
Furthermore, for each pixel corresponding to the universal coordinates, when the countdown of the pixel is greater than zero, the maximum of the first probability and the second probability corresponding to the pixel can be determined as the target probability of the pixel, and 1 is subtracted from the countdown of the pixel.
When the countdown of the pixel is equal to zero, the second probability corresponding to the pixel can be determined as the target probability of the pixel.
In this way, the target probability of each pixel corresponding to the universal coordinates can be obtained, and furthermore, by outputting the target probability of each pixel, the second top view map of the initial segmentation result in the second camera coordinate system can be obtained.
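Putting the steps S63 to S67 together, the per-pixel rule may be sketched as follows, where first_prob and countdown stand for the values sampled from the current accumulation map at the first coordinates, and second_prob stands for the value sampled from the initial segmentation result at the second coordinates:

def target_probability(first_prob, countdown, second_prob):
    # While the countdown is still running, keep the stronger of the accumulated
    # probability and the newly segmented probability; otherwise fall back to the
    # newly segmented probability only.
    if countdown > 0:
        return max(first_prob, second_prob)
    return second_prob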
In some embodiments, as shown in
In step S71, a third probability of the second pixel is obtained.
That is, a third probability of each pixel in the initial segmentation result is sampled.
After obtaining the initial segmentation result, since the initial segmentation result is a two-dimensional bitmap and the value of each pixel in the two-dimensional bitmap represents the probability that the content in the image corresponding to the pixel position is the target region, the third probability of each pixel in the initial segmentation result can be directly sampled.
In step S72, third coordinates are determined. The third coordinate may be a top view pixel coordinate. The third coordinates are coordinate values of a third pixel corresponding to the universal coordinates in the second top view map.
In some embodiments, the third coordinates of each pixel in the universal coordinates in the second top view map are calculated, and a fourth probability of the pixel in the second top view map is sampled when the third coordinates are located within a preset range.
The universal coordinates are coordinates corresponding to the initial segmentation result in the world coordinate system, and the second top view map is the second top view map of the initial segmentation result in the second camera coordinate system. Furthermore, according to the corresponding relationship between the master camera and the image collection equipment for collecting the image obtained by position movement in the step S41 above and the projection matrix of the master camera, the top view pixel coordinates of each pixel in the universal coordinates in the second top view map can be obtained through coordinate conversion.
When each pixel in the initial segmentation result is located in an image effective range, the pixel can be output as the ground segmentation region of the image, so in the step S72 above, the pixel in the initial segmentation result corresponding to the third coordinates being located in the image effective range can be characterized by the obtained third coordinates being located in the preset range. Therefore, when obtaining the third coordinates, it is necessary to further determine whether the third coordinates are located in the preset range, and when the obtained third coordinates are located in the preset range, the fourth probability of the pixel can be collected from the second top view map.
In some embodiments, the above preset range may be [0, 1]. When the X value of the third coordinates is less than 0, it is determined that the pixel corresponding to the third coordinates is located outside the left edge of the image effective range. When the X value is greater than 1, it is determined that the pixel corresponding to the third coordinates is located outside the right edge of the image effective range.
When the obtained third coordinates are not located in the preset range, the fourth probability of the pixel cannot be collected.
In step S73, in response to the third coordinates being in a preset range, a fourth probability is determined, and a final probability is the maximum of the third probability and the fourth probability.
For each pixel having the fourth probability, the maximum of the third probability and the fourth probability of the pixel is determined as a final probability of the pixel.
In step S74, in response to the third coordinates not being in the preset range, the final probability is the third probability.
For each pixel not having the fourth probability, the third probability of the pixel is determined as the final probability of the pixel.
In step S75, the ground segmentation result of the image is determined based on the final probability of each pixel in the initial segmentation result.
The above universal coordinates are coordinates of the initial segmentation result in the preset world coordinate system, and therefore, each pixel in the initial segmentation result may be in one-to-one correspondence to each pixel in the universal coordinates.
Furthermore, for each pixel corresponding to the initial segmentation result, when the pixel has the fourth probability, the maximum value of the third probability and the fourth probability of the pixel can be determined as the final probability of the pixel.
Correspondingly, when the pixel does not have the fourth probability, the third probability of the pixel can be determined as the final probability of the pixel.
In this way, the final probability of each pixel corresponding to the initial segmentation result can be obtained, and furthermore, by outputting the final probability of each pixel, the ground segmentation result of the image can be obtained.
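Similarly, the final-probability rule of the steps S73 and S74 may be sketched as follows, assuming the in-range test on the third coordinates uses the preset range [0, 1] and sample_second_top_view is a hypothetical sampling helper:

def final_probability(third_prob, third_coords, sample_second_top_view):
    # third_prob: probability of the pixel in the initial segmentation result.
    # third_coords: (x, z) coordinate values of the pixel in the second top view map.
    x, z = third_coords
    if 0.0 <= x <= 1.0 and 0.0 <= z <= 1.0:
        fourth_prob = sample_second_top_view(x, z)  # the fourth probability
        return max(third_prob, fourth_prob)
    return third_prob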
In this way, the region existing in the initial segmentation result but not existing in the current accumulation map can be added into the ground segmentation result of the image to be segmented, and the edge of the obtained ground segmentation result can be prevented from appearing jagged.
The steps S41-S44 included in
In some embodiments, the 3D rendering engine includes a vertex shader and a pixel shader. The step S43 and the step S44 may be implemented by using the vertex shader and the pixel shader.
That is, the vertex shader and the pixel shader are used to execute the steps S61-S67 and the steps S71-S75.
When the vertex shader and the pixel shader are used to execute the steps S61-S67, the vertex shader is configured to transfer a position in the world coordinate system of each pixel of the first plane model within a visible range of the map camera into the pixel shader. Multiplying a world space transformation matrix of the first plane model and a vertex position thereof can obtain a position of the vertex in the world coordinate system, and the position of the vertex in the world coordinate system can be transferred into the pixel shader to obtain the position of each pixel in the world coordinate system through automatic interpolation. Multiplying the position of the vertex in the world coordinate system and a projection matrix of the map camera can obtain a projection space position, and the projection space position is used as an output position of the vertex shader. As long as the first plane model is large enough, it can be ensured that the pixel shader can cover all the pixels of the accumulation map. Correspondingly, the pixel shader executes the steps S61-S67.
When the vertex shader and the pixel shader are used to execute the steps S71-S75, the vertex shader is configured to transfer a position in the world coordinate system of each pixel of the second plane model within a visible range of the master camera into the pixel shader. Multiplying a world space transformation matrix of the second plane model and a vertex position thereof can obtain a position of the vertex in the world coordinate system, and the position of the vertex in the world coordinate system can be transferred into the pixel shader to obtain the position of each pixel in the world coordinate system through automatic interpolation. Multiplying the position of the vertex in the world coordinate system and a projection matrix of the master camera can obtain a projection space position, and the projection space position is used as an output position of the vertex shader. Correspondingly, the pixel shader executes the steps S71-S75.
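In both cases the vertex shader arithmetic amounts to two matrix products; for illustration only, assuming 4x4 homogeneous matrices and column vectors stored as NumPy arrays, it may be sketched as:

import numpy as np

def vertex_shader(vertex_position, world_matrix, camera_projection_matrix):
    # vertex_position: homogeneous (x, y, z, 1) position of a plane-model vertex.
    # world_matrix: world space transformation matrix of the plane model.
    # camera_projection_matrix: projection matrix of the camera rendering the model
    # (the map camera for the steps S61-S67, the master camera for the steps S71-S75).
    world_position = world_matrix @ vertex_position  # transferred into the pixel shader and interpolated per pixel
    output_position = camera_projection_matrix @ world_position  # output position of the vertex shader
    return world_position, output_position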
In some embodiments, after the ground segmentation result of the image is obtained, AR rendering can be performed on the image, that is, items in the preset virtual space are added into the region corresponding to the ground segmentation result, thereby increasing the fun of the shot image and improving the user experience.
The image acquisition module 810 is configured to acquire an image.
The initial segmentation module 820 is configured to obtain an initial segmentation result by segmenting a target ground region of the image based on a preset ground segmentation algorithm.
The final segmentation module 830 is configured to determine a ground segmentation result of the image based on the initial segmentation result and a current accumulation map.
Wherein the current accumulation map comprises a ground segmentation result of a previous image.
In some embodiments, the final segmentation module 830 is configured to: determine the ground segmentation result by a 3D rendering engine and a 3D virtual scene corresponding to the 3D rendering engine.
In some embodiments, the 3D virtual scene includes: a master camera, a map camera, a first plane model and a second plane model.
Wherein the first plane model and the second plane model are horizontally placed in the 3D virtual scene.
The master camera is configured to render the second plane model, and correspond to a first camera coordinate system.
The map camera is configured to render the first plane model, where the position of the map camera is synchronized with that of the master camera and the map camera is directed towards the horizontal plane of the 3D virtual scene.
In some embodiments, the final segmentation module 830 includes: an element movement submodule, configured to move the first plane model, the second plane model and the map camera to align with the master camera vertically; a current map conversion submodule, configured to determine a first top view map of the current accumulation map; a first rendering submodule, configured to obtain a second top view map of the initial segmentation result in the second camera coordinate system by rendering the first plane model based on the initial segmentation result and the first top view map; and a second rendering submodule, configured to obtain the ground segmentation result of the image by rendering the second plane model based on the initial segmentation result and the second top view map. The first top view map and the second top view map are in a second camera coordinate system corresponding to the map camera.
In some embodiments, the first rendering submodule is configured to: acquire universal coordinates of the initial segmentation result; calculate first coordinates of the universal coordinates in the first top view map and second coordinates of the universal coordinates in the first camera coordinate system; determine a first probability and countdown of a first pixel corresponding to the first coordinates by sampling the current accumulation map, where the countdown is the number of frames for which the first pixel retains the first probability; determine a second probability of a second pixel corresponding to the second coordinates based on the initial segmentation result; determine a target probability based on the first probability and the second probability, wherein in response to the countdown being greater than zero, the target probability is determined to be the maximum of the first probability and the second probability, and in response to the countdown being equal to zero, the target probability is determined to be the second probability; and determine the second top view map of the initial segmentation result in the second camera coordinate system based on the target probability.
In some embodiments, the second rendering submodule is configured to: determine a third probability of the second pixel; determine third coordinates, where the third coordinates are coordinate values of a third pixel corresponding to the universal coordinates in the second top view map; in response to the third coordinates being in a preset range, determine a fourth probability, and determine that a final probability is the maximum of the third probability and the fourth probability; in response to the third coordinates not being in the preset range, determine that the final probability is the third probability; and determine the ground segmentation result based on the final probability.
In some embodiments, the electronic equipment 900 may be a mobile phone, a computer, a digital broadcasting terminal, a message receiving or transmitting equipment, a game console, a tablet equipment, a medical equipment, a fitness equipment, a personal digital assistant and the like.
Referring to
The processing component 902 controls overall operations of the electronic equipment 900, such as operations associated with display, telephone calls, data communication, camera operations and recording operations. The processing component 902 may include one or more processors 920 to execute instructions to complete all or part of the steps of the method for segmenting an image.
The processing component 902 may include one or more modules to facilitate the interaction between the processing component 902 and other components. For example, the processing component 902 may include a multimedia module to facilitate the interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operations in the electronic equipment 900. Examples of these data include instructions for any application or method operating on the electronic equipment 900, contact data, phone book data, messages, pictures, videos and the like. The memory 904 may be implemented by any type of volatile or non-volatile storage equipment or a combination thereof, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk or an optical disk.
The power supply component 906 provides power to various components of the electronic equipment 900. The power supply component 906 may include a power supply management system, one or more power supplies, and other components associated with generation, management and distribution of power for the electronic equipment 900.
The multimedia component 908 includes a screen that provides an output interface between the electronic equipment 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes the touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touch, swipe and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure related to the touch or swipe operation. In some embodiments, the multimedia component 908 includes a front camera and/or a rear camera. When the equipment 900 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a microphone (MIC). When the electronic equipment 900 is in an operation mode, such as a call mode, a recording mode and a voice recognition mode, the microphone is configured to receive an external audio signal. The received audio signal may be further stored in the memory 904 or transmitted via the communication component 916. In some embodiments, the audio component 910 further includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and a peripheral interface module. The above peripheral interface module may be a keyboard, a click wheel, buttons and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button and a lock button.
The sensor component 914 includes one or more sensors for providing various aspects of state evaluation to the electronic equipment 900. For example, the sensor component 914 can detect the on/off state of the equipment 900 and the relative positioning of the components. For example, the components are a display and a keypad of the electronic equipment 900. The sensor component 914 may also detect the position change of the electronic equipment 900 or a component of the electronic equipment 900, the presence or absence of contact between the user and the electronic equipment 900, the orientation or acceleration/deceleration of the electronic equipment 900, and the temperature change of the electronic equipment 900. The sensor component 914 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact. The sensor component 914 may further include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 914 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the electronic equipment 900 and other equipment. The electronic equipment 900 can access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G or 5G) or a combination thereof.
In some embodiments, the communication component 916 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel.
In some embodiments, the communication component 916 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In some embodiments, the electronic equipment 900 can be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic elements, and is used to execute the above method.
In some embodiments, there is further provided a storage medium including instructions, for example, a memory 904 including instructions. The above instructions may be executed by the processor 920 of the electronic equipment 900 to complete the above method for segmenting an image. Optionally, the storage medium may be a non-transitory computer-readable storage medium, for example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, optical data storage equipment or the like.
The server 1000 may further include a power supply component 1026 configured to execute power supply management of the server 1000, a wired or wireless network interface 1050 configured to connect the server 1000 to the network, and an input/output (I/O) interface 1058. The server 1000 can operate based on an operating system stored in the memory 1032, for example, Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
Those skilled in the art will easily think of other implementations of the present disclosure after considering the specification and practicing the invention disclosed herein. The present application is intended to cover any variations, uses or adaptations of the present disclosure. These variations, uses or adaptations follow the general principles of the present disclosure and include common knowledge or conventional technical means in the technical field that are not disclosed in the present disclosure. The specification and the embodiments are only to be regarded as exemplary, and the true scope and spirit of the present disclosure are pointed out by the following claims.
It should be understood that the present disclosure is not limited to the precise structure that has been described above and shown in the accompanying drawings, and various modifications and changes can be made without departing from its scope. The scope of the present disclosure is only limited by the appended claims.
Number | Date | Country | Kind
---|---|---|---
202010023167.5 | Jan. 9, 2020 | CN | national