This application relates to texture filtering and, in particular, to anisotropic texture filtering.
A graphics processing unit (GPU) may be used to process geometry data (e.g. vertices defining primitives or patches) generated by an application in order to generate image data. Specifically, a GPU may determine pixel values (e.g. colour values) of an image to be stored in a frame buffer which may be output to a display.
A GPU may process the received geometry data in two phases—a geometry processing phase and a rasterization phase. In the geometry processing phase a vertex shader is applied to the geometry data (e.g. vertices defining primitives or patches) received from an application (e.g. a game application) to transform the geometry data into the rendering space (e.g. screen space). Other functions such as clipping and culling to remove geometry (e.g. primitives or patches) that falls outside of a viewing frustum, and/or lighting/attribute processing may also be performed in the geometry processing phase.
In the rasterization phase the transformed primitives are mapped to pixels and the colour is identified for each pixel. This may comprise rasterizing the transformed geometry data (e.g. by performing scan conversion) to generate primitive fragments. The term “fragment” is used herein to mean a sample of a primitive at a sampling point, which is to be processed to render pixels of an image. In some examples, there may be a one-to-one mapping of pixels to fragments. However, in other examples there may be more fragments than pixels, and this oversampling can allow for higher quality rendering of pixel values.
The primitive fragments that are hidden (e.g. hidden by other fragments) may then be removed through a process called hidden surface removal. Texturing and/or shading may then be applied to primitive fragments that are not hidden to determine pixel values of a rendered image. For example, in some cases, the colour of a fragment may be identified by applying a texture (e.g. an image) to the fragment. As is known to those of skill in the art, a texture, which may also be referred to as a texture map, is an image which is used to represent precomputed colour, lighting, shadows etc. Texture maps are formed of a plurality of texels (i.e. colour values), which may also be referred to as texture elements or texture pixels. Applying a texture to a fragment generally comprises mapping the location of the fragment in the render space to a position or location in the texture and using the colour at that position in the texture as the texture colour for the fragment. The texture colour may then be used to determine the final colour for the fragment. A fragment whose colour is determined from a texture may be referred to as a texture mapped fragment.
As fragment positions rarely map directly to a specific texel, the texture colour of a fragment is typically identified through a process called texture filtering. In the simplest case, which may be referred to as point sampling, point filtering or nearest-neighbour interpolation, a fragment in screen space is mapped to a position in the texture (i.e. to a position in texture space) and the value (i.e. colour) of the closest texel to the identified position in the texture is used as the texture colour of the fragment. However, in most cases, the texture colour for a fragment is determined using more complicated filtering techniques which combine a plurality of texels close to the identified position in the texture. Examples of more complicated filtering techniques include isotropic filtering techniques and anisotropic filtering techniques. Isotropic filtering techniques uniformly filter textures across perpendicular axes, whereas anisotropic filtering techniques do not uniformly filter textures, instead filtering textures based on the local (i.e. anisotropic) warping that the texture undergoes in the neighbourhood of a fragment. In some cases, the warping may take into account the texture's location on the screen relative to the camera angle. Examples of isotropic filtering techniques include, but are not limited to, bilinear filtering and trilinear filtering.
In bilinear filtering the four nearest texels to the identified position in the texture are combined by a pairwise linear weighted average according to distance. Compared with point sampling, this generally provides a smoother reconstruction of a continuous image from the bitmapped texture. Bilinear filtering has proven to be particularly suitable for applications in which textures, as a result of texture mapping, are magnified. However, neither point sampling nor bilinear filtering provides an adequate solution when textures are minified as they do not take into account the size of the fragment footprint in texture space.
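For illustration, the following is a minimal sketch of bilinear filtering for a single-channel texture, assuming texel centres at integer coordinates and clamp-to-edge addressing (real texture units commonly use half-texel conventions and other addressing modes, so this is an illustrative sketch rather than a definitive implementation):

```python
import numpy as np

def bilinear_sample(texture: np.ndarray, u: float, v: float) -> float:
    """Bilinearly filter a single-channel texture at texel-space position (u, v)."""
    h, w = texture.shape
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    fu, fv = u - u0, v - v0                         # fractional distances to the neighbouring texels

    def texel(x: int, y: int) -> float:
        # Clamp-to-edge addressing; wrap or border modes are equally possible.
        return float(texture[min(max(y, 0), h - 1), min(max(x, 0), w - 1)])

    # Pairwise linear weighted average of the four nearest texels.
    top = (1.0 - fu) * texel(u0, v0) + fu * texel(u0 + 1, v0)
    bottom = (1.0 - fu) * texel(u0, v0 + 1) + fu * texel(u0 + 1, v0 + 1)
    return (1.0 - fv) * top + fv * bottom
```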
Point sampling and bilinear filtering can be combined with a technique referred to as mipmapping. In mipmapping, a series (or pyramid) of mipmaps are pre-computed (e.g. generated in advance and/or offline). Each mipmap is a lower resolution version of the original texture. Specifically, according to standards, each mipmap has a height and width that are a factor of 2 smaller than the previous level, wherein odd dimensions are rounded down, and any dimension less than one is rounded up to one. The standards assign an integer level of detail (LOD) to each mipmap (zero for the highest resolution and increasing by one for each subsequent level). Mipmaps allow an appropriate level of detail to be selected for a fragment, in the sense that the mipmap level whose texel footprints most closely match the fragment's footprint is a good candidate for filtering. Specifically, higher resolution mipmaps can be used for fragments/objects that are closer to the screen/viewer, and lower resolution mipmaps can be used for fragments/objects that are further from the screen/viewer. Mipmaps thus provide an efficient solution to enable texture minification without having to introduce additional filtering, with potentially unbounded computation and memory bandwidth cost. When point sampling and bilinear filtering are used with mipmapping, the texel(s) are selected from the closest mipmap level (or a scaled version of the closest mipmap level).
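The dimension rule described above (halve each level, round odd sizes down, clamp at one) can be sketched as follows; the log2-based level selection shown afterwards is a common heuristic assumed here for illustration rather than a statement of any particular standard:

```python
import math

def mipmap_dimensions(width: int, height: int):
    """Return the (width, height) of each mipmap level, starting at LOD 0."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        # Halve each dimension, rounding odd sizes down and clamping at one.
        width, height = max(width // 2, 1), max(height // 2, 1)
        levels.append((width, height))
    return levels

def select_lod(footprint_size_in_texels: float) -> float:
    """A common heuristic: LOD is the log2 of the fragment footprint size, clamped at 0."""
    return max(0.0, math.log2(footprint_size_in_texels))

print(mipmap_dimensions(13, 4))   # [(13, 4), (6, 2), (3, 1), (1, 1)]
```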
Trilinear filtering comprises performing bilinear filtering on the two closest mipmap levels (one higher resolution and one lower resolution) and then linearly interpolating between the results of the bilinear filtering. In analogy with bilinear filtering, trilinear filtering provides a smoother approximation of the continuous range of minification that a texture may undergo.
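Trilinear filtering can then be sketched as a linear interpolation between bilinear samples taken from the two closest mipmap levels; the bilinear helper is passed in as a callable, and the convention that level-0 texel coordinates halve at each level is an assumption for illustration:

```python
import math
from typing import Callable, Sequence
import numpy as np

def trilinear_sample(mip_chain: Sequence[np.ndarray],
                     u: float, v: float, lod: float,
                     bilinear: Callable[[np.ndarray, float, float], float]) -> float:
    """Bilinear filter the two closest mipmap levels and lerp by the fractional LOD.

    mip_chain[0] is the full-resolution texture; (u, v) are texel coordinates on
    level 0 and are assumed to halve with each level.
    """
    lod = min(max(lod, 0.0), len(mip_chain) - 1)
    lo = int(math.floor(lod))                    # higher-resolution level
    hi = min(lo + 1, len(mip_chain) - 1)         # lower-resolution level
    frac = lod - lo

    scale_lo, scale_hi = 2.0 ** lo, 2.0 ** hi
    c_lo = bilinear(mip_chain[lo], u / scale_lo, v / scale_lo)
    c_hi = bilinear(mip_chain[hi], u / scale_hi, v / scale_hi)
    return (1.0 - frac) * c_lo + frac * c_hi
```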
Neither bilinear nor trilinear filtering takes into account the fact that a fragment footprint may be warped by different amounts in different directions (e.g. when the texture is at a receding angle with respect to the screen/viewer), making it difficult to approximate the fragment footprint in texture space using a single parameter (e.g. the level of detail). In such cases, bilinear or trilinear filtering can produce blurry results.
Anisotropic filtering addresses this problem by combining several texels around the identified position in the texture, but on a sample pattern mapped according to the projected shape of the fragment in screen space onto the texture (i.e. in texture space). While anisotropic filtering can reduce blur at extreme viewing angles, anisotropic filtering is more computationally intensive than isotropic filtering.
The texture colour(s) output by the texture filtering may then be used as input to a fragment shader. As is known to those of skill in the art, a fragment shader (which may alternatively be referred to as a pixel shader) is a program (e.g. a set of instructions) that operates on individual fragments to determine the colour, brightness, contrast etc. thereof. A fragment shader may receive as input a fragment (e.g. the position thereof) and one or more other input parameters (e.g. texture co-ordinates) and output a colour value in accordance with a specific shader program. In some cases, the output of a pixel shader may be further processed. For example, where there are more samples than pixels, an anti-aliasing technique, such as multi-sample anti-aliasing (MSAA), may be used to generate the colour for a particular pixel from multiple samples (which may be referred to as sub-samples). Anti-aliasing techniques apply a filter, such as, but not limited to, a box filter to the multiple samples to generate a single colour value for a pixel.
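As a small illustration of the box-filter resolve mentioned above, the sub-sample colours for a pixel can simply be averaged (assuming they are already gathered into an array; weighting schemes other than a box filter are equally possible):

```python
import numpy as np

def box_filter_resolve(subsamples: np.ndarray) -> np.ndarray:
    """Resolve MSAA sub-samples to a single pixel colour with a box filter,
    i.e. an unweighted average over the sub-samples (axis 0 = sub-sample index)."""
    return subsamples.mean(axis=0)

# e.g. four RGBA sub-samples for one pixel
pixel = box_filter_resolve(np.array([[1.0, 0.0, 0.0, 1.0],
                                     [0.0, 1.0, 0.0, 1.0],
                                     [0.0, 0.0, 1.0, 1.0],
                                     [1.0, 1.0, 1.0, 1.0]]))
```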
A GPU which performs hidden surface removal prior to performing texturing and/or shading is said to implement ‘deferred’ rendering. In other examples, a GPU might not implement deferred rendering in which case texturing and shading may be applied to fragments before hidden surface removal is performed on those fragments. In either case, the rendered pixel values may be stored in memory (e.g. frame buffer).
The embodiments described below are provided by way of example only and are not limiting of implementations which solve any or all of the disadvantages of known methods and hardware for performing anisotropic texture filtering.
This summary is provided to introduce a selection of concepts that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Described herein are methods of performing anisotropic texture filtering. The methods include: generating one or more parameters describing an elliptical footprint in texture space; performing isotropic filtering at each of a plurality of sampling points in an ellipse to be sampled, the ellipse to be sampled based on the elliptical footprint; and combining results of the isotropic filtering at each of the plurality of sampling points to generate a combination result by a sequence of linear interpolations, wherein each linear interpolation in the sequence of linear interpolations comprises blending a result of a previous linear interpolation in the sequence with the isotropic filtering results for one or more of the plurality of sampling points, the one or more of the plurality of sampling points for a linear interpolation being closer to a midpoint of the major axis of the elliptical footprint than the one or more of the plurality of sampling points for the previous linear interpolation in the sequence.
A first aspect provides a method of performing anisotropic texture filtering, the method comprising: generating one or more parameters describing an elliptical footprint in texture space; performing isotropic filtering at each of a plurality of sampling points in an ellipse to be sampled, the ellipse to be sampled based on the elliptical footprint; and combining results of the isotropic filtering at each of the plurality of sampling points to generate a combination result by a sequence of linear interpolations, wherein each linear interpolation in the sequence of linear interpolations comprises blending a result of a previous linear interpolation in the sequence with the isotropic filtering results for one or more of the plurality of sampling points, the one or more of the plurality of sampling points for a linear interpolation being closer to a midpoint of the major axis of the elliptical footprint than the one or more of the plurality of sampling points for the previous linear interpolation in the sequence.
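The following sketch illustrates the overall flow of the first aspect at a high level, assuming the sampling points have already been derived from the elliptical footprint and ordered from furthest from the midpoint of the major axis to closest; the helper names, the form of each interpolation step and the source of the interpolation factors are illustrative assumptions, with the concrete choices discussed in the detailed description below:

```python
from typing import Callable, Sequence, Tuple

Point = Tuple[float, float]

def anisotropic_filter(sampling_points: Sequence[Point],
                       isotropic_filter: Callable[[Point], float],
                       lerp_factors: Sequence[float],
                       start_value: float = 0.0) -> float:
    """Run an isotropic filter at each sampling point of the ellipse to be sampled
    (ordered from furthest-from-midpoint to closest), then fold the results
    together with a sequence of linear interpolations so that each step blends in
    results closer to the midpoint of the major axis."""
    combined = start_value
    for point, gamma in zip(sampling_points, lerp_factors):
        result = isotropic_filter(point)                      # e.g. a bilinear/trilinear lookup
        combined = (1.0 - gamma) * combined + gamma * result  # one linear interpolation step
    return combined
```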
The plurality of sampling points may comprise an even number of sampling points symmetrically situated about the midpoint of the major axis of the elliptical footprint, and each linear interpolation may comprise blending the result of the previous linear interpolation with a combination of the results of the isotropic filtering at two sampling points that are equidistant from the midpoint of the major axis of the elliptical footprint.
The combination of the results of the isotropic filtering at the two sampling points may comprise an average of the results of the isotropic filtering at the two sampling points.
The plurality of sampling points may comprise sampling points that lie on one side of the midpoint of the major axis of the elliptical footprint, and the method may further comprise: performing isotropic filtering at each of a second plurality of sampling points in the ellipse to be sampled, the second plurality of sampling points comprising sampling points that lie on an opposite side of the midpoint of the major axis of the elliptical footprint; combining results of the isotropic filtering at each of the second plurality of sampling points to generate a second combination result by a second sequence of linear interpolations, wherein each linear interpolation in the second sequence of linear interpolations comprises blending a result of a previous linear interpolation in the second sequence with the isotropic filtering results for one or more of the second plurality of sampling points, the one or more of the second plurality of sampling points for a linear interpolation being closer to a midpoint of the elliptical footprint than the one or more of the second plurality of sampling points for the previous linear interpolation in the second sequence; and combining the combination result and the second combination result.
In a first linear interpolation of the sequence of linear interpolations the isotropic filtering results for the one or more of the plurality of sampling points for the first linear interpolation may be blended with a starting value.
The sequence of linear interpolations may be configured to combine the results of the isotropic filtering at each of the plurality of sampling points with a truncated filter; the starting value may be zero; and the method may further comprise normalising the combination result based on the truncated terms of the truncated filter.
Normalising the combination result based on the truncated terms of the truncated filter may comprise rescaling the combination result by the inverse of one minus the sum of the truncated terms of the truncated filter.
The sequence of linear interpolations may be configured to combine the results of the isotropic filtering at each of the plurality of sampling points with a truncated filter; and the starting value may represent a combination of the truncated terms of the truncated filter.
The sequence of linear interpolations may be configured to combine the results of the isotropic filtering at each of the plurality of sampling points with a truncated Gaussian filter.
Each linear interpolation may blend the result of the previous linear interpolation in the sequence with the isotropic filtering results for one or more of the plurality of sampling points using a linear interpolation factor for that linear interpolation.
The method may further comprise dynamically calculating the linear interpolation factor for a linear interpolation from a ratio of a major radius of the ellipse to be sampled and a minor radius of the ellipse to be sampled.
The method may further comprise obtaining the linear interpolation factor for a linear interpolation from a lookup table based on a ratio of a major radius of the ellipse to be sampled and a minor radius of the ellipse to be sampled.
The linear interpolation factor γ_k for the k-th linear interpolation in the sequence may be equal to γ_k = m(K−k)+c, wherein K is a number of sampling points of the plurality of sampling points on a same side of a midpoint of the major axis of the elliptical footprint, η is a ratio of a major radius of the ellipse to be sampled and a minor radius of the ellipse to be sampled,
The linear interpolation factor γ_k for the k-th linear interpolation in the sequence may be equal to
wherein K is a number of sampling points of the plurality of sampling points on a same side of a midpoint of the major axis of the elliptical footprint, η is a ratio of a major radius of the ellipse to be sampled and a minor radius of the ellipse to be sampled.
A spacing between adjacent sampling points of the plurality of sampling points may be proportional to √(1−η⁻²) units, wherein η is a ratio of a major radius of the ellipse to be sampled and a minor radius of the ellipse to be sampled.
The plurality of sampling points may lie along a major axis of the elliptical footprint.
The method may be performed at a texture filtering unit of a graphics processing system.
A second aspect provides a method of generating an image, the method comprising performing anisotropic texture filtering in accordance with the first aspect, and generating an image based on the combination result.
A third aspect provides a texture filtering unit for use in a graphics processing system, the texture filtering unit configured to perform the method of the first aspect.
A fourth aspect provides a graphics processing system comprising the texture filtering unit of the third aspect.
The texture filtering units and/or the graphics processing systems described herein may be embodied in hardware on an integrated circuit. There may be provided a method of manufacturing, at an integrated circuit manufacturing system, a texture filtering unit and/or a graphics processing system as described herein. There may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, configures the system to manufacture a texture filtering unit and/or a graphics processing system as described herein. There may be provided a non-transitory computer readable storage medium having stored thereon a computer readable description of a texture filtering unit or a graphics processing system that, when processed in an integrated circuit manufacturing system as described herein, causes the integrated circuit manufacturing system to manufacture an integrated circuit embodying the texture filtering unit or the graphics processing system.
There may be provided an integrated circuit manufacturing system comprising: a non-transitory computer readable storage medium having stored thereon a computer readable description of a texture filtering unit or a graphics processing system as described herein; a layout processing system configured to process the computer readable description so as to generate a circuit layout description of an integrated circuit embodying the texture filtering unit or the graphics processing system; and an integrated circuit generation system configured to manufacture the texture filtering unit or the graphics processing system according to the circuit layout description.
There may be provided computer program code for performing a method as described herein. There may be provided non-transitory computer readable storage medium having stored thereon computer readable instructions that, when executed at a computer system, cause the computer system to perform the methods as described herein.
The above features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the examples described herein.
Examples will now be described in detail with reference to the accompanying drawings in which:
The accompanying drawings illustrate various examples. The skilled person will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the drawings represent one example of the boundaries. It may be that in some examples, one element may be designed as multiple elements or that multiple elements may be designed as one element. Common reference numerals are used throughout the figures, where appropriate, to indicate similar features.
The following description is presented by way of example to enable a person skilled in the art to make and use the invention. The present invention is not limited to the embodiments described herein and various modifications to the disclosed embodiments will be apparent to those skilled in the art. Embodiments are described by way of example only.
As described above, texture mapping is a process in which a texture (i.e. an image) is mapped onto an object in a 3D scene. For example, a texture representing a brick pattern may be applied to a wall object to make it look as if the wall is made of brick. The term “screen space” is used herein to represent the 3D coordinate system of the display in which 3D objects such as primitives and patches are defined. Each pixel in screen space is defined by pixel coordinates (x, y) and a depth z. The term “texture space” is used herein to represent the 2D coordinate system of a texture. Each texel in texture space is defined by texture coordinates (u, v).
In anisotropic texture filtering, several texels around an identified position in a texture are combined, but on a sample pattern mapped according to the projected shape of the filter in screen space onto the texture (i.e. in texture space). Anisotropic texture filtering can improve the look of textures that are angled and farther from the camera compared to other filtering methods, such as isotropic texture filtering methods. One method known for implementing anisotropic texture filtering is the elliptical weighted average (EWA) filter technique first proposed by Paul S. Heckbert and Ned Greene. In the EWA technique, which is described with reference to
The partial derivatives (∂u/∂x, ∂v/∂x, ∂u/∂y, ∂v/∂y) represent the rate of change of u and v in texture space relative to changes in x and y in screen space. Texels inside the elliptical footprint are then sampled, weighted (according to a filter profile), and accumulated. The result is then divided by the sum of the weights (which is the elliptical filter's volume in texture space).
EWA, when used in conjunction with a Gaussian filter profile, which may be referred to herein as Gaussian EWA, is considered to be one of the highest quality texture filtering techniques and is often used as a benchmark to measure the quality of other filtering techniques. However, the EWA technique has proven difficult to implement in hardware. Specifically, it can be expensive, in terms of computing resources, to calculate the weights and ellipse parameters, and in some cases may require obtaining many texels.
Different methods known to the Applicant, which is not an admission that they are well-known, have been proposed to approximate Gaussian EWA which can be more easily implemented in hardware. Some of these techniques involve performing isotropic filtering, such as trilinear filtering, at several points along a line in the ellipse and combining the results of the isotropic filtering. For example, in one method which is described with reference to
In another method which is illustrated with reference to
In some cases, instead of computing the stepping vector (Δu, Δv) with trigonometric functions, it may be determined by scaling the longer vector directly.
The inventors have identified that Gaussian EWA can be more accurately estimated by performing isotropic filtering at several sample points along the major axis of the ellipse where the distance between sample points is proportional to √(1−η⁻²) units, η being the ratio of the major radius of the ellipse (ρ+) and the minor radius of the ellipse (ρ−), such that η=ρ+/ρ−. The ratio between the major radius of the ellipse and the minor radius of the ellipse (i.e. η) may be referred to as the anisotropic ratio. As described in more detail below, when the distance between sampling points is proportional to √(1−η⁻²) units the maximum error in the estimation is not dependent on the anisotropic ratio.
The inventors have also identified that, independent of the spacing of the sample points along the major axis, Gaussian EWA can be more accurately estimated by combining the results of the isotropic filtering in a recursive manner. As described in more detail below, not only can this decrease the cumulative error that may occur with summing a plurality of small values, but it can also simplify the calculation of the weights.
In some cases, the two techniques may be combined to obtain an even more accurate estimate of Gaussian EWA.
Accordingly, described herein are methods and texture filtering units for performing anisotropic filtering of a texture using one or more of these techniques.
Reference is now made to
The information identifying the position of interest in the texture may comprise a set of texture co-ordinates (u, v) that identify a position in the texture that corresponds to a particular fragment or pixel in screen space. The set of texture co-ordinates may define the midpoint (e.g. midpoint 510 of
In some cases, which may be referred to as explicit level of detail cases, the information defining the relationship between screen space and texture space may include the partial derivatives (∂u/∂x, ∂v/∂x, ∂u/∂y, ∂v/∂y) that represent the rate of change of u and v in texture space relative to changes in x and y in screen space. In other cases, which may be referred to as implicit level of detail cases, the information defining the relationship between screen space and texture space may include information from which the partial derivatives can be determined, or at least estimated. For example, the information defining the relationship between screen space and texture space may comprise texture co-ordinates corresponding to neighbouring pixels/fragments to the relevant pixel/fragment (e.g. a 2×2 block of pixels/fragments). In some cases, the parameters of the elliptical footprint may also be based on the dimensions of the texture.
The parameters defining the elliptical footprint that may be generated may include, but are not limited to, the major axis of the elliptical footprint (e.g. the major axis 504 of the elliptical footprint 502 of
There are many known methods for generating the parameters of an elliptical footprint in texture space. The parameters of the elliptical footprint in texture space may be generated in any suitable manner. In some cases, the major axis may be identified by performing a singular value decomposition (SVD) of the total derivative matrix of partial derivatives (e.g. Jacobian). This involves taking a matrix M, squaring it MM^T (or M^TM) and then diagonalising. This is illustrated by equation (6):
The inverse of the Jacobian squared can then be expressed as shown in equation (9).
It will be evident to a person of skill in the art that this results in equation (1). It will also be evident to a person of skill in the art that this is an example only. Another example method for generating the parameters of the elliptical footprint in texture space is described in GB Patent No. 2583154 which is hereby incorporated by reference in its entirety.
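A sketch of the diagonalisation route described above is given below, forming the Jacobian of (u, v) with respect to (x, y), diagonalising JJ^T and taking square roots of the eigenvalues to obtain the minor and major radii and the major-axis direction; this is one possible formulation consistent with the description, not a reproduction of the referenced equations:

```python
import numpy as np

def elliptical_footprint(du_dx, du_dy, dv_dx, dv_dy):
    """Estimate the elliptical footprint of a fragment in texture space.

    Forms the Jacobian J of (u, v) with respect to (x, y), diagonalises the
    symmetric matrix J @ J.T and takes square roots of its eigenvalues to obtain
    the minor radius, the major radius and the unit major-axis direction.
    """
    J = np.array([[du_dx, du_dy],
                  [dv_dx, dv_dy]], dtype=float)
    # np.linalg.eigh returns eigenvalues in ascending order for a symmetric matrix.
    eigvals, eigvecs = np.linalg.eigh(J @ J.T)
    rho_minus = float(np.sqrt(max(eigvals[0], 0.0)))
    rho_plus = float(np.sqrt(max(eigvals[1], 0.0)))
    major_axis_dir = eigvecs[:, 1]               # unit vector along the major axis in (u, v)
    eta = rho_plus / max(rho_minus, 1e-12)       # anisotropic ratio
    return rho_plus, rho_minus, major_axis_dir, eta
```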
Once the parameters defining the elliptical footprint in texture space have been identified the method 400 proceeds to block 404.
At block 404, at least one set of equally spaced sampling points (which may also be referred to as sample points) along the major axis are identified (e.g. sampling points 506 on the major axis 504 of
The number of sampling points in a set, N, may be selected based on the ratio of the major radius (ρ+) and the minor radius (ρ−) of the associated ellipse to be sampled (i.e. the anisotropic ratio η). In some cases, described in more detail below, N may be proportional to the anisotropic ratio η, a sampling rate β, and the width of the Gaussian kernel in standard deviations (which is preferably a multiple of 2 standard deviations—i.e. 2α). In general, the sampling rate β controls how closely spaced along the major axis kernel (i.e. the Gaussian kernel) the samples are. The higher β is, the more closely spaced the samples are along the major axis. For example, if β is two, samples are taken every one standard deviation of the minor axis, instead of every two standard deviations of the minor axis when β is one. In some cases, N may be equal to 2αβ[η]. The parameters α and β may be explicitly provided (e.g. based on a sampling budget), or may be dynamically selected based on information provided. For example, in some cases, the information provided may simply indicate a level of quality—e.g. high quality or low quality, and α and β may be selected accordingly.
Preferably the distance σ between sampling points in each set is proportional, by a proportionality factor κ, to √(1−η⁻²) units. As described in more detail below, it has been determined that when the distance or spacing σ between sampling points is proportional to √(1−η⁻²) units, the upper bound on the error associated with the anisotropic filtering result is not dependent on the anisotropic ratio, thereby ensuring a balanced quality of approximation for a specified performance budget.
In some cases, the proportionality factor κ may be equal to η/(β[η]). As described in more detail below, when the proportionality factor κ is equal to η/(β[η]) the Gaussian weights that are applied in block 408 to the results of the isotropic filtering performed in block 406 may only be generated for integer anisotropic ratios. This can reduce the cost (e.g. tabulation of weights) of implementing the method. However, the disadvantage of this expression of the proportionality factor is that the spacing (and hence approximation quality) is discontinuous and specifically may jump discontinuously when there is a jump from one integer anisotropic ratio to another. This is especially true if the reconstruction quality is low (for example if the sample rate is low, e.g. β=½) which may result in discontinuous jumps in a filtered image, which may be prominent.
In other cases, the proportionality factor κ may be equal to 1/β. As described in more detail below, when the proportionality factor is equal to 1/β the Gaussian weights that are applied to the results of the isotropic filtering are not limited to integer anisotropic ratios, which makes calculation and storage of the weights more complicated, but the spacing is continuous for various ratios and gives a uniform error in the limit as the number of samples tends to infinity.
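The two spacing choices just described can be sketched as follows, assuming N = 2αβ[η] and taking [η] to be η rounded up to an integer (the rounding convention and the handling of non-integer N are assumptions made for illustration); the spacing returned is expressed in units of the minor radius ρ−:

```python
import math

def sample_count_and_spacing(eta: float, alpha: int = 1, beta: float = 1.0,
                             integer_ratio_weights: bool = True):
    """Number of samples N and spacing along the major axis (in units of rho_minus).

    Both variants keep the spacing proportional to sqrt(1 - eta**-2); they differ
    only in the proportionality factor kappa.
    """
    eta = max(eta, 1.0 + 1e-9)                                  # guard against a circular footprint
    n = max(int(round(2 * alpha * beta * math.ceil(eta))), 1)   # N = 2*alpha*beta*[eta]
    if integer_ratio_weights:
        # First variant: kappa = eta / (beta*[eta]); weights are only needed for
        # integer anisotropic ratios, but the spacing jumps at integer eta.
        kappa = eta / (beta * math.ceil(eta))
    else:
        # Second variant: kappa = 1/beta; continuous spacing across all ratios.
        kappa = 1.0 / beta
    spacing = kappa * math.sqrt(1.0 - eta ** -2)
    return n, spacing
```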
In some cases, the first sample in a set may be offset from the middle point, midpoint or centre point of the major axis by a fraction ψ of the spacing (the fraction ψ may also be referred to as the offset). In these cases, the location of each sample point n can be expressed by
where n is any integer in the half-open interval
and ρ+ is the major axis radius vector. In some cases, if the number of sampling points N is even, the offset ψ may be set to ½ such that the sampling points are symmetrically positioned about the middle point of the major axis. For example, if ψ=½ and N=2 then n∈[−1−½, +1−½) ⇒ n∈{−1, 0} ⇒ n+ψ=±½. However, an offset of ½ can also be used for an odd number of samples. For example, if ψ=½ and N=3 then n∈[−3/2−½, +3/2−½) ⇒ n∈{−2, −1, 0} ⇒ n+ψ={±½, −3/2}.
It will be evident to a person of skill in the art that this is an example only and that the offset ψ may be set to other suitable values. In particular, if the number of sampling points N is odd, a symmetric distribution of points about the middle or midpoint of the major axis may be attained when ψ is set to 0. For example, if ψ=0 and N=3 then n∈[−3/2, +3/2) ⇒ n∈{−1, 0, 1} ⇒ n+ψ={0, ±1}. However, an offset of 0 can also be used for an even number of samples. For example, if ψ=0 and N=4 then n∈[−2, +2) ⇒ n∈{−2, −1, 0, 1} ⇒ n+ψ={0, ±1, −2}.
In yet other examples, the distribution pattern may alternate between offsets of 0 and ½ for odd and even numbers of samples, respectively. However, for the sake of continuity (as the anisotropic ratio increases), it may be preferable to use a consistent offset, which tends to favour an even number of samples since fewer samples may be required when the anisotropic ratio is small (e.g. perhaps only two samples, as opposed to three, when the ratio is close to unity).
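The sample placement described above, with the n-th point at (n+ψ)σ along the major axis from the midpoint and n running over the half-open interval of N integers, can be sketched as follows (the vector form in texture space is an assumption consistent with the description; the spacing is here taken to be in texels):

```python
import math
import numpy as np

def sample_positions(centre_uv, major_dir, spacing, n_samples, psi=0.5):
    """Texture-space positions of the sampling points along the major axis.

    The n-th point lies at (n + psi) * spacing along the unit major-axis direction
    from the ellipse midpoint, with n an integer in the half-open interval
    [-N/2 - psi, +N/2 - psi).
    """
    lo = math.ceil(-n_samples / 2.0 - psi)             # smallest integer n in the interval
    ns = np.arange(lo, lo + n_samples, dtype=float)
    offsets = (ns + psi) * spacing                     # signed distances from the midpoint
    return np.asarray(centre_uv)[None, :] + offsets[:, None] * np.asarray(major_dir)[None, :]

# e.g. psi = 0.5 and N = 2 gives offsets of -0.5*spacing and +0.5*spacing about the midpoint
```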
In some cases, a single set of equally spaced sampling points along the major axis is identified and the ellipse to be sampled is the elliptical footprint identified in block 402. In these cases, isotropic filtering is performed at each of the identified sampling points (see block 406), and the results of the isotropic filtering are combined using a Gaussian filter (see block 408). In some cases, the isotropic filtering performed at the sampling points may incorporate a mipmap interpolation technique. In such cases, performing isotropic filtering at a sampling point may comprise: performing a first isotropic filtering at the sampling point at a first mipmap level; performing a second isotropic filtering at the sampling point at a second mipmap level; and interpolating between a result of the first isotropic filtering and a result of the second isotropic filtering to generate a result of the isotropic filtering for the sampling point. Therefore, in these cases, the interpolation between mipmaps is performed before the combination (i.e. before block 408). To implement trilinear filtering, the first and second isotropic filtering may be bilinear filtering, the first and second mipmap levels may comprise one higher resolution mipmap level and one lower resolution mipmap level, and the result of the trilinear filtering at each sampling point is combined at block 408.
For example, let the length of the minor axis ρ− be equal to √2 base mipmap level texels, and the length of the major axis ρ+ be equal to 3√2 base mipmap level texels. As is known to those of skill in the art, the minor and major axis lengths have a fixed size in normalised texture co-ordinates so will have a smaller texel size for lower resolution mipmap levels. In this example, η=3, and the level of detail (LOD) λ=log₂ ρ−=½ which indicates that it may be beneficial to sample from multiple mipmaps. Where only one set of equally spaced sampling points along the major axis is identified, N samples (where N is proportional to η=3) at a spacing of
are identified which produces an effective spacing of
texels from the base mipmap level and
texels from the second mipmap level. It will be evident to those of skill in the art that these locations are aligned in (u, v) co-ordinates. Thus the relevant positions of each mipmap level can be identified by a single set of points along the major axis.
In other cases, where multiple mipmap levels are used, multiple sets of equally spaced sampling points along the major axis are identified—one for each mipmap level. Each set of equally spaced sampling points relates to, or is associated with, a different ellipse to be sampled, wherein each ellipse to be sampled is based on, and/or related to, the elliptical footprint identified in block 402. When multiple sets of sampling points are identified each set may have the same number of sampling points or different sets may have a different number of sampling points.
Where multiple sets of sampling points are identified, isotropic filtering may be performed at each of the identified points of each set (see block 406), the results of the isotropic filtering for each set may be combined separately (see block 408), and then interpolation may be performed on the two combination results (e.g. using a fractional level of detail (LOD) mipmap interpolation weight). For each mipmap level the steps are in units of texels. In some examples, described with reference to
For example, let the length of the minor axis ρ− be equal to √2 base mipmap level texels, and the length of the major axis ρ+ be equal to 3√2 base mipmap level texels. In this example a spacing of
texels is used for each mipmap level. Since a width of a texel from the second mipmap level is twice that of the first (base) mipmap level, the kernel width of the second mipmap level is double that of the first. As the sample locations do not align, a separate set of sample points is identified for each mipmap level.
In other examples, described with reference to
For example, let the length of the minor axis ρ− be equal to √2 base mipmap level texels, and the length of the major axis ρ+ be equal to 3√2 base mipmap level texels. In this example
The spacing for the higher resolution mipmap level is
texels and the spacing for the lower resolution mipmap level is
texels. As the sample locations do not align, a separate set of sample points is identified for each mipmap level.
Once the sampling point set(s) have been identified the method 400 proceeds to block 406.
At block 406, isotropic filtering is performed at each of the sampling points identified in block 404 (e.g. isotropic filtering 508 may be performed at each of the sampling points 506 in
At block 408, the isotropic filtering results for each set of sampling points are combined using a Gaussian kernel. For example, where there is a single set of sampling points, the isotropic filtering results for those sampling points are combined using a Gaussian kernel. Where, however, there are multiple sets of sampling points—one corresponding to each mipmap level, then the isotropic filtering results corresponding to each mipmap level may be combined separately using a Gaussian kernel. Specifically, the isotropic filtering results corresponding to the lower resolution mipmap level may be combined using a Gaussian kernel, and the isotropic filtering results corresponding to the higher resolution mipmap level may be combined using a Gaussian kernel.
In some cases, the results of the isotropic filtering for a set of sampling points may be combined by identifying the appropriate Gaussian weight for each isotropic filtering result based on the location of the related sampling point, calculating the product of each filtering result and the corresponding weight, and calculating the sum of the products. However, as described in more detail below with respect to
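The straightforward weighted-sum combination described in this paragraph can be sketched as below, with the Gaussian weights supplied by the caller; normalising by the sum of the weights is shown for completeness and is redundant if the weights are already normalised. The recursive alternative is sketched later in the description:

```python
import numpy as np

def weighted_sum_combine(filter_results, weights):
    """Combine isotropic filtering results with precomputed Gaussian weights:
    multiply each result by its weight, sum the products, and normalise by the
    sum of the weights."""
    weights = np.asarray(weights, dtype=float)
    results = np.asarray(filter_results, dtype=float)        # shape (N,) or (N, channels)
    weighted = weights.reshape(-1, *([1] * (results.ndim - 1))) * results
    return weighted.sum(axis=0) / weights.sum()
```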
Where only one set of sampling points was identified, the method 400 may end. Where, however, multiple sets of sampling points were identified, the method 400 may proceed to block 410.
At block 410, interpolation is performed between the combination results generated in block 408 for the different sets of sampling points. For example, block 410 may comprise interpolating between the Gaussian combination result generated in block 408 for the set of sampling points for a higher resolution mipmap and the Gaussian combination result generated in block 408 for the set of sampling points for a lower resolution mipmap. In some cases, the interpolation may be performed using a fractional level of detail (LOD) mipmap interpolation weight. However, it will be evident to a person of skill in the art that this is an example only. Once the interpolation has been performed, the method 400 may end. The result of the method (block 408 or block 410), which may be referred to as a filter result, may be output for further processing. For example, the output of the method (block 408 or block 410) may be output to a shader (or another component of a graphics processing unit or a graphics processing system) for use in generating a rendering output (e.g. an image).
The different approaches described above for using multiple mipmap levels have different advantages and disadvantages. Where the interpolation between isotropic filtering results for different mipmap levels is performed prior to combining the filtering results (e.g. where a single set of sampling points is identified), the sampling points along the major axis may be spaced too sparsely on the high resolution mipmap and too densely on the low resolution mipmap, which may result in a relatively poor and a relatively high quality approximation respectively, in terms of the minor axis scale. In contrast, if the interpolation between isotropic filtering results for different mipmap levels is performed after Gaussian combination of the filtering results (e.g. where a set of sampling points is identified for each mipmap level) there will be a consistent spacing on each mipmap, so the quality in terms of the kernel sample density (i.e. minor axis spacing) will be consistent. However, if the symmetric anisotropic filtering approach is used, where a pair of concentric ellipses are used to approximate the desired ellipse (one smaller than the desired ellipse and one larger than the desired ellipse), the kernel major axis length may be too sparsely separated (between adjacent fragments in the frame buffer) on the high resolution mipmap level and too densely separated on the lower resolution level. This is because using a pair of different sized ellipses affects the degree of overlap with neighbouring fragments. In contrast, if the asymmetric anisotropic filtering approach is used, where a pair of ellipses with different levels of eccentricity are used to approximate the desired ellipse, then for the major axes the desired filter width is selected for each mipmap level and therefore the antialiasing issue associated with linearly interpolating two different sized kernels is avoided.
In Gaussian EWA a circular Gaussian filter in screen space is mapped to an elliptical filter in texture space. In the methods described herein the continuous Gaussian filter of Gaussian EWA is approximated as a discrete Gaussian weighted sum of smaller filters (i.e. the isotropic filters which are preferably Gaussian filters) which can be represented as a convolution between a Gaussian filter and an isotropic filter.
In determining a preferred spacing of the sampling points one must first identify the preferred covariance of the convolution and the filters that make up the convolution. As will be described in more detail below, the inventors have identified that the preferred variance of the convolution is ρ+², and the preferred variance of the isotropic filter is ρ−², and since the variances of functions are additive under convolution the preferred variance of the Gaussian filter is ρ+²−ρ−². Methods known to the Applicant for estimating forms of EWA (e.g. Gaussian EWA), which is not an admission that they are well-known, do not generally account for the contribution of the isotropic filter to the target variance, leading to inexact resultant kernel profiles.
Specifically, in anisotropic filtering preferably there is a symmetric covariance matrix in screen space which maps to an anisotropic covariance matrix in texture space. In some examples, an asymmetric covariance matrix in screen space may be defined, but this generates additional complexity in the determination of the elliptical footprint in texture space. Equation (10) shows a 2D mapping of a covariance matrix (
the determination of the mapping simplifies to the earlier equations (e.g. J
and a minor axis
where for texture space coordinates
ϕ describes the angular displacement of the major axis from the u axis in texture space.
This can be divided into an isotropic portion and an anisotropic portion. Specifically, let the anisotropic filter be represented by A and the isotropic filter be represented as T then, using the additive property of covariance where the covariance of a filter F is written as
and ρT² is proportional to the covariance of the isotropic filter T (which is along both the major axis and the minor axis). It can be seen from equation (12) that the covariances along the major axis are additive to produce a final variance for the convolution of ρA²+ρT².
Preferably equation (12) is equal to equation (10). In other words, preferably the covariance of the convolution is equal to the preferred covariance expressed in equation (10). For equations (10) and (12) to be equal, then ϕ=θ, ρT²=ρ−², and ρA²=ρ+²−ρ−². Therefore the preferred covariance for the anisotropic filter is ρ+²−ρ−². Accordingly, in determining the covariance of the anisotropic filter, the covariance of the isotropic filter is to be taken into account.
Now that the preferred covariance of the Gaussian filter (i.e. the anisotropic filter) has been determined to be ρ+²−ρ−², an analysis of the error is used to identify a preferred spacing of the sample points along the major axis. Specifically, as the discrete weighted sum is only an approximation of a continuous Gaussian there will be an error between the continuous Gaussian and the discrete approximation thereof. The inventors have identified that the upper bound on this error is not dependent on the anisotropic ratio if the spacing between the samples is proportional to √(1−η⁻²) units. This is advantageous because, when the error is not dependent on the anisotropic ratio, a consistent quality can be achieved for a given (kernel) sampling rate budget. Specifically, when the error is not dependent on the anisotropic ratio it can be seen that if the number of samples is reduced for lower ratios this will result in a greater error. In other words, one cannot cut corners for lower anisotropic ratios.
Specifically, let a discrete Gaussian weighted sum Γ̃(x) with the preferred covariance of ρ+²−ρ−² be represented in one dimension as shown in equation (13), wherein ψ is the offset from the centre of the major axis of the ellipse at which the first sample is situated, σ is the distance between samples along the major axis, and δ is the Dirac delta function, used to reduce a convolution defined inside an integral to a discrete sum. As described above, in some cases the offset ψ is equal to ½ such that when there are an even number of samples the samples are evenly distributed on either side of the middle or centre point of the major axis (i.e. the samples are symmetric about the middle or centre of the major axis). However, in other cases, the offset ψ may be other values such as, but not limited to, zero.
The Gaussian weighted sum is convolved with the isotropic filter. Let the isotropic filter be a Gaussian filter with a covariance of ρ−² as shown in equation (14), which is the preferred covariance for the isotropic filter.
The general form of the convolution can then be expressed by equation (15) which can be re-written as equation (16) by using the Dirac delta function δ to eliminate the integral such that the convolution can be written as a sum of smaller Gaussians at each sample location weighted by the discrete Gaussian filter Γ̃. Specifically, the result of the application of the Dirac delta function δ in the integral is only non-zero when x′=x−(n+ψ)σ, therefore x′ is replaced with x−(n+ψ)σ and the integral is removed.
The exponents in equation (16) can then be arranged as shown below to result in a representation of the exponents shown in equation (17) wherein the exponents are expressed as the combination of two terms—a first term which is dependent on n and a second term which is not dependent on n. It can be seen that this rearranging has been accomplished by completing the square.
The exponents in equation (16) can then be replaced with the expression thereof in equation (17) which results in equation (18). Since the second term of equation (17) is not dependent on n it can be removed from the summation which results in equation (19). It can be seen that the terms outside of the summation represent a Gaussian filter with a variance of ρ+². Thus equation (19) can be re-written as a function of a Gaussian filter with a variance of ρ+² (i.e. Γ_(ρ+²)).
A Gaussian filter with a variance of ρ+² (i.e. Γ_(ρ+²)
If ρ=σ⁻¹√(1−η⁻²)ρ− then equation (21) can be re-written as equation (22).
Equation (20) can then be re-written as equation (23) using equation (22).
Using the identity Σ_n exp(2πikn)=Σ_n δ(k−n), which specifies that a discrete series of integer frequency sinusoidal functions is equivalent to a train of Dirac delta functions with integer spacing, equation (23) can be re-written as equation (24) which can be simplified to equation (25) because the integral with respect to the Dirac delta function δ selects the integrand at integer locations.
One way to determine the worst case error on an operator like a kernel is to use the supremum norm when the kernel K acts on some function ƒ (which is a texture in the anisotropic filtering case). The supremum norm can be expressed by equation (26), and the L1 norm can thus be expressed by equation (27).
The error between the continuous desired Gaussian Γ_(ρ+²)
It can be seen from equation (30) that if the spacing σ of the samples along the major axis is proportional to √(1−η⁻²) units (i.e. ρ− in this example) as shown in equation (31) where K is some constant, then the upper bound on the error shown in equation (30) can be expressed as shown in equation (32).
It can thus be seen from equation (32) that when the spacing σ of the samples along the major axis is proportional to √(1−η⁻²) units then the upper bound on the error is not dependent on the ratio η or the units (e.g. ρ− in this example) which is beneficial. Specifically, it allows there to be a uniform bound for a set of parameters.
It has been assumed in the above analysis that an infinite series of weights are applied in the discrete convolution, which is clearly not feasible. This error is treated herein as an independent parameter that, as described in more detail below, can be controlled by the extent of the kernel support.
Two different example implementations where the spacing σ of the samples along the major axis is proportional to √(1−η⁻²) units will now be described. As is known to those of skill in the art, for a non-negative function such as a Gaussian function, the variance (or rather the standard deviation, i.e. the square root of the variance) gives an indication of the scale of the kernel and the degree of filtering. Accordingly, if the anisotropic filter has the preferred variance of ρ+²−ρ−² then the standard deviation of the filter is √(ρ+²−ρ−²). Since the Gaussian (discrete or continuous) has infinite support, the series is truncated to generate a finite sequence of filtering operations. While higher quality windowing functions can be selected, the examples here assume a simple termination of the series after a finite number of terms.
It is desirable that the support of the Gaussian kernel along the major axis (i.e. the length of the major axis over which the sample points extend) be proportional, by a proportionality factor α, to two standard deviations (i.e. 2√(ρ+²−ρ−²)). If there are N evenly spaced samples along the major axis then the spacing or distance σ between the samples on the major axis can be expressed as shown in equation (33), where ρ− is the minor radius vector and ρ+ is the major radius vector. Accordingly, 2α represents the number of standard deviations of the major axis that is covered by the samples. Equation (33) can then be expressed in terms of √(1−η⁻²) as shown in equations (34) and (35).
In a first example implementation, N (which has to be an integer) is proportional to the anisotropic ratio η, a sampling rate β, and the width of the kernel in standard deviations (i.e. 2α) as shown in equation (36). In general, the sampling rate β controls how closely spaced along the major axis kernel the samples are. The higher β is, the more closely spaced the samples are along the major axis. For example, if β is two, samples are taken every one standard deviation of the minor axis, instead of every two standard deviations of the minor axis when β is one. In this example, the spacing σ between the sample points can be expressed by equation (37) by replacing N in equation (34) with the right-hand side of equation (36).
Accordingly, the proportionality factor κ is equal to η/(β[η]) for this first example implementation. It can be seen from the inequality in equation (32) that the cap on the error in the approximation decreases as β increases. This indicates that the approximation quality, in the limit as α tends toward infinity, is highest when η is just larger than an integer. Specifically, it can be seen from equation (37) that when the anisotropic ratio η is an integer (i.e. η=[η]) the spacing σ of the samples along the major axis (in units of ρ−) is proportional, by a constant or fixed proportionality factor of 1/β, to √(1−η⁻²). When the anisotropic ratio η is not an integer (i.e. η≠[η]) then the spacing σ of the samples along the major axis (in units of ρ−) is proportional, by the variable factor η/(β[η]), to √(1−η⁻²). This means that the sample spacing for non-integer anisotropic ratios is at least as dense as for the corresponding integer anisotropic ratio.
As described above, in the methods described herein, isotropic filtering is performed at the sampling points along the major axis, and the results of the isotropic filtering are combined via a weighted sum where the weights are determined in accordance with a Gaussian kernel. This can be represented as a convolution of an anisotropic filter (a discrete Gaussian weighted sum) and an isotropic filter. The anisotropic filter can be represented by equation (13) shown above. From equation (13) it can be determined that the weight for the result of the isotropic filtering performed for the nth sample point can be determined in accordance with equation (38), where ψ is an offset from the midpoint of the major axis at which the first sample is situated, and σ is the spacing between samples. In some cases, the offset may be set to ½. However, in other cases, the offset may be set to other values, such as, but not limited to, zero. (n+ψ)σ can be expressed as shown in equation (39). If (n+ψ)σ in equation (38) is replaced with the right-hand side of equation (39), equation (38) can be written as equation (40). Equation (40) can be rearranged to produce equation (41).
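Following the discrete Gaussian of equation (13), the weight for the n-th isotropic filtering result is proportional to exp(−((n+ψ)σ)²/(2(ρ+²−ρ−²))). A sketch is given below; normalising the weights to sum to one is an assumed convention, since the exact normalisation of the referenced equations is not reproduced here:

```python
import math

def gaussian_weights(n_samples: int, spacing: float,
                     rho_plus: float, rho_minus: float, psi: float = 0.5):
    """Weights for the isotropic filtering results: a discrete Gaussian with
    variance rho_plus**2 - rho_minus**2 sampled at (n + psi) * spacing, where
    `spacing` is expressed in the same units as the radii (e.g. texels)."""
    variance = max(rho_plus ** 2 - rho_minus ** 2, 1e-12)   # guard against eta == 1
    lo = math.ceil(-n_samples / 2.0 - psi)                  # smallest integer n in the interval
    ws = [math.exp(-((n + psi) * spacing) ** 2 / (2.0 * variance))
          for n in range(lo, lo + n_samples)]
    total = sum(ws)
    return [w / total for w in ws]
```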
In this first example, N is as set forth in equation (36) which means equation (41) can be re-written as weights equation (42).
It can be seen from equation (42) that in this first example the weights only need to be calculated for integer values of the anisotropic ratio. This can reduce the cost (e.g. tabulation of weights) of implementing this first example. However, the disadvantage of this first example is that the spacing is discontinuous and specifically may jump discontinuously when there is a jump from one integer anisotropic ratio to another. This is especially true if the reconstruction quality is low
which may result in discontinuous jumps in a filtered image, which may be prominent.
Accordingly, in a second example implementation, to avoid the discontinuous spacing, the anisotropic filter support (as indicated by α) can be widened for non-integer ratios. Specifically, in this second example α is expressed as a function of the anisotropic ratio η as shown in equation (43) where α0 is an integer, representing the base proportionality factor of the kernel support (i.e. multiples of two standard deviations). N can then be written as shown in equation (44) and the spacing σ between the sample points can be expressed by equation (45) by replacing α in equation (34) with the right hand side of equation (43), and replacing N in equation (34) with the right-hand side of equation (44).
In this second example, the spacing of the samples (in units of ρ−) is always proportional, by a fixed or constant proportionality factor of 1/β, to √(1−η⁻²) regardless of whether the anisotropic ratio is an integer or not.
In this second example N is as set forth in equation (44) which means equation (41) can be re-written as equation (46).
It can be seen from equation (46) that in this second example the weights are not limited to integer anisotropic ratios which makes calculation and storage of the weights more complicated, but this second example has the advantage that the spacing is continuous for various ratios and gives a uniform error in the limit as the number of samples tends to infinity.
It is noted that the first and second example formulations of the spacing of the sample points and the corresponding weights end up being the same for integer anisotropic ratios. Furthermore, if α in the first example equals α0 in the second example, an identical number of samples is used in each example, irrespective of the anisotropic ratio. In the former case, the sample density is increased and the kernel support (in terms of standard deviations) is held constant, yielding an improved quality of approximation from the increased sample density but a roughly constant error associated with the truncation of the series. In the latter case, the sample density is held constant, but the kernel support is increased, reducing the error associated with the truncation of the series, but, as noted above, this effect diminishes as the base kernel support increases and vanishes in the limit.
The methods of performing anisotropic filtering described above comprise performing isotropic filtering (e.g. bilinear filtering or trilinear filtering) at a plurality of sampling points along the major axis of an ellipse in texture space and combining the results of the isotropic filtering using a Gaussian kernel. In other words, a Gaussian-weighted sum of the results of the isotropic filtering is generated. As described above, one way of combining the results of the isotropic filtering using a Gaussian kernel is to identify a Gaussian weight for each isotropic filtering result based on the location of the sampling point, calculate the product of each weight and the corresponding isotropic filtering result, and calculate the sum of the products. However, calculating a Gaussian-weighted sum in this manner has a number of disadvantages. First, because some of the weights, and thus the associated products, can get quite small, this method involves summing a number of small values, which can result in a large accumulated error. Secondly, this method of calculating a Gaussian-weighted sum involves calculating and storing complicated Gaussian weights at a high level of precision.
Accordingly, described herein is an improved method of calculating the Gaussian-weighted sum of isotropic filtering results which can reduce the accumulated error from small weights and can also save time and resources. More particularly, in the methods described herein the Gaussian-weighted sum is calculated via a sequence of linear interpolations, starting with the isotropic filtering results that correspond to the sampling points furthest from the centre of the major axis and moving towards the centre of the major axis.
For example, a Gaussian-weighted sum can be expressed as a sequence of linear interpolations as shown in equation (47) where Fk is the result after the kth iteration, γk is a linear interpolation factor for the kth iteration, and K=N/2 where N is the even number of sampling points along the major axis.
In each iteration, the current total (Fk-1) is blended with the isotropic filtering result LK-(k-1), corresponding to the (K−(k−1))th sampling point to the left of the middle point of the major axis, and the isotropic filtering result RK-(k-1), corresponding to the (K−(k−1))th sampling point to the right of the middle point of the major axis, using a linear interpolation factor γk. Accordingly, L is used to denote an isotropic filtering result that corresponds to a sampling point to the left of the middle point of the major axis, and R is used to denote an isotropic filtering result that corresponds to a sampling point to the right of the middle point of the major axis. The final result FK is obtained after K iterations.
In another example, instead of combining corresponding L and R isotropic filtering results in the same iteration, the L isotropic filtering results and the R isotropic filtering results may be interpolated separately, and the interpolation of the L isotropic filtering results may be averaged with the interpolation of the R isotropic filtering results. Specifically, in this example, in each iteration a current total of L isotropic filtering results, FkL, is blended with the isotropic filtering result LK-(k-1) using a linear interpolation factor γk as shown in equation (48); and a current total of R isotropic filtering results, FkR, is blended with the isotropic filtering result RK-(k-1) using a linear interpolation factor γk as shown in equation (49). After K iterations the final result FK is calculated as the average of the final total of L isotropic filtering results FKL and the final total of R isotropic filtering results FKR, as shown in equation (50).
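By way of illustration, the following sketch shows one way the interpolation factors γk can be chosen so that the recursion reproduces a normalised weighted sum; the factor definition used here (each pair weight divided by the running sum of pair weights) is an illustrative choice and is not necessarily that of equations (47) to (54), which are not reproduced here.

```python
import numpy as np

def lerp_factors(pair_weights):
    # Interpolation factors gamma_k such that the recursion below reproduces a
    # normalised weighted sum; pair_weights[0] is the weight of the outermost
    # (L, R) pair and pair_weights[-1] the weight of the innermost pair.
    return pair_weights / np.cumsum(pair_weights)

def weighted_sum_by_lerp(left, right, pair_weights):
    # left[0] and right[0] are the isotropic filtering results for the sample
    # points furthest from the centre of the major axis.
    gammas = lerp_factors(pair_weights)
    total = 0.0                                  # F_0; irrelevant since gammas[0] == 1
    for g, L, R in zip(gammas, left, right):
        total = (1.0 - g) * total + g * 0.5 * (L + R)   # blend current total with the pair
    return total

# Check against a direct weighted sum with Gaussian pair weights.
rng = np.random.default_rng(0)
K = 4
positions = (np.arange(K) + 0.5)[::-1]           # outermost pair first
pair_weights = np.exp(-0.5 * positions ** 2)
pair_weights /= pair_weights.sum()
L, R = rng.random(K), rng.random(K)
direct = np.sum(pair_weights * 0.5 * (L + R))
print(np.isclose(weighted_sum_by_lerp(L, R, pair_weights), direct))   # True
```

The variant of equations (48) to (50) corresponds to applying the same recursion to the left results and the right results separately and then averaging the two totals.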
Although in the second example two separate partial results are generated and stored, this approach can have the advantage of increasing cache coherency, as the inner loop is performed over adjacent samples (as opposed to opposite ends of the kernel), and the outer loop can potentially be interleaved with, or otherwise take account of, neighbouring fragment filter operations. For example, an anisotropic filter aligned with the horizontal screen space axis may choose to process a pair of vertically aligned fragments in parallel, first recursively interpolating the left points (which should be spatially coherent on account of the horizontal direction of anisotropy), followed by the right points, before moving on to the next vertical fragment pair to the right. Since the right points of the fragment pair to the left will move from the right to the left, and the left points of the fragment pair to the right will move from the left to the right, this can mean that the final samples from the left vertical fragment pair are close to the first samples from the right vertical fragment pair.
Reference is now made to an example method 800 of calculating a Gaussian-weighted sum of isotropic filtering results as a sequence of linear interpolations.
The method 800 begins at block 802 where an iteration counter k is initialised (e.g. to 1) and the current total (i.e. F0) is initialised to a starting value. If the initial interpolation factor γ1 is 1, then F0 does not contribute to the final result and the current total can be initialised to any value. This would define a normalised weighted sum purely in terms of a set of recursive weights. However, as described below, the weights for a Gaussian sum may be simplified when written directly in terms of the infinite series.
Since this series is truncated to achieve a realisable filter, the starting value F0 may represent the partial result of all the truncated terms. The advantage of initialising the starting value to the partial result of all the truncated terms is that a single set of weights γ (working from the inside out) may be defined, irrespective of the degree of truncation. In some examples, the starting value F0 may be an estimate of the sum of the truncated terms in the form of a texture sample (e.g. the central value) or a known average.
In other cases, the current total (i.e. F0) may be initialised to zero. In such cases, the final result may be normalised (e.g. by tabulating the missing weights for the degree of truncation for a particular ratio). Accordingly, where the current total is initialised to zero, the method 800 may further comprise, prior to block 810, a final normalisation step to rescale the result (e.g. by the reciprocal of the sum of the weights of the terms that were included).
Once the iteration counter k and the current total have been initialised the method 800 proceeds to block 804.
At block 804, the current total is blended with the isotropic filtering result corresponding to the (K−(k−1))th sample point to the left of the middle point of the major axis of the ellipse (i.e. LK-(k-1)) and/or the isotropic filtering result corresponding to the (K−(k−1))th sample point to the right of the middle point of the major axis of the ellipse (i.e. RK-(k-1)), using a linear interpolation factor of γk, to generate a new current total. In some cases, the method 800 may be used to linearly interpolate between only the L isotropic filtering results, between only the R isotropic filtering results, or between both the L and R isotropic filtering results.
Where the method 800 is used to linearly interpolate between only the L isotropic filtering results, then at block 804 (1−γk)Fk-1+γk LK-(k-1) may be calculated, where Fk-1 is the current total. In these cases, the method may further comprise repeating blocks 802 to 810 for the R isotropic filtering results and then determining the average of the interpolation of the L isotropic filtering results and the interpolation of the R isotropic filtering results. Similarly, where the method 800 is used to linearly interpolate between only the R isotropic filtering results, then at block 804 (1−γk)Fk-1+γk RK-(k-1) may be calculated. In these cases, the method may further comprise repeating blocks 802 to 810 for the L isotropic filtering results and then determining the average of the interpolation of the R isotropic filtering results and the interpolation of the L isotropic filtering results to generate a final result. Where the method 800 is used to linearly interpolate between both the L and R isotropic filtering results, then at block 804 the current total may be blended with both LK-(k-1) and RK-(k-1) (e.g. as set out in equation (47)) using the linear interpolation factor γk.
For example, where the method 800 is used to interpolate between the L and R isotropic filtering results corresponding to a set of evenly spaced sample points along a line in texture space (e.g. along the major axis of the ellipse), the blending starts with the isotropic filtering results for the outermost sample points and works inwards towards the middle point of the line.
In some cases, the linear interpolation factors γk may be approximated with a linear function of k. One example linear function for calculating the linear interpolation factors γk is shown in equation (51) where c and m may be calculated in accordance with equations (52) and (53) respectively.
In another example, the linear interpolation factors γk may be calculated in accordance with equation (54). In this case the results may be tabulated (e.g. in a hardware lookup table) directly for the finite (e.g. fixed point) set of anisotropic ratios for m, and lookup table optimisation techniques may then be applied.
The fact that the factors γk are near linear means that they are likely amenable to lookup table optimisation.
It will be evident to a person of skill in the art that these are examples only, and that other functions of k may be used to generate the linear interpolation factors. For example, in either of the examples described above η may be replaced with βη or β[η].
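As an illustration of this near-linearity (a sketch only, using the illustrative factor definition from the previous sketch rather than equations (51) to (54)), a straight line may be fitted to the exact factors:

```python
import numpy as np

# gamma_k for an illustrative configuration, together with a least-squares
# straight-line fit gamma_k ~ c + m*k; the fit illustrates the idea behind a
# linear approximation of the factors, not equations (52) and (53) themselves.
K = 8
positions = (np.arange(K) + 0.5)[::-1] * 0.5     # outermost pair first, spacing 0.5
pair_weights = np.exp(-0.5 * positions ** 2)
gammas = pair_weights / np.cumsum(pair_weights)

k = np.arange(1, K + 1)
m, c = np.polyfit(k, gammas, 1)                  # slope and intercept of the fit
print(np.round(gammas, 3))                       # exact factors
print(np.round(c + m * k, 3))                    # linear approximation of gamma_k
```

Whether such a linear fit is adequate will depend on the particular formulation of the factors and the required filtering quality.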
Once the blending is complete the method 800 proceeds to block 806.
At block 806, it is determined whether this is the last iteration. If the iteration counter is initialised to 1, it may be determined that this is the last iteration if the iteration counter k is equal to K. As described above, K is equal to N/2, where N is the even number of sample points along the major axis of the ellipse. If it is determined that this is not the last iteration, then the method 800 proceeds to block 808. If, however, it is determined that this is the last iteration, then the method 800 ends at 810.
At block 808, the iteration counter k is incremented (i.e. k=k+1). Once the iteration counter has been incremented the method 800 proceeds back to block 804.
Calculating the Gaussian-weighted sum in accordance with the method 800 can reduce the accumulated error that arises from summing many small weighted terms, and can reduce the cost of calculating and storing the weights.
Although the method 800 of
Although the method 800 of
In the examples described above, the weights used to combine the results of the isotropic filtering (i.e. the anisotropic filter weights) are Gaussian. In other words, the anisotropic filter is Gaussian. Using a Gaussian anisotropic filter has proven to provide good results in many cases, partially because of the frequency response of a Gaussian filter. Specifically, a Gaussian filter, which has a Gaussian response in the frequency or spectral domain, acts as a low pass frequency filter which has a minimum spectral width or variance (in the modulus squared sense as described below) for a given spatial width or variance.
This can be illustrated mathematically. Specifically, let F: ℝ → ℝ be a continuous filter defined by equation (55), where ϕ: ℝ → ℝ is a real function (which may be referred to as the first function) defined such that the weights of the filter are non-negative. The filter F identifies, or specifies, the weights associated with different positions along an x axis (i.e. for different x values). In the anisotropic texture filtering context, x represents the position of a sampling point with respect to the midpoint of the major axis. The integral of the filter from negative infinity to positive infinity, denoted |F|, can be expressed by equation (56); the mean of the filter can be expressed by equation (57); and the modulus squared spatial variance of ϕ can be expressed by equation (58).
The frequency response of ϕ is denoted ϕ̃. The relationship between ϕ and its frequency response ϕ̃ is described by equations (59) and (60).
The modulus squared spectral variance of ϕ can then be written as shown in equation (61).
The expression of the modulus squared spectral variance of ϕ shown in equation (61) can be used to express the product of the modulus squared spatial and spectral variances purely in terms of integral expressions involving ϕ(x) and its derivative, as shown in equation (62).
The derivative of the product (equation (62)) can then be expressed by equation (63), where ϕ″(x) is the second derivative of ϕ with respect to x.
It can be verified that when the filter F is a Gaussian filter as shown in equation (64), the product of equation (62) is minimized (i.e. equation (63) equals zero). Specifically, this can be seen by direct substitution into equation (63) of the function ϕ(x) corresponding to the Gaussian filter F of equation (64).
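As a sketch of the underlying property (the exact forms of equations (61) to (64) are not reproduced here), for a zero-mean ϕ normalised such that ∫|ϕ(x)|²dx = 1, and under the Fourier convention ϕ̃(f) = ∫ϕ(x)e^(−2πifx)dx, the product of the spatial and spectral variances satisfies the classical uncertainty relation:

\[
\left(\int_{-\infty}^{\infty} x^{2}\,|\phi(x)|^{2}\,dx\right)
\left(\int_{-\infty}^{\infty} f^{2}\,|\tilde{\phi}(f)|^{2}\,df\right)
\;\ge\; \frac{1}{16\pi^{2}},
\]

with equality if and only if ϕ, and hence F = |ϕ|², is Gaussian, which is consistent with the minimisation property described above.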
However, when a continuous Gaussian is approximated by a truncated Gaussian (such that only a portion of the Gaussian curve is represented), and particularly when a discrete Gaussian is truncated to a finite number of sample points, the frequency response becomes less ideal (i.e. it looks less Gaussian). Specifically, more unwanted, or higher, frequencies are allowed to pass (i.e. they are not sufficiently attenuated). This becomes more pronounced as the number of sample points decreases. Accordingly, using an anisotropic filter with Gaussian weights may not provide the best filtering results in all cases. The inventors have identified that instead of automatically using Gaussian weights, the best weights for the anisotropic filter may be determined by selecting the weights that minimize a cost function which penalizes high frequencies in the frequency response under certain constraints. For example, the weights of the filter F may be selected so as to minimize the product of the modulus squared spatial and spectral variances of ϕ (i.e. equation (65)) so as to achieve the most Gaussian-like frequency response—i.e. a frequency response with a minimum spectral variance of ϕ for a given spatial variance. It is noted that it is the product of the modulus squared spatial and spectral variances of ϕ that is minimized, rather than simply the spectral variance of ϕ, as minimizing the spectral variance alone would not provide meaningful results: it would push the weights to values that result in no spectral variance, i.e. no (non-zero) frequencies.
Reference is now made to an example method 1200 of performing anisotropic texture filtering in which the weights used to combine the isotropic filtering results are selected by minimizing a cost function. The method 1200 begins at block 1202 in which an elliptical footprint is defined in texture space.
At block 1204, one or more sets of equally spaced sampling points (which may also be referred to as sample points) in the texture space are identified, based on the elliptical footprint defined in block 1202. In some cases, the equally spaced sample points may lie along the major axis of the ellipse. Any of the methods and techniques described above for (i) determining the number of sample points per set, (ii) identifying the number of sets, and (iii) identifying the position and/or spacing of the sample points (e.g. the methods and techniques described in relation to block 404 of the method 400) may be used at block 1204.
At block 1206, isotropic filtering is performed at each sampling point identified in block 1204. Block 1206 generally corresponds to block 406 of the method 400 described above.
At block 1208, an anisotropic filter is selected for each set of equally spaced sample points identified in block 1204, based on one or more parameters of the set of samples and at least one or more parameters of the elliptical footprint. The one or more parameters of the set of samples may comprise the number of samples N in the set, the offset ψ indicating a location of a first sample in the set, and/or the spacing σ between adjacent samples in the set. The one or more parameters of the elliptical footprint may comprise the anisotropic ratio, or parameters from which the anisotropic ratio can be determined.
The weights of each anisotropic filter are selected, based on the parameters, to be the weights that minimize a cost function that penalizes high frequencies in the frequency response of the anisotropic filter, under one or more constraints to ensure that the filter satisfies one or more features of a texture filter.
For example, in some cases it may be desirable for a texture filter to be normalised to remove a global brightness factor (i.e. a DC component). Accordingly, the anisotropic filter may be constrained to be normalised. Such a constraint may be referred to as the normalization constraint, and for the example filter F defined in equation (55) may be expressed by equation (66).
In some cases, it may be desirable that the weights of a texture filter be centred around the origin of the co-ordinate system. Accordingly, the mean of the anisotropic filter may be constrained to be zero. Such a constraint may be referred to as the mean constraint, and for the example filter F defined in equation (55) may be expressed by equation (67).
As described above, it has been determined that it is desirable that the anisotropic filter have a variance of η², where η is the anisotropic ratio, when expressed in units of standard deviations of the corresponding isotropic filter. Accordingly, the anisotropic filter may be constrained to have a variance of η². Such a constraint may be referred to as the variance constraint, and for the example filter F defined in equation (55) may be expressed by equation (68).
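Although equations (66) to (68) are not reproduced here, one plausible explicit form of these three constraints, consistent with the definitions above (with x measured in units of the standard deviation of the isotropic filter), is:

\[
\int_{-\infty}^{\infty} F(x)\,dx = 1 \quad\text{(normalisation)},\qquad
\int_{-\infty}^{\infty} x\,F(x)\,dx = 0 \quad\text{(zero mean)},\qquad
\int_{-\infty}^{\infty} x^{2}\,F(x)\,dx = \eta^{2} \quad\text{(variance)}.
\]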
The cost function that is minimized may be any cost function that penalizes high frequencies in the frequency response or spectrum response of the anisotropic filter. As described above, when a continuous Gaussian filter is approximated by a truncated Gaussian, the frequency response deviates from the ideal continuous Gaussian filter frequency response and may allow unwanted frequencies. Accordingly, the quality of the filtering result produced by the anisotropic filter may be improved by selecting filter weights that result in a desirable frequency response (e.g. more Gaussian-like frequency response).
As described above, a more Gaussian-like frequency response may be achieved by selecting weights that minimize the product of the modulus squared spatial and spectral variances of ϕ, where F(x)=|ϕ(x)|². So, if the spatial variance of ϕ is fixed or known, then minimizing equation (65) minimizes the spectral variance of ϕ. Equation (65) may be referred to as the Gaussian cost function.
In other cases, instead of selecting weights that minimize a cost function (e.g. equation (65)) that pushes the anisotropic filter to have a frequency response as close as possible to a Gaussian frequency response, another cost function may be used which pushes the anisotropic filter to have a frequency response that is not Gaussian but is similar. For example, instead of selecting weights that minimize equation (65), the weights may be selected to minimize the L2 norm of the anisotropic filter F. As is known to those of skill in the art, the L2 norm (also referred to as the Euclidean norm) of a vector is its length; the analogous quantity for a function involves the integral of the square of the function. For the anisotropic filter F of equation (55) the L2 norm can be expressed by equation (69). This may be referred to as the norm cost function.
However, equation (69) itself cannot be minimized, as selecting weights that minimize equation (69) would select weights that will tend towards no (non-zero) frequencies in the frequency domain. Specifically, equation (69) does not show a preference for desirable passband frequencies over undesirable stop band frequencies. Accordingly, Lagrange multipliers λ and μ may be used to enforce the constraints described above. Specifically, λ can be used to enforce the variance constraint, and μ can be used to enforce the normalization constraint. Equation (69) can then be written with the Lagrange multipliers as shown in equation (70). It is noted that the inventors have determined that the solution will satisfy the mean condition. Therefore the mean condition is assumed to be true and is not explicitly enforced using a Lagrange multiplier.
The variational derivative of equation (70) is shown (with common factors removed) in equation (71). The filter F that minimizes equation (70) can thus be identified by setting equation (71) to zero. It can be shown that equation (71) is equal to zero, and thus equation (70) is minimized, when the filter F is as shown in equation (72). In other words, equation (69) is minimized under the constraints when the filter F is as shown in equation (72).
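A sketch of this step, assuming equation (70) takes the Lagrangian form below (this is a plausible reading of the text rather than the equation itself), is:

\[
J[F] = \int F(x)^{2}\,dx
      + \lambda\!\left(\int x^{2}F(x)\,dx - \eta^{2}\right)
      + \mu\!\left(\int F(x)\,dx - 1\right),
\qquad
\frac{\delta J}{\delta F(x)} = 2F(x) + \lambda x^{2} + \mu .
\]

Setting the variational derivative to zero gives F(x) = −(λx² + μ)/2 on the support of the filter, i.e. a quadratic in x whose coefficients are fixed by the normalisation and variance constraints; this is consistent with the description of equations (71) and (72) above, although the exact form of equation (72) may differ.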
The impulse response of the filter as set out in equation (72) is shown at 1302.
The frequency response of the filter F of equation (72), denoted F̃(f), can be expressed by equation (73) and is shown at 1402.
Accordingly, selecting weights for the anisotropic filter that minimize equation (69), under the constraints, and specifically under the variance constraint, will select weights that give the anisotropic filter a spatial response as close to the response 1302, and a frequency response as close to the response 1402, as possible.
In other cases, instead of selecting weights that minimize equation (65) or equation (69), the weights that minimize the spectral spread of F2 as set forth in equation (74) may be selected. This may be referred to as the spectral cost function.
Like the L2 norm equation (i.e. equation (69)), equation (74) itself cannot be minimized, as selecting weights that minimize equation (74) would select weights that tend to produce no (non-zero) frequencies in the frequency domain. Specifically, equation (74) does not show a preference for desirable passband frequencies over undesirable stop band frequencies. Accordingly, Lagrange multipliers λ and μ may be used to enforce the constraints described above. Specifically, λ can be used to enforce the variance constraint, and μ can be used to enforce the normalization constraint. Equation (74) can then be written with the Lagrange multipliers as shown in equation (75).
The variational derivative of equation (75) is shown in equation (76). The filter F that minimizes equation (75) can thus be identified by setting equation (76) to zero.
It will be evident to a person of skill in the art that the filter set out in equation (72) will also set equation (76) to zero, and thus will minimize equation (75). So, selecting the weights using equation (69) or equation (74), under the constraints, appears to produce similar results, i.e. both will select weights that give the anisotropic filter a spatial response as close to the response 1302, and a frequency response as close to the response 1402, as possible.
In some cases, the weights that minimize one of the cost functions, under the constraints, may be dynamically determined, i.e. on the fly. However, this can be a fairly time and resource intensive process. Accordingly, in other cases, the weights that minimize one or more of the cost functions, under the constraints, may be determined off-line for expected ranges of the parameters, e.g. for ranges of sets of sample points S (which is described above and below), anisotropic ratios η, and sample spacings σ. In such cases, the results may be stored in a lookup table which is indexed by the parameters (e.g. the set of sample points S, the anisotropic ratio η, and the sample spacing σ).
Once the weights of an anisotropic filter have been selected for each set of sampling points, the method 1200 proceeds to block 1210.
At block 1210, the isotropic filtering results for each set of sampling points are combined using the corresponding filter weights identified in block 1208. Block 1210 generally corresponds to block 408 of the method 400 described above.
At block 1212, interpolation is performed between the combination results generated in block 1210 for the different sets of sampling points. Block 1212 generally corresponds to block 410 of the method 400 described above.
Once the interpolation has been performed the method 1200 may end. The result of the method (block 1210 or block 1212) which may be referred to as a filter result or a filtered texture value may be output for further processing. For example, the output of the method (block 1210 or block 1212) may be output to a shader (or another component) of a graphics processing system or graphics processing unit for use in generating a rendering output (e.g. an image).
Examples of how the weights for a discrete anisotropic filter for a set of sampling points may be selected in accordance with block 1208 of the method 1200 will now be described. In these examples positions are expressed in units of one standard deviation of the isotropic filter, so the expressions for the spacing (when applying equations (45) and (37) respectively) pick up a factor of two in these units, which should be borne in mind when comparing expressions. The set of sample points S comprises N sampling points and can be expressed as shown in equation (78).
The normalization, mean, and variance constraints for the discrete anisotropic filter, which now take into account the variance of the isotropic filter such that η²−1 is used (the variance of the isotropic filter is 1 by definition of the units) in place of η², can be expressed by equations (79), (80) and (81) respectively.
If ψ=0 and N is odd, or if ψ=½ and N is even, then the mean constraint can be satisfied by setting A−n=An when N is odd and A−n-1=An when N is even. This also ensures that the spectrum of the anisotropic filter is real so the effects of phase do not have to be taken into account.
The mean and variance constraints can then be further simplified to equations (82) and (83) respectively.
As is known to those of skill in the art, each constraint reduces the degrees of freedom of the solutions. As there are three constraint equations, the problem is over-constrained if there is only one sample point (i.e. N=1, with the single unknown weight A0) unless η=1, and, as described in more detail below, the weights are fully constrained for 2, 3 or 4 samples.
For example, if there are two sampling points (i.e. N=2) and 0≤ψ<1, then there are two weights A−1 and A0. The normalisation constraint gives A−1+A0=1 and the mean constraint gives A−1=ψ and A0=1−ψ. If ψ=0 then there is only one non-zero weight and thus the problem becomes over-constrained. Even if ψ is greater than 0, it can be seen from the variance constraint that the problem is still over-constrained (more equations than unknowns) unless the spacing between sampling points is as set out in equation (84). Therefore, when there are only two sampling points, the spacing between sampling points is the only thing that can be controlled to satisfy the constraints; accordingly, the spacing of equation (84) may be used in place of the ideal Gaussian spacing described above.
As described above, the width of the filter is preferably an integer multiple of two standard deviations. When ψ=½, the spacing set out in equation (84) reaches two standard deviations when η=√2, so two samples may be sufficient when the anisotropic ratio is less than √2.
If there are three sampling points (i.e. N=3) and −½≤ψ<½, there will be three weights A−1, A0 and A1. The normalisation constraint gives A−1+A0+A1=1 and the mean constraint gives A1−A−1=−ψ, which gives the weights as shown at (85).
The variance constraint then gives the remaining relation, which fixes A1+A−1 in terms of the anisotropic ratio η, the spacing σ and the offset ψ.
Unlike the two sample case, the weights will satisfy the constraints for all values of η and σ. However, the variance may only be a meaningful indication of degree of low-pass filtering when An≥0. The requirement that A0 is positive gives equation (86), and the requirement that A1 and A−1 are positive gives the equations at (87).
This then gives equation (88) which sets out, for a given spacing σ and offset ψ, the values of η for which 3 sampling points is sufficient to satisfy the constraints.
For example, if σ=2√(1−η⁻²) expressed in units of one standard deviation (which corresponds to σ=√(1−η⁻²) expressed in units of two standard deviations) and ψ=0 are substituted into equation (88), the result is 1≤η≤2.
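A minimal numerical check of this three-sample case is sketched below (the expressions at (85) to (88) are not reproduced here); it assumes sample positions (n+ψ)σ for n∈{−1, 0, 1} and the spacing σ=2√(1−η⁻²) used in the example above, and simply solves the three constraints directly:

```python
import numpy as np

def three_sample_weights(eta, psi=0.0):
    # Solve the normalisation, mean and variance constraints for N = 3 sample
    # points at positions (n + psi) * sigma, n in {-1, 0, 1}, using the spacing
    # sigma = 2 * sqrt(1 - eta**-2) in one-standard-deviation units.
    sigma = 2.0 * np.sqrt(1.0 - eta ** -2.0)
    positions = (np.array([-1.0, 0.0, 1.0]) + psi) * sigma
    # Constraint rows: normalisation, mean and variance conditions.
    M = np.vstack([np.ones(3), positions, positions ** 2])
    rhs = np.array([1.0, 0.0, eta ** 2 - 1.0])   # isotropic variance is 1 in these units
    return np.linalg.solve(M, rhs)               # weights A_-1, A_0, A_1

for eta in (1.25, 1.5, 2.0, 2.5):
    w = three_sample_weights(eta)
    ok = (w >= -1e-12).all()
    print(eta, np.round(w, 4), "non-negative" if ok else "negative weight")
```

Consistent with the text, the weights remain non-negative for ratios up to 2 and a weight goes negative beyond that, illustrating why three sampling points only suffice for 1≤η≤2 with this spacing and offset.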
If there are four sampling points (i.e. N=4) the problem is under-constrained as there are four unknowns (i.e. four filter weights) and only three constraints. However, if ψ=½ and A−n-1=An the mean constraint is guaranteed and the weights are then fully constrained. The normalisation constraint then gives 1=A−2+A−1+A0+A1=2(A0+A1) which gives the weights shown at (89).
The variance constraint gives equation (90) and the constraint that the weights are positive gives the equations at (91).
If the spacing described above (e.g. the ideal Gaussian spacing) is used, these constraints determine the range of anisotropic ratios for which four sampling points are sufficient.
However, a different spacing of the sampling points may be used to mimic different filters. For example, if a spacing proportional to the inverse of the standard deviation of a tent filter is used, this produces the weights shown at (93) for η=2, which are the weights for a sampled tent filter.
If a spacing proportional to the inverse of the standard deviation of a box filter is used (i.e. σ=√12), then 2≤η≤5. This produces the weights shown at (94) for η=2 and the weights shown at (95) for η=4, which are the weights for a sampled box filter.
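A corresponding sketch for the fully constrained four-sample case (the weight expressions at (89) to (95) are not reproduced here; the values below are simply those implied by the stated constraints) is:

```python
import numpy as np

def four_sample_weights(eta, sigma):
    # Fully constrained N = 4 case with offset psi = 1/2 and mirrored weights
    # A_-1 = A_0, A_-2 = A_1: solve the normalisation and variance constraints
    # for A_0 and A_1 (the mean constraint holds by symmetry).
    #   2*(A_0 + A_1) = 1
    #   (sigma**2 / 2) * (A_0 + 9*A_1) = eta**2 - 1
    M = np.array([[2.0, 2.0],
                  [sigma ** 2 / 2.0, 9.0 * sigma ** 2 / 2.0]])
    rhs = np.array([1.0, eta ** 2 - 1.0])
    A0, A1 = np.linalg.solve(M, rhs)
    return np.array([A1, A0, A0, A1])            # A_-2, A_-1, A_0, A_1

# Spacing proportional to the inverse standard deviation of a box filter:
print(four_sample_weights(4.0, np.sqrt(12.0)))   # ~[0.25, 0.25, 0.25, 0.25]
print(four_sample_weights(2.0, np.sqrt(12.0)))   # ~[0.0, 0.5, 0.5, 0.0]
```

With σ=√12 this gives equal weights at η=4, which is consistent with the sampled box filter behaviour described above.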
Accordingly, it can be seen that the texture filtering constraints themselves may be sufficient to determine the weights for a small number of sampling points. For more sampling points (i.e. N>4) the weights can be selected as those that minimize one of the cost functions described above (or, as described in more detail below, analogous discrete cost functions).
For example, for more sampling points (i.e. N>4) the anisotropic filter A can be pre-convolved with a Gaussian to define |ϕ(x)| according to equation (96) where the goal of the minimization is to identify ϕn and thus An. The pre-convolution with a Gaussian is performed because it is ultimately the frequency response of the composite filter that is of interest.
For the discrete filter, the normalization, mean and variance conditions as set out in equations (79), (80) and (81) can be written for the discrete function as shown in equations (97), (98) and (99) respectively.
Then the weights An are selected that minimize one of the cost functions set out above (e.g. one of the cost functions set out in equations (65), (70) and (75)).
It can be seen that the constraints set out in equations (97), (98) and (99) can be used to simplify the cost functions set out in equations (65), (70) and (75) to equations (100), (101) and (102) respectively, as the constraints can be imposed directly (and hence substituted into the earlier equations).
In some cases, the cost functions may be further simplified using general algebraic and/or numerical techniques. For example, |ϕ(x)|⁴ may be expressed by equation (103), which allows the cost function set out in equation (101) to be re-written as equation (104).
In some cases, instead of selecting weights that minimize one of the continuous cost functions described above, an equivalent discrete cost function may be minimized. Example discrete versions of the continuous Gaussian, norm and spectral cost functions described above are shown in equations (105), (106) and (107) respectively, where n∉S ⇒ ϕn=0. The discrete versions of the cost functions may be easier to work with and/or solve, and may be more suitable for on-the-fly calculation of the weights. However, minimizing a discrete version of a continuous cost function may still be fairly time and/or resource intensive.
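By way of illustration only, the following sketch selects weights for a larger number of samples by constrained numerical minimisation; for simplicity it optimises the weights An directly (rather than the pre-convolved ϕn of equation (96)) and uses the sum of squared weights as a discrete analogue of the norm cost function, so it is a sketch of the general approach rather than of equations (105) to (107).

```python
import numpy as np
from scipy.optimize import minimize

def select_weights(num_samples, eta, sigma, psi=0.5):
    # Minimise a simple cost that penalises high frequencies (sum of squared
    # weights) subject to the normalisation, mean and variance constraints.
    n = np.arange(num_samples) - num_samples // 2
    x = (n + psi) * sigma                                    # sample positions
    constraints = (
        {"type": "eq", "fun": lambda A: A.sum() - 1.0},               # normalisation
        {"type": "eq", "fun": lambda A: (A * x).sum()},               # zero mean
        {"type": "eq", "fun": lambda A: (A * x * x).sum() - (eta ** 2 - 1.0)},  # variance
    )
    A_start = np.full(num_samples, 1.0 / num_samples)        # start from a box filter
    result = minimize(lambda A: (A ** 2).sum(), A_start,
                      constraints=constraints,
                      bounds=[(0.0, None)] * num_samples)    # keep weights non-negative
    return result.x

# Example: 8 samples, anisotropic ratio 4, spacing 2*sqrt(1 - eta**-2).
print(np.round(select_weights(8, eta=4.0, sigma=2.0 * np.sqrt(1 - 4.0 ** -2)), 4))
```

The cost function, the direct optimisation of An and the chosen spacing are illustrative assumptions; a hardware implementation would more likely tabulate the results off-line, as described above.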
Reference is now made to an example texture filtering unit and an example graphics processing system configured to perform any of the methods described herein. Such a texture filtering unit and graphics processing system may be implemented as described below.
The texture filtering units and/or the graphics processing systems described herein may be embodied in hardware on an integrated circuit. The texture filtering units and/or the graphics processing system described herein may be configured to perform any of the methods described herein. Generally, any of the functions, methods, techniques or components described above can be implemented in software, firmware, hardware (e.g., fixed logic circuitry), or any combination thereof. The terms “module,” “functionality,” “component”, “element”, “unit”, “block” and “logic” may be used herein to generally represent software, firmware, hardware, or any combination thereof. In the case of a software implementation, the module, functionality, component, element, unit, block or logic represents program code that performs the specified tasks when executed on a processor. The algorithms and methods described herein could be performed by one or more processors executing code that causes the processor(s) to perform the algorithms/methods. Examples of a computer-readable storage medium include a random-access memory (RAM), read-only memory (ROM), an optical disc, flash memory, hard disk memory, and other memory devices that may use magnetic, optical, and other techniques to store instructions or other data and that can be accessed by a machine.
The terms computer program code and computer readable instructions as used herein refer to any kind of executable code for processors, including code expressed in a machine language, an interpreted language or a scripting language. Executable code includes binary code, machine code, bytecode, code defining an integrated circuit (such as a hardware description language or netlist), and code expressed in a programming language code such as C, Java or OpenCL. Executable code may be, for example, any kind of software, firmware, script, module or library which, when suitably executed, processed, interpreted, compiled, executed at a virtual machine or other software environment, cause a processor of the computer system at which the executable code is supported to perform the tasks specified by the code.
A processor, computer, or computer system may be any kind of device, machine or dedicated circuit, or collection or portion thereof, with processing capability such that it can execute instructions. A processor may be any kind of general purpose or dedicated processor, such as a CPU, GPU, NNA, System-on-chip, state machine, media processor, an application-specific integrated circuit (ASIC), a programmable logic array, a field-programmable gate array (FPGA), or the like. A computer or computer system may comprise one or more processors.
It is also intended to encompass software which defines a configuration of hardware as described herein, such as HDL (hardware description language) software, as is used for designing integrated circuits, or for configuring programmable chips, to carry out desired functions. That is, there may be provided a computer readable storage medium having encoded thereon computer readable program code in the form of an integrated circuit definition dataset that when processed (i.e. run) in an integrated circuit manufacturing system configures the system to manufacture a texture filtering unit and/or a graphics processing system configured to perform any of the methods described herein, or to manufacture a texture filtering unit and/or a graphics processing system comprising any apparatus described herein. An integrated circuit definition dataset may be, for example, an integrated circuit description.
Therefore, there may be provided a method of manufacturing, at an integrated circuit manufacturing system, a texture filtering unit and/or a graphics processing system as described herein. Furthermore, there may be provided an integrated circuit definition dataset that, when processed in an integrated circuit manufacturing system, causes the method of manufacturing a texture filtering unit and/or a graphics processing system to be performed.
An integrated circuit definition dataset may be in the form of computer code, for example as a netlist, code for configuring a programmable chip, as a hardware description language defining hardware suitable for manufacture in an integrated circuit at any level, including as register transfer level (RTL) code, as high-level circuit representations such as Verilog or VHDL, and as low-level circuit representations such as OASIS (RTM) and GDSII. Higher level representations which logically define hardware suitable for manufacture in an integrated circuit (such as RTL) may be processed at a computer system configured for generating a manufacturing definition of an integrated circuit in the context of a software environment comprising definitions of circuit elements and rules for combining those elements in order to generate the manufacturing definition of an integrated circuit so defined by the representation. As is typically the case with software executing at a computer system so as to define a machine, one or more intermediate user steps (e.g. providing commands, variables etc.) may be required in order for a computer system configured for generating a manufacturing definition of an integrated circuit to execute code defining an integrated circuit so as to generate the manufacturing definition of that integrated circuit.
An example of processing an integrated circuit definition dataset at an integrated circuit manufacturing system so as to configure the system to manufacture a texture filtering unit or a graphics processing system as described herein will now be described. The integrated circuit (IC) manufacturing system 1702 comprises a layout processing system 1704 and an integrated circuit generation system 1706, and is configured to receive an IC definition dataset (e.g. defining a texture filtering unit or a graphics processing system as described in any of the examples herein), process the IC definition dataset, and generate an IC according to the IC definition dataset.
The layout processing system 1704 is configured to receive and process the IC definition dataset to determine a circuit layout. Methods of determining a circuit layout from an IC definition dataset are known in the art, and for example may involve synthesising RTL code to determine a gate level representation of a circuit to be generated, e.g. in terms of logical components (e.g. NAND, NOR, AND, OR, MUX and FLIP-FLOP components). A circuit layout can be determined from the gate level representation of the circuit by determining positional information for the logical components. This may be done automatically or with user involvement in order to optimise the circuit layout. When the layout processing system 1704 has determined the circuit layout it may output a circuit layout definition to the IC generation system 1706. A circuit layout definition may be, for example, a circuit layout description.
The IC generation system 1706 generates an IC according to the circuit layout definition, as is known in the art. For example, the IC generation system 1706 may implement a semiconductor device fabrication process to generate the IC, which may involve a multiple-step sequence of photo lithographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of semiconducting material. The circuit layout definition may be in the form of a mask which can be used in a lithographic process for generating an IC according to the circuit definition. Alternatively, the circuit layout definition provided to the IC generation system 1706 may be in the form of computer-readable code which the IC generation system 1706 can use to form a suitable mask for use in generating an IC.
The different processes performed by the IC manufacturing system 1702 may be implemented all in one location, e.g. by one party. Alternatively, the IC manufacturing system 1702 may be a distributed system such that some of the processes may be performed at different locations, and may be performed by different parties. For example, some of the stages of: (i) synthesising RTL code representing the IC definition dataset to form a gate level representation of a circuit to be generated, (ii) generating a circuit layout based on the gate level representation, (iii) forming a mask in accordance with the circuit layout, and (iv) fabricating an integrated circuit using the mask, may be performed in different locations and/or by different parties.
In other examples, processing of the integrated circuit definition dataset at an integrated circuit manufacturing system may configure the system to manufacture a texture filtering unit or a graphics processing system without the IC definition dataset being processed so as to determine a circuit layout. For instance, an integrated circuit definition dataset may define the configuration of a reconfigurable processor, such as an FPGA, and the processing of that dataset may configure an IC manufacturing system to generate a reconfigurable processor having that defined configuration (e.g. by loading configuration data to the FPGA).
In some embodiments, an integrated circuit manufacturing definition dataset, when processed in an integrated circuit manufacturing system, may cause an integrated circuit manufacturing system to generate a device as described herein. For example, the configuration of an integrated circuit manufacturing system in the manner described above, by an integrated circuit manufacturing definition dataset, may cause a device as described herein to be manufactured.
In some examples, an integrated circuit definition dataset could include software which runs on hardware defined at the dataset or in combination with hardware defined at the dataset. In such examples, the IC generation system may be configured by the integrated circuit definition dataset to, on manufacturing an integrated circuit, load firmware onto that integrated circuit in accordance with program code defined at the dataset, or otherwise provide program code with the integrated circuit for use with the integrated circuit.
The implementation of concepts set forth in this application in devices, apparatus, modules, and/or systems (as well as in methods implemented herein) may give rise to performance improvements when compared with known implementations. The performance improvements may include one or more of increased computational performance, reduced latency, increased throughput, and/or reduced power consumption. During manufacture of such devices, apparatus, modules, and systems (e.g. in integrated circuits) performance improvements can be traded-off against the physical implementation, thereby improving the method of manufacture. For example, a performance improvement may be traded against layout area, thereby matching the performance of a known implementation but using less silicon. This may be done, for example, by reusing functional blocks in a serialised fashion or sharing functional blocks between elements of the devices, apparatus, modules and/or systems. Conversely, concepts set forth in this application that give rise to improvements in the physical implementation of the devices, apparatus, modules, and systems (such as reduced silicon area) may be traded for improved performance. This may be done, for example, by manufacturing multiple instances of a module within a predefined area budget.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.
Number | Date | Country | Kind |
---|---|---|---|
2110742.0 | Jul 2021 | GB | national |
2110743.8 | Jul 2021 | GB | national |
2110744.6 | Jul 2021 | GB | national |
This application is a continuation, under 35 U.S.C. 120, of copending application Ser. No. 17/873,425 filed Jul. 26, 2022, now U.S. Pat. No. 12,020,366, which claims foreign priority under 35 U.S.C 119 from United Kingdom Application Nos. 2110742.0, 2110743.8, and 2110744.6, all filed Jul. 26, 2021.
Relation | Number | Date | Country
---|---|---|---
Parent | 17873425 | Jul 2022 | US
Child | 18752412 | | US