Fixing Texture Seams With Linear Least-Squares
16 Nov 2015
This post is about how to use least-squares optimization to slightly adjust the texel values in a texture in order to minimize seams across edges in UV-space.
In other words, to turn this:
into this:
using least-squares optimization.
What are we trying to fix here?
The problem we have is that most meshes can’t be perfectly unfolded onto a 2D image plane for texturing. To address this, we cut the mesh up into flat-ish regions called “charts” and pack them into a texture. This means some edges must be split up: in 3D these edges are one and the same, but in 2D they belong to different charts. This in turn means that when the GPU filters the texture near such a “seam” edge, the filtered values on either side (which lie in two different charts) may be very different. This is what produces the harsh filtering artefacts seen in the first image above. Here’s an example of how the charts in the above mesh are laid out in 2D space.
The fix is to figure out what values to put in the “gutter region” (i.e. the texels just outside chart edges), in order to achieve smooth filtering across the seam.
What are we not trying to fix?
It’s worth taking a moment here to point out that we are assuming the texture data on either side of a seam edge roughly match up. If you want to fix up seams where the two sides are wildly different, you’re going to need some kind of texture synthesis. This is not what we’re doing. We’re merely taking textures that are supposed to match up perfectly along seams, but don’t due to parameterization issues, and tweaking them to fix things up.
The basic idea
The idea here is that we build up a massive equation system where all of the texels within a bilinear neighborhood of these seam edges are unknown variables that we’re trying to find. Then we generate a ton of sample locations along each seam, and create an equation for each one that sets the bilinear sample on one side of the seam equal to the bilinear sample on the other side.
This is an over-determined system and there isn’t an exact solution (in most cases), but we can use linear least squares to find a solution that minimizes the sum of the squared errors of these equations.
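Concretely, if $$ s_i^0(x) $$ and $$ s_i^1(x) $$ denote the bilinear samples on the two sides of the seam at sample point $$ i $$ (both linear functions of the unknown texels $$ x $$), the objective is

$$ \min_x \sum_i \left( s_i^0(x) - s_i^1(x) \right)^2 $$

(the $$ s_i $$ notation here is mine, just to pin down what we’re minimizing).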
This isn’t a new idea. I know plenty of studios that are doing this for their textures (for example Naughty Dog), but there doesn’t seem to be any sample code for this anywhere on the internet. Hopefully this will help more people use this technique.
Prepping the data
The first thing we’re going to need to do is load up the texture and mesh data. The sample code uses a very cheesy OBJ parser (which is unlikely to work for your meshes - it does the bare minimum to load a sample mesh I found online) and stb libraries for loading images. You may want to plug in your own asset loading routines here.
Next, we’ll need to figure out which texels in the image are actually inside the triangles, and which are in the “gutter region”. To do this, we rasterize all the triangles in the mesh into UV space, and store the result in a “coverage buffer”, which just sets values to 1 if they are inside a triangle, and leaves them at 0 if not. We use a simple method to do this, where we loop over a bounding box for the triangle and check whether each texel coordinate is inside the triangle’s UV coordinates.
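A sketch of what this might look like (the names here, like `coverage` and `RasterizeTriangle`, are illustrative rather than lifted from the sample code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec2 { float x, y; };

// Twice the signed area of triangle (a, b, c); the sign tells us which side
// of the edge a->b the point c lies on.
static float EdgeFunction(Vec2 a, Vec2 b, Vec2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Marks every texel whose center lies inside the UV-space triangle.
void RasterizeTriangle(Vec2 uv0, Vec2 uv1, Vec2 uv2,
                       int w, int h, std::vector<uint8_t>& coverage) {
    // Convert UVs to pixel coordinates.
    Vec2 p0{uv0.x * w, uv0.y * h}, p1{uv1.x * w, uv1.y * h}, p2{uv2.x * w, uv2.y * h};

    // Loop over the triangle's bounding box, clamped to the image.
    int minX = std::max(0,     (int)std::floor(std::min({p0.x, p1.x, p2.x})));
    int maxX = std::min(w - 1, (int)std::ceil (std::max({p0.x, p1.x, p2.x})));
    int minY = std::max(0,     (int)std::floor(std::min({p0.y, p1.y, p2.y})));
    int maxY = std::min(h - 1, (int)std::ceil (std::max({p0.y, p1.y, p2.y})));

    float area = EdgeFunction(p0, p1, p2);
    if (area == 0.0f) return;  // degenerate triangle, nothing to cover

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};  // texel center
            float w0 = EdgeFunction(p1, p2, p);
            float w1 = EdgeFunction(p2, p0, p);
            float w2 = EdgeFunction(p0, p1, p);
            // Inside if all edge functions agree with the triangle's winding.
            bool inside = area > 0 ? (w0 >= 0 && w1 >= 0 && w2 >= 0)
                                   : (w0 <= 0 && w1 <= 0 && w2 <= 0);
            if (inside) coverage[y * w + x] = 1;
        }
    }
}
```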
Once we’ve done this for all faces in the mesh, we have a quick way of knowing whether a pixel is inside or outside the charts (indeed, this is how I generated the B&W coverage mask above).
The next thing we need to figure out is which of our edges are actual seams (i.e. two triangle edges that have the same 3D geometric positions, but different UVs).
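Here’s a sketch of this step, assuming edges carry full vertices (position plus UV); the key type, hash functor, and function names are mine, not the sample code’s:

```cpp
#include <functional>
#include <unordered_map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vertex { Vec3 pos; Vec2 uv; };  // Vec2 as in the rasterizer sketch
struct Edge { Vertex a, b; };

// An edge keyed purely on its two 3D endpoints, stored in a canonical order
// so both triangles sharing the edge produce the same key.
struct EdgeKey {
    float p[6];
    bool operator==(const EdgeKey& o) const {
        for (int i = 0; i < 6; ++i) if (p[i] != o.p[i]) return false;
        return true;
    }
};

static EdgeKey MakeKey(const Vec3& a, const Vec3& b) {
    EdgeKey ab{{a.x, a.y, a.z, b.x, b.y, b.z}};
    EdgeKey ba{{b.x, b.y, b.z, a.x, a.y, a.z}};
    for (int i = 0; i < 6; ++i) {  // lexicographically smaller layout wins
        if (ab.p[i] < ba.p[i]) return ab;
        if (ba.p[i] < ab.p[i]) return ba;
    }
    return ab;  // both endpoints identical (degenerate edge)
}

// The "song and dance": unordered_map needs a hash functor for a custom key.
struct EdgeKeyHash {
    size_t operator()(const EdgeKey& k) const {
        size_t h = 0;
        for (float f : k.p) h = h * 31 + std::hash<float>()(f);
        return h;
    }
};

static bool SameUVs(const Edge& e0, const Edge& e1) {
    auto eq = [](Vec2 u, Vec2 v) { return u.x == v.x && u.y == v.y; };
    return (eq(e0.a.uv, e1.a.uv) && eq(e0.b.uv, e1.b.uv)) ||
           (eq(e0.a.uv, e1.b.uv) && eq(e0.b.uv, e1.a.uv));
}

std::vector<std::pair<Edge, Edge>> FindSeamEdges(const std::vector<Vertex>& verts,
                                                 const std::vector<int>& indices) {
    std::vector<std::pair<Edge, Edge>> seamEdges;
    std::unordered_map<EdgeKey, Edge, EdgeKeyHash> firstSeen;
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        for (int e = 0; e < 3; ++e) {
            Edge edge{verts[indices[i + e]], verts[indices[i + (e + 1) % 3]]};
            EdgeKey key = MakeKey(edge.a.pos, edge.b.pos);
            auto it = firstSeen.find(key);
            if (it == firstSeen.end())
                firstSeen.emplace(key, edge);
            else if (!SameUVs(it->second, edge))
                seamEdges.push_back({it->second, edge});  // same positions, new UVs
        }
    }
    return seamEdges;
}
```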
This code is relatively straightforward. First we do the song and dance STL requires of us in order to use a hash table for our edges (which we represent as pairs of vertices). Then we simply find edges that have identical 3D vertex positions but differing UV coordinates, and add them to our seamEdges vector.
So, now we know where our seams are, and which pixels are inside our triangles. The next thing we need to figure out is which texels we need to optimize. These are the texels that lie “on” the seam edges (in truth, the texels that are within the bilinear neighborhood of the seam edges). To do that we loop through our seams, generate the sample points we will later use to optimize our pixels, and the first time we end up using a particular texel we store it in a vector. The pixels in this vector are the ones we will end up tweaking in the rest of the program and later writing back out to the texture. We also build a 2D map that makes it easy to go from a 2D coordinate to the particular “pixel index” in this vector.
Finally, while we’re looping through these pixels anyway, we initialize the color data for the pixels we’re trying to optimize. For covered pixels, we just convert to YCoCg in order to reduce the correlation between channels (this lets us optimize the color channels independently). For each uncovered pixel, we compute an average of the covered pixels in its neighborhood. Having an initial guess at the color value for our gutter pixels will help later, when we use an iterative solver to refine these guesses.
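Here’s a rough sketch of the per-sample bookkeeping, plus the (standard) RGB-to-YCoCg conversion; `MakeSample`, `pixelToIndex`, and friends are hypothetical names, not the sample code’s:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// The four texels and weights of one bilinear lookup.
struct BilinearSample {
    int   texel[4];   // indices into the vector of pixels being optimized
    float weight[4];  // the usual bilinear weights, summing to 1
};

// Maps a UV sample point to its bilinear neighborhood, allocating a new
// "pixel index" the first time a texel is touched. pixelToIndex is the 2D
// map described above, initialized to -1 everywhere.
BilinearSample MakeSample(float u, float v, int w, int h,
                          std::vector<int>& pixelToIndex, int& numPixels) {
    // Shift by half a texel so we interpolate between texel centers.
    float x = u * w - 0.5f, y = v * h - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;

    const int   dx[4] = {0, 1, 0, 1}, dy[4] = {0, 0, 1, 1};
    const float ws[4] = {(1 - fx) * (1 - fy), fx * (1 - fy),
                         (1 - fx) * fy,       fx * fy};
    BilinearSample s;
    for (int i = 0; i < 4; ++i) {
        int px = std::clamp(x0 + dx[i], 0, w - 1);
        int py = std::clamp(y0 + dy[i], 0, h - 1);
        int& idx = pixelToIndex[py * w + px];
        if (idx < 0) idx = numPixels++;  // first use: this texel becomes an unknown
        s.texel[i] = idx;
        s.weight[i] = ws[i];
    }
    return s;
}

// Standard RGB -> YCoCg transform, used to decorrelate the channels so each
// can be solved for independently.
Vec3 RGBToYCoCg(Vec3 c) {
    return Vec3{ 0.25f * c.x + 0.5f * c.y + 0.25f * c.z,    // Y
                 0.5f  * c.x               - 0.5f  * c.z,   // Co
                -0.25f * c.x + 0.5f * c.y - 0.25f * c.z };  // Cg
}
```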
Alright, now we have all the information we need to work with, and can finally get down to the meat of the problem.
Optimizing pixel values
To figure out the final pixel values, we build up a least squares problem of the form:
$$ Ax = b $$

where $$ x $$ is a vector containing all the pixel values we’re trying to find (for one channel at a time). The $$ A $$ matrix will contain one row per sample generated above, with columns set to the sample weights of the two bilinearly filtered samples on either side of the seam (the weights for one side of the seam will be negated), and the RHS $$ b $$ set to 0. This means that we’re trying to find an $$ x $$ such that the bilinear samples on either side of a seam are identical.
Of course, so far there’s a trivial solution: set $$ x=0 $$! To make sure we get a more useful solution, we also add rows that try to keep the pixel values equal to their initial values. Note: we add these terms even for gutter pixels, to try to keep them close to their initial guess (weighted by a small value). This might seem strange, since we only really care about the pixels the artist drew, not the stuff outside. The reason we do this is that if we don’t, the gutter pixel values may end up being pushed to extreme values (e.g. very large, or even negative), which could overflow fixed-point color formats. By adding some terms to keep the solution close to “reasonable” values, we typically don’t have to worry about any crazy values that can’t fit within e.g. 8 bits. This also helps block compression.
Now, solving this equation exactly isn’t possible (in general). We have too many equations for our unknowns, and they are potentially contradictory. What we do instead is use linear least squares to find a good compromise (one which minimizes the sum of the squared errors over all terms). There are many ways of solving linear least-squares problems. For example, you could do SVD or QR decomposition, or you could solve the normal equations (by inverting the Gramian matrix or using Gaussian elimination). These are all pretty complicated to implement and would most likely be extremely slow for the huge matrix we have. There are two reasons to expect that an iterative algorithm will work much better for us:
- We can use domain knowledge to produce a pretty good guess for our pixel values already. This allows us to start our iterative solver near the solution.
- We only really need our solution to look okay, not be the strictly most optimal solution. So we can bail out of the algorithm pretty early.
So we’ll use an iterative method instead. The simplest iterative solver is gradient descent, but we won’t use that, because it tends to be pretty darn slow and it’s hard to tune the learning rate. What we will do instead is compute the normal matrix (see the Wikipedia entry linked previously), which gives us a symmetric matrix that can then be solved using the Conjugate Gradient method. I won’t dwell on the details (you can see the Wikipedia entry to learn more), but basically we multiply both sides of the equation above by $$ A^t $$ to get this:
$$ A^tAx = A^tb $$

Then we iteratively improve our estimate of $$ x $$ to minimize the residual of this equation. A key benefit of doing things this way is that we never actually have to form $$ A $$! We can directly construct $$ A^tA $$ and $$ A^tb $$ with far less storage required. Once we have that, we can implement the Conjugate Gradient method as described in the Wikipedia article above.
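A sketch of what that port might look like; `SparseMat` and `VectorX` are the small support classes discussed below, and `Dot` plus the arithmetic operators on `VectorX` are assumed to exist:

```cpp
// Conjugate Gradient for AtA * x = Atb, following the Wikipedia pseudocode.
VectorX ConjugateGradientOptimize(const SparseMat& A, const VectorX& b,
                                  VectorX x,  // initial guess (our color estimates)
                                  int maxIters, float tolerance) {
    VectorX r = b - A * x;  // residual of the current estimate
    VectorX p = r;          // first search direction
    float rsq = Dot(r, r);
    for (int i = 0; i < maxIters; ++i) {
        VectorX Ap = A * p;
        float alpha = rsq / Dot(p, Ap);  // optimal step length along p
        x = x + alpha * p;
        r = r - alpha * Ap;              // update the residual incrementally
        float rsqNew = Dot(r, r);
        if (rsqNew < tolerance * tolerance)
            break;                       // "looks okay" is good enough: bail early
        p = r + (rsqNew / rsq) * p;      // next direction, conjugate to the previous
        rsq = rsqNew;
    }
    return x;
}
```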
That wasn’t so bad! It’s basically a 1:1 port of the pseudocode in the Wikipedia article, and in return we can efficiently optimize huge least-squares problems without having to link in a bunch of complicated linear algebra libraries (or worse, implement complicated linear algebra ourselves). We just pass in $$ A^tA $$ as the matrix and $$ A^tb $$ as the RHS, and out comes our solution $$ x $$.
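To make the “never form $$ A $$” point concrete: a seam equation is a row $$ a $$ with only eight nonzero entries (the two bilinear neighborhoods), so folding it into $$ A^tA $$ (adding $$ aa^t $$) touches at most an 8×8 block. A sketch, reusing the illustrative `BilinearSample` and `SparseMat` types from the other snippets:

```cpp
// Fold one seam constraint a.x = 0 into the normal equations: AtA += a*a^t.
// The row is the weights of one side minus the weights of the other.
void AccumulateSeamEquation(const BilinearSample& s0, const BilinearSample& s1,
                            SparseMat& AtA) {
    int   idx[8];
    float coef[8];
    for (int i = 0; i < 4; ++i) {
        idx[i]     = s0.texel[i];  coef[i]     =  s0.weight[i];  // one side...
        idx[i + 4] = s1.texel[i];  coef[i + 4] = -s1.weight[i];  // ...minus the other
    }
    for (int i = 0; i < 8; ++i)
        for (int j = 0; j < 8; ++j)
            AtA(idx[i], idx[j]) += coef[i] * coef[j];
}

// A "stay near the initial value" equation lambda*x_i = lambda*c_i contributes
// one diagonal entry and one RHS entry. (The seam equations have b = 0, so
// they never touch Atb at all.)
void AccumulateInitialValueTerm(int i, float initialValue, float lambda,
                                SparseMat& AtA, VectorX& Atb) {
    AtA(i, i) += lambda * lambda;
    Atb[i]    += lambda * lambda * initialValue;
}
```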
Now, I’ve glossed over something here. In order for the above code to work, we’d need some support classes for matrices and vectors. In this case, we only need a really simple dense vector (`VectorX`) and a relatively simple sparse matrix (`SparseMat`). I won’t list the vector implementation, since I think it’s pretty straightforward. For the matrix, a key insight is that it will be mostly full of zeros. It would be wasteful to store all those zeros, and even more wasteful to perform a bunch of multiplications against them. Thus, we store the matrix data in a sparse matrix.
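A simplified sketch of such a matrix (it follows the description below, but omits the aligned allocations from the real code):

```cpp
#include <vector>

struct SparseMat {
    struct Row {
        std::vector<int>   indices;  // sorted column indices
        std::vector<float> coeffs;   // matching coefficients
    };
    std::vector<Row> rows;           // dense array of sparse rows

    explicit SparseMat(int numRows) : rows(numRows) {}

    // Element access: linear-search the row for the column; if it's missing,
    // shuffle everything down and insert a fresh 0 so we can return a reference.
    float& operator()(int r, int c) {
        Row& row = rows[r];
        size_t pos = 0;
        while (pos < row.indices.size() && row.indices[pos] < c) ++pos;
        if (pos == row.indices.size() || row.indices[pos] != c) {
            row.indices.insert(row.indices.begin() + pos, c);
            row.coeffs.insert(row.coeffs.begin() + pos, 0.0f);
        }
        return row.coeffs[pos];
    }
};

// Row-major storage makes matrix * vector one sparse dot product per row.
// (VectorX is assumed to be zero-initialized and indexable.)
VectorX operator*(const SparseMat& m, const VectorX& v) {
    VectorX out((int)m.rows.size());
    for (size_t r = 0; r < m.rows.size(); ++r) {
        const SparseMat::Row& row = m.rows[r];
        float sum = 0.0f;
        for (size_t k = 0; k < row.indices.size(); ++k)
            sum += row.coeffs[k] * v[row.indices[k]];
        out[(int)r] = sum;
    }
    return out;
}
```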
There’s some additional complexity in the real code that strictly speaking I could remove. I’ve allocated the memory using aligned allocations because I had some hope of being able to turn the sparse dot product into an AVX-optimized kernel. Unfortunately that didn’t work out so far (it was slower).
Anyway, the way this works is that we store a dense array of `Row`s (in this case, all rows have at least one element, so there’s no need to be sparse here). Each `Row` is in turn sparsely stored as a list of indices and coefficients. These are stored in separate arrays for efficiency, but conceptually it’s a single array of (index, coefficient) pairs. When we try to look up a row element, we simply do a linear search in the index array for the closest index. If the index was found, we just return a reference to the corresponding coefficient; if not, we shuffle all the elements down to make room and insert a fresh 0 first. That’s really all you need to know! The matrix multiplication is pretty straightforward (it’s no accident that we stored the matrix row-major!), and the `VectorX` is implemented exactly as you’d think it would be.
Now all that’s left is to map the solution back to the image and save it out and we’re done. Performance varies by texture size, how many seams you have, etc. but it shouldn’t take more than a few seconds even for large textures.
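Writing the texels back out means undoing the YCoCg transform from the initialization step; the inverse is the standard one:

```cpp
// Inverse of the RGB -> YCoCg transform applied earlier.
Vec3 YCoCgToRGB(Vec3 c) {
    float y = c.x, co = c.y, cg = c.z;
    return Vec3{ y + co - cg,   // R
                 y + cg,        // G
                 y - co - cg }; // B
}
```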
Conclusion
There you have it! An algorithm you can use to improve seam filtering for basically any texture, without needing to include any expensive or complicated libraries in your project. As a side effect, you also get a linear least squares solver that you could use for other things.
There are really three main things I’d like people to take away from this:
- Least squares optimization is a pretty damn useful tool.
- It’s really not that hard to implement. Even without big math libraries like Eigen, Ceres, or Alglib you could get a pretty good solution with just a little bit of code on your end. So if you don’t need fancy stuff, you should seriously consider the costs of pulling in an external library.
- And as a bonus, the vehicle I chose for making the above two points is a pretty damn handy trick that you probably should be doing if you ever have texture seams to deal with.
For the rough and dirty code to this, click here. I haven’t bothered uploading the whole Visual Studio solution or anything, but it should be relatively straightforward to figure it out (it’s all in one CPP file, and you just need some stb libs).