Fixing Texture Seams With Linear Least-Squares

This post is about how to use least-squares optimization to slightly adjust the texel values in a texture in order to minimize seams across edges in UV-space.
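
The gist, as a minimal sketch (the names, the regularizer, and the one-pair-per-texel assumption are mine, not the post’s): treat each pair of texels that map to the same seam point as a constraint that they should match, keep every texel near its original value, and relax the resulting least-squares system:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: minimize
//   E = sum over seam pairs (x[a] - x[b])^2 + lambda * sum_i (x[i] - orig[i])^2
// with Gauss-Seidel-style sweeps. Assumes each texel appears in at most one
// seam pair, so each update below is the exact minimum of E in that texel.
struct SeamPair { std::size_t a, b; };

void solveSeams(std::vector<float>& texels,
                const std::vector<float>& orig,
                const std::vector<SeamPair>& pairs,
                float lambda = 0.1f, int sweeps = 32)
{
    for (int s = 0; s < sweeps; ++s) {
        for (const SeamPair& p : pairs) {
            texels[p.a] = (lambda * orig[p.a] + texels[p.b]) / (lambda + 1.0f);
            texels[p.b] = (lambda * orig[p.b] + texels[p.a]) / (lambda + 1.0f);
        }
    }
}
```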

Moving off of WordPress!

Whenever I would write blog posts on WordPress, a huge source of friction was just how shitty the editing experience was. You could write the post elsewhere and then try to upload it, but I found I was frequently surprised by things not looking quite right once WordPress was through with it. The online post editing tool was also an exercise in extreme frustration. So I decided to simplify my life a bit.

Brackets are awesome, don’t use them!

This is a syntax bike-shedding post, where I will try to explain why I think languages should stop using curly brackets for code blocks. I don’t expect anyone to come away completely blown away by this or anything, but I’m a language nerd and often end up discussing in-development languages with their designers. And since approximately 90% of all language discussion is about syntax, I’ll just write a reusable note on brackets here.

Why (most) High Level Languages are Slow

In the last month or two I’ve had basically the same conversation half a dozen times, both online and in real life, so I figured I’d just write up a blog post that I can refer to in the future. Most high level languages are slow for two reasons: they don’t play well with the cache, and they have to do expensive garbage collections. But really, both of these boil down to a single reason: the language heavily encourages too many allocations.
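
To make the cache point concrete, here’s a toy illustration (my example, not from the post) of the two layouts involved:

```cpp
#include <memory>
#include <vector>

// My toy example, not the post's code.
struct Particle { float x, y, z, vx, vy, vz; };

// The "lots of allocations" style: every Particle lives in its own heap
// allocation, so each iteration chases a pointer to a random address.
void updateScattered(std::vector<std::unique_ptr<Particle>>& ps, float dt) {
    for (auto& p : ps) {
        p->x += p->vx * dt; p->y += p->vy * dt; p->z += p->vz * dt;
    }
}

// The value-oriented style: one allocation, contiguous memory, sequential
// access the hardware prefetcher can keep ahead of.
void updateContiguous(std::vector<Particle>& ps, float dt) {
    for (auto& p : ps) {
        p.x += p.vx * dt; p.y += p.vy * dt; p.z += p.vz * dt;
    }
}
```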

Variable Refresh Monitors and Animation

So variable refresh rates have a lot of hype surrounding them now, with representatives from GPU and monitor vendors saying that they will just magically make existing games better with no effort required on the part of the developer. This is bullshit and you shouldn’t believe them. If you’re a developer you need to handle these new refresh rates carefully, and if you’re a customer you should consider just locking your monitor to 60Hz for games that don’t do the right thing.
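
A minimal sketch of handling it (my code; `gameRunning`, `update`, and `render` are hypothetical stand-ins for your engine): measure how long the last frame actually took instead of assuming a fixed 16.6ms tick, and advance the simulation by that much:

```cpp
#include <chrono>

bool gameRunning();      // hypothetical engine hooks, not real APIs
void update(float dt);   // simulation step that accepts a measured dt
void render();

void runLoop() {
    using clock = std::chrono::steady_clock;
    auto prev = clock::now();
    while (gameRunning()) {
        auto now = clock::now();
        // Advance by the *actual* elapsed time; with a variable refresh
        // monitor there is no fixed frame duration to assume.
        float dt = std::chrono::duration<float>(now - prev).count();
        prev = now;
        update(dt);
        render();
    }
}
```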

Flags are a code smell

In most major code bases, it’s common to have objects with lots of Boolean flags indicating what state they’re in. E.g. is an enemy targeting the player or not? Is it trying to flee or not? If you have a set of mutually exclusive flags an enum can be used instead, but this is often not possible, and it doesn’t change much in terms of the downsides. In my opinion these are code smells.
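
For the mutually exclusive case, the difference looks something like this (my sketch):

```cpp
// The smelly version: three bools, eight combinations, several of which
// ("fleeing" and "targeting" at once?) are nonsense the code must guard
// against.
struct EnemyFlags {
    bool targetingPlayer = false;
    bool fleeing = false;
    bool patrolling = false;
};

// The enum version: exactly one state at a time, and the impossible
// combinations can't even be expressed.
enum class EnemyState { Idle, Patrolling, TargetingPlayer, Fleeing };
```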

The Perils of Future-Coding

In my opinion, one of the most insidious forms of technical debt is what I like to call future-coding. You might also call it over-engineering, second-system syndrome, etc. It’s code written in an overly elaborate and general way in order to, the future-coder reasons, pre-emptively handle use cases that we’ll “probably see in the future”. I find it more treacherous than plain “stupid code” because it predominantly affects smart people. It’s mostly smart people with limited experience – once you see the downsides of future-coding a few times, you tend to knock it off (although there are powerful confirmation biases at work) – but still, these are bright people you’d want to hire and keep around in your organization because they’ll do great things in the future.

More on Robin Hood Hashing

I posted about Robin Hood hashing in a previous post. In the comments to that post David Wragg pointed out a flaw in my implementation: if you repeatedly delete and reinsert elements into a hash table, the average probe count keeps rising. This is a good observation, and it’s true. Repeatedly deleting items makes the table slower, but only until you insert enough to trigger a rebuild of the table, which will filter out any tombstones (note: the number of tombstones does not affect the rate of rebuilds).
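
Roughly what that bookkeeping looks like (my sketch, not the post’s or David Wragg’s code):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Deletion marks the slot instead of emptying it, so existing probe
// chains stay intact — this is what gradually raises the probe count.
enum class SlotState : std::uint8_t { Empty, Used, Tombstone };

struct TableSketch {
    std::vector<SlotState> state;
    std::size_t inserts = 0;  // insertions since the last rebuild

    void eraseAt(std::size_t i) { state[i] = SlotState::Tombstone; }

    bool needsRebuild() const {
        // Triggered purely by insertions crossing the load threshold; the
        // tombstone count doesn't factor in, matching the note above. The
        // rebuild re-inserts live keys and drops every tombstone.
        return inserts > state.size() * 9 / 10;
    }
};
```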

Language Design Deal Breakers

I’m a bit of a programming language nerd. I love to learn new languages. That said, I spend most of my days writing C++. It’s a truly awful language, with many warts and problems, but I know it well and with enough effort and pain you can get the job done. The biggest benefit of C++ is purely an historical accident: it’s great because it’s popular. There’s no trouble finding people who know it, or libraries that work with it.

Robin Hood Hashing should be your default Hash Table implementation

There’s a neat variation on open-addressing based hash tables called Robin Hood hashing. This technique isn’t very well-known, but it makes a huge practical difference because it improves both performance and space utilization compared to other “standard” hash tables (e.g. chaining). It also eliminates the major downside of other open addressing hash tables. Here are the benefits, summarized: high load factors can be used without seriously affecting performance. 0.9 is perfectly reasonable as a default (0.95 or higher works too; it mainly affects insertion cost a bit).
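
The core of the technique is the insertion rule (a generic sketch, not the post’s code): if the key being inserted is further from its ideal slot than the resident key, evict the “rich” resident and continue inserting it instead. That’s what keeps probe lengths short and uniform even at high load:

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// A generic sketch of Robin Hood insertion (not the post's code). Assumes
// capacity is a power of two and the table is never completely full (a
// real implementation rebuilds well before that).
struct Slot { std::uint64_t key = 0; bool used = false; };

std::size_t idealIndex(std::uint64_t key, std::size_t cap) {
    return static_cast<std::size_t>(key * 0x9E3779B97F4A7C15ull) & (cap - 1);
}

void insert(std::vector<Slot>& slots, std::uint64_t key) {
    const std::size_t cap = slots.size();
    std::size_t i = idealIndex(key, cap);
    std::size_t dist = 0;  // how far 'key' currently is from its ideal slot
    for (;;) {
        if (!slots[i].used) { slots[i] = {key, true}; return; }
        std::size_t residentDist =
            (i + cap - idealIndex(slots[i].key, cap)) & (cap - 1);
        if (residentDist < dist) {
            // The resident is "richer" (closer to home) than we are: take
            // its slot and carry on inserting the evicted key instead.
            std::swap(key, slots[i].key);
            dist = residentDist;
        }
        i = (i + 1) & (cap - 1);
        ++dist;
    }
}
```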

Thoughts on Game Data Formats

I’ve been thinking about the representation and loading of game data recently, largely as a result of stumbling upon a variety of blog posts and libraries on the subject. The basic problem is that you want to store a large variety of “resources” for a game “level” (or “sector” or whatever), describing all sorts of things such as the location of every object, the vertices of every mesh, and so on.

On GC in Games (response to Jeff and Casey)

So it turns out YouTube comments suck. I’ll write my response to Jeff and Casey’s latest podcast in blog form instead of continuing the discussion there. View it here: Now, first let me say that I agree with 99% of the sentiment of this podcast. I think writing high performance games in Java or C# is kinda crazy, and the current trend of writing apps in HTML5 and JavaScript and then running them on top of some browser-like environment is positively bonkers.

Runtime Code Specialization

Specializing code is nothing new, but I’m surprised that modern programming languages don’t attempt to make this easier. Just to give you some context, the idea is that whenever you have an algorithm that takes multiple parameters where one or more of the parameters are fixed for some period of time, you specialize the code for the fixed parameters in order to achieve greater optimization at runtime. A key idea is that you may not know at compile time what the fixed value is, and the duration for which a value is fixed could be shorter than the execution of the program (mere milliseconds, even).
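
C++ can’t generate specialized machine code at runtime out of the box, but you can approximate the idea with templates (my sketch, not the post’s code): compile a variant per commonly fixed value, and dispatch once when the temporarily-fixed parameter becomes known:

```cpp
#include <functional>

// blendRows<Channels> lets the compiler treat the channel count as a
// constant (unrolling, vectorizing), while makeBlend picks the variant
// once, when the "fixed for a while" value becomes known at runtime.
template <int Channels>
void blendRows(float* dst, const float* src, int n) {
    for (int i = 0; i < n * Channels; ++i)
        dst[i] = 0.5f * (dst[i] + src[i]);
}

std::function<void(float*, const float*, int)> makeBlend(int channels) {
    switch (channels) {
        case 3: return blendRows<3>;
        case 4: return blendRows<4>;
        default:  // generic fallback for uncommon values
            return [channels](float* d, const float* s, int n) {
                for (int i = 0; i < n * channels; ++i)
                    d[i] = 0.5f * (d[i] + s[i]);
            };
    }
}
```

You’d call `makeBlend(channelCount)` once and reuse the returned function for as long as the channel count stays fixed, even if that’s only milliseconds.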

Space-efficient Resizable Arrays

This post is about resizable arrays that would serve as an alternative to std::vector. You’re probably familiar with geometrically growing arrays like std::vector, so I won’t waste time describing them in detail. Briefly, they offer amortized constant-time append and lookups, but with a pretty severe cost in space overhead (as low as 33% utilization after shrinking until you hit a realloc, or 50% if all you ever do is append). Another issue is that the elements of the array get copied every so often.
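
You can watch the append-only utilization claim play out directly (my snippet, not from the post; the growth factor is implementation-defined, often 1.5x or 2x):

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// With a growth factor of 2 and nothing but appends, utilization swings
// between ~50% (right after a reallocation) and 100% (just before one).
int main() {
    std::vector<int> v;
    for (int i = 0; i < 1000; ++i) {
        std::size_t before = v.capacity();
        v.push_back(i);
        if (v.capacity() != before)  // print only when a realloc happened
            std::printf("grew: size=%zu capacity=%zu utilization=%.0f%%\n",
                        v.size(), v.capacity(),
                        100.0 * v.size() / static_cast<double>(v.capacity()));
    }
}
```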

Garbage collection thoughts

One of the things that’s annoyed me for a while is that there aren’t any high performance, high level languages really being seriously developed. It seems to me that most of the benefits of high level languages, if carefully designed, have no or very limited performance implications, yet we get languages that either largely abandon safety and productivity concerns even when there’s no reason to (iterations on C/C++ mostly, including Go and D), or we get high level languages that throw out performance considerations left and right for minor convenience reasons (e.g.

OnLive and cloud gaming

So OnLive is having some trouble, leading a lot of people to proclaim the death of cloud gaming. Not me. I’m pretty convinced that cloud gaming is the inevitable future, the question is merely one of time. First, let’s just talk about the technical issue of latency. It’s not solved right now, but it’s really not a fundamental law of physics like some people make out. Plenty of games released today have more than 100ms of latency, most of which is avoidable through software, and some of which can be avoided with better hardware and protocols (e.g.

Casting a Critical Eye on GPU PTex

Storing data on the surfaces of meshes is somewhat of a pain. It involves unwrapping the surface into a 2D UV layout, which is time consuming and can lead to tricky issues such as seams and packing inefficiencies. I’m sure you know all about it. For this reason, it makes sense to want to switch to some kind of automatic parameterization. Recently PTex has been proposed as a suitable approach for real-time graphics.

Idea for efficient AO type raytracing applications

Quick idea that I had a few weeks ago regarding raytracing things like AO and potentially GI/FG. The basic observation is that for AO ray tracing you have far more rays than you have triangles, so it really doesn’t make sense that we only focus on spatial partitioning for the triangles. I suspect there’s some bias here because we’re so used to thinking of our rays as travelling around a static scene, so we just kind of spatially subdivide the scene without even thinking about it.

Implementation Inheritance Considered Harmful?

As the years go by and I see more and more painful bugs and convoluted architectures in OOP systems, I become ever more convinced that implementation inheritance is almost always the wrong answer, and that some kind of trait/mixin/delegation system is superior. Inheritance sucks partly because you’re expected to extend a class that you didn’t write (or that you at least ostensibly plan to modify independently), by patching into a few of the methods here and there and mucking around with the “protected” innards.
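
The delegation alternative looks roughly like this (my sketch): the behavior lives in a component the object owns and forwards to, rather than in a base class whose innards you patch:

```cpp
#include <memory>

// My sketch of the trait/delegation style, not the post's code.
struct Mover {
    virtual ~Mover() = default;
    virtual void move(float dt) = 0;
};

struct FleeMover : Mover {
    void move(float dt) override { /* run away from the player */ }
};

// Behavior is composed, not inherited: Enemy forwards to a Mover it owns,
// and the mover can be swapped at runtime without touching Enemy's
// internals or anyone's "protected" members.
struct Enemy {
    std::unique_ptr<Mover> mover;
    void update(float dt) { if (mover) mover->move(dt); }
};
```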

The "inevitability" of video game piracy

I just read yet another person say something along the lines of “there will always be piracy, it’s inevitable”. This line of thought bugs me. No, it’s not inevitable. It’s the best we can do right now for a variety of mostly non-technical reasons, but it’s not that hard to come up with some rough proof-sketches that piracy can be 100% avoided, technically. At the extreme you could have built-in signature verification on the CPU itself so it will simply refuse to load any code that isn’t cryptographically signed for that chip and that chip alone, then distribute executables as DLC, with per-customer signatures.

Idea for globally unique texturing without UV sets

Quick idea I’ve had for a while, but likely won’t have time to try out, for getting rid of UV sets once and for all by just looking up textures with a 3D position. Start with a coarse-grained 3D hash grid (i.e. a hash map of your grid cell coordinates); each of these cells would represent, say, a 256³ volume texture indexed by your vertex position. For compression (this volume data would be extremely sparse, usually little more than a flat 2D slice through it), it would use 3D Random Access trees (read this if you haven’t heard about primal vs dual subdivision – eye opener for me!), but probably with a higher branching factor to reduce the number of levels required.
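
The lookup side of the idea might look like this (a sketch; the hash constants, names, and cell size are my guesses, not from the post):

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <unordered_map>

struct CellCoord {
    std::int32_t x, y, z;
    bool operator==(const CellCoord& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

struct CellHash {
    std::size_t operator()(const CellCoord& c) const {
        // Classic spatial-hash mix; any decent mixing of three ints works.
        return (std::size_t)c.x * 73856093u
             ^ (std::size_t)c.y * 19349663u
             ^ (std::size_t)c.z * 83492791u;
    }
};

struct VolumeTexture;  // the sparse, compressed 256^3 payload per cell

using HashGrid = std::unordered_map<CellCoord, VolumeTexture*, CellHash>;

// World position -> cell coordinate -> that cell's volume texture.
VolumeTexture* lookupCell(const HashGrid& grid,
                          float px, float py, float pz, float cellSize) {
    CellCoord c{ (std::int32_t)std::floor(px / cellSize),
                 (std::int32_t)std::floor(py / cellSize),
                 (std::int32_t)std::floor(pz / cellSize) };
    auto it = grid.find(c);
    return it == grid.end() ? nullptr : it->second;
}
```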

How to teach Game Programming

I’m generally unimpressed by specialized game courses at university. Generally I’d be more inclined to trust a traditional CS or CE degree with a strong side-interest in game programming over a specialized game course. One of the problems is that many game courses either don’t teach any low-level programming at all, or teach it poorly. Obviously we still all use C++, so it’s crucial that you understand low-level programming, and every so often you actually need to know it for real, and not just because the language is too low level for you (e.g.

R-trees – adapting out-of-core techniques to modern memory architectures

It’s been a couple of months since GDC, and I’ve been meaning to do a blog post about the talk I gave there (and if you have membership to the GDC vault you can even see a video). An R-tree is a data structure for spatial indexing that’s optimized for out-of-core scenarios (basically databases). Spatial indexing is used for storing “stuff” in space. For example, a game might store all of the game objects in a spatial index so that we can efficiently query for things like “give me all objects that are within the view frustum”, or “give me all pairs of monsters and exploding barrels within 5m of each other”.
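
For context, this is the query shape a spatial index accelerates (my sketch — a brute-force scan, not an R-tree): an R-tree answers the same question in roughly O(log n + k) by pruning whole subtrees whose bounding boxes miss the query box:

```cpp
#include <vector>

struct AABB { float min[3], max[3]; };

bool overlaps(const AABB& a, const AABB& b) {
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
    return true;
}

// Without an index, "give me all objects in this box" is a full scan.
std::vector<int> queryBruteForce(const std::vector<AABB>& objects,
                                 const AABB& box) {
    std::vector<int> hits;
    for (int i = 0; i < (int)objects.size(); ++i)
        if (overlaps(objects[i], box)) hits.push_back(i);
    return hits;
}
```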

Improving shadow map utilization for cascaded shadow maps

Jonathan Blow is working on stable cascaded shadow maps, and trying to reduce wasted texels due to the way the shadow frustum is set up. Well, as it happens I have my own minor tweak to Valient’s technique that also reduces wastage, so I figured I’d describe it here with pretty pictures. I’m going to assume that you’re roughly familiar with the general idea; if not, read e.g. Blow’s first blog post on the topic, or ShaderX 6.

The problem with tessellation in DirectX 11

As you may have heard, DirectX 11 brings tessellation support, and all the hardware vendors and benchmarks are going crazy with super-finely tessellated meshes, promising us automatic level of detail and unprecedented visual fidelity. What you may not be aware of is that the Xbox 360 has very similar tessellation hardware too, which means many game developers have had the opportunity to use these same techniques for about five years now.

Two Performance Walls Approaching

I just watched a panel from PDC called Microsoft Perspectives on the Future of Programming, with loads of “big name” programming gurus. At the end of the session Herb Sutter points out that eventually Moore’s law will end, and then optimization will become extremely sexy again. This is a very good point, but I think there’s another law that will cause optimization to become sexy again far sooner than that: Amdahl’s law.
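
For reference, Amdahl’s law (my summary of the standard formula, not from the post): if a fraction $p$ of a program benefits from parallelization across $n$ cores, the overall speedup is

$$S(n) = \frac{1}{(1 - p) + p/n} \;\le\; \frac{1}{1 - p}$$

so a program that is even 10% serial can never get more than a 10x speedup, no matter how many cores you throw at it — that serial slice has to be optimized the old-fashioned way.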

Ray tracing signed distance functions

Signed Distance Functions are a pretty simple concept. Basically, for each point in the world, you return a distance to the nearest surface; negative distances are inside geometry. One neat application of them is to represent scene geometry, and ray trace into the SDF. Why would anyone do such a thing? Well, it turns out that when marching along a ray looking for intersections (which is obviously not the only way to trace rays), it’s jolly useful to know a minimum bound on when you might expect to encounter a surface.
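
The resulting “sphere tracing” loop is short (a generic sketch, not the post’s code): the SDF value at a point is a guaranteed-safe step size, because no surface can be closer than that:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

float sceneSDF(Vec3 p) {  // example scene: a unit sphere at the origin
    return std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z) - 1.0f;
}

// Returns distance along the ray to a hit, or a negative value on a miss.
float sphereTrace(Vec3 origin, Vec3 dir, float maxT = 100.0f) {
    float t = 0.0f;
    for (int i = 0; i < 128 && t < maxT; ++i) {
        float d = sceneSDF(add(origin, scale(dir, t)));
        if (d < 1e-4f) return t;  // close enough: call it a hit
        t += d;                   // safe step: no surface within d
    }
    return -1.0f;
}
```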

Why sales of used games are a problem

A few days ago industry analyst Michael Pachter said the following: The number of games traded annually is striking; we estimate the overall used game market to be $2 billion in the U.S., with an average ticket of around $20 per used game. This means that an estimated 100 million units of used games are traded in each year, representing around 1/3 of all games sold annually. Blimey! That’s a lot of money, and what’s worse, it’s money that goes directly into the pockets of retailers — you know, the people who do not fund the development of games and do not take any risks.

The prosecutor’s fallacy and the Iranian elections

A few years ago I watched a TED talk by Peter Donnelly dealing with statistics, and in particular the prosecutor’s fallacy. You can view it here. This was a thoroughly enjoyable talk and I recommend that everyone watch it. However, it turns out that not even statisticians are immune to making this basic error, as this recent article by Bernd Beber and Alexandra Scacco about the probabilities of election fraud in Iran shows.
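
For reference, the fallacy in a line (my gloss, not from the article): it confuses $P(E \mid \text{innocent})$ — the chance of seeing the evidence given innocence — with $P(\text{innocent} \mid E)$. Bayes’ theorem relates the two:

$$P(\text{innocent} \mid E) = \frac{P(E \mid \text{innocent})\,P(\text{innocent})}{P(E)}$$

so a digit pattern that is unlikely under a clean election does not, by itself, make fraud probable; the prior and the number of patterns one could have tested both matter.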


My name is Sebastian Sylvan. I’m from Sweden but live in Seattle. I work at Microsoft on HoloLens. Obviously my views are my own and don’t necessarily represent those of Microsoft. I typically blog about graphics, languages, performance, and such. Feel free to hit me up on Twitter or email (see links in sidebar).