Variable Refresh Monitors and Animation
24 Jan 2015
So variable refresh rates have a lot of hype surrounding them now, with representatives from GPU and monitor vendors saying that they will just magically make existing games better with no effort required on the part of the developer. This is bullshit and you shouldn't believe them. If you're a developer you need to handle these new refresh rates carefully, and if you're a customer you should consider just locking your monitor to 60Hz for games that don't do the right thing.
The problem
The issue is that games do animation. In order to compute a frame for display, the game essentially needs to predict when that frame will be presented, and then compute the positions of all the objects at that time. This is only partially true because there's lag in the display pipeline that we don't know anything about, but since that lag affects every frame the same you can typically ignore it, advance the game state by 1/60th of a second each frame, and your animation will look smooth.
However, if the update frequency is variable then the amount of time it takes to render and display a frame changes every frame. So when we're computing the next frame we don't know how far ahead to "step". It may have been 16 ms last frame, but this frame it may be 15 or 17 ms - we just can't tell in advance because we don't know how long it will take the CPU and GPU to finish the frame. This means that objects no longer move smoothly in the world; they move in a jerky fashion because the amount they get advanced each frame isn't consistent with how long it actually took to display the frame.
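To make the mismatch concrete, here's a minimal C++ sketch of the usual fixed-step loop (update_game_state and render_and_present are hypothetical stand-ins, not any particular engine's API). It only looks smooth if the display interval actually matches the step:

```cpp
// Hypothetical stand-ins for the real engine calls, only here to show the shape of the loop.
static void update_game_state(double /*dt_seconds*/) { /* advance physics/animation */ }
static void render_and_present()                     { /* draw and present the frame */ }

int main()
{
    // Advance the simulation by a fixed step, betting that the monitor will
    // show each frame exactly one step after the previous one.
    const double dt = 1.0 / 60.0;

    for (int frame = 0; frame < 1000; ++frame)
    {
        update_game_state(dt);  // objects move as if 16.7 ms will pass...
        render_and_present();   // ...but with variable refresh the frame may actually be
                                // displayed 14 ms or 19 ms later, so on-screen motion judders
    }
}
```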
Solutions?
So the first solution here is to just lock your framerate at some fixed number. 60Hz is a decent option, but if you have a variable refresh rate maybe you'll go for something odd like 45Hz. In fact, I would be surprised if the sweet spot for smoothness/performance just happens to be 60Hz by sheer coincidence. However, this is troublesome because our graphics APIs don't give us an easy way to do it right now - perhaps these fancy new monitors could just let us set the refresh rate to arbitrary intervals instead, and we could go back to using vsync, perhaps with some kind of clever fallback (if you miss the vsync, variable refresh presents the frame as quickly as possible and the cadence "resets" at that point, so that the next "vsync" happens 1/refresh later).
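Roughly the fallback policy I have in mind, as a sketch - present_now and the busy-wait are placeholders, since no current API exposes exactly this:

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;

// Hypothetical placeholder for the real present call of whatever graphics API you use.
static void present_now() {}

// Present on a fixed cadence when we can; if we miss, let variable refresh show the
// frame immediately and restart the cadence from that point, so the next "vsync"
// happens one period later.
static Clock::time_point present_with_fallback(Clock::time_point next_vsync,
                                               Clock::duration   period)
{
    const auto now = Clock::now();
    if (now <= next_vsync)
    {
        while (Clock::now() < next_vsync) { /* busy-wait until the target time */ }
        present_now();
        return next_vsync + period;   // on time: stay on the regular cadence
    }
    present_now();                    // late: present as quickly as possible...
    return now + period;              // ...and reset the cadence from here
}
```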
However, picking the refresh rate up front is tricky, even if we can pick any number we want. The game is going to run better in some scenes than others, and performance will also vary from system to system. Maybe this could be automatic? The game could render each frame with a "fixed" target in mind, set so that we're very confident we can hit it, and all the physics and animation is updated by that amount of time. If we simultaneously tune this target based on how long frames typically take, we'll end up running faster where we can and slower where we need to. So the idea is this (sketched in code after the list):
1. Render the frame.
2. If the current frame time is less than the target frame time, busy-wait and then present at just the right time (this should be the overwhelming majority of frames).
3. If the current frame time is more than the target frame time, present immediately and increase the target time.
4. If the frame time was significantly less than the target frame time, and has been for some time, slightly decay the target frame time so that we get better fluidity.
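Here's a rough sketch of those four steps in C++. The update_and_render/present calls and all the tuning constants (the 0.9 headroom threshold, the 120-frame window, the decay and bump factors) are made-up placeholders, not recommendations:

```cpp
#include <chrono>

using Clock = std::chrono::steady_clock;
using FSec  = std::chrono::duration<double>;   // seconds, as a double

// Hypothetical stand-ins for the real engine/API calls.
static void update_and_render(double /*dt_seconds*/) {}
static void present() {}

int main()
{
    double target = 1.0 / 60.0;      // target frame time in seconds; start at 60Hz
    int frames_well_under = 0;       // how long we've had comfortable headroom

    for (;;)
    {
        const auto start = Clock::now();

        // 1. Step physics/animation by exactly the target, then render.
        update_and_render(target);
        const double elapsed = FSec(Clock::now() - start).count();

        if (elapsed < target)
        {
            // 2. Under budget (hopefully the overwhelming majority of frames):
            //    busy-wait so the frame is presented at the time it was simulated for.
            while (FSec(Clock::now() - start).count() < target) { /* spin */ }
            present();

            // 4. If we've had plenty of headroom for a while, decay the target a
            //    little to get better fluidity.
            if (elapsed < 0.9 * target && ++frames_well_under > 120)
                target *= 0.99;
            else if (elapsed >= 0.9 * target)
                frames_well_under = 0;
        }
        else
        {
            // 3. Over budget: present immediately and raise the target.
            present();
            target = elapsed * 1.05;  // a little extra slack, also arbitrary
            frames_well_under = 0;
        }
    }
}
```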
There are many ways you could tweak this. For example, maybe you pick a target frame time based on a histogram of the most recent 100 frame times - picking a time such that 99 of them would've come in under budget. Or maybe you fit a normal distribution to them and pick the 99th percentile as the target frame time.
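As an illustration of the percentile variant, a minimal sketch that just picks the target from the last ~100 recorded frame times (the 99th-percentile choice and the fallback default are arbitrary):

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

// Pick a target frame time (in seconds) such that ~99% of the recent frames
// would have fit under it. 'recent' holds the last ~100 measured frame times.
static double pick_target_frame_time(const std::deque<double>& recent)
{
    if (recent.empty())
        return 1.0 / 60.0;                       // arbitrary default until we have data

    std::vector<double> sorted(recent.begin(), recent.end());
    std::sort(sorted.begin(), sorted.end());

    const std::size_t idx = static_cast<std::size_t>(sorted.size() * 0.99);
    return sorted[std::min(idx, sorted.size() - 1)];
}
```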
Or maybe something simpler could work. For example, we could make sure our target frame time is always 10% higher than the longest frame in the last second (or something). This means that as our actual frame times drop, the target frame time will drop too (after a second) and we get better fluidity. If our frame times creep up slightly, we pre-emptively bump our target up to 10% on top of that to give ourselves breathing room in case they keep going up. If we do end up with a frame spike that's more than 10% worse than our worst frame in the last second, we immediately bump the target frame time up to 10% above that new worst case. You would tune the percentage based on your game - if your frame times vary slowly you could use a lower buffer percentage, and if they vary quickly you may need a larger one.
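A sketch of that heuristic, assuming you feed it one measured frame time per frame - the one-second window and the 10% buffer are the tunables mentioned above:

```cpp
#include <algorithm>
#include <chrono>
#include <deque>
#include <utility>

using Clock = std::chrono::steady_clock;

// Keep (timestamp, frame time in seconds) pairs for roughly the last second and
// target 10% above the worst of them. The 10% buffer is the knob to tune per game.
static double target_from_recent_worst(std::deque<std::pair<Clock::time_point, double>>& history,
                                       Clock::time_point now, double latest_frame_time)
{
    history.emplace_back(now, latest_frame_time);
    while (!history.empty() && now - history.front().first > std::chrono::seconds(1))
        history.pop_front();

    double worst = 0.0;
    for (const auto& entry : history)
        worst = std::max(worst, entry.second);

    return worst * 1.10;   // 10% of breathing room on top of the worst recent frame
}
```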
The biggest problem with this scheme is step 2 above: the busy wait. In a typical renderer this means we can't actually start doing the rendering for the next frame, because we have to baby-sit the present for the previous frame to make sure it doesn't fire too early. This wastes valuable CPU cycles and CPU/GPU parallelism. Maybe in DX12 we could have a dedicated "presenter thread" that just submits command buffers and then does the busy-wait to present, while the main render thread actually does the rendering into those command buffers. Really what we want here is a way to tell the OS to sync to an arbitrary time period. For example, if the last couple of frames have all taken around 20 ms (which we can tell by measuring the CPU time, as well as inspecting GPU profiling queries a frame later to see how long the GPU took) we could call some API to set the vsync period to 22 ms just to give us some buffer and avoid jittering. This way the OS could potentially use a HW timer to avoid eating up CPU cycles.
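One possible shape for that presenter thread, sketched with hypothetical submit/present placeholders rather than real DX12 calls:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>

using Clock = std::chrono::steady_clock;

// Hypothetical frame handle: whatever recorded command buffers your API produces,
// plus the time the frame was simulated for.
struct Frame
{
    Clock::time_point present_at;
    // ... recorded command buffers would live here ...
};

// Hypothetical stand-ins for the real submission and present calls.
static void submit(const Frame&) {}
static void present() {}

// The render thread pushes finished frames into 'frames'; this thread babysits the
// present so the render thread can immediately start recording the next frame.
static void presenter_thread(std::queue<Frame>& frames, std::mutex& m,
                             std::condition_variable& cv, const bool& quit)
{
    for (;;)
    {
        Frame frame;
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return quit || !frames.empty(); });
            if (frames.empty())
                return;                 // told to quit and nothing left to present
            frame = frames.front();
            frames.pop();
        }
        submit(frame);

        // Busy-wait until the target present time (ideally the OS/driver would give
        // us a timer-based way to do this instead of burning a core).
        while (Clock::now() < frame.present_at) { /* spin */ }
        present();
    }
}
```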
Either way, let’s at least stop pretending that variable refresh rates are magic that can just be enabled for any game and look good. The developer needs to know when these things are enabled and take steps to make sure animation and physics remain smooth. It would be ideal if future APIs could actually help us out here, but either way we need to do something to handle this.