FPS vs msec/frame

When talking about the performance of graphics applications or algorithms, you often hear frames per second (FPS for short) given as the unit of measurement. This, however, is the wrong unit most of the time, and here's why:

Why we use FPS

There are two ways to measure how fast an application can render the virtual world: by giving the time it takes to render one frame or by counting the number of frames that get rendered per second. It may seem strange that the latter is more common, but it makes sense once you know that the brain interprets a sequence of images as smooth motion if they are shown fast enough. Since cinema shows us 24 images per second and TV 25 (PAL) or 29.97 (NTSC), any game that renders more frames per second is fast enough, right? Well, this isn't exactly right, and I will come back to that later, but it demonstrates why FPS is a convenient way to present the rendering speed of a whole system (e.g. a game) to the user: they only have to compare the given number against the magic value at which a sequence of images begins to feel like fluid motion.

But instead of memorizing 30 or 60 FPS as the magic border, you could just as well memorize 33 or 16.6 milliseconds as the magic rendering time. As I will show next, talking about milliseconds instead of FPS has some advantages.
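
To make the conversion concrete, here is a minimal C++ sketch (not from the original post): going between FPS and milliseconds per frame is just a division.

    #include <cstdio>

    // Convert between frames per second and milliseconds per frame.
    double fpsToMsec(double fps)  { return 1000.0 / fps; }
    double msecToFps(double msec) { return 1000.0 / msec; }

    int main() {
        printf("30 FPS = %.1f msec/frame\n", fpsToMsec(30.0)); // 33.3
        printf("60 FPS = %.1f msec/frame\n", fpsToMsec(60.0)); // 16.7
        return 0;
    }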

Talking about subsystems

FPS stops working as soon as we start talking about parts of the system instead of the whole renderer. Let me give you an example: say you read a paper about a cool new SSAO algorithm, and it claims to achieve 200 FPS on a simple scene. Your game targets a full 60 FPS, so it sounds like there is plenty of performance left. Now let's look at the rendering times: the hypothetical SSAO demo took 5 msec to render each frame, and we have to render a full game in 16.6 msec. That's 30% of our budget for one post-processing effect! But maybe it's not that bad: those 5 msec cover the post-processing effect plus a simple scene whose overhead we don't know. If the paper had also given a baseline in FPS, it would say something like: 'without post-processing: 2000 FPS, with post-processing: 200 FPS'. You might think 'One order of magnitude slower? No way this would work in my engine!', but converting reveals that the simple scene costs 0.5 msec of overhead and the effect itself 4.5 msec. Stated this way, it's much clearer whether the effect fits into your budget or not.

If the demonstration had used a complex scene, the numbers could look like this: without SSAO 39.2 FPS, with SSAO 33.3 FPS. 'Just 17% overhead, great! My engine is running at 72 FPS, my goal is 60 FPS, so I have performance to spare!' Going to msec reveals that the test application used up 25.5 msec per frame and the post-process (again) 4.5 msec. Your app renders one frame in 13.9 msec, and adding 4.5 gives you 18.4 msec – too bad, you are over budget (~54 FPS).
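
Here is the same budget calculation spelled out in code, a small C++ sketch using the hypothetical numbers from the example above:

    #include <cstdio>

    int main() {
        // Hypothetical paper numbers: simple scene with and without SSAO.
        double sceneOnly   = 1000.0 / 2000.0;       // 0.5 msec scene overhead
        double sceneSsao   = 1000.0 /  200.0;       // 5.0 msec total
        double effectCost  = sceneSsao - sceneOnly; // 4.5 msec for SSAO alone

        double frameBudget = 1000.0 / 60.0;         // 16.6 msec target
        double engineFrame = 1000.0 / 72.0;         // 13.9 msec per frame today

        printf("SSAO alone:    %.1f msec\n", effectCost);
        printf("engine + SSAO: %.1f msec (budget %.1f msec)\n",
               engineFrame + effectCost, frameBudget); // 18.4 > 16.6: over budget
        return 0;
    }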

It gets worse when we talk about adding multiple effects: effect A renders at 200 FPS, effect B at 100 FPS and effect C at 500 FPS, so all three combined will render at…? You have to convert to msec/frame to calculate that anyway (5 + 10 + 2 = 17 msec, roughly 59 FPS), so why aren't we talking about msec per effect to begin with?
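
Combining effects makes the point even clearer: FPS values can't be added, but frame times can. Again a minimal C++ sketch with the numbers from above:

    #include <cstdio>

    // FPS numbers of individual effects can't simply be added or averaged;
    // their frame times can. Convert to msec, sum, convert back.
    double combinedFps(const double fps[], int count) {
        double totalMsec = 0.0;
        for (int i = 0; i < count; ++i)
            totalMsec += 1000.0 / fps[i];
        return 1000.0 / totalMsec;
    }

    int main() {
        double effects[] = {200.0, 100.0, 500.0}; // effects A, B and C
        // 5 + 10 + 2 = 17 msec, so all three combined run at roughly 59 FPS.
        printf("combined: %.1f FPS\n", combinedFps(effects, 3));
        return 0;
    }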

The bottom line is this: it's much more intuitive to handle timings as what they are – the time it takes to calculate something.

Varying rendering speed

Let's say your engine is too slow. You play around with various settings, and this is what you find out: with all effects active your game runs at just 50 FPS; without shadow-map creation it's 66.6 FPS. So you decide to recreate the shadow map only every third frame (you're OK with the resulting artifacts as long as the user gets a smooth 60 FPS). Even though you now have 60 FPS on average, it doesn't feel smooth at all: two frames take 15 msec to render and the third one 20! I once worked on a rendering system that didn't feel as responsive as we expected from a frame rate constantly (slightly) above 60 FPS. Further investigation revealed that some calculations were only triggered every few frames (one of the shadow maps every second frame, other systems at other intervals). I plotted the timings per frame, and they varied a lot: the distribution of workload over multiple frames was badly balanced.
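
If you suspect this kind of imbalance, log the time of every frame instead of an FPS average. A minimal C++ sketch of per-frame CPU timing might look like this (renderFrame() is a hypothetical stand-in for your engine's per-frame work; note that asynchronous GPU work needs GPU timer queries, e.g. GL_TIME_ELAPSED in OpenGL, to be measured correctly):

    #include <chrono>
    #include <cstdio>

    void renderFrame() { /* hypothetical stand-in for the engine's work */ }

    int main() {
        using Clock = std::chrono::high_resolution_clock;
        for (int frame = 0; frame < 300; ++frame) {
            auto start = Clock::now();
            renderFrame();
            auto end = Clock::now();
            double msec =
                std::chrono::duration<double, std::milli>(end - start).count();
            // Plot these values: spikes every Nth frame reveal badly
            // balanced workloads that an FPS average would hide.
            printf("frame %3d: %6.2f msec\n", frame, msec);
        }
        return 0;
    }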

How would you even write down such varying rendering times in FPS? 'We have an average of 60 FPS, but N% of the frames run at 300 FPS while M% run at 20-30 FPS…'?

Latency

Latency is a big topic: how long does it take your system from the moment you press a button until the resulting action is displayed on the screen? There are a lot of contributing factors: input latency, rendering time, latency of the driver (which can buffer commands for whole frames), additional buffering in your TFT (TVs are even worse, as they can perform some post-processing, e.g. 25-to-100 Hz motion interpolation – that's why they often have a 'gaming mode' which switches this off), etc. All of these timings can be given in milliseconds, and so should the rendering time! In the context of reducing whole-system latency, rendering times below 16 msec (and thus frame rates above 60 FPS) can make sense even if the display is only capable of showing 60 images per second.

If your game has a high latency from input to screen, or if your rendering time varies a lot, your average FPS count tells me nothing about how smooth the experience is.

tl;dr

Don’t measure your graphics performance in FPS but in milliseconds per frame/effect.
