[Image: benchmark scene for performance testing]

The SilverLining Sky, 3D Cloud, and Weather SDK supports just about every rendering technology in common use: DirectX 9 through 11.1, and OpenGL 2.0 through the latest core profiles. As such, we’re in a good position to measure both the performance and the code complexity associated with each rendering framework.

SilverLining is designed so that all of its renderer-specific code is isolated into DLLs. You select which renderer you are working with when initializing SilverLining, and it loads the matching DLL – for example, the OpenGL 3.2 core profile DLL containing all of the code needed to draw our skies, clouds, and weather effects under OpenGL 3.2. Our SDK also includes a sample application for each renderer that draws a similar scene.
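For illustration, here’s a minimal sketch of how that kind of renderer isolation might look on Windows. The `IRenderer` interface, the `CreateRenderer` factory export, and the DLL names are all hypothetical – SilverLining’s actual interfaces differ – but the LoadLibrary/GetProcAddress pattern is the standard way to defer the choice of renderer to run time:

```cpp
// Hypothetical sketch of run-time renderer selection via DLLs.
// IRenderer, CreateRenderer, and the DLL names are illustrative,
// not SilverLining's actual API.
#include <windows.h>
#include <stdexcept>

// Abstract interface implemented by each renderer-specific DLL.
class IRenderer {
public:
    virtual ~IRenderer() {}
    virtual void DrawSky() = 0;
    virtual void DrawClouds() = 0;
};

// Factory function each DLL is expected to export.
typedef IRenderer* (*CreateRendererFn)();

enum Renderer { OPENGL_2_0, OPENGL_3_2_CORE, DIRECTX_9, DIRECTX_11_1 };

IRenderer* LoadRenderer(Renderer which)
{
    // Map the caller's choice to the DLL holding that renderer's code.
    const char* dllName = "";
    switch (which) {
        case OPENGL_2_0:      dllName = "RendererOpenGL20.dll";     break;
        case OPENGL_3_2_CORE: dllName = "RendererOpenGL32Core.dll"; break;
        case DIRECTX_9:       dllName = "RendererDirectX9.dll";     break;
        case DIRECTX_11_1:    dllName = "RendererDirectX11_1.dll";  break;
    }

    HMODULE dll = ::LoadLibraryA(dllName);
    if (!dll) throw std::runtime_error("Could not load renderer DLL");

    CreateRendererFn create =
        (CreateRendererFn)::GetProcAddress(dll, "CreateRenderer");
    if (!create) throw std::runtime_error("Missing CreateRenderer export");

    return create(); // The rest of the engine only sees IRenderer.
}
```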

So, we can measure both the lines of code needed to support each renderer – and the performance seen with each one. As graphics APIs have evolved, they have pushed more and more work out of the driver and onto the engine developer. Is that increased complexity paying off with increased performance? Let’s have a look:

[Chart: lines of code and frames per second measured for each rendering API]

First, a word on our methodology. We modified our sample applications to ensure the exact same test scene was rendered in each, using the same random seed and initial conditions for the clouds (as seen in the image above). We ran at 1920×1080 resolution and measured frame rates with FRAPS. So in each case, we are measuring the lines of code required to produce the exact same result, and measuring the performance in frames per second. More lines of code is “bad,” and higher FPS is “good.” Tests were run on an NVIDIA GTX 970 video card with an Intel Core i5-4690K processor, with v-sync off.
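As a rough illustration of what “same random seed and initial conditions” buys you, consider a benchmark harness that pins down every source of randomness before generating the scene. The seed value and the stand-in cloud placement below are made up for this example; the point is simply that a fixed seed makes the generated scene identical across runs and across renderers:

```cpp
// Illustrative only: a fixed seed makes procedural generation
// reproducible, so every renderer draws the identical scene.
#include <random>
#include <cstdio>

int main()
{
    const unsigned kBenchmarkSeed = 1234; // hypothetical fixed seed

    // Seed a deterministic engine once, up front; never reseed from
    // the clock, or each run would get a different cloud field.
    std::mt19937 rng(kBenchmarkSeed);
    std::uniform_real_distribution<float> position(0.0f, 50000.0f);

    // Stand-in for procedural cloud placement: identical output on
    // every run, on every renderer, because the seed is identical.
    for (int i = 0; i < 5; ++i)
        std::printf("cloud %d at x = %.1f\n", i, position(rng));

    return 0;
}
```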

The results are interesting, to say the least. DirectX delivers significantly higher frame rates than OpenGL, which lends credence to the theory that driver authors pay a little more attention to DirectX performance. SilverLining does not use any fixed-function pipeline calls, so we are comparing apples to apples here.

We did not see any increase between OpenGL 2.0 and OpenGL 3.2 core profile, but we did see a jump going from DirectX 9 to DirectX 10. This suggests that shedding the “baggage” of fixed-function support produced a performance boost in DirectX, but not in OpenGL.

Notably, going from DirectX 11 to 11.1 caused a large increase in the complexity of the code required on our end, yet actually resulted in a step backwards in performance. I believe this is largely due to losing the DirectX 11 Effects framework, which we had to drop because run-time shader compilation and reflection are discouraged in DirectX 11.1. That meant writing our own code for managing constant buffers, sampler states, and so on, which may not be as efficient as the code Microsoft provided. We’re now facing an uphill battle to reclaim that lost performance. If there’s a benefit to removing support for run-time compilation in Direct3D, it has eluded me thus far.
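To give a feel for the kind of plumbing the Effects framework used to handle for us, here is a minimal sketch of managing a constant buffer by hand in Direct3D 11. This is not SilverLining’s actual code – the `PerFrameConstants` layout and function names are invented for illustration – but the CreateBuffer/Map/Unmap/VSSetConstantBuffers sequence is the standard pattern:

```cpp
// Sketch of manual constant buffer management in Direct3D 11.
// The PerFrameConstants layout is hypothetical; the Effects
// framework used to hide this bookkeeping behind effect variables.
#include <d3d11.h>
#include <DirectXMath.h>
#include <cstring>

// Constant buffer sizes must be multiples of 16 bytes.
struct PerFrameConstants
{
    DirectX::XMFLOAT4X4 viewProj; // 64 bytes
    DirectX::XMFLOAT4   sunColor; // 16 bytes
};

ID3D11Buffer* CreatePerFrameBuffer(ID3D11Device* device)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = sizeof(PerFrameConstants);
    desc.Usage          = D3D11_USAGE_DYNAMIC;        // updated every frame
    desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &buffer);
    return buffer;
}

void UpdateAndBind(ID3D11DeviceContext* context, ID3D11Buffer* buffer,
                   const PerFrameConstants& constants)
{
    // Map with DISCARD so the driver can hand back fresh memory
    // instead of stalling on the GPU's previous use of the buffer.
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD,
                               0, &mapped)))
    {
        std::memcpy(mapped.pData, &constants, sizeof(constants));
        context->Unmap(buffer, 0);
    }

    // Bind to slot b0 of the vertex shader; the slot must match the
    // register declared in the HLSL, a correspondence the Effects
    // framework used to resolve for us via reflection.
    context->VSSetConstantBuffers(0, 1, &buffer);
}
```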

Another noteworthy result is that it takes 58% more DirectX 11.1 code to do the same thing as in OpenGL 3.2. DirectX is clearly on a trend of pushing more complexity onto application and engine developers over time, and DirectX 12 takes this even further. This makes life easier for driver developers, and in principle it gives engine developers more direct access to the graphics hardware, which can translate to better performance. However, we shouldn’t forget that it also makes graphics programming harder to learn. The simple act of rendering a triangle to the screen – the “hello world” of 3D graphics – is a very intimidating wall of code in DirectX 11.1. I do worry that this complexity will push young developers away from lower-level graphics coding, and instead push them toward higher-level environments such as Unity. Whether that’s a good thing or not is debatable. It also means that much of the burden of optimizing graphics code has shifted from driver developers to engine developers, which will lead to more inconsistent performance across applications.
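For contrast, here is the legacy-OpenGL version of that “hello world” – the fixed-function, immediate-mode style that SilverLining itself avoids, shown only to make the complexity gap concrete. Assuming a window and GL context already exist (created by GLUT or similar), the entire draw fits in a few lines:

```cpp
// Legacy immediate-mode OpenGL triangle: the whole "hello world."
// Assumes a window and GL context were already created elsewhere.
#include <GL/gl.h>

void DrawTriangle()
{
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_TRIANGLES);
    glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
    glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
    glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();
}
```

The Direct3D 11.1 equivalent requires a device, a swap chain, compiled vertex and pixel shaders, an input layout, and a vertex buffer before a single pixel appears – easily hundreds of lines of setup.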

So, bottom line: yes, DirectX is faster than OpenGL in our tests – but it comes at the cost of increased code complexity. Of course, if you care about supporting devices that don’t run Windows, this whole argument is moot – OpenGL is the only option that also supports Linux and macOS, for example.

It’s also worth mentioning that squabbling over 1,080 frames per second versus 1,670 frames per second is probably meaningless in practical terms. The human brain can’t see any improvement above 60 frames per second! In the context of SilverLining, none of this actually matters unless your application is already right on the edge of 60 Hz performance – with v-sync on, a frame that misses the 16.7 ms refresh deadline must wait for the next refresh, so the smallest added cost can drop you straight from 60 to 30 FPS.