
Microsoft® DirectSR and AMD FidelityFX™ Super Resolution technology

Jarrett Wendt

Jarrett Wendt is the lead developer on AMD's DirectSR support and is part of the Adrenalin software team. He holds a Master of Science from the Florida Interactive Entertainment Academy.

Developer overview

During the last few years, super resolution technologies have become an expected feature of most games. AMD FidelityFX Super Resolution 1 was released in 2021, with FSR 2 and FSR 3 following in subsequent years. Each iteration improved the technology, generating more, better-quality pixels for gamers everywhere. Of course, modern game development means accounting for – and working with – the ecosystem in which games exist. For PC games, that is a reality which includes multiple super resolution technologies. For developers, this means integrating multiple APIs into their game engine, along with all the associated verification and validation of that functionality.

With the introduction of Microsoft’s DirectSR (see Microsoft’s blog), the need to integrate many super resolution APIs has been reduced. Instead, developers may integrate just one API – Microsoft DirectSR – and all super resolution technologies supported both by DirectSR natively and by the particular adapter (and its driver stack) will be enumerated for the developer, allowing them to bubble those options up to the end user with ease.

To use DirectSR, applications enumerate the available super resolution variants, using the DirectSR API; a little like how applications might enumerate monitors or display adapters using DXGI. With the list of super resolution variants, the application can then create a super resolution engine by supplying a desired variant. Some techniques exposed via DirectSR – such as AMD FidelityFX Super Resolution 2.2 – may only expose a single variant. With the engine available, the application can then create an upscaler to call the per-frame API entry points to dispatch the super resolution workloads to a desired application queue.

It’s worth discussing the application queue a bit more because it marks a departure from the standard AMD FidelityFX SDK entry points for super resolution, which record directly into a DirectX®12 command list. The choice of a queue rather than a command list allows for a wider range of devices to provide super resolution engines to DirectSR and the application, including devices that aren’t GPUs. That additional flexibility means integrating DirectSR can be a little more challenging for engines that have a single place in their design where rendering work and synchronization primitives are submitted to the GPU. With DirectSR, that place in the game engine will have to be modified to include the calls to the DirectSR per-frame entry points, synchronizing with the engine’s view of normal work submission to the GPU.

DirectSR driver stack

AMD’s implementation of DirectSR is done via a direct integration into the runtime. This is a little different from the driver-based metacommand implementations used by other super resolution engines. By doing a runtime-level implementation, AMD has been able to bring our industry-leading broad support offered by FSR 2 to virtually all users of DirectSR, meaning even more gamers can benefit from cutting-edge super resolution technologies.


// Create an IDSRDevice to allow us to enumerate available SR variants.
ID3D12DSRDeviceFactory* dsrDeviceFactory;
D3D12GetInterface(CLSID_D3D12DSRDeviceFactory, IID_PPV_ARGS(&dsrDeviceFactory));
IDSRDevice* dsrDevice;
dsrDeviceFactory->CreateDSRDevice(d3d12Device, 1, IID_PPV_ARGS(&dsrDevice));

Armed with a high-level overview of DirectSR, let’s now turn our focus to some of the specifics of the DirectSR API to see how one might integrate it into their application. Let’s start with creating a DirectSR device, as you can see in the code snippet above. This is the first step towards getting a DirectSR upscaler running. With the device in hand, we can then enumerate the DirectSR variants available on the target machine.


// Enumerate all super resolution variants available on the device.
const UINT dsrVariantCount = dsrDevice->GetNumSuperResVariants();
for (UINT currentVariantIndex = 0; currentVariantIndex < dsrVariantCount; currentVariantIndex++)
{
    DSR_SUPERRES_VARIANT_DESC variantDesc;
    dsrDevice->GetSuperResVariantDesc(currentVariantIndex, &variantDesc);
    // Use the variant desc as we see fit...
}

The loop above retrieves a structure containing a description of each variant from the device. A variant is DirectSR’s name for a particular super resolution technique; AMD FidelityFX Super Resolution 2.2, for example, would be a variant in this nomenclature. Once we have picked a variant, the next step is to create a DirectSR engine for it.
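As an illustration of that choice, here is a minimal sketch of how the ChosenVariantId used in the engine creation snippet below might be obtained. It assumes the variant description exposes VariantId and VariantName fields – check the DirectSR headers for the exact layout – and the name matching is purely illustrative; a shipping title would more likely surface the enumerated variant names to the end user and let them choose.

// Illustrative only: default to the first enumerated variant, but prefer one whose
// name mentions FidelityFX. Assumes DSR_SUPERRES_VARIANT_DESC exposes a VariantId
// (GUID) and a null-terminated VariantName string (strstr comes from <cstring>).
GUID ChosenVariantId = {};

for (UINT currentVariantIndex = 0; currentVariantIndex < dsrVariantCount; currentVariantIndex++)
{
    DSR_SUPERRES_VARIANT_DESC variantDesc;
    dsrDevice->GetSuperResVariantDesc(currentVariantIndex, &variantDesc);

    if (currentVariantIndex == 0)
    {
        ChosenVariantId = variantDesc.VariantId;
    }

    if (strstr(variantDesc.VariantName, "FidelityFX") != nullptr)
    {
        ChosenVariantId = variantDesc.VariantId;
        break;
    }
}

With a variant selected, we can create the engine and a per-frame upscaler object, as shown below.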


// Create a DirectSR engine for the desired variant.
DSR_SUPERRES_CREATE_ENGINE_PARAMETERS createParams
{
    .VariantId         = ChosenVariantId,
    .TargetFormat      = DXGI_FORMAT_R10G10B10A2_UNORM,
    .SourceColorFormat = DXGI_FORMAT_R10G10B10A2_UNORM,
    .SourceDepthFormat = DXGI_FORMAT_R32_FLOAT,
    .Flags             = DSR_SUPERRES_CREATE_ENGINE_FLAG_ALLOW_DRS,
    .MaxSourceSize     = DSR_SIZE{ targetResX, targetResY },
    .TargetSize        = DSR_SIZE{ targetResX, targetResY },
};

IDSRSuperResEngine* dsrEngine;
dsrDevice->CreateSuperResEngine(&createParams, IID_PPV_ARGS(&dsrEngine));

// Create super resolution upscaler object to use each frame.
IDSRSuperResUpscaler* dsrUpscaler;
dsrEngine->CreateUpscaler(commandQueue, IID_PPV_ARGS(&dsrUpscaler));

An important aspect of the CreateUpscaler() method is that it takes a commandQueue parameter. This implies a binding of the DirectSR upscaler to a specific queue. The rest of the API surface area does not interact with other command-related DirectX®12 structures or objects, meaning that when the time comes to have DirectSR perform upscaling, the DirectSR runtime will directly submit work to the command queue that you specified when creating the upscaler. For now, we can go ahead and look at how we use the DirectSR upscaler object that we created from the DirectSR engine, and how we will interact with it from frame to frame.


DSR_SUPERRES_UPSCALER_EXECUTE_PARAMETERS params{ /* see the next code snippet */ };
DSR_SUPERRES_UPSCALER_EXECUTE_FLAGS executeFlags =
    jumpCutOccurred ? DSR_SUPERRES_UPSCALER_EXECUTE_FLAG_RESET_HISTORY
                    : DSR_SUPERRES_UPSCALER_EXECUTE_FLAG_NONE;
dsrUpscaler->Execute(&params, frameDeltaInSeconds, executeFlags);

Interacting with the DirectSR upscaler is very straightforward and requires just three things. The first is the execute parameters, which we will explore in more detail next. The second is the frame delta, which tells DirectSR upscalers how much time has elapsed since the last frame was upscaled. Finally, we have the execution flags, which tell DirectSR upscalers about important events, such as the camera performing a jump cut, which completely invalidates all previous historical information. We glossed over the execution parameters in the snippet above, so let’s now dig into that structure a little more to see what data from our application needs to flow into a DirectSR upscaler.


DSR_SUPERRES_UPSCALER_EXECUTE_PARAMETERS params
{
    .pTargetTexture            = target,
    .TargetRegion              = D3D12_RECT{ 0, 0, targetResX, targetResY },
    .pSourceColorTexture       = pSourceColor,
    .SourceColorRegion         = D3D12_RECT{ 0, 0, sourceResX, sourceResY },
    .pSourceDepthTexture       = sourceDepth,
    .SourceDepthRegion         = D3D12_RECT{ 0, 0, sourceResX, sourceResY },
    .pMotionVectorsTexture     = motionVectors,
    .MotionVectorsRegion       = D3D12_RECT{ 0, 0, sourceResX, sourceResY },
    .MotionVectorScale         = DSR_FLOAT2{ 1.f, 1.f },
    .CameraJitter              = DSR_FLOAT2{ cameraJitterX, cameraJitterY },
    .ExposureScale             = 1.f,
    .PreExposure               = 1.f,
    .Sharpness                 = 1.f,
    .CameraNear                = 0.f,
    .CameraFar                 = INFINITY,
    .CameraFovAngleVert        = 1.f,
    .pExposureScaleTexture     = nullptr,
    .pIgnoreHistoryMaskTexture = nullptr,
    .IgnoreHistoryMaskRegion   = D3D12_RECT{ 0, 0, sourceResX, sourceResY },
    .pReactiveMaskTexture      = reactiveMask,
    .ReactiveMaskRegion        = D3D12_RECT{ 0, 0, sourceResX, sourceResY },
};

Before we pass this data to DirectSR, we – as application developers – need to ensure that all these inputs are available. That means that the GPU workload that generates them has been submitted to the GPU before calling the Execute() method on the super resolution upscaler. Most importantly, this includes the color texture, depth texture, and motion vectors at the application’s source resolution. These are typically produced by the application during its main rendering and shading phases.
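To make that ordering concrete, here is a minimal per-frame sketch, assuming the upscaler was created on the same commandQueue that the application uses for its main rendering work; mainRenderCommandList and postProcessCommandList are illustrative names. If the upscaler were created on a different queue, the application would additionally need to synchronize the two queues with a fence (ID3D12CommandQueue::Signal() and Wait()) around the Execute() call.

// 1. Submit the work that produces the upscaler inputs: the source-resolution
//    color, depth, and motion vector textures.
ID3D12CommandList* renderLists[] = { mainRenderCommandList };
commandQueue->ExecuteCommandLists(1, renderLists);

// 2. Have DirectSR submit the upscaling work to the same queue. Work on a queue
//    executes in submission order, so the inputs are ready before the upscaler
//    consumes them.
dsrUpscaler->Execute(&params, frameDeltaInSeconds, executeFlags);

// 3. Submit the work that consumes the upscaled target (post-processing, UI, present).
ID3D12CommandList* postLists[] = { postProcessCommandList };
commandQueue->ExecuteCommandLists(1, postLists);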

The target texture parameter should contain a resource which is appropriately sized to hold the upscaled result of running a DirectSR upscaler. The other numerical parameters should be relatively self-documenting, and include camera jitter, exposure scale and pre-exposure for HDR input signals, camera near, far, and vertical field of view angle, as well as a sharpness factor which controls how much additional sharpening an upscaling engine should apply to the upscaled results. The last two texture inputs – the ignore history mask and the reactive mask – are marked optional but are very important for those developers seeking to achieve the highest possible upscaling quality, and therefore warrant a little more explanation.

Just as with AMD FidelityFX Super Resolution 2.2, we strongly recommend that application developers provide the optional metadata to DirectSR implementations. In DirectSR parlance, this means having the application populate and provide the “Ignore History Mask” and “Reactive Mask” in addition to the Color, Depth and Motion Vector surfaces. These additional metadata surfaces map to FidelityFX Super Resolution 2.2’s “Transparency & Composition Mask” and “Reactivity Mask” respectively.

DirectSR inputs and outputs

To populate these masks, it is helpful to understand what they should contain. Let’s start with the reactive mask. The term “reactivity” means how much influence the samples rendered for the current frame have over the production of the final upscaled image. Typically, samples rendered for the current frame contribute a relatively modest amount to the result computed by FSR 2; however, there are exceptions. To produce the best results for fast moving, alpha-blended objects, FSR 2 requires one of the stages of its algorithm to become more reactive for such pixels. As there is no good way to determine from either color, depth, or motion vectors which pixels have been rendered using alpha blending, FSR 2 performs best when applications explicitly mark such areas. Therefore, it is strongly encouraged that applications provide a reactive mask to FSR 2.

The reactive mask guides FSR 2 on where it should reduce its reliance on historical information when compositing the current pixel, and instead allow the current frame’s samples to contribute more to the result. The reactive mask allows the application to provide a value in the range [0..1], where 0 indicates that the pixel is not at all reactive (and should use the default FSR 2 composition strategy), and a value of 1.0 indicates the pixel should be fully reactive. This is a floating-point range and can be tailored to different situations.

While the reactive mask has other uses, its primary application is improving the quality of upscaled images which include alpha-blended objects. A good proxy for reactiveness is the alpha value used when compositing an alpha-blended object into the scene, so applications should write alpha to the reactive mask. Setting reactive to alpha is just the start, and to get the best results your implementation may want to specialize this value further. Please refer to the FSR 2 documentation for more details on populating the reactive mask. It is unlikely that a reactive value close to 1 will ever produce good results, so we recommend clamping the maximum reactive value to around 0.9.
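As a small, hedged illustration of that guidance, the helper below maps the alpha used when compositing a transparent surface to a reactive value with the recommended upper clamp. In a real integration this logic would usually live in the shader that renders the alpha-blended geometry, writing the result into the reactive mask texture; the 0.9 clamp and any per-effect scaling are tuning values to be adjusted for your content.

// Illustrative mapping from composite alpha to a reactive mask value; in practice
// this would usually be evaluated in the pixel shader that renders alpha-blended
// geometry, writing its result into the reactive mask texture.
float ComputeReactiveValue(float compositeAlpha)
{
    // Values very close to 1 rarely produce good results, so clamp to ~0.9.
    const float maxReactiveValue = 0.9f;
    return (compositeAlpha < maxReactiveValue) ? compositeAlpha : maxReactiveValue;
}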

In addition to the reactive mask, FSR 2 allows the application to denote areas of other specialist rendering which should be accounted for during the upscaling process. Examples of such special rendering include areas of raytraced reflections or animated textures. While the reactive mask adjusts the accumulation balance, the ignore history mask adjusts the pixel history protection mechanisms. The mask also removes the effect of the luminance instability factor. A pixel with a value of 0 in the ignore history mask does not receive any additional modification to its lock. Conversely, a value of 1 denotes that the lock for that pixel should be completely removed.


We hope you found this blog both useful and interesting in understanding a little more about Microsoft DirectSR and how AMD FidelityFX Super Resolution 2.2 is hosted within it. There is a lot more to discover about making a successful integration of an upscaler into an application. AMD FidelityFX Super Resolution 2 provides some great documentation which explains all of the integration considerations that sit outside of a core upscaler API. These topics include mipmap biasing, jittering the camera, where to place the upscaler in your frame, how to construct motion vector textures, and much more. You can find the AMD FSR 2 documentation here.
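As a small taste of one of those topics, the sketch below computes the texture LOD (mipmap) bias recommended by the FSR 2 documentation when shading at a reduced source resolution. The variable names reuse those from the earlier snippets, and the resulting bias would be applied to the samplers used for material textures.

// Mipmap bias recommended by the FSR 2 documentation when rendering at a
// reduced source resolution (log2f comes from <cmath>):
//   mipBias = log2(sourceResolution / targetResolution) - 1.0
float mipBias = log2f(float(sourceResX) / float(targetResX)) - 1.0f;

// For example, FSR 2 "Quality" mode (1.5x per-axis scaling) gives
// log2(1 / 1.5) - 1.0 ≈ -1.58.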

AMD’s DirectSR implementation uses the AMD FSR 2.2 API from the AMD FidelityFX SDK. If you want to check out the AMD FidelityFX SDK, you can read more here. If you’re interested in frame generation technology as well as super resolution upscaling, you might want to check out the AMD FidelityFX SDK for more details on AMD FidelityFX Super Resolution 3.0 technology, and its ability to generate new frames.
