Best way to ensure a CEF based application uses higher powered GPU by default

Amar Satrawala 20 Reputation points
2026-01-14T16:54:13.5266667+00:00

Hi,

I have an application that uses CEF to render the UI and computer graphics, utilizing WebGL 2 for rendering.

Issue: On systems with multiple GPUs (e.g., integrated and dedicated), our application defaults to using the integrated GPU. This results in sub-optimal graphics performance, as the more powerful dedicated GPU is not being utilized.

Current Workaround: Users can manually set the preferred GPU for the application by navigating to Display Settings → Graphics → Custom settings for applications and selecting the dedicated GPU.

Problem with the Workaround: In previous versions of our app (which used OpenGL), the application would automatically use the dedicated GPU by default, without requiring any user action. Requiring users to manually adjust their settings is a significant UX issue for us.

Question: What is the recommended way to ensure that our application automatically uses the dedicated GPU by default, without requiring any user intervention?

Thanks in advance,

Amar



Answer accepted by question author
  1. Domic Vo 17,825 Reputation points Independent Advisor
    2026-01-15T12:49:20.0666667+00:00

    Hello Amar Satrawala,

    What you are running into is a change in how GPU selection is handled in modern Windows builds when applications use WebGL through Chromium Embedded Framework (CEF). With OpenGL contexts, Windows and the driver stack historically defaulted to the discrete GPU when one was present. With WebGL 2 via ANGLE (the translation layer used by Chromium), the GPU selection logic is different: Windows now defers to the OS graphics preference system, which defaults to the integrated GPU unless overridden. That is why your users must manually configure the app in Display Settings → Graphics.
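    As a point of reference, the Display Settings → Graphics page appears to persist its per-app choice in a per-user registry store, so an installer could pre-seed the same preference. This is a hedged sketch based on observed behavior, not a formally documented contract, and the application path below is a placeholder; verify it on the Windows builds you target:

    ```
    Windows Registry Editor Version 5.00

    ; Per-user, per-app GPU preference store written by Settings -> Graphics.
    ; "GpuPreference=2;" requests high performance; "1;" requests power saving.
    [HKEY_CURRENT_USER\Software\Microsoft\DirectX\UserGpuPreferences]
    "C:\\Program Files\\MyApp\\MyApp.exe"="GpuPreference=2;"
    ```

    Because this lives under HKEY_CURRENT_USER, it would need to be written per user (e.g., at first run) rather than once at machine-wide install time.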

    There is no supported machine-wide registry flag that forces GPU selection for every application. The supported programmatic route is to declare the preference when the graphics device is created: DXGI 1.6 (Windows 10 version 1803 and later) added IDXGIFactory6::EnumAdapterByGpuPreference, which lets an application enumerate adapters with DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE (or DXGI_GPU_PREFERENCE_MINIMUM_POWER) and create its Direct3D device on the discrete GPU. With CEF, however, Chromium/ANGLE creates the D3D device internally, so the embedding application cannot easily make that call itself.
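    For completeness, explicit adapter selection with DXGI 1.6 looks roughly like the sketch below. It assumes a process that creates its own Direct3D device (which a stock CEF app does not), targets Windows 10 1803+, and has not been compiled here, so treat it as illustrative:

    ```
    // Sketch: pick the high-performance adapter via DXGI 1.6.
    #include <dxgi1_6.h>
    #include <wrl/client.h>
    #include <cstdio>
    #pragma comment(lib, "dxgi.lib")

    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<IDXGIFactory6> factory;
        // IDXGIFactory6 is only available on Windows 10 1803 and later.
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
            return 1;

        ComPtr<IDXGIAdapter1> adapter;
        // Index 0 with HIGH_PERFORMANCE yields the most powerful adapter
        // (usually the discrete GPU) as ranked by the OS.
        if (SUCCEEDED(factory->EnumAdapterByGpuPreference(
                0, DXGI_GPU_PREFERENCE_HIGH_PERFORMANCE, IID_PPV_ARGS(&adapter)))) {
            DXGI_ADAPTER_DESC1 desc;
            adapter->GetDesc1(&desc);
            wprintf(L"Preferred adapter: %s\n", desc.Description);
            // Create your D3D11/D3D12 device on this adapter.
        }
        return 0;
    }
    ```

    Since Chromium owns device creation in a CEF app, this is mainly useful for auxiliary processes you control; for the CEF process itself, the export flags below are the practical route.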

    For applications that cannot control device creation directly, such as CEF hosts, the long-standing mechanism is a pair of export flags recognized by the GPU drivers: NVIDIA looks for an exported NvOptimusEnablement variable and AMD looks for AmdPowerXpressRequestHighPerformance. When the driver finds these exports in the launched executable, it routes the process to the discrete GPU. For example, in C++ you can declare:

    cpp

    // Exported globals checked by the NVIDIA and AMD drivers at launch.
    // They must be exported from the .exe itself, not from a DLL.
    #include <windows.h>  // DWORD

    extern "C" {
        __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
        __declspec(dllexport) int AmdPowerXpressRequestHighPerformance = 1;
    }
    

    On systems with NVIDIA Optimus or AMD switchable graphics, this causes your application to be launched on the dedicated GPU by default. Note that the symbols must be exported from the main executable; exporting them from a DLL the executable loads has no effect.

    I hope you've found something useful here. If it helps you get more insight into the issue, it's appreciated to accept the answer. Should you have more questions, feel free to leave a message. Have a nice day!

    Domic Vo.
