This adds configuration options that automate setting up NVIDIA Optimus via PRIME. This allows using the NVIDIA proprietary driver on Optimus laptops, rendering on the NVIDIA GPU while outputting to displays connected only to the integrated Intel GPU. It also adds an option that enables kernel modesetting for the NVIDIA driver (via a kernel command line flag); this is particularly useful together with Optimus/PRIME because it fixes tearing on PRIME-connected screens.
The user still needs to enable the Optimus/PRIME feature and specify the bus IDs of the Intel and NVIDIA GPUs, but this is much easier and more reliable than configuring PRIME manually. The implementation handles both generating the X configuration file and getting display managers to run the necessary `xrandr` commands just after X has started.
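For illustration, a minimal sketch of what enabling this might look like in a user's configuration; the option names under `hardware.nvidia.*` and the bus IDs shown here are assumptions made for the example, not quoted from this change:

```nix
{ config, pkgs, ... }:

{
  services.xserver.videoDrivers = [ "nvidia" ];

  # Enable the Optimus/PRIME feature (option name illustrative).
  hardware.nvidia.optimus_prime.enable = true;

  # Bus IDs of the two GPUs in the X server's "PCI:bus:device:function"
  # form; the values here are examples, find yours with lspci.
  hardware.nvidia.optimus_prime.intelBusId = "PCI:0:2:0";
  hardware.nvidia.optimus_prime.nvidiaBusId = "PCI:1:0:0";

  # Kernel modesetting for the NVIDIA driver (adds a kernel command line
  # flag); fixes tearing on PRIME-connected screens.
  hardware.nvidia.modesetting.enable = true;
}
```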
Commands to run just after X startup are configured through a new option, `services.xserver.displayManager.setupCommands`. Support for this option is implemented for LightDM, GDM, and SDDM; all of these have been tested with this feature, including logging into a Plasma session.
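As an illustration of the kind of value involved, here is a hedged sketch of `setupCommands` carrying the usual PRIME output-source commands; the provider names `modesetting` and `NVIDIA-0` are the conventional ones, not quoted from this change:

```nix
{ pkgs, ... }:

{
  # Roughly the commands that need to run right after X starts so that
  # outputs on the Intel GPU receive the NVIDIA GPU's rendering.
  services.xserver.displayManager.setupCommands = ''
    ${pkgs.xorg.xrandr}/bin/xrandr --setprovideroutputsource modesetting NVIDIA-0
    ${pkgs.xorg.xrandr}/bin/xrandr --auto
  '';
}
```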
Note: support for `setupCommands` in GDM is implemented by making GDM run the session executable via a wrapper; the wrapper runs the `setupCommands` before execing the session. This seemed like the simplest and most reliable approach, and it covers running these commands both for GDM's own X server and for user X servers (GDM starts separate X servers for itself and for user sessions). An alternative approach would be autostart files, but that seems harder to set up and less reliable.
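A rough sketch of the wrapper idea, with hypothetical names (`gdm-session-wrapper`, the use of `pkgs.writeScript`) standing in for the actual implementation:

```nix
{ config, pkgs, ... }:

let
  dmcfg = config.services.xserver.displayManager;

  # Sketch only: run the configured setupCommands, then exec the real
  # session command that GDM passes in as arguments.
  gdmSessionWrapper = pkgs.writeScript "gdm-session-wrapper" ''
    #!${pkgs.runtimeShell}
    ${dmcfg.setupCommands}
    exec "$@"
  '';
in
{
  # GDM's session handling would then start sessions through
  # gdmSessionWrapper instead of the session executable directly;
  # that wiring is omitted from this sketch.
}
```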
Note that a few simple features for X configuration file generation (in `xserver.nix`) are added and used in the implementation:
- `services.xserver.extraConfig`: Allows adding arbitrary new sections. This is used to add the Device section for the Intel GPU.
- `deviceSection` and `screenSection` within `services.xserver.drivers`: these allow the nvidia configuration module to add extra contents to the `Device` and `Screen` sections of the "nvidia" driver only, without touching those sections for other drivers that may be enabled (see the sketch below).
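A condensed, illustrative sketch of how the nvidia module might use these two hooks; identifiers, bus IDs, and options here are examples only:

```nix
{
  # Arbitrary extra section: the Device section for the Intel GPU.
  services.xserver.extraConfig = ''
    Section "Device"
      Identifier "intel-gpu"
      Driver "modesetting"
      BusID "PCI:0:2:0"
    EndSection
  '';

  # Extra contents merged only into the Device/Screen sections generated
  # for the "nvidia" driver; other attributes of the driver entry are
  # omitted from this sketch.
  services.xserver.drivers = [
    {
      name = "nvidia";
      deviceSection = ''
        BusID "PCI:1:0:0"
      '';
      screenSection = ''
        Option "AllowEmptyInitialConfiguration"
      '';
    }
  ];
}
```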
Previously all card-specific configuration was scattered across `xserver.nix` and `opengl.nix`, which was ugly. Now it can be kept together in a single card-specific module. This required the addition of a few internal options; a sketch follows the list:
- `services.xserver.drivers`: A list of `{ name, driverName, modules, libPath }` sets.
- `hardware.opengl.package`: The OpenGL implementation. Note that there can be only one OpenGL implementation at a time in a system configuration (i.e. no dynamic detection).
- `hardware.opengl.package32`: The 32-bit OpenGL implementation.
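A hedged sketch of how a card-specific module might populate these internal options; the concrete packages, outputs, and attribute paths are illustrative:

```nix
{ config, pkgs, ... }:

let
  # Stand-in for the card-specific driver package; a real module would
  # pick the package matching the configured kernel and driver version.
  videoDriver = config.boot.kernelPackages.nvidia_x11;
in
{
  services.xserver.drivers = [
    {
      name = "nvidia";                 # name used for the X driver entry
      driverName = "nvidia";           # actual X driver name, if different
      modules = [ videoDriver.bin ];   # X modules to load (output name illustrative)
      libPath = [ videoDriver ];       # libraries added for the X server (illustrative)
    }
  ];

  # Exactly one OpenGL implementation per system configuration, plus its
  # optional 32-bit counterpart; values here are illustrative.
  hardware.opengl.package = videoDriver;
  hardware.opengl.package32 = videoDriver.lib32 or videoDriver;
}
```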