Merge branch 'master' of https://github.com/SuperSamus/nixpkgs

commit 56eb515ff7
1355 changed files with 50966 additions and 20555 deletions

.github/CODEOWNERS (vendored, 6 changed lines)

@ -97,9 +97,9 @@
/pkgs/top-level/haskell-packages.nix @cdepillabout @sternenseemann @maralorn @expipiplus1

# Perl
/pkgs/development/interpreters/perl @volth @stigtsp
/pkgs/top-level/perl-packages.nix @volth @stigtsp
/pkgs/development/perl-modules @volth @stigtsp
/pkgs/development/interpreters/perl @volth @stigtsp @zakame
/pkgs/top-level/perl-packages.nix @volth @stigtsp @zakame
/pkgs/development/perl-modules @volth @stigtsp @zakame

# R
/pkgs/applications/science/math/R @jbedo @bcdarwin
@ -158,7 +158,23 @@ This can be overridden.

By default, Agda sources are files ending in `.agda`, or literate Agda files ending in `.lagda`, `.lagda.tex`, `.lagda.org`, `.lagda.md`, or `.lagda.rst`. The list of recognised Agda source extensions can be extended by setting the `extraExtensions` config variable.

## Adding Agda packages to Nixpkgs {#adding-agda-packages-to-nixpkgs}
## Maintaining the Agda package set on Nixpkgs {#maintaining-the-agda-package-set-on-nixpkgs}

We are aiming at providing all common Agda libraries as packages on `nixpkgs`,
and keeping them up to date.
Contributions and maintenance help are always appreciated,
but the maintenance effort is typically low since the Agda ecosystem is quite small.

The `nixpkgs` Agda package set tries to take up a role similar to that of [Stackage](https://www.stackage.org/) in the Haskell world.
It is a curated set of libraries that:

1. Always work together.
2. Are as up-to-date as possible.

While the Haskell ecosystem is huge, and Stackage is highly automated,
the Agda package set is small and can (still) be maintained by hand.

### Adding Agda packages to Nixpkgs {#adding-agda-packages-to-nixpkgs}

To add an Agda package to `nixpkgs`, the derivation should be written to `pkgs/development/libraries/agda/${library-name}/` and an entry should be added to `pkgs/top-level/agda-packages.nix`. Here it is called in a scope with access to all other Agda libraries, so the top line of the `default.nix` can look like:
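For instance, a library that only depends on the standard library could start out like this (a minimal sketch; the package name and the remaining attributes are placeholders):

```nix
{ mkDerivation, standard-library }:

mkDerivation {
  pname = "my-agda-library"; # hypothetical name
  version = "1.0";
  # src = ...;
  buildInputs = [ standard-library ];
}
```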
@ -192,3 +208,49 @@ mkDerivation {

This library has a file called `.agda-lib`, and so we give an empty string to `libraryFile` as nothing precedes `.agda-lib` in the filename. This file contains `name: IAL-1.3`, and so we let `libraryName = "IAL-1.3"`. This library does not use an `Everything.agda` file and instead has a Makefile, so there is no need to set `everythingFile` and we set a custom `buildPhase`.
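Putting that together, the derivation could look roughly like this (a sketch reconstructed from the description above; the `src` coordinates and the exact `buildPhase` body are placeholders):

```nix
{ mkDerivation, fetchFromGitHub }:

mkDerivation {
  version = "1.3";
  pname = "iowa-stdlib";

  src = fetchFromGitHub {
    # placeholder coordinates
    owner = "...";
    repo = "...";
    rev = "...";
    sha256 = "...";
  };

  libraryFile = "";
  libraryName = "IAL-1.3";

  # The library ships a Makefile instead of an Everything.agda file.
  buildPhase = ''
    make
  '';
}
```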

When writing an Agda package it is essential to make sure that no `.agda-lib` file gets added to the store as a single file (for example by using `writeText`). This causes Agda to think that the nix store is an Agda library and it will attempt to write to it whenever it typechecks something. See [https://github.com/agda/agda/issues/4613](https://github.com/agda/agda/issues/4613).

In the pull request adding this library,
you can test whether it builds correctly by writing in a comment:

```
@ofborg build agdaPackages.iowa-stdlib
```

### Maintaining Agda packages

As mentioned before, the aim is to have a compatible and up-to-date package set.
These two conditions sometimes exclude each other:
For example, if we update `agdaPackages.standard-library` because there was an upstream release,
this will typically break many reverse dependencies,
i.e. downstream Agda libraries that depend on the standard library.
In `nixpkgs` we are typically among the first to notice this,
since we have build tests in place to check for this.

In a pull request updating e.g. the standard library, you should write the following comment:

```
@ofborg build agdaPackages.standard-library.passthru.tests
```

This will build all reverse dependencies of the standard library,
for example `agdaPackages.agda-categories`, or `agdaPackages.generic`.

In some cases it is useful to build _all_ Agda packages.
This can be done with the following GitHub comment:

```
@ofborg build agda.passthru.tests.allPackages
```

Sometimes, the builds of the reverse dependencies fail because they have not yet been updated and released.
You should drop the maintainers a quick issue notifying them of the breakage,
citing the build error (which you can get from the ofborg logs).
If you are motivated, you might even send a pull request that fixes it.
Usually, the maintainers will answer within a week or two with a new release.
Bumping the version of that reverse dependency should be a further commit on your PR.

In the rare case that a new release is not to be expected within an acceptable time,
simply mark the broken package as broken by setting `meta.broken = true;`.
This will exclude it from the build test.
It can be added later when it is fixed,
and does not hinder the advancement of the whole package set in the meantime.
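In that case the change is a one-liner in the affected library's `default.nix` (a sketch; the surrounding attributes are whatever the package already defines):

```nix
mkDerivation {
  # ... existing attributes of the broken reverse dependency ...
  meta.broken = true; # excluded from the build test until a compatible release exists
}
```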

@ -28,8 +28,7 @@ mkShell {
  packages = [
    (with dotnetCorePackages; combinePackages [
      sdk_3_1
      sdk_3_0
      sdk_2_1
      sdk_5_0
    ])
  ];
}

@ -64,12 +63,46 @@ $ dotnet --info

The `dotnetCorePackages.sdk_X_Y` is preferred over the old dotnet-sdk as both the major and minor version are very important for a dotnet environment. If a given minor version isn't present (or was changed), then this will likely break your ability to build a project.
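If only a single SDK is needed, a shell can simply pin that exact version (a minimal sketch using one of the versions shown above):

```nix
with import <nixpkgs> {};

mkShell {
  packages = [
    dotnetCorePackages.sdk_3_1
  ];
}
```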

## dotnetCorePackages.sdk vs dotnetCorePackages.net vs dotnetCorePackages.netcore vs dotnetCorePackages.aspnetcore {#dotnetcorepackages.sdk-vs-dotnetcorepackages.net-vs-dotnetcorepackages.netcore-vs-dotnetcorepackages.aspnetcore}
## dotnetCorePackages.sdk vs dotnetCorePackages.runtime vs dotnetCorePackages.aspnetcore {#dotnetcorepackages.sdk-vs-dotnetcorepackages.runtime-vs-dotnetcorepackages.aspnetcore}

The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given version. The `net`, `netcore` and `aspnetcore` packages are meant to serve as minimal runtimes to deploy alongside already built applications. For runtime versions >= .NET 5, `net` is used, while `netcore` is used for older .NET Core runtime versions.
The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given version. The `runtime` and `aspnetcore` packages are meant to serve as minimal runtimes to deploy alongside already built applications.

## Packaging a Dotnet Application {#packaging-a-dotnet-application}

Ideally, we would like to build against the sdk, then only have the dotnet runtime available in the runtime closure.
To package Dotnet applications, you can use `buildDotnetModule`. This has similar arguments to `stdenv.mkDerivation`, with the following additions:

TODO: Create closure-friendly way to package dotnet applications
* `projectFile` has to be used for specifying the dotnet project file, relative to the source root. These usually have `.sln` or `.csproj` file extensions.
* `nugetDeps` has to be used to specify the NuGet dependency file. Unfortunately, these cannot be deterministically fetched without a lockfile. This file should be generated using the `nuget-to-nix` tool, which is available in nixpkgs.
* `executables` is used to specify which executables get wrapped to `$out/bin`, relative to `$out/lib/$pname`. If this is unset, all executables generated will get installed. If you do not want to install any, set this to `[]`.
* `runtimeDeps` is used to wrap libraries into `LD_LIBRARY_PATH`. This is how dotnet usually handles runtime dependencies.
* `buildType` is used to change the type of build. Possible values are `Release`, `Debug`, etc. By default, this is set to `Release`.
* `dotnet-sdk` is useful in cases where you need to change what dotnet SDK is being used.
* `dotnet-runtime` is useful in cases where you need to change what dotnet runtime is being used.
* `dotnetRestoreFlags` can be used to pass flags to `dotnet restore`.
* `dotnetBuildFlags` can be used to pass flags to `dotnet build`.
* `dotnetInstallFlags` can be used to pass flags to `dotnet install`.
* `dotnetFlags` can be used to pass flags to all of the above phases.

Here is an example `default.nix`, using some of the previously discussed arguments:

```nix
{ lib, buildDotnetModule, dotnetCorePackages, ffmpeg }:

buildDotnetModule rec {
  pname = "someDotnetApplication";
  version = "0.1";

  src = ./.;

  projectFile = "src/project.sln";
  nugetDeps = ./deps.nix; # File generated with `nuget-to-nix path/to/src > deps.nix`.

  dotnet-sdk = dotnetCorePackages.sdk_3_1;
  dotnet-runtime = dotnetCorePackages.net_5_0;
  dotnetFlags = [ "--runtime linux-x64" ];

  executables = [ "foo" ]; # This wraps "$out/lib/$pname/foo" to `$out/bin/foo`.
  executables = []; # Don't install any executables.

  runtimeDeps = [ ffmpeg ]; # This will wrap ffmpeg's library path into `LD_LIBRARY_PATH`.
}
```

@ -12,6 +12,7 @@
<xi:include href="coq.section.xml" />
<xi:include href="crystal.section.xml" />
<xi:include href="dhall.section.xml" />
<xi:include href="dotnet.section.xml" />
<xi:include href="emscripten.section.xml" />
<xi:include href="gnome.section.xml" />
<xi:include href="go.section.xml" />

@ -237,22 +237,6 @@ where they are known to differ. But there are ways to customize the argument:
 --target /nix/store/asdfasdfsadf-thumb-crazy.json # contains {"foo":"","bar":""}
```

Finally, as an ad-hoc escape hatch, a computed target (string or JSON file
path) can be passed directly to `buildRustPackage`:

```nix
pkgs.rustPlatform.buildRustPackage {
  /* ... */
  target = "x86_64-fortanix-unknown-sgx";
}
```

This is useful to avoid rebuilding Rust tools, since they are actually target
agnostic and don't need to be rebuilt. But in the future, we should always
build the Rust tools and standard library crates separately so there is no
reason not to take the `stdenv.hostPlatform.rustc`-modifying approach, and the
ad-hoc escape hatch to `buildRustPackage` can be removed.

Note that currently custom targets aren't compiled with `std`, so `cargo test`
will fail. This can be ignored by adding `doCheck = false;` to your derivation.
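Building on the example above, that looks like this (a minimal sketch):

```nix
pkgs.rustPlatform.buildRustPackage {
  /* ... */
  target = "x86_64-fortanix-unknown-sgx";
  doCheck = false; # custom targets are not compiled with `std`, so `cargo test` would fail
}
```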
@ -153,6 +153,11 @@ in mkLicense lset) ({
    free = false;
  };

  capec = {
    fullName = "Common Attack Pattern Enumeration and Classification";
    url = "https://capec.mitre.org/about/termsofuse.html";
  };

  clArtistic = {
    spdxId = "ClArtistic";
    fullName = "Clarified Artistic License";

@ -303,7 +303,26 @@ rec {
  # TODO: figure out a clever way to integrate location information from
  # something like __unsafeGetAttrPos.

  warn = msg: builtins.trace "[1;31mwarning: ${msg}[0m";
  /*
    Print a warning before returning the second argument. This function behaves
    like `builtins.trace`, but requires a string message and formats it as a
    warning, including the `warning: ` prefix.

    To get a call stack trace and abort evaluation, set the environment variable
    `NIX_ABORT_ON_WARN=true` and set the Nix options `--option pure-eval false --show-trace`

    Type: string -> a -> a
  */
  warn =
    if lib.elem (builtins.getEnv "NIX_ABORT_ON_WARN") ["1" "true" "yes"]
    then msg: builtins.trace "[1;31mwarning: ${msg}[0m" (abort "NIX_ABORT_ON_WARN=true; warnings are treated as unrecoverable errors.")
    else msg: builtins.trace "[1;31mwarning: ${msg}[0m";

  /*
    Like warn, but only warn when the first argument is `true`.

    Type: bool -> string -> a -> a
  */
  warnIf = cond: msg: if cond then warn msg else id;

  info = msg: builtins.trace "INFO: ${msg}";
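As a quick illustration, the helpers above can be used like this (a sketch; the condition and message are hypothetical):

```nix
let
  lib = import <nixpkgs/lib>;
  usesLegacyOption = true; # hypothetical condition
in
  # Evaluates to 42 and prints the formatted warning when the condition holds.
  lib.warnIf usesLegacyOption "the legacy option is deprecated" 42
```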
@ -440,6 +440,12 @@
|
|||
githubId = 173595;
|
||||
name = "Caleb Maclennan";
|
||||
};
|
||||
ALEX11BR = {
|
||||
email = "alexioanpopa11@gmail.com";
|
||||
github = "ALEX11BR";
|
||||
githubId = 49609151;
|
||||
name = "Popa Ioan Alexandru";
|
||||
};
|
||||
alexarice = {
|
||||
email = "alexrice999@hotmail.co.uk";
|
||||
github = "alexarice";
|
||||
|
@ -896,6 +902,12 @@
|
|||
githubId = 1296771;
|
||||
name = "Anders Riutta";
|
||||
};
|
||||
arkivm = {
|
||||
email = "vikram186@gmail.com";
|
||||
github = "arkivm";
|
||||
githubId = 1118815;
|
||||
name = "Vikram Narayanan";
|
||||
};
|
||||
armijnhemel = {
|
||||
email = "armijn@tjaldur.nl";
|
||||
github = "armijnhemel";
|
||||
|
@ -1185,6 +1197,12 @@
|
|||
email = "sivaraman.balaji@gmail.com";
|
||||
name = "Balaji Sivaraman";
|
||||
};
|
||||
balodja = {
|
||||
email = "balodja@gmail.com";
|
||||
github = "balodja";
|
||||
githubId = 294444;
|
||||
name = "Vladimir Korolev";
|
||||
};
|
||||
baloo = {
|
||||
email = "nixpkgs@superbaloo.net";
|
||||
github = "baloo";
|
||||
|
@ -1909,6 +1927,12 @@
|
|||
email = "me@philscotted.com";
|
||||
name = "Phil Scott";
|
||||
};
|
||||
chekoopa = {
|
||||
email = "chekoopa@mail.ru";
|
||||
github = "chekoopa";
|
||||
githubId = 1689801;
|
||||
name = "Mikhail Chekan";
|
||||
};
|
||||
ChengCat = {
|
||||
email = "yu@cheng.cat";
|
||||
github = "ChengCat";
|
||||
|
@ -2078,6 +2102,16 @@
|
|||
githubId = 25088352;
|
||||
name = "Christian Kögler";
|
||||
};
|
||||
ckie = {
|
||||
email = "nixpkgs-0efe364@ckie.dev";
|
||||
github = "ckiee";
|
||||
githubId = 2526321;
|
||||
keys = [{
|
||||
longkeyid = "rsa4096/0x13E79449C0525215";
|
||||
fingerprint = "539F 0655 4D35 38A5 429A E253 13E7 9449 C052 5215";
|
||||
}];
|
||||
name = "ckie";
|
||||
};
|
||||
clkamp = {
|
||||
email = "c@lkamp.de";
|
||||
github = "clkamp";
|
||||
|
@ -2409,6 +2443,12 @@
|
|||
githubId = 4331004;
|
||||
name = "Naoya Hatta";
|
||||
};
|
||||
dalpd = {
|
||||
email = "denizalpd@ogr.iu.edu.tr";
|
||||
github = "dalpd";
|
||||
githubId = 16895361;
|
||||
name = "Deniz Alp Durmaz";
|
||||
};
|
||||
DamienCassou = {
|
||||
email = "damien@cassou.me";
|
||||
github = "DamienCassou";
|
||||
|
@ -4054,6 +4094,12 @@
|
|||
githubId = 16470252;
|
||||
name = "Gemini Lasswell";
|
||||
};
|
||||
gbtb = {
|
||||
email = "goodbetterthebeast3@gmail.com";
|
||||
github = "gbtb";
|
||||
githubId = 37017396;
|
||||
name = "gbtb";
|
||||
};
|
||||
gebner = {
|
||||
email = "gebner@gebner.org";
|
||||
github = "gebner";
|
||||
|
@ -4381,6 +4427,16 @@
|
|||
githubId = 54728477;
|
||||
name = "Happy River";
|
||||
};
|
||||
hardselius = {
|
||||
email = "martin@hardselius.dev";
|
||||
github = "hardselius";
|
||||
githubId = 1422583;
|
||||
name = "Martin Hardselius";
|
||||
keys = [{
|
||||
longkeyid = "rsa4096/0x03A6E6F786936619";
|
||||
fingerprint = "3F35 E4CA CBF4 2DE1 2E90 53E5 03A6 E6F7 8693 6619";
|
||||
}];
|
||||
};
|
||||
haslersn = {
|
||||
email = "haslersn@fius.informatik.uni-stuttgart.de";
|
||||
github = "haslersn";
|
||||
|
@ -5144,6 +5200,12 @@
|
|||
githubId = 117874;
|
||||
name = "Jeroen de Haas";
|
||||
};
|
||||
jdreaver = {
|
||||
email = "johndreaver@gmail.com";
|
||||
github = "jdreaver";
|
||||
githubId = 1253071;
|
||||
name = "David Reaver";
|
||||
};
|
||||
jduan = {
|
||||
name = "Jingjing Duan";
|
||||
email = "duanjingjing@gmail.com";
|
||||
|
@ -5443,6 +5505,12 @@
|
|||
githubId = 8735102;
|
||||
name = "John Ramsden";
|
||||
};
|
||||
johnrichardrinehart = {
|
||||
email = "johnrichardrinehart@gmail.com";
|
||||
github = "johnrichardrinehart";
|
||||
githubId = 6321578;
|
||||
name = "John Rinehart";
|
||||
};
|
||||
johntitor = {
|
||||
email = "huyuumi.dev@gmail.com";
|
||||
github = "JohnTitor";
|
||||
|
@ -6488,6 +6556,12 @@
|
|||
githubId = 791115;
|
||||
name = "Linquize";
|
||||
};
|
||||
linsui = {
|
||||
email = "linsui555@gmail.com";
|
||||
github = "linsui";
|
||||
githubId = 36977733;
|
||||
name = "linsui";
|
||||
};
|
||||
linus = {
|
||||
email = "linusarver@gmail.com";
|
||||
github = "listx";
|
||||
|
@ -8296,6 +8370,17 @@
|
|||
githubId = 127548;
|
||||
name = "Judson Lester";
|
||||
};
|
||||
nzbr = {
|
||||
email = "nixos@nzbr.de";
|
||||
github = "nzbr";
|
||||
githubId = 7851175;
|
||||
name = "nzbr";
|
||||
matrix = "@nzbr:nzbr.de";
|
||||
keys = [{
|
||||
longkeyid = "rsa2048/0x6C78B50B97A42F8A";
|
||||
fingerprint = "BF3A 3EE6 3144 2C5F C9FB 39A7 6C78 B50B 97A4 2F8A";
|
||||
}];
|
||||
};
|
||||
nzhang-zh = {
|
||||
email = "n.zhang.hp.au@gmail.com";
|
||||
github = "nzhang-zh";
|
||||
|
@ -8658,6 +8743,12 @@
|
|||
githubId = 13225611;
|
||||
name = "Nicolas Martin";
|
||||
};
|
||||
pennae = {
|
||||
name = "pennae";
|
||||
email = "github@quasiparticle.net";
|
||||
github = "pennae";
|
||||
githubId = 82953136;
|
||||
};
|
||||
p3psi = {
|
||||
name = "Elliot Boo";
|
||||
email = "p3psi.boo@gmail.com";
|
||||
|
@ -9654,16 +9745,6 @@
|
|||
githubId = 1312525;
|
||||
name = "Rongcui Dong";
|
||||
};
|
||||
ronthecookie = {
|
||||
name = "Ron B";
|
||||
email = "me@ronthecookie.me";
|
||||
github = "ronthecookie";
|
||||
githubId = 2526321;
|
||||
keys = [{
|
||||
longkeyid = "rsa2048/0x6F5B32DE5E5FA80C";
|
||||
fingerprint = "4B2C DDA5 FA35 642D 956D 7294 6F5B 32DE 5E5F A80C";
|
||||
}];
|
||||
};
|
||||
roosemberth = {
|
||||
email = "roosembert.palacios+nixpkgs@posteo.ch";
|
||||
github = "roosemberth";
|
||||
|
@ -10134,6 +10215,12 @@
|
|||
githubId = 307899;
|
||||
name = "Gurkan Gur";
|
||||
};
|
||||
sersorrel = {
|
||||
email = "ash@sorrel.sh";
|
||||
github = "sersorrel";
|
||||
githubId = 9433472;
|
||||
name = "ash";
|
||||
};
|
||||
servalcatty = {
|
||||
email = "servalcat@pm.me";
|
||||
github = "servalcatty";
|
||||
|
@ -10353,6 +10440,12 @@
|
|||
fingerprint = "B234 EFD4 2B42 FE81 EE4D 7627 F72C 4A88 7F9A 24CA";
|
||||
}];
|
||||
};
|
||||
sirseruju = {
|
||||
email = "sir.seruju@yandex.ru";
|
||||
github = "sirseruju";
|
||||
githubId = 74881555;
|
||||
name = "Fofanov Sergey";
|
||||
};
|
||||
sivteck = {
|
||||
email = "sivaram1992@gmail.com";
|
||||
github = "sivteck";
|
||||
|
@ -10448,6 +10541,13 @@
|
|||
githubId = 4477729;
|
||||
name = "Sergey Mironov";
|
||||
};
|
||||
smitop = {
|
||||
name = "Smitty van Bodegom";
|
||||
email = "me@smitop.com";
|
||||
matrix = "@smitop:kde.org";
|
||||
github = "Smittyvb";
|
||||
githubId = 10530973;
|
||||
};
|
||||
sna = {
|
||||
email = "abouzahra.9@wright.edu";
|
||||
github = "s-na";
|
||||
|
|
|
@ -33,8 +33,7 @@ TMP_FILE="$(mktemp)"
|
|||
GENERATED_NIXFILE="pkgs/development/lua-modules/generated-packages.nix"
|
||||
LUAROCKS_CONFIG="$NIXPKGS_PATH/maintainers/scripts/luarocks-config.lua"
|
||||
|
||||
HEADER = """
|
||||
/* {GENERATED_NIXFILE} is an auto-generated file -- DO NOT EDIT!
|
||||
HEADER = """/* {GENERATED_NIXFILE} is an auto-generated file -- DO NOT EDIT!
|
||||
Regenerate it with:
|
||||
nixpkgs$ ./maintainers/scripts/update-luarocks-packages
|
||||
|
||||
|
@ -99,9 +98,8 @@ class LuaEditor(Editor):
|
|||
header2 = textwrap.dedent(
|
||||
# header2 = inspect.cleandoc(
|
||||
"""
|
||||
{ self, stdenv, lib, fetchurl, fetchgit, ... } @ args:
|
||||
self: super:
|
||||
with self;
|
||||
{ self, stdenv, lib, fetchurl, fetchgit, callPackage, ... } @ args:
|
||||
final: prev:
|
||||
{
|
||||
""")
|
||||
f.write(header2)
|
||||
|
@ -199,6 +197,7 @@ def generate_pkg_nix(plug: LuaPlugin):
|
|||
|
||||
log.debug("running %s", ' '.join(cmd))
|
||||
output = subprocess.check_output(cmd, text=True)
|
||||
output = "callPackage(" + output.strip() + ") {};\n\n"
|
||||
return (plug, output)
|
||||
|
||||
def main():
|
||||
|
|
|
@ -137,7 +137,7 @@ with lib.maintainers; {
|
|||
cleverca22
|
||||
disassembler
|
||||
jonringer
|
||||
maveru
|
||||
manveru
|
||||
nrdxp
|
||||
];
|
||||
scope = "Input-Output Global employees, which maintain critical software";
|
||||
|
|
|
@ -55,6 +55,11 @@
|
|||
actions.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
KDE Plasma now finally works on Wayland.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
bash now defaults to major version 5.
|
||||
|
@ -81,6 +86,13 @@
|
|||
6</link> for more details.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
GNOME has been upgraded to 41. Please take a look at their
|
||||
<link xlink:href="https://help.gnome.org/misc/release-notes/41.0/">Release
|
||||
Notes</link> for details.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="sec-release-21.11-new-services">
|
||||
|
@ -269,6 +281,14 @@
|
|||
<link linkend="opt-services.postfixadmin.enable">postfixadmin</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://wiki.servarr.com/prowlarr">prowlarr</link>,
|
||||
an indexer manager/proxy built on the popular arr .net/reactjs
|
||||
base stack
|
||||
<link linkend="opt-services.prowlarr.enable">services.prowlarr</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://sr.ht/~emersion/soju">soju</link>, a
|
||||
|
@ -329,11 +349,25 @@
|
|||
controller support.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<link xlink:href="https://github.com/opensvc/multipath-tools">multipath</link>,
|
||||
the device mapper multipath (DM-MP) daemon. Available as
|
||||
<link linkend="opt-services.multipath.enable">services.multipath</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="sec-release-21.11-incompatibilities">
|
||||
<title>Backward Incompatibilities</title>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>services.wakeonlan</literal> option was removed,
|
||||
and replaced with
|
||||
<literal>networking.interfaces.<name>.wakeOnLan</literal>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>security.wrappers</literal> option now requires
|
||||
|
@ -1077,6 +1111,23 @@ Superuser created successfully.
|
|||
functionality.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>services.xserver.displayManager.defaultSession = "plasma5"</literal>
|
||||
does not work anymore, instead use either
|
||||
<literal>"plasma"</literal> for the Plasma X11
|
||||
session or <literal>"plasmawayland"</literal> for
|
||||
the Plasma Wayland session.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>boot.kernelParams</literal> now only accepts one
|
||||
command line parameter per string. This change is aimed to
|
||||
reduce common mistakes like <quote>param = 12</quote>, which
|
||||
would be parsed as 3 parameters.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="sec-release-21.11-notable-changes">
|
||||
|
@ -1477,6 +1528,73 @@ Superuser created successfully.
|
|||
<literal>/etc/xdg/mimeapps.list</literal>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Kopia was upgraded from 0.8.x to 0.9.x. Please read the
|
||||
<link xlink:href="https://github.com/kopia/kopia/releases/tag/v0.9.0">upstream
|
||||
release notes</link> for changes and upgrade instructions.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>systemd.network</literal> module has gained
|
||||
support for the FooOverUDP link type.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>networking</literal> module has a new
|
||||
<literal>networking.fooOverUDP</literal> option to configure
|
||||
Foo-over-UDP encapsulations.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>networking.sits</literal> now supports Foo-over-UDP
|
||||
encapsulation.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Changing systemd <literal>.socket</literal> units now restarts
|
||||
them and stops the service that is activated by them.
|
||||
Additionally, services with
|
||||
<literal>stopOnChange = false</literal> don’t break anymore
|
||||
when they are socket-activated.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>virtualisation.libvirtd</literal> module has been
|
||||
refactored and updated with new options:
|
||||
</para>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>virtualisation.libvirtd.qemu*</literal> options
|
||||
(e.g.:
|
||||
<literal>virtualisation.libvirtd.qemuRunAsRoot</literal>)
|
||||
were moved to
|
||||
<link xlink:href="options.html#opt-virtualisation.libvirtd.qemu"><literal>virtualisation.libvirtd.qemu</literal></link>
|
||||
submodule,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
software TPM1/TPM2 support (e.g.: Windows 11 guests)
|
||||
(<link xlink:href="options.html#opt-virtualisation.libvirtd.qemu.swtpm"><literal>virtualisation.libvirtd.qemu.swtpm</literal></link>),
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
custom OVMF package (e.g.:
|
||||
<literal>pkgs.OVMFFull</literal> with HTTP, CSM and Secure
|
||||
Boot support)
|
||||
(<link xlink:href="options.html#opt-virtualisation.libvirtd.qemu.ovmf.package"><literal>virtualisation.libvirtd.qemu.ovmf.package</literal></link>).
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
</section>
|
||||
|
|
|
@ -6,7 +6,7 @@
|
|||
# into DocBook files in the from_md folder.
|
||||
|
||||
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
|
||||
pushd $DIR
|
||||
pushd "$DIR"
|
||||
|
||||
# NOTE: Keep in sync with Nixpkgs manual (/doc/Makefile).
|
||||
# TODO: Remove raw-attribute when we can get rid of DocBook altogether.
|
||||
|
@ -29,7 +29,7 @@ mapfile -t MD_FILES < <(find . -type f -regex '.*\.md$')
|
|||
|
||||
for mf in ${MD_FILES[*]}; do
|
||||
if [ "${mf: -11}" == ".section.md" ]; then
|
||||
mkdir -p $(dirname "$OUT/$mf")
|
||||
mkdir -p "$(dirname "$OUT/$mf")"
|
||||
OUTFILE="$OUT/${mf%".section.md"}.section.xml"
|
||||
pandoc "$mf" "${pandoc_flags[@]}" \
|
||||
-o "$OUTFILE"
|
||||
|
@ -37,7 +37,7 @@ for mf in ${MD_FILES[*]}; do
|
|||
fi
|
||||
|
||||
if [ "${mf: -11}" == ".chapter.md" ]; then
|
||||
mkdir -p $(dirname "$OUT/$mf")
|
||||
mkdir -p "$(dirname "$OUT/$mf")"
|
||||
OUTFILE="$OUT/${mf%".chapter.md"}.chapter.xml"
|
||||
pandoc "$mf" "${pandoc_flags[@]}" \
|
||||
--top-level-division=chapter \
|
||||
|
|
|

@ -20,6 +20,8 @@ In addition to numerous new and upgraded packages, this release has the followin
  This allows activation scripts to output what they would change if the activation was really run.
  The users/modules activation script supports this and outputs some of its actions.

- KDE Plasma now finally works on Wayland.

- bash now defaults to major version 5.

- Systemd was updated to version 249 (from 247).

@ -28,6 +30,8 @@ In addition to numerous new and upgraded packages, this release has the followin

- `kubernetes-helm` now defaults to 3.7.0, which introduced some breaking changes to the experimental OCI manifest format. See [HIP 6](https://github.com/helm/community/blob/main/hips/hip-0006.md) for more details.

- GNOME has been upgraded to 41. Please take a look at their [Release Notes](https://help.gnome.org/misc/release-notes/41.0/) for details.

## New Services {#sec-release-21.11-new-services}

- [btrbk](https://digint.ch/btrbk/index.html), a backup tool for btrfs subvolumes, taking advantage of btrfs specific capabilities to create atomic snapshots and transfer them incrementally to your backup locations. Available as [services.btrbk](options.html#opt-services.brtbk.instances).

@ -82,6 +86,8 @@ In addition to numerous new and upgraded packages, this release has the followin

- [postfixadmin](https://postfixadmin.sourceforge.io/), a web based virtual user administration interface for Postfix mail servers. Available as [postfixadmin](#opt-services.postfixadmin.enable).

- [prowlarr](https://wiki.servarr.com/prowlarr), an indexer manager/proxy built on the popular arr .net/reactjs base stack [services.prowlarr](#opt-services.prowlarr.enable).

- [soju](https://sr.ht/~emersion/soju), a user-friendly IRC bouncer. Available as [services.soju](options.html#opt-services.soju.enable).

- [nats](https://nats.io/), a high performance cloud and edge messaging system. Available as [services.nats](#opt-services.nats.enable).

@ -101,8 +107,12 @@ In addition to numerous new and upgraded packages, this release has the followin

- [joycond](https://github.com/DanielOgorchock/joycond), a service that uses `hid-nintendo` to provide nintendo joycond pairing and better nintendo switch pro controller support.

- [multipath](https://github.com/opensvc/multipath-tools), the device mapper multipath (DM-MP) daemon. Available as [services.multipath](#opt-services.multipath.enable).

## Backward Incompatibilities {#sec-release-21.11-incompatibilities}

- The `services.wakeonlan` option was removed, and replaced with `networking.interfaces.<name>.wakeOnLan`.

- The `security.wrappers` option now requires you to always specify an owner, group and whether the setuid/setgid bit should be set.
  This is motivated by the fact that before NixOS 21.11, specifying either setuid or setgid but not owner/group resulted in wrappers owned by nobody/nogroup, which is unsafe.

@ -334,6 +344,9 @@ In addition to numerous new and upgraded packages, this release has the followin
  configuration file. For details, see the [upstream changelog](https://github.com/DataDog/datadog-agent/blob/main/CHANGELOG.rst).

- `opencv2` no longer includes the non-free libraries by default, and consequently `pfstools` no longer includes OpenCV support by default. Both packages now support an `enableUnfree` option to re-enable this functionality.
- `services.xserver.displayManager.defaultSession = "plasma5"` does not work anymore, instead use either `"plasma"` for the Plasma X11 session or `"plasmawayland"` for the Plasma Wayland session.

- `boot.kernelParams` now only accepts one command line parameter per string. This change is aimed at reducing common mistakes like "param = 12", which would be parsed as 3 parameters.

## Other Notable Changes {#sec-release-21.11-notable-changes}

@ -428,3 +441,18 @@ In addition to numerous new and upgraded packages, this release has the followin
  directories, thus increasing the purity of the build.

- Three new options, [xdg.mime.addedAssociations](#opt-xdg.mime.addedAssociations), [xdg.mime.defaultApplications](#opt-xdg.mime.defaultApplications), and [xdg.mime.removedAssociations](#opt-xdg.mime.removedAssociations) have been added to the [xdg.mime](#opt-xdg.mime.enable) module to allow the configuration of `/etc/xdg/mimeapps.list`.

- Kopia was upgraded from 0.8.x to 0.9.x. Please read the [upstream release notes](https://github.com/kopia/kopia/releases/tag/v0.9.0) for changes and upgrade instructions.

- The `systemd.network` module has gained support for the FooOverUDP link type.

- The `networking` module has a new `networking.fooOverUDP` option to configure Foo-over-UDP encapsulations.

- `networking.sits` now supports Foo-over-UDP encapsulation.

- Changing systemd `.socket` units now restarts them and stops the service that is activated by them. Additionally, services with `stopOnChange = false` don't break anymore when they are socket-activated.

- The `virtualisation.libvirtd` module has been refactored and updated with new options:
  - `virtualisation.libvirtd.qemu*` options (e.g.: `virtualisation.libvirtd.qemuRunAsRoot`) were moved to the [`virtualisation.libvirtd.qemu`](options.html#opt-virtualisation.libvirtd.qemu) submodule,
  - software TPM1/TPM2 support (e.g.: Windows 11 guests) ([`virtualisation.libvirtd.qemu.swtpm`](options.html#opt-virtualisation.libvirtd.qemu.swtpm)),
  - custom OVMF package (e.g.: `pkgs.OVMFFull` with HTTP, CSM and Secure Boot support) ([`virtualisation.libvirtd.qemu.ovmf.package`](options.html#opt-virtualisation.libvirtd.qemu.ovmf.package)).
@ -68,9 +68,8 @@ rec {
|
|||
prefixLength = 24;
|
||||
} ];
|
||||
});
|
||||
in
|
||||
{ key = "ip-address";
|
||||
config =
|
||||
|
||||
networkConfig =
|
||||
{ networking.hostName = mkDefault m.fst;
|
||||
|
||||
networking.interfaces = listToAttrs interfaces;
|
||||
|
@ -96,7 +95,15 @@ rec {
|
|||
in flip concatMap interfacesNumbered
|
||||
({ fst, snd }: qemu-common.qemuNICFlags snd fst m.snd);
|
||||
};
|
||||
}
|
||||
|
||||
in
|
||||
{ key = "ip-address";
|
||||
config = networkConfig // {
|
||||
# Expose the networkConfig items for tests like nixops
|
||||
# that need to recreate the network config.
|
||||
system.build.networkConfig = networkConfig;
|
||||
};
|
||||
}
|
||||
)
|
||||
(getAttr m.fst nodes)
|
||||
] );
|
||||
|
|
|
@ -83,10 +83,13 @@ let
|
|||
optionsListVisible = lib.filter (opt: opt.visible && !opt.internal) (lib.optionAttrSetToDocList options);
|
||||
|
||||
# Customly sort option list for the man page.
|
||||
# Always ensure that the sort order matches sortXML.py!
|
||||
optionsList = lib.sort optionLess optionsListDesc;
|
||||
|
||||
# Convert the list of options into an XML file.
|
||||
optionsXML = builtins.toFile "options.xml" (builtins.toXML optionsList);
|
||||
# This file is *not* sorted, to save on eval time, since the docbook XML
|
||||
# and the manpage depend on it and thus we evaluate this on every system rebuild.
|
||||
optionsXML = builtins.toFile "options.xml" (builtins.toXML optionsListDesc);
|
||||
|
||||
optionsNix = builtins.listToAttrs (map (o: { name = o.name; value = removeAttrs o ["name" "visible" "internal"]; }) optionsList);
|
||||
|
||||
|
@ -185,9 +188,10 @@ in {
|
|||
exit 1
|
||||
fi
|
||||
|
||||
${pkgs.python3Minimal}/bin/python ${./sortXML.py} $optionsXML sorted.xml
|
||||
${pkgs.libxslt.bin}/bin/xsltproc \
|
||||
--stringparam revision '${revision}' \
|
||||
-o intermediate.xml ${./options-to-docbook.xsl} $optionsXML
|
||||
-o intermediate.xml ${./options-to-docbook.xsl} sorted.xml
|
||||
${pkgs.libxslt.bin}/bin/xsltproc \
|
||||
-o "$out" ${./postprocess-option-descriptions.xsl} intermediate.xml
|
||||
'';
|
||||
|
|
28
nixos/lib/make-options-doc/sortXML.py
Normal file
28
nixos/lib/make-options-doc/sortXML.py
Normal file
|
@ -0,0 +1,28 @@
|
|||
import xml.etree.ElementTree as ET
|
||||
import sys
|
||||
|
||||
tree = ET.parse(sys.argv[1])
|
||||
# the xml tree is of the form
|
||||
# <expr><list> {all options, each an attrs} </list></expr>
|
||||
options = list(tree.getroot().find('list'))
|
||||
|
||||
def sortKey(opt):
|
||||
def order(s):
|
||||
if s.startswith("enable"):
|
||||
return 0
|
||||
if s.startswith("package"):
|
||||
return 1
|
||||
return 2
|
||||
|
||||
return [
|
||||
(order(p.attrib['value']), p.attrib['value'])
|
||||
for p in opt.findall('attr[@name="loc"]/list/string')
|
||||
]
|
||||
|
||||
# always ensure that the sort order matches the order used in the nix expression!
|
||||
options.sort(key=sortKey)
|
||||
|
||||
doc = ET.Element("expr")
|
||||
newOptions = ET.SubElement(doc, "list")
|
||||
newOptions.extend(options)
|
||||
ET.ElementTree(doc).write(sys.argv[2], encoding='utf-8')
|
|
@ -1126,9 +1126,9 @@ class Driver:
|
|||
try:
|
||||
yield
|
||||
return True
|
||||
except:
|
||||
rootlog.error(f'Test "{name}" failed with error:')
|
||||
raise
|
||||
except Exception as e:
|
||||
rootlog.error(f'Test "{name}" failed with error: "{e}"')
|
||||
raise e
|
||||
|
||||
def test_symbols(self) -> Dict[str, Any]:
|
||||
@contextmanager
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
let
|
||||
pkgs = (import ../../../../../../default.nix {});
|
||||
machine = import "${pkgs.path}/nixos/lib/eval-config.nix" {
|
||||
machine = import (pkgs.path + "/nixos/lib/eval-config.nix") {
|
||||
system = "x86_64-linux";
|
||||
modules = [
|
||||
({config, ...}: { imports = [ ./system.nix ]; })
|
||||
|
|
|
@ -50,9 +50,8 @@ in
|
|||
|
||||
config = mkIf cfg.enable {
|
||||
|
||||
# This is enough to make a symlink because the xserver
|
||||
# module already links all /share/X11 paths.
|
||||
environment.systemPackages = [ x11Fonts ];
|
||||
environment.pathsToLink = [ "/share/X11/fonts" ];
|
||||
|
||||
services.xserver.filesSection = ''
|
||||
FontPath "${x11Fonts}/share/X11/fonts"
|
||||
|
|
|
@ -213,7 +213,7 @@ in
|
|||
}
|
||||
|
||||
{
|
||||
assertion = cfg.powerManagement.enable -> offloadCfg.enable;
|
||||
assertion = cfg.powerManagement.finegrained -> offloadCfg.enable;
|
||||
message = "Fine-grained power management requires offload to be enabled.";
|
||||
}
|
||||
|
||||
|
|
|
@ -144,7 +144,7 @@ in
|
|||
dictd = 105;
|
||||
couchdb = 106;
|
||||
#searx = 107; # dynamically allocated as of 2020-10-27
|
||||
kippo = 108;
|
||||
#kippo = 108; # removed 2021-10-07, the kippo package was removed in 1b213f321cdbfcf868b96fd9959c24207ce1b66a during 2021-04
|
||||
jenkins = 109;
|
||||
systemd-journal-gateway = 110;
|
||||
#notbit = 111; # unused
|
||||
|
@ -462,7 +462,7 @@ in
|
|||
dictd = 105;
|
||||
couchdb = 106;
|
||||
#searx = 107; # dynamically allocated as of 2020-10-27
|
||||
kippo = 108;
|
||||
#kippo = 108; # removed 2021-10-07, the kippo package was removed in 1b213f321cdbfcf868b96fd9959c24207ce1b66a during 2021-04
|
||||
jenkins = 109;
|
||||
systemd-journal-gateway = 110;
|
||||
#notbit = 111; # unused
|
||||
|
|
|
@ -571,6 +571,7 @@
|
|||
./services/misc/plex.nix
|
||||
./services/misc/plikd.nix
|
||||
./services/misc/podgrab.nix
|
||||
./services/misc/prowlarr.nix
|
||||
./services/misc/tautulli.nix
|
||||
./services/misc/pinnwand.nix
|
||||
./services/misc/pykms.nix
|
||||
|
@ -759,7 +760,6 @@
|
|||
./services/networking/kea.nix
|
||||
./services/networking/keepalived/default.nix
|
||||
./services/networking/keybase.nix
|
||||
./services/networking/kippo.nix
|
||||
./services/networking/knot.nix
|
||||
./services/networking/kresd.nix
|
||||
./services/networking/lambdabot.nix
|
||||
|
@ -779,6 +779,7 @@
|
|||
./services/networking/mstpd.nix
|
||||
./services/networking/mtprotoproxy.nix
|
||||
./services/networking/mullvad-vpn.nix
|
||||
./services/networking/multipath.nix
|
||||
./services/networking/murmur.nix
|
||||
./services/networking/mxisd.nix
|
||||
./services/networking/namecoind.nix
|
||||
|
@ -883,7 +884,6 @@
|
|||
./services/video/unifi-video.nix
|
||||
./services/networking/v2ray.nix
|
||||
./services/networking/vsftpd.nix
|
||||
./services/networking/wakeonlan.nix
|
||||
./services/networking/wasabibackend.nix
|
||||
./services/networking/websockify.nix
|
||||
./services/networking/wg-quick.nix
|
||||
|
|
|
@ -4,7 +4,9 @@
|
|||
|
||||
with lib;
|
||||
|
||||
{
|
||||
let cfg = config.programs.evince;
|
||||
|
||||
in {
|
||||
|
||||
# Added 2019-08-09
|
||||
imports = [
|
||||
|
@ -22,6 +24,13 @@ with lib;
|
|||
enable = mkEnableOption
|
||||
"Evince, the GNOME document viewer";
|
||||
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.evince;
|
||||
defaultText = literalExpression "pkgs.evince";
|
||||
description = "Evince derivation to use.";
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
};
|
||||
|
@ -31,11 +40,11 @@ with lib;
|
|||
|
||||
config = mkIf config.programs.evince.enable {
|
||||
|
||||
environment.systemPackages = [ pkgs.evince ];
|
||||
environment.systemPackages = [ cfg.package ];
|
||||
|
||||
services.dbus.packages = [ pkgs.evince ];
|
||||
services.dbus.packages = [ cfg.package ];
|
||||
|
||||
systemd.packages = [ pkgs.evince ];
|
||||
systemd.packages = [ cfg.package ];
|
||||
|
||||
};
|
||||
|
||||
|
|
|
@ -20,7 +20,7 @@ in
|
|||
};
|
||||
|
||||
config = mkOption {
|
||||
type = types.attrs;
|
||||
type = with types; attrsOf (attrsOf anything);
|
||||
default = { };
|
||||
example = {
|
||||
init.defaultBranch = "main";
|
||||
|
@ -31,15 +31,39 @@ in
|
|||
section of git-config(1) for more information.
|
||||
'';
|
||||
};
|
||||
|
||||
lfs = {
|
||||
enable = mkEnableOption "git-lfs";
|
||||
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.git-lfs;
|
||||
defaultText = literalExpression "pkgs.git-lfs";
|
||||
description = "The git-lfs package to use";
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = [ cfg.package ];
|
||||
environment.etc.gitconfig = mkIf (cfg.config != {}) {
|
||||
text = generators.toGitINI cfg.config;
|
||||
};
|
||||
};
|
||||
config = mkMerge [
|
||||
(mkIf cfg.enable {
|
||||
environment.systemPackages = [ cfg.package ];
|
||||
environment.etc.gitconfig = mkIf (cfg.config != {}) {
|
||||
text = generators.toGitINI cfg.config;
|
||||
};
|
||||
})
|
||||
(mkIf (cfg.enable && cfg.lfs.enable) {
|
||||
environment.systemPackages = [ cfg.lfs.package ];
|
||||
programs.git.config = {
|
||||
filter.lfs = {
|
||||
clean = "git-lfs clean -- %f";
|
||||
smudge = "git-lfs smudge -- %f";
|
||||
process = "git-lfs filter-process";
|
||||
required = true;
|
||||
};
|
||||
};
|
||||
})
|
||||
];
|
||||
|
||||
meta.maintainers = with maintainers; [ figsoda ];
|
||||
}
|
||||
|
|
|
@ -79,6 +79,9 @@ with lib;
|
|||
The hidepid module was removed, since the underlying machinery
|
||||
is broken when using cgroups-v2.
|
||||
'')
|
||||
(mkRemovedOptionModule ["services" "wakeonlan"] "This module was removed in favor of enabling it with networking.interfaces.<name>.wakeOnLan")
|
||||
|
||||
(mkRemovedOptionModule [ "services" "kippo" ] "The corresponding package was removed from nixpkgs.")
|
||||
|
||||
# Do NOT add any option renames here, see top of the file
|
||||
];
|
||||
|
|
|
@ -96,9 +96,8 @@ let
|
|||
};
|
||||
} cfg.extraConfig;
|
||||
|
||||
configFile = pkgs.runCommand "config.toml" {
|
||||
buildInputs = [ pkgs.remarshal ];
|
||||
preferLocalBuild = true;
|
||||
configFile = pkgs.runCommandLocal "config.toml" {
|
||||
nativeBuildInputs = [ pkgs.remarshal ];
|
||||
} ''
|
||||
remarshal -if json -of toml \
|
||||
< ${pkgs.writeText "config.json" (builtins.toJSON configOptions)} \
|
||||
|
|
|
@ -122,6 +122,14 @@ in {
|
|||
options = {
|
||||
services.matrix-synapse = {
|
||||
enable = mkEnableOption "matrix.org synapse";
|
||||
configFile = mkOption {
|
||||
type = types.str;
|
||||
readOnly = true;
|
||||
description = ''
|
||||
Path to the configuration file on the target system. Useful to configure e.g. workers
|
||||
that also need this.
|
||||
'';
|
||||
};
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.matrix-synapse;
|
||||
|
@ -706,6 +714,8 @@ in {
|
|||
}
|
||||
];
|
||||
|
||||
services.matrix-synapse.configFile = "${configFile}";
|
||||
|
||||
users.users.matrix-synapse = {
|
||||
group = "matrix-synapse";
|
||||
home = cfg.dataDir;
|
||||
|
|
|
@ -555,6 +555,22 @@ in
|
|||
+ "\n"
|
||||
) cfg.buildMachines;
|
||||
};
|
||||
assertions =
|
||||
let badMachine = m: m.system == null && m.systems == [];
|
||||
in [
|
||||
{
|
||||
assertion = !(builtins.any badMachine cfg.buildMachines);
|
||||
message = ''
|
||||
At least one system type (via <varname>system</varname> or
|
||||
<varname>systems</varname>) must be set for every build machine.
|
||||
Invalid machine specifications:
|
||||
'' + " " +
|
||||
(builtins.concatStringsSep "\n "
|
||||
(builtins.map (m: m.hostName)
|
||||
(builtins.filter (badMachine) cfg.buildMachines)));
|
||||
}
|
||||
];
|
||||
|
||||
|
||||
systemd.packages = [ nix ];
|
||||
|
||||
|
|
41
nixos/modules/services/misc/prowlarr.nix
Normal file
41
nixos/modules/services/misc/prowlarr.nix
Normal file
|
@ -0,0 +1,41 @@
|
|||
{ config, pkgs, lib, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
let
|
||||
cfg = config.services.prowlarr;
|
||||
|
||||
in
|
||||
{
|
||||
options = {
|
||||
services.prowlarr = {
|
||||
enable = mkEnableOption "Prowlarr";
|
||||
|
||||
openFirewall = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = "Open ports in the firewall for the Prowlarr web interface.";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
systemd.services.prowlarr = {
|
||||
description = "Prowlarr";
|
||||
after = [ "network.target" ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
|
||||
serviceConfig = {
|
||||
Type = "simple";
|
||||
DynamicUser = true;
|
||||
StateDirectory = "prowlarr";
|
||||
ExecStart = "${pkgs.prowlarr}/bin/Prowlarr -nobrowser -data=/var/lib/prowlarr";
|
||||
Restart = "on-failure";
|
||||
};
|
||||
};
|
||||
|
||||
networking.firewall = mkIf cfg.openFirewall {
|
||||
allowedTCPPorts = [ 9696 ];
|
||||
};
|
||||
};
|
||||
}
|
|
@ -86,7 +86,7 @@ in
|
|||
serviceConfig = {
|
||||
User = cfg.user;
|
||||
Group = cfg.group;
|
||||
ExecStart = "${sickbeard}/SickBeard.py --datadir ${cfg.dataDir} --config ${cfg.configFile} --port ${toString cfg.port}";
|
||||
ExecStart = "${sickbeard}/bin/${sickbeard.pname} --datadir ${cfg.dataDir} --config ${cfg.configFile} --port ${toString cfg.port}";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
|
|
@ -109,7 +109,7 @@ let cfg = config.services.subsonic; in {
|
|||
after = [ "network.target" ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
script = ''
|
||||
${pkgs.jre}/bin/java -Xmx${toString cfg.maxMemory}m \
|
||||
${pkgs.jre8}/bin/java -Xmx${toString cfg.maxMemory}m \
|
||||
-Dsubsonic.home=${cfg.home} \
|
||||
-Dsubsonic.host=${cfg.listenAddress} \
|
||||
-Dsubsonic.port=${toString cfg.port} \
|
||||
|
|
|
@ -192,7 +192,7 @@ let
|
|||
serviceConfig.MemoryDenyWriteExecute = true;
|
||||
serviceConfig.NoNewPrivileges = true;
|
||||
serviceConfig.PrivateDevices = true;
|
||||
serviceConfig.ProtectClock = true;
|
||||
serviceConfig.ProtectClock = mkDefault true;
|
||||
serviceConfig.ProtectControlGroups = true;
|
||||
serviceConfig.ProtectHome = true;
|
||||
serviceConfig.ProtectHostname = true;
|
||||
|
|
|
@ -35,6 +35,10 @@ in
|
|||
${concatMapStringsSep " " (x: "--no-collector." + x) cfg.disabledCollectors} \
|
||||
--web.listen-address ${cfg.listenAddress}:${toString cfg.port} ${concatStringsSep " " cfg.extraFlags}
|
||||
'';
|
||||
# The systemd collector needs AF_UNIX
|
||||
RestrictAddressFamilies = lib.optional (lib.any (x: x == "systemd") cfg.enabledCollectors) "AF_UNIX";
|
||||
# The timex collector needs to access clock APIs
|
||||
ProtectClock = lib.any (x: x == "timex") cfg.disabledCollectors;
|
||||
};
|
||||
};
|
||||
}
|
||||
|
|
|
@ -61,6 +61,11 @@ in
|
|||
serviceConfig = {
|
||||
# rtl-sdr udev rules make supported USB devices +rw by plugdev.
|
||||
SupplementaryGroups = "plugdev";
|
||||
# rtl_433 needs rw access to the USB radio.
|
||||
PrivateDevices = lib.mkForce false;
|
||||
DeviceAllow = lib.mkForce "char-usb_device rw";
|
||||
RestrictAddressFamilies = [ "AF_NETLINK" ];
|
||||
|
||||
ExecStart = let
|
||||
matchers = (map (m:
|
||||
"--channel_matcher '${m.name},${toString m.channel},${m.location}'"
|
||||
|
|
|
@ -24,18 +24,21 @@ in
|
|||
|
||||
environment.systemPackages = [ pkgs.teamviewer ];
|
||||
|
||||
services.dbus.packages = [ pkgs.teamviewer ];
|
||||
|
||||
systemd.services.teamviewerd = {
|
||||
description = "TeamViewer remote control daemon";
|
||||
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
after = [ "NetworkManager-wait-online.service" "network.target" ];
|
||||
after = [ "NetworkManager-wait-online.service" "network.target" "dbus.service" ];
|
||||
requires = [ "dbus.service" ];
|
||||
preStart = "mkdir -pv /var/lib/teamviewer /var/log/teamviewer";
|
||||
|
||||
startLimitIntervalSec = 60;
|
||||
startLimitBurst = 10;
|
||||
serviceConfig = {
|
||||
Type = "forking";
|
||||
ExecStart = "${pkgs.teamviewer}/bin/teamviewerd -d";
|
||||
Type = "simple";
|
||||
ExecStart = "${pkgs.teamviewer}/bin/teamviewerd -f";
|
||||
PIDFile = "/run/teamviewerd.pid";
|
||||
ExecReload = "${pkgs.coreutils}/bin/kill -HUP $MAINPID";
|
||||
Restart = "on-abort";
|
||||
|
|
|
@ -87,13 +87,20 @@ in
|
|||
<note>
|
||||
<para>If you use the firewall consider adding the following:</para>
|
||||
<programlisting>
|
||||
networking.firewall.allowedTCPPorts = [ 139 445 ];
|
||||
networking.firewall.allowedUDPPorts = [ 137 138 ];
|
||||
services.samba.openFirewall = true;
|
||||
</programlisting>
|
||||
</note>
|
||||
'';
|
||||
};
|
||||
|
||||
openFirewall = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = ''
|
||||
Whether to automatically open the necessary ports in the firewall.
|
||||
'';
|
||||
};
|
||||
|
||||
enableNmbd = mkOption {
|
||||
type = types.bool;
|
||||
default = true;
|
||||
|
@ -235,7 +242,10 @@ in
|
|||
};
|
||||
|
||||
security.pam.services.samba = {};
|
||||
environment.systemPackages = [ config.services.samba.package ];
|
||||
environment.systemPackages = [ cfg.package ];
|
||||
|
||||
networking.firewall.allowedTCPPorts = mkIf cfg.openFirewall [ 139 445 ];
|
||||
networking.firewall.allowedUDPPorts = mkIf cfg.openFirewall [ 137 138 ];
|
||||
})
|
||||
];
|
||||
|
||||
|
|
|
@ -64,6 +64,12 @@ in
|
|||
default = false;
|
||||
};
|
||||
|
||||
extraIscsiCommands = mkOption {
|
||||
description = "Extra iscsi commands to run in the initrd.";
|
||||
default = "";
|
||||
type = lines;
|
||||
};
|
||||
|
||||
extraConfig = mkOption {
|
||||
description = "Extra lines to append to /etc/iscsid.conf";
|
||||
default = null;
|
||||
|
@ -162,6 +168,9 @@ in
|
|||
'' else ''
|
||||
iscsiadm --mode node --targetname ${escapeShellArg cfg.target} --login
|
||||
''}
|
||||
|
||||
${cfg.extraIscsiCommands}
|
||||
|
||||
pkill -9 iscsid
|
||||
'';
|
||||
};
|
||||
|
|
|
@ -1,117 +0,0 @@
|
|||
# NixOS module for kippo honeypot ssh server
|
||||
# See all the options for configuration details.
|
||||
#
|
||||
# Default port is 2222. Recommend using something like this for port redirection to default SSH port:
|
||||
# networking.firewall.extraCommands = ''
|
||||
# iptables -t nat -A PREROUTING -i IN_IFACE -p tcp --dport 22 -j REDIRECT --to-port 2222'';
|
||||
#
|
||||
# Lastly: use this service at your own risk. I am working on a way to run this inside a VM.
|
||||
{ config, lib, pkgs, ... }:
|
||||
with lib;
|
||||
let
|
||||
cfg = config.services.kippo;
|
||||
in
|
||||
{
|
||||
options = {
|
||||
services.kippo = {
|
||||
enable = mkOption {
|
||||
default = false;
|
||||
type = types.bool;
|
||||
description = "Enable the kippo honeypot ssh server.";
|
||||
};
|
||||
port = mkOption {
|
||||
default = 2222;
|
||||
type = types.int;
|
||||
description = "TCP port number for kippo to bind to.";
|
||||
};
|
||||
hostname = mkOption {
|
||||
default = "nas3";
|
||||
type = types.str;
|
||||
description = "Hostname for kippo to present to SSH login";
|
||||
};
|
||||
varPath = mkOption {
|
||||
default = "/var/lib/kippo";
|
||||
type = types.path;
|
||||
description = "Path of read/write files needed for operation and configuration.";
|
||||
};
|
||||
logPath = mkOption {
|
||||
default = "/var/log/kippo";
|
||||
type = types.path;
|
||||
description = "Path of log files needed for operation and configuration.";
|
||||
};
|
||||
pidPath = mkOption {
|
||||
default = "/run/kippo";
|
||||
type = types.path;
|
||||
description = "Path of pid files needed for operation.";
|
||||
};
|
||||
extraConfig = mkOption {
|
||||
default = "";
|
||||
type = types.lines;
|
||||
description = "Extra verbatim configuration added to the end of kippo.cfg.";
|
||||
};
|
||||
};
|
||||
|
||||
};
|
||||
config = mkIf cfg.enable {
|
||||
environment.systemPackages = with pkgs.pythonPackages; [
|
||||
python pkgs.kippo.twisted pycrypto pyasn1 ];
|
||||
|
||||
environment.etc."kippo.cfg".text = ''
|
||||
# Automatically generated by NixOS.
|
||||
# See ${pkgs.kippo}/src/kippo.cfg for details.
|
||||
[honeypot]
|
||||
log_path = ${cfg.logPath}
|
||||
download_path = ${cfg.logPath}/dl
|
||||
filesystem_file = ${cfg.varPath}/honeyfs
|
||||
filesystem_file = ${cfg.varPath}/fs.pickle
|
||||
data_path = ${cfg.varPath}/data
|
||||
txtcmds_path = ${cfg.varPath}/txtcmds
|
||||
public_key = ${cfg.varPath}/keys/public.key
|
||||
private_key = ${cfg.varPath}/keys/private.key
|
||||
ssh_port = ${toString cfg.port}
|
||||
hostname = ${cfg.hostname}
|
||||
${cfg.extraConfig}
|
||||
'';
|
||||
|
||||
users.users.kippo = {
|
||||
description = "kippo web server privilege separation user";
|
||||
uid = 108; # why does config.ids.uids.kippo give an error?
|
||||
};
|
||||
users.groups.kippo.gid = 108;
|
||||
|
||||
systemd.services.kippo = with pkgs; {
|
||||
description = "Kippo Web Server";
|
||||
after = [ "network.target" ];
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
environment.PYTHONPATH = "${pkgs.kippo}/src/:${pkgs.pythonPackages.pycrypto}/lib/python2.7/site-packages/:${pkgs.pythonPackages.pyasn1}/lib/python2.7/site-packages/:${pkgs.pythonPackages.python}/lib/python2.7/site-packages/:${pkgs.kippo.twisted}/lib/python2.7/site-packages/:.";
|
||||
preStart = ''
|
||||
if [ ! -d ${cfg.varPath}/ ] ; then
|
||||
mkdir -p ${cfg.logPath}/tty
|
||||
mkdir -p ${cfg.logPath}/dl
|
||||
mkdir -p ${cfg.varPath}/keys
|
||||
cp ${pkgs.kippo}/src/honeyfs ${cfg.varPath} -r
|
||||
cp ${pkgs.kippo}/src/fs.pickle ${cfg.varPath}/fs.pickle
|
||||
cp ${pkgs.kippo}/src/data ${cfg.varPath} -r
|
||||
cp ${pkgs.kippo}/src/txtcmds ${cfg.varPath} -r
|
||||
|
||||
chmod u+rw ${cfg.varPath} -R
|
||||
chown kippo.kippo ${cfg.varPath} -R
|
||||
chown kippo.kippo ${cfg.logPath} -R
|
||||
chmod u+rw ${cfg.logPath} -R
|
||||
fi
|
||||
if [ ! -d ${cfg.pidPath}/ ] ; then
|
||||
mkdir -p ${cfg.pidPath}
|
||||
chmod u+rw ${cfg.pidPath}
|
||||
chown kippo.kippo ${cfg.pidPath}
|
||||
fi
|
||||
'';
|
||||
|
||||
serviceConfig.ExecStart = "${pkgs.kippo.twisted}/bin/twistd -y ${pkgs.kippo}/src/kippo.tac --syslog --rundir=${cfg.varPath}/ --pidfile=${cfg.pidPath}/kippo.pid --prefix=kippo -n";
|
||||
serviceConfig.PermissionsStartOnly = true;
|
||||
serviceConfig.User = "kippo";
|
||||
serviceConfig.Group = "kippo";
|
||||
};
|
||||
};
|
||||
}
|
||||
|
||||
|
572
nixos/modules/services/networking/multipath.nix
Normal file
572
nixos/modules/services/networking/multipath.nix
Normal file
|
@ -0,0 +1,572 @@
|
|||
{ config, lib, pkgs, ... }: with lib;
|
||||
|
||||
# See http://christophe.varoqui.free.fr/usage.html and
|
||||
# https://github.com/opensvc/multipath-tools/blob/master/multipath/multipath.conf.5
|
||||
|
||||
let
|
||||
cfg = config.services.multipath;
|
||||
|
||||
indentLines = n: str: concatStringsSep "\n" (
|
||||
map (line: "${fixedWidthString n " " " "}${line}") (
|
||||
filter ( x: x != "" ) ( splitString "\n" str )
|
||||
)
|
||||
);
|
||||
|
||||
addCheckDesc = desc: elemType: check: types.addCheck elemType check
|
||||
// { description = "${elemType.description} (with check: ${desc})"; };
|
||||
hexChars = stringToCharacters "0123456789abcdef";
|
||||
isHexString = s: all (c: elem c hexChars) (stringToCharacters (toLower s));
|
||||
hexStr = addCheckDesc "hexadecimal string" types.str isHexString;
|
||||
|
||||
in {
|
||||
|
||||
options.services.multipath = with types; {
|
||||
|
||||
enable = mkEnableOption "the device mapper multipath (DM-MP) daemon";
|
||||
|
||||
package = mkOption {
|
||||
type = package;
|
||||
description = "multipath-tools package to use";
|
||||
default = pkgs.multipath-tools;
|
||||
defaultText = "pkgs.multipath-tools";
|
||||
};
|
||||
|
||||
devices = mkOption {
|
||||
default = [ ];
|
||||
example = literalExpression ''
|
||||
[
|
||||
{
|
||||
vendor = "\"COMPELNT\"";
|
||||
product = "\"Compellent Vol\"";
|
||||
path_checker = "tur";
|
||||
no_path_retry = "queue";
|
||||
max_sectors_kb = 256;
|
||||
}, ...
|
||||
]
|
||||
'';
|
||||
description = ''
|
||||
This option allows you to define arrays for use in multipath
|
||||
groups.
|
||||
'';
|
||||
type = listOf (submodule {
|
||||
options = {
|
||||
|
||||
vendor = mkOption {
|
||||
type = str;
|
||||
example = "COMPELNT";
|
||||
description = "Regular expression to match the vendor name";
|
||||
};
|
||||
|
||||
product = mkOption {
|
||||
type = str;
|
||||
example = "Compellent Vol";
|
||||
description = "Regular expression to match the product name";
|
||||
};
|
||||
|
||||
revision = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "Regular expression to match the product revision";
|
||||
};
|
||||
|
||||
product_blacklist = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "Products with the given vendor matching this string are blacklisted";
|
||||
};
|
||||
|
||||
alias_prefix = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "The user_friendly_names prefix to use for this device type, instead of the default mpath";
|
||||
};
|
||||
|
||||
vpd_vendor = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "The vendor specific vpd page information, using the vpd page abbreviation";
|
||||
};
|
||||
|
||||
hardware_handler = mkOption {
|
||||
type = nullOr (enum [ "emc" "rdac" "hp_sw" "alua" "ana" ]);
|
||||
default = null;
|
||||
description = "The hardware handler to use for this device type";
|
||||
};
|
||||
|
||||
# Optional arguments
|
||||
path_grouping_policy = mkOption {
|
||||
type = nullOr (enum [ "failover" "multibus" "group_by_serial" "group_by_prio" "group_by_node_name" ]);
|
||||
default = null; # real default: "failover"
|
||||
description = "The default path grouping policy to apply to unspecified multipaths";
|
||||
};
|
||||
|
||||
uid_attribute = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "The udev attribute providing a unique path identifier (WWID)";
|
||||
};
|
||||
|
||||
getuid_callout = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
(Superseded by uid_attribute) The default program and args to callout
|
||||
to obtain a unique path identifier. Should be specified with an absolute path.
|
||||
'';
|
||||
};
|
||||
|
||||
path_selector = mkOption {
|
||||
type = nullOr (enum [
|
||||
''"round-robin 0"''
|
||||
''"queue-length 0"''
|
||||
''"service-time 0"''
|
||||
''"historical-service-time 0"''
|
||||
]);
|
||||
default = null; # real default: "service-time 0"
|
||||
description = "The default path selector algorithm to use; they are offered by the kernel multipath target";
|
||||
};
|
||||
|
||||
path_checker = mkOption {
|
||||
type = enum [ "readsector0" "tur" "emc_clariion" "hp_sw" "rdac" "directio" "cciss_tur" "none" ];
|
||||
default = "tur";
|
||||
description = "The default method used to determine the paths state";
|
||||
};
|
||||
|
||||
prio = mkOption {
|
||||
type = nullOr (enum [
|
||||
"none" "const" "sysfs" "emc" "alua" "ontap" "rdac" "hp_sw" "hds"
|
||||
"random" "weightedpath" "path_latency" "ana" "datacore" "iet"
|
||||
]);
|
||||
default = null; # real default: "const"
|
||||
description = "The name of the path priority routine";
|
||||
};
|
||||
|
||||
prio_args = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "Arguments to pass to to the prio function";
|
||||
};
|
||||
|
||||
features = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "Specify any device-mapper features to be used";
|
||||
};
|
||||
|
||||
failback = mkOption {
|
||||
type = nullOr str;
|
||||
default = null; # real default: "manual"
|
||||
description = "Tell multipathd how to manage path group failback. Quote integers as strings";
|
||||
};
|
||||
|
||||
rr_weight = mkOption {
|
||||
type = nullOr (enum [ "priorities" "uniform" ]);
|
||||
default = null; # real default: "uniform"
|
||||
description = ''
|
||||
If set to priorities the multipath configurator will assign path weights
|
||||
as "path prio * rr_min_io".
|
||||
'';
|
||||
};
|
||||
|
||||
no_path_retry = mkOption {
|
||||
type = nullOr str;
|
||||
default = null; # real default: "fail"
|
||||
description = "Specify what to do when all paths are down. Quote integers as strings";
|
||||
};
|
||||
|
||||
rr_min_io = mkOption {
|
||||
type = nullOr int;
|
||||
default = null; # real default: 1000
|
||||
description = ''
|
||||
Number of I/O requests to route to a path before switching to the next in the
|
||||
same path group. This is only for Block I/O (BIO) based multipath and
|
||||
only applies to the round-robin path_selector.
|
||||
'';
|
||||
};
|
||||
|
||||
rr_min_io_rq = mkOption {
|
||||
type = nullOr int;
|
||||
default = null; # real default: 1
|
||||
description = ''
|
||||
Number of I/O requests to route to a path before switching to the next in the
|
||||
same path group. This is only for Request based multipath and
|
||||
only applies to the round-robin path_selector.
|
||||
'';
|
||||
};
|
||||
|
||||
fast_io_fail_tmo = mkOption {
|
||||
type = nullOr str;
|
||||
default = null; # real default: 5
|
||||
description = ''
|
||||
Specify the number of seconds the SCSI layer will wait after a problem has been
|
||||
detected on a FC remote port before failing I/O to devices on that remote port.
|
||||
This should be smaller than dev_loss_tmo. Setting this to "off" will disable
|
||||
the timeout. Quote integers as strings.
|
||||
'';
|
||||
};
|
||||
|
||||
dev_loss_tmo = mkOption {
|
||||
type = nullOr str;
|
||||
default = null; # real default: 600
|
||||
description = ''
|
||||
Specify the number of seconds the SCSI layer will wait after a problem has
|
||||
been detected on a FC remote port before removing it from the system. This
|
||||
can be set to "infinity" which sets it to the max value of 2147483647
|
||||
seconds, or 68 years. It will be automatically adjusted to the overall
|
||||
retry interval no_path_retry * polling_interval
|
||||
if a number of retries is given with no_path_retry and the
|
||||
overall retry interval is longer than the specified dev_loss_tmo value.
|
||||
The Linux kernel will cap this value to 600 if fast_io_fail_tmo
|
||||
is not set.
|
||||
'';
|
||||
};
|
||||
|
||||
flush_on_last_del = mkOption {
|
||||
type = nullOr (enum [ "yes" "no" ]);
|
||||
default = null; # real default: "no"
|
||||
description = ''
|
||||
If set to "yes" multipathd will disable queueing when the last path to a
|
||||
device has been deleted.
|
||||
'';
|
||||
};
|
||||
|
||||
user_friendly_names = mkOption {
|
||||
type = nullOr (enum [ "yes" "no" ]);
|
||||
default = null; # real default: "no"
|
||||
description = ''
|
||||
If set to "yes", using the bindings file /etc/multipath/bindings
|
||||
to assign a persistent and unique alias to the multipath, in the
|
||||
form of mpath. If set to "no" use the WWID as the alias. In either
|
||||
case this will be overridden by any specific aliases in the
|
||||
multipaths section.
|
||||
'';
|
||||
};
|
||||
|
||||
retain_attached_hw_handler = mkOption {
|
||||
type = nullOr (enum [ "yes" "no" ]);
|
||||
default = null; # real default: "yes"
|
||||
description = ''
|
||||
(Obsolete for kernels >= 4.3) If set to "yes" and the SCSI layer has
|
||||
already attached a hardware_handler to the device, multipath will not
|
||||
force the device to use the hardware_handler specified by multipath.conf.
|
||||
If the SCSI layer has not attached a hardware handler, multipath will
|
||||
continue to use its configured hardware handler.
|
||||
|
||||
Important Note: Linux kernel 4.3 or newer always behaves as if
|
||||
"retain_attached_hw_handler yes" was set.
|
||||
'';
|
||||
};
|
||||
|
||||
detect_prio = mkOption {
|
||||
type = nullOr (enum [ "yes" "no" ]);
|
||||
default = null; # real default: "yes"
|
||||
description = ''
|
||||
If set to "yes", multipath will try to detect if the device supports
|
||||
SCSI-3 ALUA. If so, the device will automatically use the sysfs
|
||||
prioritizer if the required sysfs attributes access_state and
|
||||
preferred_path are supported, or the alua prioritizer if not. If set
|
||||
to "no", the prioritizer will be selected as usual.
|
||||
'';
|
||||
};
|
||||
|
||||
detect_checker = mkOption {
|
||||
type = nullOr (enum [ "yes" "no" ]);
|
||||
default = null; # real default: "yes"
|
||||
description = ''
|
||||
If set to "yes", multipath will try to detect if the device supports
|
||||
SCSI-3 ALUA. If so, the device will automatically use the tur checker.
|
||||
If set to "no", the checker will be selected as usual.
|
||||
'';
|
||||
};
|
||||
|
||||
deferred_remove = mkOption {
|
||||
type = nullOr (enum [ "yes" "no" ]);
|
||||
default = null; # real default: "no"
|
||||
description = ''
|
||||
If set to "yes", multipathd will do a deferred remove instead of a
|
||||
regular remove when the last path device has been deleted. This means
|
||||
that if the multipath device is still in use, it will be freed when
|
||||
the last user closes it. If a path is added to the multipath device
|
||||
before the last user closes it, the deferred remove will be canceled.
|
||||
'';
|
||||
};
|
||||
|
||||
san_path_err_threshold = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
If set to a value greater than 0, multipathd will watch paths and check
|
||||
how many times a path has failed due to errors. If the number of
|
||||
failures on a particular path is greater than the san_path_err_threshold,
|
||||
then the path will not be reinstated until san_path_err_recovery_time. These
|
||||
path failures should occur within san_path_err_forget_rate checks; if
|
||||
not, the path is considered good enough to reinstate.
|
||||
'';
|
||||
};
|
||||
|
||||
san_path_err_forget_rate = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
If set to a value greater than 0, multipathd will check whether the path
|
||||
failures have exceeded the san_path_err_threshold within this many checks,
|
||||
i.e. san_path_err_forget_rate. If so, the path will not be reinstated until
|
||||
san_path_err_recovery_time.
|
||||
'';
|
||||
};
|
||||
|
||||
san_path_err_recovery_time = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
If set to a value greater than 0, multipathd will make sure that when
|
||||
path failures have exceeded the san_path_err_threshold within
|
||||
san_path_err_forget_rate then the path will be placed in failed state
|
||||
for san_path_err_recovery_time duration. Once san_path_err_recovery_time
|
||||
has elapsed, the failed path will be reinstated. The san_path_err_recovery_time
|
||||
value should be given in seconds.
|
||||
'';
|
||||
};
|
||||
|
||||
marginal_path_err_sample_time = mkOption {
|
||||
type = nullOr int;
|
||||
default = null;
|
||||
description = "One of the four parameters of supporting path check based on accounting IO error such as intermittent error";
|
||||
};
|
||||
|
||||
marginal_path_err_rate_threshold = mkOption {
|
||||
type = nullOr int;
|
||||
default = null;
|
||||
description = "The error rate threshold as a permillage (1/1000)";
|
||||
};
|
||||
|
||||
marginal_path_err_recheck_gap_time = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "One of the four parameters of supporting path check based on accounting IO error such as intermittent error";
|
||||
};
|
||||
|
||||
marginal_path_double_failed_time = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "One of the four parameters of supporting path check based on accounting IO error such as intermittent error";
|
||||
};
|
||||
|
||||
delay_watch_checks = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "This option is deprecated, and mapped to san_path_err_forget_rate";
|
||||
};
|
||||
|
||||
delay_wait_checks = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "This option is deprecated, and mapped to san_path_err_recovery_time";
|
||||
};
|
||||
|
||||
skip_kpartx = mkOption {
|
||||
type = nullOr (enum [ "yes" "no" ]);
|
||||
default = null; # real default: "no"
|
||||
description = "If set to yes, kpartx will not automatically create partitions on the device";
|
||||
};
|
||||
|
||||
max_sectors_kb = mkOption {
|
||||
type = nullOr int;
|
||||
default = null;
|
||||
description = "Sets the max_sectors_kb device parameter on all path devices and the multipath device to the specified value";
|
||||
};
|
||||
|
||||
ghost_delay = mkOption {
|
||||
type = nullOr int;
|
||||
default = null;
|
||||
description = "Sets the number of seconds that multipath will wait after creating a device with only ghost paths before marking it ready for use in systemd";
|
||||
};
|
||||
|
||||
all_tg_pt = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "Set the 'all targets ports' flag when registering keys with mpathpersist";
|
||||
};
|
||||
|
||||
};
|
||||
});
|
||||
};
|
||||
|
||||
defaults = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
This section defines default values for attributes which are used
|
||||
whenever no values are given in the appropriate device or multipath
|
||||
sections.
|
||||
'';
|
||||
};
|
||||
|
||||
blacklist = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
This section defines which devices should be excluded from the
|
||||
multipath topology discovery.
|
||||
'';
|
||||
};
|
||||
|
||||
blacklist_exceptions = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
This section defines which devices should be included in the
|
||||
multipath topology discovery, despite being listed in the
|
||||
blacklist section.
|
||||
'';
|
||||
};
|
||||
|
||||
overrides = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
This section defines values for attributes that should override the
|
||||
device-specific settings for all devices.
|
||||
'';
|
||||
};
|
||||
|
||||
extraConfig = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "Lines to append to default multipath.conf";
|
||||
};
|
||||
|
||||
extraConfigFile = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
description = "Append an additional file's contents to /etc/multipath.conf";
|
||||
};
|
||||
|
||||
pathGroups = mkOption {
|
||||
example = literalExpression ''
|
||||
[
|
||||
{
|
||||
wwid = "360080e500043b35c0123456789abcdef";
|
||||
alias = 10001234;
|
||||
array = "bigarray.example.com";
|
||||
fsType = "zfs"; # optional
|
||||
options = "ro"; # optional
|
||||
}, ...
|
||||
]
|
||||
'';
|
||||
description = ''
|
||||
This option allows you to define multipath groups as described
|
||||
in http://christophe.varoqui.free.fr/usage.html.
|
||||
'';
|
||||
type = listOf (submodule {
|
||||
options = {
|
||||
|
||||
alias = mkOption {
|
||||
type = int;
|
||||
example = 1001234;
|
||||
description = "The name of the multipath device";
|
||||
};
|
||||
|
||||
wwid = mkOption {
|
||||
type = hexStr;
|
||||
example = "360080e500043b35c0123456789abcdef";
|
||||
description = "The identifier for the multipath device";
|
||||
};
|
||||
|
||||
array = mkOption {
|
||||
type = str;
|
||||
default = null;
|
||||
example = "bigarray.example.com";
|
||||
description = "The DNS name of the storage array";
|
||||
};
|
||||
|
||||
fsType = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
example = "zfs";
|
||||
description = "Type of the filesystem";
|
||||
};
|
||||
|
||||
options = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
example = "ro";
|
||||
description = "Options used to mount the file system";
|
||||
};
|
||||
|
||||
};
|
||||
});
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
environment.etc."multipath.conf".text =
|
||||
let
|
||||
inherit (cfg) defaults blacklist blacklist_exceptions overrides;
|
||||
|
||||
mkDeviceBlock = cfg: let
|
||||
nonNullCfg = lib.filterAttrs (k: v: v != null) cfg;
|
||||
attrs = lib.mapAttrsToList (name: value: " ${name} ${toString value}") nonNullCfg;
|
||||
in ''
|
||||
device {
|
||||
${lib.concatStringsSep "\n" attrs}
|
||||
}
|
||||
'';
|
||||
devices = lib.concatMapStringsSep "\n" mkDeviceBlock cfg.devices;
|
||||
|
||||
mkMultipathBlock = m: ''
|
||||
multipath {
|
||||
wwid ${m.wwid}
|
||||
alias ${toString m.alias}
|
||||
}
|
||||
'';
|
||||
multipaths = lib.concatMapStringsSep "\n" mkMultipathBlock cfg.pathGroups;
|
||||
|
||||
in ''
|
||||
devices {
|
||||
${indentLines 2 devices}
|
||||
}
|
||||
|
||||
${optionalString (!isNull defaults) ''
|
||||
defaults {
|
||||
${indentLines 2 defaults}
|
||||
multipath_dir ${cfg.package}/lib/multipath
|
||||
}
|
||||
''}
|
||||
${optionalString (!isNull blacklist) ''
|
||||
blacklist {
|
||||
${indentLines 2 blacklist}
|
||||
}
|
||||
''}
|
||||
${optionalString (!isNull blacklist_exceptions) ''
|
||||
blacklist_exceptions {
|
||||
${indentLines 2 blacklist_exceptions}
|
||||
}
|
||||
''}
|
||||
${optionalString (!isNull overrides) ''
|
||||
overrides {
|
||||
${indentLines 2 overrides}
|
||||
}
|
||||
''}
|
||||
multipaths {
|
||||
${indentLines 2 multipaths}
|
||||
}
|
||||
'';
|
||||
|
||||
systemd.packages = [ cfg.package ];
|
||||
|
||||
environment.systemPackages = [ cfg.package ];
|
||||
boot.kernelModules = [ "dm-multipath" "dm-service-time" ];
|
||||
|
||||
# We do not have systemd in stage-1 boot so must invoke `multipathd`
|
||||
# with the `-1` argument which disables systemd calls. Invoke `multipath`
|
||||
# to display the multipath mappings in the output of `journalctl -b`.
|
||||
boot.initrd.kernelModules = [ "dm-multipath" "dm-service-time" ];
|
||||
boot.initrd.postDeviceCommands = ''
|
||||
modprobe -a dm-multipath dm-service-time
|
||||
multipathd -s
|
||||
(set -x && sleep 1 && multipath -ll)
|
||||
'';
|
||||
};
|
||||
}
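
A minimal sketch of how a host might use the new multipath module above. The values mirror the option examples shown in the module itself; the WWID, alias, and array hostname are placeholders, not real devices.

```nix
{ ... }:
{
  services.multipath = {
    enable = true;

    # Rendered into the defaults { } section of /etc/multipath.conf.
    defaults = ''
      user_friendly_names yes
      find_multipaths yes
    '';

    # Per-array settings, reusing the example from the devices option.
    devices = [
      {
        vendor = "\"COMPELNT\"";
        product = "\"Compellent Vol\"";
        path_checker = "tur";
        no_path_retry = "queue";
        max_sectors_kb = 256;
      }
    ];

    # One multipath { } block per LUN (placeholder WWID and alias).
    pathGroups = [
      {
        alias = 10001234;
        wwid = "360080e500043b35c0123456789abcdef";
        array = "bigarray.example.com";
      }
    ];
  };
}
```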
|
|
@ -439,7 +439,7 @@ in
|
|||
mkdir -m 0755 -p /etc/ssh
|
||||
|
||||
${flip concatMapStrings cfg.hostKeys (k: ''
|
||||
if ! [ -f "${k.path}" ]; then
|
||||
if ! [ -s "${k.path}" ]; then
|
||||
ssh-keygen \
|
||||
-t "${k.type}" \
|
||||
${if k ? bits then "-b ${toString k.bits}" else ""} \
|
||||
|
|
|
@ -172,9 +172,15 @@ in
|
|||
ExecStart = "${(removeSuffix "\n" cmd)} start";
|
||||
ExecStop = "${(removeSuffix "\n" cmd)} stop";
|
||||
Restart = "on-failure";
|
||||
TimeoutSec = "5min";
|
||||
User = "unifi";
|
||||
UMask = "0077";
|
||||
WorkingDirectory = "${stateDir}";
|
||||
# the stop command exits while the main process is still running, and unifi
|
||||
# wants to manage its own child processes. this means we have to set KillSignal
|
||||
# to something the main process ignores, otherwise every stop will have unifi.service
|
||||
# fail with SIGTERM status.
|
||||
KillSignal = "SIGCONT";
|
||||
|
||||
# Hardening
|
||||
AmbientCapabilities = "";
|
||||
|
@ -215,5 +221,5 @@ in
|
|||
|
||||
};
|
||||
|
||||
meta.maintainers = with lib.maintainers; [ erictapen ];
|
||||
meta.maintainers = with lib.maintainers; [ erictapen pennae ];
|
||||
}
|
||||
|
|
|
@ -1,70 +0,0 @@
|
|||
{ config, lib, pkgs, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
let
|
||||
interfaces = config.services.wakeonlan.interfaces;
|
||||
|
||||
ethtool = "${pkgs.ethtool}/sbin/ethtool";
|
||||
|
||||
passwordParameter = password : if (password == "") then "" else
|
||||
"sopass ${password}";
|
||||
|
||||
methodParameter = {method, password} :
|
||||
if method == "magicpacket" then "wol g"
|
||||
else if method == "password" then "wol s so ${passwordParameter password}"
|
||||
else throw "Wake-On-Lan method not supported";
|
||||
|
||||
line = { interface, method ? "magicpacket", password ? "" }: ''
|
||||
${ethtool} -s ${interface} ${methodParameter {inherit method password;}}
|
||||
'';
|
||||
|
||||
concatStrings = foldr (x: y: x + y) "";
|
||||
lines = concatStrings (map (l: line l) interfaces);
|
||||
|
||||
in
|
||||
{
|
||||
|
||||
###### interface
|
||||
|
||||
options = {
|
||||
|
||||
services.wakeonlan.interfaces = mkOption {
|
||||
default = [ ];
|
||||
type = types.listOf (types.submodule { options = {
|
||||
interface = mkOption {
|
||||
type = types.str;
|
||||
description = "Interface to enable for Wake-On-Lan.";
|
||||
};
|
||||
method = mkOption {
|
||||
type = types.enum [ "magicpacket" "password"];
|
||||
description = "Wake-On-Lan method for this interface.";
|
||||
};
|
||||
password = mkOption {
|
||||
type = types.strMatching "[a-fA-F0-9]{2}:([a-fA-F0-9]{2}:){4}[a-fA-F0-9]{2}";
|
||||
description = "The password has the shape of six bytes in hexadecimal separated by a colon each.";
|
||||
};
|
||||
};});
|
||||
example = [
|
||||
{
|
||||
interface = "eth0";
|
||||
method = "password";
|
||||
password = "00:11:22:33:44:55";
|
||||
}
|
||||
];
|
||||
description = ''
|
||||
Interfaces where to enable Wake-On-LAN, and how. Two methods available:
|
||||
"magicpacket" and "password". The password has the shape of six bytes
|
||||
in hexadecimal separated by a colon each. For more information,
|
||||
check the ethtool manual.
|
||||
'';
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
|
||||
###### implementation
|
||||
|
||||
config.powerManagement.powerUpCommands = lines;
|
||||
|
||||
}
|
|
@ -272,7 +272,7 @@ in
|
|||
(mkIf cfg.ldap-proxy.enable {
|
||||
|
||||
systemd.services.privacyidea-ldap-proxy = let
|
||||
ldap-proxy-env = pkgs.python2.withPackages (ps: [ ps.privacyidea-ldap-proxy ]);
|
||||
ldap-proxy-env = pkgs.python3.withPackages (ps: [ ps.privacyidea-ldap-proxy ]);
|
||||
in {
|
||||
description = "privacyIDEA LDAP proxy";
|
||||
wantedBy = [ "multi-user.target" ];
|
||||
|
|
|
@ -152,6 +152,8 @@ in
|
|||
install -d -m '${cfg.downloadDirPermissions}' -o '${cfg.user}' -g '${cfg.group}' '${cfg.settings.download-dir}'
|
||||
'' + optionalString cfg.settings.incomplete-dir-enabled ''
|
||||
install -d -m '${cfg.downloadDirPermissions}' -o '${cfg.user}' -g '${cfg.group}' '${cfg.settings.incomplete-dir}'
|
||||
'' + optionalString cfg.settings.watch-dir-enabled ''
|
||||
install -d -m '${cfg.downloadDirPermissions}' -o '${cfg.user}' -g '${cfg.group}' '${cfg.settings.watch-dir}'
|
||||
'';
|
||||
|
||||
assertions = [
|
||||
|
|
|
@ -539,6 +539,69 @@ in
|
|||
Specify the OAuth token URL.
|
||||
'';
|
||||
};
|
||||
baseURL = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Specify the OAuth base URL.
|
||||
'';
|
||||
};
|
||||
userProfileURL = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Specify the OAuth userprofile URL.
|
||||
'';
|
||||
};
|
||||
userProfileUsernameAttr = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Specify the name of the attribute for the username from the claim.
|
||||
'';
|
||||
};
|
||||
userProfileDisplayNameAttr = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Specify the name of the attribute for the display name from the claim.
|
||||
'';
|
||||
};
|
||||
userProfileEmailAttr = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Specify the name of the attribute for the email from the claim.
|
||||
'';
|
||||
};
|
||||
scope = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Specify the OAuth scope.
|
||||
'';
|
||||
};
|
||||
providerName = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Specify the name to be displayed for this strategy.
|
||||
'';
|
||||
};
|
||||
rolesClaim = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Specify the role claim name.
|
||||
'';
|
||||
};
|
||||
accessRole = mkOption {
|
||||
type = with types; nullOr str;
|
||||
default = null;
|
||||
description = ''
|
||||
Specify role which should be included in the ID token roles claim to grant access
|
||||
'';
|
||||
};
|
||||
clientID = mkOption {
|
||||
type = types.str;
|
||||
description = ''
|
||||
|
|
|
@ -144,6 +144,8 @@ in
|
|||
'';
|
||||
};
|
||||
|
||||
caddy.enable = mkEnableOption "Whether to enable caddy reverse proxy to expose jitsi-meet";
|
||||
|
||||
prosody.enable = mkOption {
|
||||
type = bool;
|
||||
default = true;
|
||||
|
@ -322,6 +324,42 @@ in
|
|||
};
|
||||
};
|
||||
|
||||
services.caddy = mkIf cfg.caddy.enable {
|
||||
enable = mkDefault true;
|
||||
virtualHosts.${cfg.hostName} = {
|
||||
extraConfig =
|
||||
let
|
||||
templatedJitsiMeet = pkgs.runCommand "templated-jitsi-meet" {} ''
|
||||
cp -R ${pkgs.jitsi-meet}/* .
|
||||
for file in *.html **/*.html ; do
|
||||
${pkgs.sd}/bin/sd '<!--#include virtual="(.*)" -->' '{{ include "$1" }}' $file
|
||||
done
|
||||
rm config.js
|
||||
rm interface_config.js
|
||||
cp -R . $out
|
||||
cp ${overrideJs "${pkgs.jitsi-meet}/config.js" "config" (recursiveUpdate defaultCfg cfg.config) cfg.extraConfig} $out/config.js
|
||||
cp ${overrideJs "${pkgs.jitsi-meet}/interface_config.js" "interfaceConfig" cfg.interfaceConfig ""} $out/interface_config.js
|
||||
cp ./libs/external_api.min.js $out/external_api.js
|
||||
'';
|
||||
in ''
|
||||
handle /http-bind {
|
||||
header Host ${cfg.hostName}
|
||||
reverse_proxy 127.0.0.1:5280
|
||||
}
|
||||
handle /xmpp-websocket {
|
||||
reverse_proxy 127.0.0.1:5280
|
||||
}
|
||||
handle {
|
||||
templates
|
||||
root * ${templatedJitsiMeet}
|
||||
try_files {path} {path}
|
||||
try_files {path} /index.html
|
||||
file_server
|
||||
}
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
services.jitsi-videobridge = mkIf cfg.videobridge.enable {
|
||||
enable = true;
|
||||
xmppConfigs."localhost" = {
|
||||
|
|
|
@ -6,6 +6,8 @@ let
|
|||
cfg = config.services.nextcloud;
|
||||
fpm = config.services.phpfpm.pools.nextcloud;
|
||||
|
||||
inherit (cfg) datadir;
|
||||
|
||||
phpPackage = cfg.phpPackage.buildEnv {
|
||||
extensions = { enabled, all }:
|
||||
(with all;
|
||||
|
@ -40,7 +42,7 @@ let
|
|||
if [[ "$USER" != nextcloud ]]; then
|
||||
sudo='exec /run/wrappers/bin/sudo -u nextcloud --preserve-env=NEXTCLOUD_CONFIG_DIR --preserve-env=OC_PASS'
|
||||
fi
|
||||
export NEXTCLOUD_CONFIG_DIR="${cfg.home}/config"
|
||||
export NEXTCLOUD_CONFIG_DIR="${datadir}/config"
|
||||
$sudo \
|
||||
${phpPackage}/bin/php \
|
||||
occ "$@"
|
||||
|
@ -51,6 +53,12 @@ let
|
|||
in {
|
||||
|
||||
imports = [
|
||||
(mkRemovedOptionModule [ "services" "nextcloud" "config" "adminpass" ] ''
|
||||
Please use `services.nextcloud.config.adminpassFile' instead!
|
||||
'')
|
||||
(mkRemovedOptionModule [ "services" "nextcloud" "config" "dbpass" ] ''
|
||||
Please use `services.nextcloud.config.dbpassFile' instead!
|
||||
'')
|
||||
(mkRemovedOptionModule [ "services" "nextcloud" "nginx" "enable" ] ''
|
||||
The nextcloud module supports `nginx` as reverse-proxy by default and doesn't
|
||||
support other reverse-proxies officially.
|
||||
|
@ -79,6 +87,59 @@ in {
|
|||
default = "/var/lib/nextcloud";
|
||||
description = "Storage path of nextcloud.";
|
||||
};
|
||||
datadir = mkOption {
|
||||
type = types.str;
|
||||
defaultText = "config.services.nextcloud.home";
|
||||
description = ''
|
||||
Data storage path of nextcloud. Will be <xref linkend="opt-services.nextcloud.home" /> by default.
|
||||
This folder will be populated with a config.php and a data folder, which contain the state of the instance (excluding the database).
|
||||
'';
|
||||
example = "/mnt/nextcloud-file";
|
||||
};
|
||||
extraApps = mkOption {
|
||||
type = types.attrsOf types.package;
|
||||
default = { };
|
||||
description = ''
|
||||
Extra apps to install. Should be an attrSet of appid to packages generated by fetchNextcloudApp.
|
||||
The appid must be identical to the "id" value in the app's appinfo/info.xml.
|
||||
Using this will disable the appstore to prevent Nextcloud from updating these apps (see <xref linkend="opt-services.nextcloud.appstoreEnable" />).
|
||||
'';
|
||||
example = literalExpression ''
|
||||
{
|
||||
maps = pkgs.fetchNextcloudApp {
|
||||
name = "maps";
|
||||
sha256 = "007y80idqg6b6zk6kjxg4vgw0z8fsxs9lajnv49vv1zjy6jx2i1i";
|
||||
url = "https://github.com/nextcloud/maps/releases/download/v0.1.9/maps-0.1.9.tar.gz";
|
||||
version = "0.1.9";
|
||||
};
|
||||
phonetrack = pkgs.fetchNextcloudApp {
|
||||
name = "phonetrack";
|
||||
sha256 = "0qf366vbahyl27p9mshfma1as4nvql6w75zy2zk5xwwbp343vsbc";
|
||||
url = "https://gitlab.com/eneiluj/phonetrack-oc/-/wikis/uploads/931aaaf8dca24bf31a7e169a83c17235/phonetrack-0.6.9.tar.gz";
|
||||
version = "0.6.9";
|
||||
};
|
||||
}
|
||||
'';
|
||||
};
|
||||
extraAppsEnable = mkOption {
|
||||
type = types.bool;
|
||||
default = true;
|
||||
description = ''
|
||||
Automatically enable the apps in <xref linkend="opt-services.nextcloud.extraApps" /> every time nextcloud starts.
|
||||
If set to false, apps need to be enabled in the Nextcloud user interface or with nextcloud-occ app:enable.
|
||||
'';
|
||||
};
|
||||
appstoreEnable = mkOption {
|
||||
type = types.nullOr types.bool;
|
||||
default = null;
|
||||
example = true;
|
||||
description = ''
|
||||
Allow the installation of apps and app updates from the store.
|
||||
Enabled by default unless there are packages in <xref linkend="opt-services.nextcloud.extraApps" />.
|
||||
Set to true to force enable the store even if <xref linkend="opt-services.nextcloud.extraApps" /> is used.
|
||||
Set to false to disable the installation of apps from the global appstore. App management is always enabled regardless of this setting.
|
||||
'';
|
||||
};
|
||||
logLevel = mkOption {
|
||||
type = types.ints.between 0 4;
|
||||
default = 2;
|
||||
|
@ -206,14 +267,6 @@ in {
|
|||
default = "nextcloud";
|
||||
description = "Database user.";
|
||||
};
|
||||
dbpass = mkOption {
|
||||
type = types.nullOr types.str;
|
||||
default = null;
|
||||
description = ''
|
||||
Database password. Use <literal>dbpassFile</literal> to avoid this
|
||||
being world-readable in the <literal>/nix/store</literal>.
|
||||
'';
|
||||
};
|
||||
dbpassFile = mkOption {
|
||||
type = types.nullOr types.str;
|
||||
default = null;
|
||||
|
@ -246,17 +299,8 @@ in {
|
|||
default = "root";
|
||||
description = "Admin username.";
|
||||
};
|
||||
adminpass = mkOption {
|
||||
type = types.nullOr types.str;
|
||||
default = null;
|
||||
description = ''
|
||||
Admin password. Use <literal>adminpassFile</literal> to avoid this
|
||||
being world-readable in the <literal>/nix/store</literal>.
|
||||
'';
|
||||
};
|
||||
adminpassFile = mkOption {
|
||||
type = types.nullOr types.str;
|
||||
default = null;
|
||||
type = types.str;
|
||||
description = ''
|
||||
The full path to a file that contains the admin's password. Must be
|
||||
readable by user <literal>nextcloud</literal>.
|
||||
|
@ -321,8 +365,8 @@ in {
|
|||
This mounts a bucket on an Amazon S3 object storage or compatible
|
||||
implementation into the virtual filesystem.
|
||||
|
||||
See nextcloud's documentation on "Object Storage as Primary
|
||||
Storage" for more details.
|
||||
Further details about this feature can be found in the
|
||||
<link xlink:href="https://docs.nextcloud.com/server/22/admin_manual/configuration_files/primary_storage.html">upstream documentation</link>.
|
||||
'';
|
||||
bucket = mkOption {
|
||||
type = types.str;
|
||||
|
@ -389,9 +433,9 @@ in {
|
|||
Required for some non-Amazon S3 implementations.
|
||||
|
||||
Ordinarily, requests will be made with
|
||||
http://bucket.hostname.domain/, but with path style
|
||||
<literal>http://bucket.hostname.domain/</literal>, but with path style
|
||||
enabled requests are made with
|
||||
http://hostname.domain/bucket instead.
|
||||
<literal>http://hostname.domain/bucket</literal> instead.
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
@ -399,11 +443,11 @@ in {
|
|||
};
|
||||
|
||||
enableImagemagick = mkEnableOption ''
|
||||
Whether to load the ImageMagick module into PHP.
|
||||
the ImageMagick module for PHP.
|
||||
This is used by the theming app and for generating previews of certain images (e.g. SVG and HEIF).
|
||||
You may want to disable it for increased security. In that case, previews will still be available
|
||||
for some images (e.g. JPEG and PNG).
|
||||
See https://github.com/nextcloud/server/issues/13099
|
||||
See <link xlink:href="https://github.com/nextcloud/server/issues/13099" />.
|
||||
'' // {
|
||||
default = true;
|
||||
};
|
||||
|
@ -464,13 +508,6 @@ in {
|
|||
|
||||
config = mkIf cfg.enable (mkMerge [
|
||||
{ assertions = let acfg = cfg.config; in [
|
||||
{ assertion = !(acfg.dbpass != null && acfg.dbpassFile != null);
|
||||
message = "Please specify no more than one of dbpass or dbpassFile";
|
||||
}
|
||||
{ assertion = ((acfg.adminpass != null || acfg.adminpassFile != null)
|
||||
&& !(acfg.adminpass != null && acfg.adminpassFile != null));
|
||||
message = "Please specify exactly one of adminpass or adminpassFile";
|
||||
}
|
||||
{ assertion = versionOlder cfg.package.version "21" -> cfg.config.defaultPhoneRegion == null;
|
||||
message = "The `defaultPhoneRegion'-setting is only supported for Nextcloud >=21!";
|
||||
}
|
||||
|
@ -542,6 +579,8 @@ in {
|
|||
else nextcloud22
|
||||
);
|
||||
|
||||
services.nextcloud.datadir = mkOptionDefault config.services.nextcloud.home;
|
||||
|
||||
services.nextcloud.phpPackage =
|
||||
if versionOlder cfg.package.version "21" then pkgs.php74
|
||||
else pkgs.php80;
|
||||
|
@ -581,6 +620,14 @@ in {
|
|||
]
|
||||
'';
|
||||
|
||||
showAppStoreSetting = cfg.appstoreEnable != null || cfg.extraApps != {};
|
||||
renderedAppStoreSetting =
|
||||
let
|
||||
x = cfg.appstoreEnable;
|
||||
in
|
||||
if x == null then "false"
|
||||
else boolToString x;
|
||||
|
||||
overrideConfig = pkgs.writeText "nextcloud-config.php" ''
|
||||
<?php
|
||||
${optionalString requiresReadSecretFunction ''
|
||||
|
@ -599,10 +646,12 @@ in {
|
|||
''}
|
||||
$CONFIG = [
|
||||
'apps_paths' => [
|
||||
${optionalString (cfg.extraApps != { }) "[ 'path' => '${cfg.home}/nix-apps', 'url' => '/nix-apps', 'writable' => false ],"}
|
||||
[ 'path' => '${cfg.home}/apps', 'url' => '/apps', 'writable' => false ],
|
||||
[ 'path' => '${cfg.home}/store-apps', 'url' => '/store-apps', 'writable' => true ],
|
||||
],
|
||||
'datadirectory' => '${cfg.home}/data',
|
||||
${optionalString (showAppStoreSetting) "'appstoreenabled' => ${renderedAppStoreSetting},"}
|
||||
'datadirectory' => '${datadir}/data',
|
||||
'skeletondirectory' => '${cfg.skeletonDirectory}',
|
||||
${optionalString cfg.caching.apcu "'memcache.local' => '\\OC\\Memcache\\APCu',"}
|
||||
'log_type' => 'syslog',
|
||||
|
@ -613,7 +662,6 @@ in {
|
|||
${optionalString (c.dbport != null) "'dbport' => '${toString c.dbport}',"}
|
||||
${optionalString (c.dbuser != null) "'dbuser' => '${c.dbuser}',"}
|
||||
${optionalString (c.dbtableprefix != null) "'dbtableprefix' => '${toString c.dbtableprefix}',"}
|
||||
${optionalString (c.dbpass != null) "'dbpassword' => '${c.dbpass}',"}
|
||||
${optionalString (c.dbpassFile != null) "'dbpassword' => nix_read_secret('${c.dbpassFile}'),"}
|
||||
'dbtype' => '${c.dbtype}',
|
||||
'trusted_domains' => ${writePhpArrary ([ cfg.hostName ] ++ c.extraTrustedDomains)},
|
||||
|
@ -623,14 +671,17 @@ in {
|
|||
];
|
||||
'';
|
||||
occInstallCmd = let
|
||||
dbpass = if c.dbpassFile != null
|
||||
then ''"$(<"${toString c.dbpassFile}")"''
|
||||
else if c.dbpass != null
|
||||
then ''"${toString c.dbpass}"''
|
||||
else ''""'';
|
||||
adminpass = if c.adminpassFile != null
|
||||
then ''"$(<"${toString c.adminpassFile}")"''
|
||||
else ''"${toString c.adminpass}"'';
|
||||
mkExport = { arg, value }: "export ${arg}=${value}";
|
||||
dbpass = {
|
||||
arg = "DBPASS";
|
||||
value = if c.dbpassFile != null
|
||||
then ''"$(<"${toString c.dbpassFile}")"''
|
||||
else ''""'';
|
||||
};
|
||||
adminpass = {
|
||||
arg = "ADMINPASS";
|
||||
value = ''"$(<"${toString c.adminpassFile}")"'';
|
||||
};
|
||||
installFlags = concatStringsSep " \\\n "
|
||||
(mapAttrsToList (k: v: "${k} ${toString v}") {
|
||||
"--database" = ''"${c.dbtype}"'';
|
||||
|
@ -641,12 +692,14 @@ in {
|
|||
${if c.dbhost != null then "--database-host" else null} = ''"${c.dbhost}"'';
|
||||
${if c.dbport != null then "--database-port" else null} = ''"${toString c.dbport}"'';
|
||||
${if c.dbuser != null then "--database-user" else null} = ''"${c.dbuser}"'';
|
||||
"--database-pass" = dbpass;
|
||||
"--database-pass" = "\$${dbpass.arg}";
|
||||
"--admin-user" = ''"${c.adminuser}"'';
|
||||
"--admin-pass" = adminpass;
|
||||
"--data-dir" = ''"${cfg.home}/data"'';
|
||||
"--admin-pass" = "\$${adminpass.arg}";
|
||||
"--data-dir" = ''"${datadir}/data"'';
|
||||
});
|
||||
in ''
|
||||
${mkExport dbpass}
|
||||
${mkExport adminpass}
|
||||
${occ}/bin/nextcloud-occ maintenance:install \
|
||||
${installFlags}
|
||||
'';
|
||||
|
@ -673,22 +726,26 @@ in {
|
|||
exit 1
|
||||
fi
|
||||
''}
|
||||
${optionalString (c.adminpassFile != null) ''
|
||||
if [ ! -r "${c.adminpassFile}" ]; then
|
||||
echo "adminpassFile ${c.adminpassFile} is not readable by nextcloud:nextcloud! Aborting..."
|
||||
exit 1
|
||||
fi
|
||||
if [ -z "$(<${c.adminpassFile})" ]; then
|
||||
echo "adminpassFile ${c.adminpassFile} is empty!"
|
||||
exit 1
|
||||
fi
|
||||
''}
|
||||
if [ ! -r "${c.adminpassFile}" ]; then
|
||||
echo "adminpassFile ${c.adminpassFile} is not readable by nextcloud:nextcloud! Aborting..."
|
||||
exit 1
|
||||
fi
|
||||
if [ -z "$(<${c.adminpassFile})" ]; then
|
||||
echo "adminpassFile ${c.adminpassFile} is empty!"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
ln -sf ${cfg.package}/apps ${cfg.home}/
|
||||
|
||||
# Install extra apps
|
||||
ln -sfT \
|
||||
${pkgs.linkFarm "nix-apps"
|
||||
(mapAttrsToList (name: path: { inherit name path; }) cfg.extraApps)} \
|
||||
${cfg.home}/nix-apps
|
||||
|
||||
# create nextcloud directories.
|
||||
# if the directories exist already with wrong permissions, we fix that
|
||||
for dir in ${cfg.home}/config ${cfg.home}/data ${cfg.home}/store-apps; do
|
||||
for dir in ${datadir}/config ${datadir}/data ${cfg.home}/store-apps ${cfg.home}/nix-apps; do
|
||||
if [ ! -e $dir ]; then
|
||||
install -o nextcloud -g nextcloud -d $dir
|
||||
elif [ $(stat -c "%G" $dir) != "nextcloud" ]; then
|
||||
|
@ -696,23 +753,29 @@ in {
|
|||
fi
|
||||
done
|
||||
|
||||
ln -sf ${overrideConfig} ${cfg.home}/config/override.config.php
|
||||
ln -sf ${overrideConfig} ${datadir}/config/override.config.php
|
||||
|
||||
# Do not install if already installed
|
||||
if [[ ! -e ${cfg.home}/config/config.php ]]; then
|
||||
if [[ ! -e ${datadir}/config/config.php ]]; then
|
||||
${occInstallCmd}
|
||||
fi
|
||||
|
||||
${occ}/bin/nextcloud-occ upgrade
|
||||
|
||||
${occ}/bin/nextcloud-occ config:system:delete trusted_domains
|
||||
|
||||
${optionalString (cfg.extraAppsEnable && cfg.extraApps != { }) ''
|
||||
# Try to enable apps (don't fail when one of them cannot be enabled, e.g. due to an incompatible version)
|
||||
${occ}/bin/nextcloud-occ app:enable ${concatStringsSep " " (attrNames cfg.extraApps)}
|
||||
''}
|
||||
|
||||
${occSetTrustedDomainsCmd}
|
||||
'';
|
||||
serviceConfig.Type = "oneshot";
|
||||
serviceConfig.User = "nextcloud";
|
||||
};
|
||||
nextcloud-cron = {
|
||||
environment.NEXTCLOUD_CONFIG_DIR = "${cfg.home}/config";
|
||||
environment.NEXTCLOUD_CONFIG_DIR = "${datadir}/config";
|
||||
serviceConfig.Type = "oneshot";
|
||||
serviceConfig.User = "nextcloud";
|
||||
serviceConfig.ExecStart = "${phpPackage}/bin/php -f ${cfg.package}/cron.php";
|
||||
|
@ -731,7 +794,7 @@ in {
|
|||
group = "nextcloud";
|
||||
phpPackage = phpPackage;
|
||||
phpEnv = {
|
||||
NEXTCLOUD_CONFIG_DIR = "${cfg.home}/config";
|
||||
NEXTCLOUD_CONFIG_DIR = "${datadir}/config";
|
||||
PATH = "/run/wrappers/bin:/nix/var/nix/profiles/default/bin:/run/current-system/sw/bin:/usr/bin:/bin";
|
||||
};
|
||||
settings = mapAttrs (name: mkDefault) {
|
||||
|
@ -781,6 +844,10 @@ in {
|
|||
priority = 201;
|
||||
extraConfig = "root ${cfg.home};";
|
||||
};
|
||||
"~ ^/nix-apps" = {
|
||||
priority = 201;
|
||||
extraConfig = "root ${cfg.home};";
|
||||
};
|
||||
"^~ /.well-known" = {
|
||||
priority = 210;
|
||||
extraConfig = ''
|
||||
|
|
|
@ -237,6 +237,12 @@
|
|||
Some apps may require extra PHP extensions to be installed.
|
||||
This can be configured with the <xref linkend="opt-services.nextcloud.phpExtraExtensions" /> setting.
|
||||
</para>
|
||||
|
||||
<para>
|
||||
Alternatively, extra apps can also be declared with the <xref linkend="opt-services.nextcloud.extraApps" /> setting.
|
||||
When using this setting, apps can no longer be managed statefully because this can lead to Nextcloud updating apps
|
||||
that are managed by Nix. If you want automatic updates it is recommended that you use web interface to install apps.
|
||||
</para>
|
||||
</section>
|
||||
|
||||
<section xml:id="module-services-nextcloud-maintainer-info">
|
||||
|
|
|
@ -372,7 +372,13 @@ in
|
|||
services.xserver.libinput.enable = mkDefault true; # for controlling touchpad settings via gnome control center
|
||||
|
||||
xdg.portal.enable = true;
|
||||
xdg.portal.extraPortals = [ pkgs.xdg-desktop-portal-gtk ];
|
||||
xdg.portal.extraPortals = [
|
||||
pkgs.xdg-desktop-portal-gnome
|
||||
(pkgs.xdg-desktop-portal-gtk.override {
|
||||
# Do not build portals that we already have.
|
||||
buildPortalsInGnome = false;
|
||||
})
|
||||
];
|
||||
|
||||
# Harmonize Qt5 application style and also make them use the portal for file chooser dialog.
|
||||
qt5 = {
|
||||
|
|
|
@ -219,6 +219,7 @@ in
|
|||
] config.environment.pantheon.excludePackages);
|
||||
|
||||
programs.evince.enable = mkDefault true;
|
||||
programs.evince.package = pkgs.pantheon.evince;
|
||||
programs.file-roller.enable = mkDefault true;
|
||||
|
||||
# Settings from elementary-default-settings
|
||||
|
|
|
@ -13,7 +13,6 @@ let
|
|||
|
||||
pulseaudio = config.hardware.pulseaudio;
|
||||
pactl = "${getBin pulseaudio.package}/bin/pactl";
|
||||
startplasma-x11 = "${getBin plasma5.plasma-workspace}/bin/startplasma-x11";
|
||||
sed = "${getBin pkgs.gnused}/bin/sed";
|
||||
|
||||
gtkrc2 = writeText "gtkrc-2.0" ''
|
||||
|
@ -136,9 +135,6 @@ let
|
|||
fi
|
||||
fi
|
||||
|
||||
''
|
||||
+ ''
|
||||
exec "${startplasma-x11}"
|
||||
'';
|
||||
|
||||
in
|
||||
|
@ -172,6 +168,12 @@ in
|
|||
disabled by default.
|
||||
'';
|
||||
};
|
||||
|
||||
useQtScaling = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = "Enable HiDPI scaling in Qt.";
|
||||
};
|
||||
};
|
||||
|
||||
};
|
||||
|
@ -183,6 +185,7 @@ in
|
|||
|
||||
config = mkMerge [
|
||||
(mkIf cfg.enable {
|
||||
|
||||
# Seed our configuration into nixos-generate-config
|
||||
system.nixos-generate-config.desktopConfiguration = [''
|
||||
# Enable the Plasma 5 Desktop Environment.
|
||||
|
@ -190,11 +193,7 @@ in
|
|||
services.xserver.desktopManager.plasma5.enable = true;
|
||||
''];
|
||||
|
||||
services.xserver.desktopManager.session = singleton {
|
||||
name = "plasma5";
|
||||
bgSupport = true;
|
||||
start = startplasma;
|
||||
};
|
||||
services.xserver.displayManager.sessionPackages = [ pkgs.libsForQt5.plasma5.plasma-workspace ];
|
||||
|
||||
security.wrappers = {
|
||||
kcheckpass =
|
||||
|
@ -347,6 +346,8 @@ in
|
|||
|
||||
environment.etc."X11/xkb".source = xcfg.xkbDir;
|
||||
|
||||
environment.sessionVariables.PLASMA_USE_QT_SCALING = mkIf cfg.useQtScaling "1";
|
||||
|
||||
# Enable GTK applications to load SVG icons
|
||||
services.xserver.gdk-pixbuf.modulePackages = [ pkgs.librsvg ];
|
||||
|
||||
|
@ -389,6 +390,7 @@ in
|
|||
|
||||
# Update the start menu for each user that is currently logged in
|
||||
system.userActivationScripts.plasmaSetup = activationScript;
|
||||
services.xserver.displayManager.setupCommands = startplasma;
|
||||
|
||||
nixpkgs.config.firefox.enablePlasmaBrowserIntegration = true;
|
||||
})
|
||||
|
|
|
@ -26,7 +26,6 @@ let
|
|||
load-module module-udev-detect
|
||||
load-module module-native-protocol-unix
|
||||
load-module module-default-device-restore
|
||||
load-module module-rescue-streams
|
||||
load-module module-always-sink
|
||||
load-module module-intended-roles
|
||||
load-module module-suspend-on-idle
|
||||
|
|
|
@ -11,7 +11,6 @@ use Cwd 'abs_path';
|
|||
|
||||
my $out = "@out@";
|
||||
|
||||
# FIXME: maybe we should use /proc/1/exe to get the current systemd.
|
||||
my $curSystemd = abs_path("/run/current-system/sw/bin");
|
||||
|
||||
# To be robust against interruption, record what units need to be started etc.
|
||||
|
@ -19,13 +18,16 @@ my $startListFile = "/run/nixos/start-list";
|
|||
my $restartListFile = "/run/nixos/restart-list";
|
||||
my $reloadListFile = "/run/nixos/reload-list";
|
||||
|
||||
# Parse restart/reload requests by the activation script
|
||||
# Parse restart/reload requests by the activation script.
|
||||
# Activation scripts may write newline-separated units to this
|
||||
# file and switch-to-configuration will handle them. While
|
||||
# `stopIfChanged = true` is ignored, switch-to-configuration will
|
||||
# handle `restartIfChanged = false` and `reloadIfChanged = true`.
|
||||
# This also works for socket-activated units.
|
||||
my $restartByActivationFile = "/run/nixos/activation-restart-list";
|
||||
my $reloadByActivationFile = "/run/nixos/activation-reload-list";
|
||||
my $dryRestartByActivationFile = "/run/nixos/dry-activation-restart-list";
|
||||
my $dryReloadByActivationFile = "/run/nixos/dry-activation-reload-list";
|
||||
|
||||
make_path("/run/nixos", { mode => 0755 });
|
||||
make_path("/run/nixos", { mode => oct(755) });
|
||||
|
||||
my $action = shift @ARGV;
|
||||
|
||||
|
@ -147,6 +149,92 @@ sub fingerprintUnit {
|
|||
return abs_path($s) . (-f "${s}.d/overrides.conf" ? " " . abs_path "${s}.d/overrides.conf" : "");
|
||||
}
|
||||
|
||||
sub handleModifiedUnit {
|
||||
my ($unit, $baseName, $newUnitFile, $activePrev, $unitsToStop, $unitsToStart, $unitsToReload, $unitsToRestart, $unitsToSkip) = @_;
|
||||
|
||||
if ($unit eq "sysinit.target" || $unit eq "basic.target" || $unit eq "multi-user.target" || $unit eq "graphical.target" || $unit =~ /\.slice$/ || $unit =~ /\.path$/) {
|
||||
# Do nothing. These cannot be restarted directly.
|
||||
# Slices and Paths don't have to be restarted since
|
||||
# properties (resource limits and inotify watches)
|
||||
# seem to get applied on daemon-reload.
|
||||
} elsif ($unit =~ /\.mount$/) {
|
||||
# Reload the changed mount unit to force a remount.
|
||||
$unitsToReload->{$unit} = 1;
|
||||
recordUnit($reloadListFile, $unit);
|
||||
} else {
|
||||
my $unitInfo = parseUnit($newUnitFile);
|
||||
if (boolIsTrue($unitInfo->{'X-ReloadIfChanged'} // "no")) {
|
||||
$unitsToReload->{$unit} = 1;
|
||||
recordUnit($reloadListFile, $unit);
|
||||
}
|
||||
elsif (!boolIsTrue($unitInfo->{'X-RestartIfChanged'} // "yes") || boolIsTrue($unitInfo->{'RefuseManualStop'} // "no") || boolIsTrue($unitInfo->{'X-OnlyManualStart'} // "no")) {
|
||||
$unitsToSkip->{$unit} = 1;
|
||||
} else {
|
||||
# If this unit is socket-activated, then stop it instead
|
||||
# of restarting it to make sure the new version of it is
|
||||
# socket-activated.
|
||||
my $socketActivated = 0;
|
||||
if ($unit =~ /\.service$/) {
|
||||
my @sockets = split / /, ($unitInfo->{Sockets} // "");
|
||||
if (scalar @sockets == 0) {
|
||||
@sockets = ("$baseName.socket");
|
||||
}
|
||||
foreach my $socket (@sockets) {
|
||||
if (-e "$out/etc/systemd/system/$socket") {
|
||||
$socketActivated = 1;
|
||||
$unitsToStop->{$unit} = 1;
|
||||
# If the socket was not running previously,
|
||||
# start it now.
|
||||
if (not defined $activePrev->{$socket}) {
|
||||
$unitsToStart->{$socket} = 1;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
# Don't do the rest of this for socket-activated units
|
||||
# because we handled these above where we stop the unit.
|
||||
# Since only services can be socket-activated, the
|
||||
# following condition always evaluates to `true` for
|
||||
# non-service units.
|
||||
if ($socketActivated) {
|
||||
return;
|
||||
}
|
||||
|
||||
# If we are restarting a socket, also stop the corresponding
|
||||
# service. This is required because restarting a socket
|
||||
# when the service is already activated fails.
|
||||
if ($unit =~ /\.socket$/) {
|
||||
my $service = $unitInfo->{Service} // "";
|
||||
if ($service eq "") {
|
||||
$service = "$baseName.service";
|
||||
}
|
||||
if (defined $activePrev->{$service}) {
|
||||
$unitsToStop->{$service} = 1;
|
||||
}
|
||||
$unitsToRestart->{$unit} = 1;
|
||||
recordUnit($restartListFile, $unit);
|
||||
} else {
|
||||
# Always restart non-services instead of stopping and starting them
|
||||
# because it doesn't make sense to stop them with a config from
|
||||
# the old evaluation.
|
||||
if (!boolIsTrue($unitInfo->{'X-StopIfChanged'} // "yes") || $unit !~ /\.service$/) {
|
||||
# This unit should be restarted instead of
|
||||
# stopped and started.
|
||||
$unitsToRestart->{$unit} = 1;
|
||||
recordUnit($restartListFile, $unit);
|
||||
} else {
|
||||
# We write to a file to ensure that the
|
||||
# service gets restarted if we're interrupted.
|
||||
$unitsToStart->{$unit} = 1;
|
||||
recordUnit($startListFile, $unit);
|
||||
$unitsToStop->{$unit} = 1;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
# Figure out what units need to be stopped, started, restarted or reloaded.
|
||||
my (%unitsToStop, %unitsToSkip, %unitsToStart, %unitsToRestart, %unitsToReload);
|
||||
|
||||
|
@ -219,65 +307,7 @@ while (my ($unit, $state) = each %{$activePrev}) {
|
|||
}
|
||||
|
||||
elsif (fingerprintUnit($prevUnitFile) ne fingerprintUnit($newUnitFile)) {
|
||||
if ($unit eq "sysinit.target" || $unit eq "basic.target" || $unit eq "multi-user.target" || $unit eq "graphical.target") {
|
||||
# Do nothing. These cannot be restarted directly.
|
||||
} elsif ($unit =~ /\.mount$/) {
|
||||
# Reload the changed mount unit to force a remount.
|
||||
$unitsToReload{$unit} = 1;
|
||||
recordUnit($reloadListFile, $unit);
|
||||
} elsif ($unit =~ /\.socket$/ || $unit =~ /\.path$/ || $unit =~ /\.slice$/) {
|
||||
# FIXME: do something?
|
||||
} else {
|
||||
my $unitInfo = parseUnit($newUnitFile);
|
||||
if (boolIsTrue($unitInfo->{'X-ReloadIfChanged'} // "no")) {
|
||||
$unitsToReload{$unit} = 1;
|
||||
recordUnit($reloadListFile, $unit);
|
||||
}
|
||||
elsif (!boolIsTrue($unitInfo->{'X-RestartIfChanged'} // "yes") || boolIsTrue($unitInfo->{'RefuseManualStop'} // "no") || boolIsTrue($unitInfo->{'X-OnlyManualStart'} // "no")) {
|
||||
$unitsToSkip{$unit} = 1;
|
||||
} else {
|
||||
if (!boolIsTrue($unitInfo->{'X-StopIfChanged'} // "yes")) {
|
||||
# This unit should be restarted instead of
|
||||
# stopped and started.
|
||||
$unitsToRestart{$unit} = 1;
|
||||
recordUnit($restartListFile, $unit);
|
||||
} else {
|
||||
# If this unit is socket-activated, then stop the
|
||||
# socket unit(s) as well, and restart the
|
||||
# socket(s) instead of the service.
|
||||
my $socketActivated = 0;
|
||||
if ($unit =~ /\.service$/) {
|
||||
my @sockets = split / /, ($unitInfo->{Sockets} // "");
|
||||
if (scalar @sockets == 0) {
|
||||
@sockets = ("$baseName.socket");
|
||||
}
|
||||
foreach my $socket (@sockets) {
|
||||
if (defined $activePrev->{$socket}) {
|
||||
$unitsToStop{$socket} = 1;
|
||||
# Only restart sockets that actually
|
||||
# exist in new configuration:
|
||||
if (-e "$out/etc/systemd/system/$socket") {
|
||||
$unitsToStart{$socket} = 1;
|
||||
recordUnit($startListFile, $socket);
|
||||
$socketActivated = 1;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
# If the unit is not socket-activated, record
|
||||
# that this unit needs to be started below.
|
||||
# We write this to a file to ensure that the
|
||||
# service gets restarted if we're interrupted.
|
||||
if (!$socketActivated) {
|
||||
$unitsToStart{$unit} = 1;
|
||||
recordUnit($startListFile, $unit);
|
||||
}
|
||||
|
||||
$unitsToStop{$unit} = 1;
|
||||
}
|
||||
}
|
||||
}
|
||||
handleModifiedUnit($unit, $baseName, $newUnitFile, $activePrev, \%unitsToStop, \%unitsToStart, \%unitsToReload, \%unitsToRestart, \%unitsToSkip);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -362,8 +392,6 @@ sub filterUnits {
|
|||
}
|
||||
|
||||
my @unitsToStopFiltered = filterUnits(\%unitsToStop);
|
||||
my @unitsToStartFiltered = filterUnits(\%unitsToStart);
|
||||
|
||||
|
||||
# Show dry-run actions.
|
||||
if ($action eq "dry-activate") {
|
||||
|
@ -375,21 +403,44 @@ if ($action eq "dry-activate") {
|
|||
print STDERR "would activate the configuration...\n";
|
||||
system("$out/dry-activate", "$out");
|
||||
|
||||
$unitsToRestart{$_} = 1 foreach
|
||||
split('\n', read_file($dryRestartByActivationFile, err_mode => 'quiet') // "");
|
||||
# Handle the activation script requesting the restart or reload of a unit.
|
||||
my %unitsToAlsoStop;
|
||||
my %unitsToAlsoSkip;
|
||||
foreach (split('\n', read_file($dryRestartByActivationFile, err_mode => 'quiet') // "")) {
|
||||
my $unit = $_;
|
||||
my $baseUnit = $unit;
|
||||
my $newUnitFile = "$out/etc/systemd/system/$baseUnit";
|
||||
|
||||
$unitsToReload{$_} = 1 foreach
|
||||
split('\n', read_file($dryReloadByActivationFile, err_mode => 'quiet') // "");
|
||||
# Detect template instances.
|
||||
if (!-e $newUnitFile && $unit =~ /^(.*)@[^\.]*\.(.*)$/) {
|
||||
$baseUnit = "$1\@.$2";
|
||||
$newUnitFile = "$out/etc/systemd/system/$baseUnit";
|
||||
}
|
||||
|
||||
my $baseName = $baseUnit;
|
||||
$baseName =~ s/\.[a-z]*$//;
|
||||
|
||||
handleModifiedUnit($unit, $baseName, $newUnitFile, $activePrev, \%unitsToAlsoStop, \%unitsToStart, \%unitsToReload, \%unitsToRestart, \%unitsToAlsoSkip);
|
||||
}
|
||||
unlink($dryRestartByActivationFile);
|
||||
|
||||
my @unitsToAlsoStopFiltered = filterUnits(\%unitsToAlsoStop);
|
||||
if (scalar(keys %unitsToAlsoStop) > 0) {
|
||||
print STDERR "would stop the following units as well: ", join(", ", @unitsToAlsoStopFiltered), "\n"
|
||||
if scalar @unitsToAlsoStopFiltered;
|
||||
}
|
||||
|
||||
print STDERR "would NOT restart the following changed units as well: ", join(", ", sort(keys %unitsToAlsoSkip)), "\n"
|
||||
if scalar(keys %unitsToAlsoSkip) > 0;
|
||||
|
||||
print STDERR "would restart systemd\n" if $restartSystemd;
|
||||
print STDERR "would restart the following units: ", join(", ", sort(keys %unitsToRestart)), "\n"
|
||||
if scalar(keys %unitsToRestart) > 0;
|
||||
print STDERR "would start the following units: ", join(", ", @unitsToStartFiltered), "\n"
|
||||
if scalar @unitsToStartFiltered;
|
||||
print STDERR "would reload the following units: ", join(", ", sort(keys %unitsToReload)), "\n"
|
||||
if scalar(keys %unitsToReload) > 0;
|
||||
unlink($dryRestartByActivationFile);
|
||||
unlink($dryReloadByActivationFile);
|
||||
print STDERR "would restart the following units: ", join(", ", sort(keys %unitsToRestart)), "\n"
|
||||
if scalar(keys %unitsToRestart) > 0;
|
||||
my @unitsToStartFiltered = filterUnits(\%unitsToStart);
|
||||
print STDERR "would start the following units: ", join(", ", @unitsToStartFiltered), "\n"
|
||||
if scalar @unitsToStartFiltered;
|
||||
exit 0;
|
||||
}
|
||||
|
||||
|
@ -400,7 +451,7 @@ if (scalar (keys %unitsToStop) > 0) {
|
|||
print STDERR "stopping the following units: ", join(", ", @unitsToStopFiltered), "\n"
|
||||
if scalar @unitsToStopFiltered;
|
||||
# Use current version of systemctl binary before daemon is reexeced.
|
||||
system("$curSystemd/systemctl", "stop", "--", sort(keys %unitsToStop)); # FIXME: ignore errors?
|
||||
system("$curSystemd/systemctl", "stop", "--", sort(keys %unitsToStop));
|
||||
}
|
||||
|
||||
print STDERR "NOT restarting the following changed units: ", join(", ", sort(keys %unitsToSkip)), "\n"
|
||||
|
@ -414,12 +465,38 @@ system("$out/activate", "$out") == 0 or $res = 2;
|
|||
|
||||
# Handle the activation script requesting the restart or reload of a unit.
|
||||
# We can only restart and reload (not stop/start) because the units to be
|
||||
# stopped are already stopped before the activation script is run.
|
||||
$unitsToRestart{$_} = 1 foreach
|
||||
split('\n', read_file($restartByActivationFile, err_mode => 'quiet') // "");
|
||||
# stopped are already stopped before the activation script is run. We do however
|
||||
# make an exception for services that are socket-activated and that have to be stopped
|
||||
# instead of being restarted.
|
||||
my %unitsToAlsoStop;
|
||||
my %unitsToAlsoSkip;
|
||||
foreach (split('\n', read_file($restartByActivationFile, err_mode => 'quiet') // "")) {
|
||||
my $unit = $_;
|
||||
my $baseUnit = $unit;
|
||||
my $newUnitFile = "$out/etc/systemd/system/$baseUnit";
|
||||
|
||||
$unitsToReload{$_} = 1 foreach
|
||||
split('\n', read_file($reloadByActivationFile, err_mode => 'quiet') // "");
|
||||
# Detect template instances.
|
||||
if (!-e $newUnitFile && $unit =~ /^(.*)@[^\.]*\.(.*)$/) {
|
||||
$baseUnit = "$1\@.$2";
|
||||
$newUnitFile = "$out/etc/systemd/system/$baseUnit";
|
||||
}
|
||||
|
||||
my $baseName = $baseUnit;
|
||||
$baseName =~ s/\.[a-z]*$//;
|
||||
|
||||
handleModifiedUnit($unit, $baseName, $newUnitFile, $activePrev, \%unitsToAlsoStop, \%unitsToStart, \%unitsToReload, \%unitsToRestart, %unitsToAlsoSkip);
|
||||
}
|
||||
unlink($restartByActivationFile);
|
||||
|
||||
my @unitsToAlsoStopFiltered = filterUnits(\%unitsToAlsoStop);
|
||||
if (scalar(keys %unitsToAlsoStop) > 0) {
|
||||
print STDERR "stopping the following units as well: ", join(", ", @unitsToAlsoStopFiltered), "\n"
|
||||
if scalar @unitsToAlsoStopFiltered;
|
||||
system("$curSystemd/systemctl", "stop", "--", sort(keys %unitsToAlsoStop));
|
||||
}
|
||||
|
||||
print STDERR "NOT restarting the following changed units as well: ", join(", ", sort(keys %unitsToAlsoSkip)), "\n"
|
||||
if scalar(keys %unitsToAlsoSkip) > 0;
|
||||
|
||||
# Restart systemd if necessary. Note that this is done using the
|
||||
# current version of systemd, just in case the new one has trouble
|
||||
|
@ -460,14 +537,40 @@ if (scalar(keys %unitsToReload) > 0) {
|
|||
print STDERR "reloading the following units: ", join(", ", sort(keys %unitsToReload)), "\n";
|
||||
system("@systemd@/bin/systemctl", "reload", "--", sort(keys %unitsToReload)) == 0 or $res = 4;
|
||||
unlink($reloadListFile);
|
||||
unlink($reloadByActivationFile);
|
||||
}
|
||||
|
||||
# Restart changed services (those that have to be restarted rather
|
||||
# than stopped and started).
|
||||
if (scalar(keys %unitsToRestart) > 0) {
|
||||
print STDERR "restarting the following units: ", join(", ", sort(keys %unitsToRestart)), "\n";
|
||||
system("@systemd@/bin/systemctl", "restart", "--", sort(keys %unitsToRestart)) == 0 or $res = 4;
|
||||
|
||||
# We split the units to be restarted into sockets and non-sockets.
|
||||
# This is because restarting sockets may fail which is not bad by
|
||||
# itself but which will prevent changes on the sockets. We usually
|
||||
# restart the socket and stop the service before that. Restarting
|
||||
# the socket will fail however when the service was re-activated
|
||||
# in the meantime. There is no proper way to prevent that from happening.
|
||||
my @unitsWithErrorHandling = grep { $_ !~ /\.socket$/ } sort(keys %unitsToRestart);
|
||||
my @unitsWithoutErrorHandling = grep { $_ =~ /\.socket$/ } sort(keys %unitsToRestart);
|
||||
|
||||
if (scalar(@unitsWithErrorHandling) > 0) {
|
||||
system("@systemd@/bin/systemctl", "restart", "--", @unitsWithErrorHandling) == 0 or $res = 4;
|
||||
}
|
||||
if (scalar(@unitsWithoutErrorHandling) > 0) {
|
||||
# Don't print warnings from systemctl
|
||||
no warnings 'once';
|
||||
open(OLDERR, ">&", \*STDERR);
|
||||
close(STDERR);
|
||||
|
||||
my $ret = system("@systemd@/bin/systemctl", "restart", "--", @unitsWithoutErrorHandling);
|
||||
|
||||
# Print stderr again
|
||||
open(STDERR, ">&OLDERR");
|
||||
|
||||
if ($ret ne 0) {
|
||||
print STDERR "warning: some sockets failed to restart. Please check your journal (journalctl -eb) and act accordingly.\n";
|
||||
}
|
||||
}
|
||||
unlink($restartListFile);
|
||||
unlink($restartByActivationFile);
|
||||
}
|
||||
|
@ -478,6 +581,7 @@ if (scalar(keys %unitsToRestart) > 0) {
|
|||
# that are symlinks to other units. We shouldn't start both at the
|
||||
# same time because we'll get a "Failed to add path to set" error from
|
||||
# systemd.
|
||||
my @unitsToStartFiltered = filterUnits(\%unitsToStart);
|
||||
print STDERR "starting the following units: ", join(", ", @unitsToStartFiltered), "\n"
|
||||
if scalar @unitsToStartFiltered;
|
||||
system("@systemd@/bin/systemctl", "start", "--", sort(keys %unitsToStart)) == 0 or $res = 4;
|
||||
|
@ -485,7 +589,7 @@ unlink($startListFile);
|
|||
|
||||
|
||||
# Print failed and new units.
|
||||
my (@failed, @new, @restarting);
|
||||
my (@failed, @new);
|
||||
my $activeNew = getActiveUnits;
|
||||
while (my ($unit, $state) = each %{$activeNew}) {
|
||||
if ($state->{state} eq "failed") {
|
||||
|
@ -501,7 +605,9 @@ while (my ($unit, $state) = each %{$activeNew}) {
|
|||
push @failed, $unit;
|
||||
}
|
||||
}
|
||||
elsif ($state->{state} ne "failed" && !defined $activePrev->{$unit}) {
|
||||
# Ignore scopes since they are not managed by this script but rather
|
||||
# created and managed by third-party services via the systemd dbus API.
|
||||
elsif ($state->{state} ne "failed" && !defined $activePrev->{$unit} && $unit !~ /\.scope$/) {
|
||||
push @new, $unit;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -84,6 +84,13 @@ let
|
|||
export localeArchive="${config.i18n.glibcLocales}/lib/locale/locale-archive"
|
||||
substituteAll ${./switch-to-configuration.pl} $out/bin/switch-to-configuration
|
||||
chmod +x $out/bin/switch-to-configuration
|
||||
${optionalString (pkgs.stdenv.hostPlatform == pkgs.stdenv.buildPlatform) ''
|
||||
if ! output=$($perl/bin/perl -c $out/bin/switch-to-configuration 2>&1); then
|
||||
echo "switch-to-configuration syntax is not valid:"
|
||||
echo "$output"
|
||||
exit 1
|
||||
fi
|
||||
''}
|
||||
|
||||
echo -n "${toString config.system.extraDependencies}" > $out/extra-dependencies
|
||||
|
||||
|
|
|
@ -83,7 +83,10 @@ in
|
|||
};
|
||||
|
||||
boot.kernelParams = mkOption {
|
||||
type = types.listOf types.str;
|
||||
type = types.listOf (types.strMatching ''([^"[:space:]]|"[^"]*")+'' // {
|
||||
name = "kernelParam";
|
||||
description = "string, with spaces inside double quotes";
|
||||
});
|
||||
default = [ ];
|
||||
description = "Parameters added to the kernel command line.";
|
||||
};
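For context, a minimal sketch of what the stricter `kernelParam` type accepts (the parameter values here are illustrative, not taken from this change): whitespace is only allowed inside double quotes, so a multi-word parameter must stay quoted within a single list element. The Hyper-V guest module later in this change splits its `video=…` and `elevator=noop` parameters into separate elements for the same reason.

```nix
{
  boot.kernelParams = [
    # ordinary parameters without spaces pass the check as before
    "console=ttyS0,115200"
    "quiet"
    # a value containing spaces must keep them inside double quotes,
    # otherwise the strMatching pattern rejects it at evaluation time
    ''dyndbg="file drivers/usb/* +p"''
  ];
}
```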
|
||||
|
|
|
@ -208,10 +208,15 @@ def main() -> None:
|
|||
if os.path.exists("@efiSysMountPoint@/loader/loader.conf"):
|
||||
os.unlink("@efiSysMountPoint@/loader/loader.conf")
|
||||
|
||||
if "@canTouchEfiVariables@" == "1":
|
||||
subprocess.check_call(["@systemd@/bin/bootctl", "--path=@efiSysMountPoint@", "install"])
|
||||
else:
|
||||
subprocess.check_call(["@systemd@/bin/bootctl", "--path=@efiSysMountPoint@", "--no-variables", "install"])
|
||||
flags = []
|
||||
|
||||
if "@canTouchEfiVariables@" != "1":
|
||||
flags.append("--no-variables")
|
||||
|
||||
if "@graceful@" == "1":
|
||||
flags.append("--graceful")
|
||||
|
||||
subprocess.check_call(["@systemd@/bin/bootctl", "--path=@efiSysMountPoint@"] + flags + ["install"])
|
||||
else:
|
||||
# Update bootloader to latest if needed
|
||||
systemd_version = subprocess.check_output(["@systemd@/bin/bootctl", "--version"], universal_newlines=True).split()[1]
|
||||
|
|
|
@ -24,7 +24,7 @@ let
|
|||
|
||||
configurationLimit = if cfg.configurationLimit == null then 0 else cfg.configurationLimit;
|
||||
|
||||
inherit (cfg) consoleMode;
|
||||
inherit (cfg) consoleMode graceful;
|
||||
|
||||
inherit (efi) efiSysMountPoint canTouchEfiVariables;
|
||||
|
||||
|
@ -126,6 +126,22 @@ in {
|
|||
'';
|
||||
};
|
||||
};
|
||||
|
||||
graceful = mkOption {
|
||||
default = false;
|
||||
|
||||
type = types.bool;
|
||||
|
||||
description = ''
|
||||
Invoke <literal>bootctl install</literal> with the <literal>--graceful</literal> option,
|
||||
which ignores errors when EFI variables cannot be written or when the EFI System Partition
|
||||
cannot be found. Currently only applies to random seed operations.
|
||||
|
||||
Only enable this option if <literal>systemd-boot</literal> otherwise fails to install, as the
|
||||
scope or implication of the <literal>--graceful</literal> option may change in the future.
|
||||
'';
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
config = mkIf cfg.enable {
|
||||
|
|
|
@ -250,6 +250,16 @@ let
|
|||
(assertRange "ERSPANIndex" 1 1048575)
|
||||
];
|
||||
|
||||
sectionFooOverUDP = checkUnitConfig "FooOverUDP" [
|
||||
(assertOnlyFields [
|
||||
"Port"
|
||||
"Encapsulation"
|
||||
"Protocol"
|
||||
])
|
||||
(assertPort "Port")
|
||||
(assertValueOneOf "Encapsulation" ["FooOverUDP" "GenericUDPEncapsulation"])
|
||||
];
|
||||
|
||||
sectionPeer = checkUnitConfig "Peer" [
|
||||
(assertOnlyFields [
|
||||
"Name"
|
||||
|
@ -919,6 +929,18 @@ let
|
|||
'';
|
||||
};
|
||||
|
||||
fooOverUDPConfig = mkOption {
|
||||
default = { };
|
||||
example = { Port = 9001; };
|
||||
type = types.addCheck (types.attrsOf unitOption) check.netdev.sectionFooOverUDP;
|
||||
description = ''
|
||||
Each attribute in this set specifies an option in the
|
||||
<literal>[FooOverUDP]</literal> section of the unit. See
|
||||
<citerefentry><refentrytitle>systemd.netdev</refentrytitle>
|
||||
<manvolnum>5</manvolnum></citerefentry> for details.
|
||||
'';
|
||||
};
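A minimal sketch of how the new `[FooOverUDP]` netdev section can be used, mirroring what the networkd backend further down generates for `networking.fooOverUDP` (the netdev name and port are illustrative):

```nix
{
  systemd.network.netdevs."40-fou1" = {
    netdevConfig = {
      Name = "fou1";
      Kind = "fou";
    };
    fooOverUDPConfig = {
      Port = 9001;
      # GenericUDPEncapsulation selects GUE; with Encapsulation = "FooOverUDP"
      # a Protocol = <number> entry would be supplied as well
      Encapsulation = "GenericUDPEncapsulation";
    };
  };
}
```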
|
||||
|
||||
peerConfig = mkOption {
|
||||
default = {};
|
||||
example = { Name = "veth2"; };
|
||||
|
@ -1449,6 +1471,10 @@ let
|
|||
[Tunnel]
|
||||
${attrsToSection def.tunnelConfig}
|
||||
''
|
||||
+ optionalString (def.fooOverUDPConfig != { }) ''
|
||||
[FooOverUDP]
|
||||
${attrsToSection def.fooOverUDPConfig}
|
||||
''
|
||||
+ optionalString (def.peerConfig != { }) ''
|
||||
[Peer]
|
||||
${attrsToSection def.peerConfig}
|
||||
|
|
|
@ -137,6 +137,14 @@ let
|
|||
copy_bin_and_libs ${pkgs.e2fsprogs}/sbin/resize2fs
|
||||
''}
|
||||
|
||||
# Copy multipath.
|
||||
${optionalString config.services.multipath.enable ''
|
||||
copy_bin_and_libs ${config.services.multipath.package}/bin/multipath
|
||||
copy_bin_and_libs ${config.services.multipath.package}/bin/multipathd
|
||||
# Copy lib/multipath manually.
|
||||
cp -rpv ${config.services.multipath.package}/lib/multipath $out/lib
|
||||
''}
|
||||
|
||||
# Copy secrets if needed.
|
||||
#
|
||||
# TODO: move out to a separate script; see #85000.
|
||||
|
@ -199,6 +207,10 @@ let
|
|||
$out/bin/dmsetup --version 2>&1 | tee -a log | grep -q "version:"
|
||||
LVM_SYSTEM_DIR=$out $out/bin/lvm version 2>&1 | tee -a log | grep -q "LVM"
|
||||
$out/bin/mdadm --version
|
||||
${optionalString config.services.multipath.enable ''
|
||||
($out/bin/multipath || true) 2>&1 | grep -q 'need to be root'
|
||||
($out/bin/multipathd || true) 2>&1 | grep -q 'need to be root'
|
||||
''}
|
||||
|
||||
${config.boot.initrd.extraUtilsCommandsTest}
|
||||
fi
|
||||
|
@ -338,7 +350,26 @@ let
|
|||
{ object = pkgs.kmod-debian-aliases;
|
||||
symlink = "/etc/modprobe.d/debian.conf";
|
||||
}
|
||||
];
|
||||
] ++ lib.optionals config.services.multipath.enable [
|
||||
{ object = pkgs.runCommand "multipath.conf" {
|
||||
src = config.environment.etc."multipath.conf".text;
|
||||
preferLocalBuild = true;
|
||||
} ''
|
||||
target=$out
|
||||
printf "$src" > $out
|
||||
substituteInPlace $out \
|
||||
--replace ${config.services.multipath.package}/lib ${extraUtils}/lib
|
||||
'';
|
||||
symlink = "/etc/multipath.conf";
|
||||
}
|
||||
] ++ (lib.mapAttrsToList
|
||||
(symlink: options:
|
||||
{
|
||||
inherit symlink;
|
||||
object = options.source;
|
||||
}
|
||||
)
|
||||
config.boot.initrd.extraFiles);
|
||||
};
|
||||
|
||||
# Script to add secret files to the initrd at bootloader update time
|
||||
|
@ -419,6 +450,22 @@ in
|
|||
'';
|
||||
};
|
||||
|
||||
boot.initrd.extraFiles = mkOption {
|
||||
default = { };
|
||||
type = types.attrsOf
|
||||
(types.submodule {
|
||||
options = {
|
||||
source = mkOption {
|
||||
type = types.package;
|
||||
description = "The object to make available inside the initrd.";
|
||||
};
|
||||
};
|
||||
});
|
||||
description = ''
|
||||
Extra files to link and copy in to the initrd.
|
||||
'';
|
||||
};
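A minimal usage sketch of the new `boot.initrd.extraFiles` option, matching how the iSCSI multipath test further down ships a multipath wwids file into the initrd (the wwid value is the one used in that test):

```nix
{ pkgs, ... }:
{
  boot.initrd.extraFiles."etc/multipath/wwids".source =
    pkgs.writeText "wwids" "/3600140592b17c3f6b404168b082ceeb7/";
}
```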
|
||||
|
||||
boot.initrd.prepend = mkOption {
|
||||
default = [ ];
|
||||
type = types.listOf types.str;
|
||||
|
|
|
@ -61,6 +61,8 @@ let
|
|||
MACAddress = i.macAddress;
|
||||
} // optionalAttrs (i.mtu != null) {
|
||||
MTUBytes = toString i.mtu;
|
||||
} // optionalAttrs (i.wakeOnLan.enable == true) {
|
||||
WakeOnLan = "magic";
|
||||
};
|
||||
};
|
||||
in listToAttrs (map createNetworkLink interfaces);
|
||||
|
@ -464,6 +466,39 @@ let
|
|||
'';
|
||||
});
|
||||
|
||||
createFouEncapsulation = n: v: nameValuePair "${n}-fou-encap"
|
||||
(let
|
||||
# if we have a device to bind to we can wait for its addresses to be
|
||||
# configured, otherwise external sequencing is required.
|
||||
deps = optionals (v.local != null && v.local.dev != null)
|
||||
(deviceDependency v.local.dev ++ [ "network-addresses-${v.local.dev}.service" ]);
|
||||
fouSpec = "port ${toString v.port} ${
|
||||
if v.protocol != null then "ipproto ${toString v.protocol}" else "gue"
|
||||
} ${
|
||||
optionalString (v.local != null) "local ${escapeShellArg v.local.address} ${
|
||||
optionalString (v.local.dev != null) "dev ${escapeShellArg v.local.dev}"
|
||||
}"
|
||||
}";
|
||||
in
|
||||
{ description = "FOU endpoint ${n}";
|
||||
wantedBy = [ "network-setup.service" (subsystemDevice n) ];
|
||||
bindsTo = deps;
|
||||
partOf = [ "network-setup.service" ];
|
||||
after = [ "network-pre.target" ] ++ deps;
|
||||
before = [ "network-setup.service" ];
|
||||
serviceConfig.Type = "oneshot";
|
||||
serviceConfig.RemainAfterExit = true;
|
||||
path = [ pkgs.iproute2 ];
|
||||
script = ''
|
||||
# always remove previous incarnation since show can't filter
|
||||
ip fou del ${fouSpec} >/dev/null 2>&1 || true
|
||||
ip fou add ${fouSpec}
|
||||
'';
|
||||
postStop = ''
|
||||
ip fou del ${fouSpec} || true
|
||||
'';
|
||||
});
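When no `local.dev` is given, the generated `<name>-fou-encap` unit carries no device dependency, so any required ordering has to be added by the user. A sketch, as done for the `fou3` endpoint in the networking test further down (scripted networking only):

```nix
{
  # make the fou3 endpoint wait until eth1 has its addresses configured
  systemd.services.fou3-fou-encap.after = [ "network-addresses-eth1.service" ];
}
```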
|
||||
|
||||
createSitDevice = n: v: nameValuePair "${n}-netdev"
|
||||
(let
|
||||
deps = deviceDependency v.dev;
|
||||
|
@ -484,7 +519,12 @@ let
|
|||
${optionalString (v.remote != null) "remote \"${v.remote}\""} \
|
||||
${optionalString (v.local != null) "local \"${v.local}\""} \
|
||||
${optionalString (v.ttl != null) "ttl ${toString v.ttl}"} \
|
||||
${optionalString (v.dev != null) "dev \"${v.dev}\""}
|
||||
${optionalString (v.dev != null) "dev \"${v.dev}\""} \
|
||||
${optionalString (v.encapsulation != null)
|
||||
"encap ${v.encapsulation.type} encap-dport ${toString v.encapsulation.port} ${
|
||||
optionalString (v.encapsulation.sourcePort != null)
|
||||
"encap-sport ${toString v.encapsulation.sourcePort}"
|
||||
}"}
|
||||
ip link set "${n}" up
|
||||
'';
|
||||
postStop = ''
|
||||
|
@ -528,6 +568,7 @@ let
|
|||
// mapAttrs' createVswitchDevice cfg.vswitches
|
||||
// mapAttrs' createBondDevice cfg.bonds
|
||||
// mapAttrs' createMacvlanDevice cfg.macvlans
|
||||
// mapAttrs' createFouEncapsulation cfg.fooOverUDP
|
||||
// mapAttrs' createSitDevice cfg.sits
|
||||
// mapAttrs' createVlanDevice cfg.vlans
|
||||
// {
|
||||
|
|
|
@ -47,6 +47,9 @@ in
|
|||
} ] ++ flip mapAttrsToList cfg.bridges (n: { rstp, ... }: {
|
||||
assertion = !rstp;
|
||||
message = "networking.bridges.${n}.rstp is not supported by networkd.";
|
||||
}) ++ flip mapAttrsToList cfg.fooOverUDP (n: { local, ... }: {
|
||||
assertion = local == null;
|
||||
message = "networking.fooOverUDP.${n}.local is not supported by networkd.";
|
||||
});
|
||||
|
||||
networking.dhcpcd.enable = mkDefault false;
|
||||
|
@ -194,6 +197,23 @@ in
|
|||
macvlan = [ name ];
|
||||
} ]);
|
||||
})))
|
||||
(mkMerge (flip mapAttrsToList cfg.fooOverUDP (name: fou: {
|
||||
netdevs."40-${name}" = {
|
||||
netdevConfig = {
|
||||
Name = name;
|
||||
Kind = "fou";
|
||||
};
|
||||
# unfortunately networkd cannot encode dependencies of netdevs on addresses/routes,
|
||||
# so we cannot specify Local=, Peer=, PeerPort=. this looks like a missing feature
|
||||
# in networkd.
|
||||
fooOverUDPConfig = {
|
||||
Port = fou.port;
|
||||
Encapsulation = if fou.protocol != null then "FooOverUDP" else "GenericUDPEncapsulation";
|
||||
} // (optionalAttrs (fou.protocol != null) {
|
||||
Protocol = fou.protocol;
|
||||
});
|
||||
};
|
||||
})))
|
||||
(mkMerge (flip mapAttrsToList cfg.sits (name: sit: {
|
||||
netdevs."40-${name}" = {
|
||||
netdevConfig = {
|
||||
|
@ -207,7 +227,17 @@ in
|
|||
Local = sit.local;
|
||||
}) // (optionalAttrs (sit.ttl != null) {
|
||||
TTL = sit.ttl;
|
||||
});
|
||||
}) // (optionalAttrs (sit.encapsulation != null) (
|
||||
{
|
||||
FooOverUDP = true;
|
||||
Encapsulation =
|
||||
if sit.encapsulation.type == "fou"
|
||||
then "FooOverUDP"
|
||||
else "GenericUDPEncapsulation";
|
||||
FOUDestinationPort = sit.encapsulation.port;
|
||||
} // (optionalAttrs (sit.encapsulation.sourcePort != null) {
|
||||
FOUSourcePort = sit.encapsulation.sourcePort;
|
||||
})));
|
||||
};
|
||||
networks = mkIf (sit.dev != null) {
|
||||
"40-${sit.dev}" = (mkMerge [ (genericNetwork (mkOverride 999)) {
|
||||
|
|
|
@ -10,6 +10,8 @@ let
|
|||
hasVirtuals = any (i: i.virtual) interfaces;
|
||||
hasSits = cfg.sits != { };
|
||||
hasBonds = cfg.bonds != { };
|
||||
hasFous = cfg.fooOverUDP != { }
|
||||
|| filterAttrs (_: s: s.encapsulation != null) cfg.sits != { };
|
||||
|
||||
slaves = concatMap (i: i.interfaces) (attrValues cfg.bonds)
|
||||
++ concatMap (i: i.interfaces) (attrValues cfg.bridges)
|
||||
|
@ -284,6 +286,13 @@ let
|
|||
'';
|
||||
};
|
||||
|
||||
wakeOnLan = {
|
||||
enable = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = "Wether to enable wol on this interface.";
|
||||
};
|
||||
};
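A minimal sketch of the new per-interface option (the interface name is illustrative); for networkd-managed links this ends up as `WakeOnLan = "magic"` in the generated link configuration, as shown in the hunk above:

```nix
{
  networking.interfaces.eth0.wakeOnLan.enable = true;
}
```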
|
||||
};
|
||||
|
||||
config = {
|
||||
|
@ -816,6 +825,71 @@ in
|
|||
});
|
||||
};
|
||||
|
||||
networking.fooOverUDP = mkOption {
|
||||
default = { };
|
||||
example =
|
||||
{
|
||||
primary = { port = 9001; local = { address = "192.0.2.1"; dev = "eth0"; }; };
|
||||
backup = { port = 9002; };
|
||||
};
|
||||
description = ''
|
||||
This option allows you to configure Foo Over UDP and Generic UDP Encapsulation
|
||||
endpoints. See <citerefentry><refentrytitle>ip-fou</refentrytitle>
|
||||
<manvolnum>8</manvolnum></citerefentry> for details.
|
||||
'';
|
||||
type = with types; attrsOf (submodule {
|
||||
options = {
|
||||
port = mkOption {
|
||||
type = port;
|
||||
description = ''
|
||||
Local port of the encapsulation UDP socket.
|
||||
'';
|
||||
};
|
||||
|
||||
protocol = mkOption {
|
||||
type = nullOr (ints.between 1 255);
|
||||
default = null;
|
||||
description = ''
|
||||
Protocol number of the encapsulated packets. Specifying <literal>null</literal>
|
||||
(the default) creates a GUE endpoint, specifying a protocol number will create
|
||||
a FOU endpoint.
|
||||
'';
|
||||
};
|
||||
|
||||
local = mkOption {
|
||||
type = nullOr (submodule {
|
||||
options = {
|
||||
address = mkOption {
|
||||
type = types.str;
|
||||
description = ''
|
||||
Local address to bind to. The address must be available when the FOU
|
||||
endpoint is created, using the scripted network setup this can be achieved
|
||||
either by setting <literal>dev</literal> or adding dependency information to
|
||||
<literal>systemd.services.<name>-fou-encap</literal>; it isn't supported
|
||||
when using networkd.
|
||||
'';
|
||||
};
|
||||
|
||||
dev = mkOption {
|
||||
type = nullOr str;
|
||||
default = null;
|
||||
example = "eth0";
|
||||
description = ''
|
||||
Network device to bind to.
|
||||
'';
|
||||
};
|
||||
};
|
||||
});
|
||||
default = null;
|
||||
example = { address = "203.0.113.22"; };
|
||||
description = ''
|
||||
Local address (and optionally device) to bind to using the given port.
|
||||
'';
|
||||
};
|
||||
};
|
||||
});
|
||||
};
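To make the GUE/FOU distinction concrete, a sketch with two hypothetical endpoints: leaving `protocol` unset (the default, null) creates a GUE socket, while setting a protocol number (41, IPv6-in-IPv4, as used in the sit test below) creates a plain FOU socket:

```nix
{
  networking.fooOverUDP = {
    gue1 = { port = 9001; };                  # GUE endpoint
    fou41 = { port = 9002; protocol = 41; };  # FOU endpoint carrying protocol 41
  };
}
```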
|
||||
|
||||
networking.sits = mkOption {
|
||||
default = { };
|
||||
example = literalExpression ''
|
||||
|
@ -875,6 +949,44 @@ in
|
|||
'';
|
||||
};
|
||||
|
||||
encapsulation = with types; mkOption {
|
||||
type = nullOr (submodule {
|
||||
options = {
|
||||
type = mkOption {
|
||||
type = enum [ "fou" "gue" ];
|
||||
description = ''
|
||||
Selects encapsulation type. See
|
||||
<citerefentry><refentrytitle>ip-link</refentrytitle>
|
||||
<manvolnum>8</manvolnum></citerefentry> for details.
|
||||
'';
|
||||
};
|
||||
|
||||
port = mkOption {
|
||||
type = port;
|
||||
example = 9001;
|
||||
description = ''
|
||||
Destination port for encapsulated packets.
|
||||
'';
|
||||
};
|
||||
|
||||
sourcePort = mkOption {
|
||||
type = nullOr types.port;
|
||||
default = null;
|
||||
example = 9002;
|
||||
description = ''
|
||||
Source port for encapsulated packets. Will be chosen automatically by
|
||||
the kernel if unset.
|
||||
'';
|
||||
};
|
||||
};
|
||||
});
|
||||
default = null;
|
||||
example = { type = "fou"; port = 9001; };
|
||||
description = ''
|
||||
Configures encapsulation in UDP packets.
|
||||
'';
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
});
|
||||
|
@ -1109,7 +1221,8 @@ in
|
|||
boot.kernelModules = [ ]
|
||||
++ optional hasVirtuals "tun"
|
||||
++ optional hasSits "sit"
|
||||
++ optional hasBonds "bonding";
|
||||
++ optional hasBonds "bonding"
|
||||
++ optional hasFous "fou";
|
||||
|
||||
boot.extraModprobeConfig =
|
||||
# This setting is intentional as it prevents default bond devices
|
||||
|
|
|
@ -34,7 +34,7 @@ in {
|
|||
initrd.availableKernelModules = [ "hyperv_keyboard" ];
|
||||
|
||||
kernelParams = [
|
||||
"video=hyperv_fb:${cfg.videoMode} elevator=noop"
|
||||
"video=hyperv_fb:${cfg.videoMode}" "elevator=noop"
|
||||
];
|
||||
};
|
||||
|
||||
|
|
|
@ -13,23 +13,140 @@ let
|
|||
'';
|
||||
ovmfFilePrefix = if pkgs.stdenv.isAarch64 then "AAVMF" else "OVMF";
|
||||
qemuConfigFile = pkgs.writeText "qemu.conf" ''
|
||||
${optionalString cfg.qemuOvmf ''
|
||||
${optionalString cfg.qemu.ovmf.enable ''
|
||||
nvram = [ "/run/libvirt/nix-ovmf/${ovmfFilePrefix}_CODE.fd:/run/libvirt/nix-ovmf/${ovmfFilePrefix}_VARS.fd" ]
|
||||
''}
|
||||
${optionalString (!cfg.qemuRunAsRoot) ''
|
||||
${optionalString (!cfg.qemu.runAsRoot) ''
|
||||
user = "qemu-libvirtd"
|
||||
group = "qemu-libvirtd"
|
||||
''}
|
||||
${cfg.qemuVerbatimConfig}
|
||||
${cfg.qemu.verbatimConfig}
|
||||
'';
|
||||
dirName = "libvirt";
|
||||
subDirs = list: [ dirName ] ++ map (e: "${dirName}/${e}") list;
|
||||
|
||||
in {
|
||||
ovmfModule = types.submodule {
|
||||
options = {
|
||||
enable = mkOption {
|
||||
type = types.bool;
|
||||
default = true;
|
||||
description = ''
|
||||
Allows libvirtd to take advantage of OVMF when creating new
|
||||
QEMU VMs with UEFI boot.
|
||||
'';
|
||||
};
|
||||
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.OVMF;
|
||||
defaultText = literalExpression "pkgs.OVMF";
|
||||
example = literalExpression "pkgs.OVMFFull";
|
||||
description = ''
|
||||
OVMF package to use.
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
swtpmModule = types.submodule {
|
||||
options = {
|
||||
enable = mkOption {
|
||||
type = types.bool;
|
||||
default = false;
|
||||
description = ''
|
||||
Allows libvirtd to use swtpm to create an emulated TPM.
|
||||
'';
|
||||
};
|
||||
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.swtpm;
|
||||
defaultText = literalExpression "pkgs.swtpm";
|
||||
description = ''
|
||||
swtpm package to use.
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
qemuModule = types.submodule {
|
||||
options = {
|
||||
package = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.qemu;
|
||||
defaultText = literalExpression "pkgs.qemu";
|
||||
description = ''
|
||||
Qemu package to use with libvirt.
|
||||
`pkgs.qemu` can emulate alien architectures (e.g. aarch64 on x86)
|
||||
`pkgs.qemu_kvm` saves disk space allowing to emulate only host architectures.
|
||||
'';
|
||||
};
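For example, a host that never emulates foreign architectures can point the new option at the smaller KVM-only build (a sketch; `pkgs.qemu` remains the default):

```nix
{ pkgs, ... }:
{
  virtualisation.libvirtd.qemu.package = pkgs.qemu_kvm;
}
```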
|
||||
|
||||
runAsRoot = mkOption {
|
||||
type = types.bool;
|
||||
default = true;
|
||||
description = ''
|
||||
If true, libvirtd runs qemu as root.
|
||||
If false, libvirtd runs qemu as unprivileged user qemu-libvirtd.
|
||||
Changing this option to false may cause file permission issues
|
||||
for existing guests. To fix these, manually change ownership
|
||||
of affected files in /var/lib/libvirt/qemu to qemu-libvirtd.
|
||||
'';
|
||||
};
|
||||
|
||||
verbatimConfig = mkOption {
|
||||
type = types.lines;
|
||||
default = ''
|
||||
namespaces = []
|
||||
'';
|
||||
description = ''
|
||||
Contents written to the qemu configuration file, qemu.conf.
|
||||
Make sure to include a proper namespace configuration when
|
||||
supplying custom configuration.
|
||||
'';
|
||||
};
|
||||
|
||||
ovmf = mkOption {
|
||||
type = ovmfModule;
|
||||
default = { };
|
||||
description = ''
|
||||
QEMU's OVMF options.
|
||||
'';
|
||||
};
|
||||
|
||||
swtpm = mkOption {
|
||||
type = swtpmModule;
|
||||
default = { };
|
||||
description = ''
|
||||
QEMU's swtpm options.
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
in
|
||||
{
|
||||
|
||||
imports = [
|
||||
(mkRemovedOptionModule [ "virtualisation" "libvirtd" "enableKVM" ]
|
||||
"Set the option `virtualisation.libvirtd.qemuPackage' instead.")
|
||||
"Set the option `virtualisation.libvirtd.qemu.package' instead.")
|
||||
(mkRenamedOptionModule
|
||||
[ "virtualisation" "libvirtd" "qemuPackage" ]
|
||||
[ "virtualisation" "libvirtd" "qemu" "package" ])
|
||||
(mkRenamedOptionModule
|
||||
[ "virtualisation" "libvirtd" "qemuRunAsRoot" ]
|
||||
[ "virtualisation" "libvirtd" "qemu" "runAsRoot" ])
|
||||
(mkRenamedOptionModule
|
||||
[ "virtualisation" "libvirtd" "qemuVerbatimConfig" ]
|
||||
[ "virtualisation" "libvirtd" "qemu" "verbatimConfig" ])
|
||||
(mkRenamedOptionModule
|
||||
[ "virtualisation" "libvirtd" "qemuOvmf" ]
|
||||
[ "virtualisation" "libvirtd" "qemu" "ovmf" "enable" ])
|
||||
(mkRenamedOptionModule
|
||||
[ "virtualisation" "libvirtd" "qemuOvmfPackage" ]
|
||||
[ "virtualisation" "libvirtd" "qemu" "ovmf" "package" ])
|
||||
(mkRenamedOptionModule
|
||||
[ "virtualisation" "libvirtd" "qemuSwtpm" ]
|
||||
[ "virtualisation" "libvirtd" "qemu" "swtpm" "enable" ])
|
||||
];
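A sketch of the same settings expressed against the new `virtualisation.libvirtd.qemu` layout that these renames point to (the values are illustrative):

```nix
{
  virtualisation.libvirtd = {
    enable = true;
    qemu = {
      runAsRoot = false;    # was virtualisation.libvirtd.qemuRunAsRoot
      ovmf.enable = true;   # was virtualisation.libvirtd.qemuOvmf
      swtpm.enable = true;  # was virtualisation.libvirtd.qemuSwtpm
    };
  };
}
```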
|
||||
|
||||
###### interface
|
||||
|
@ -56,17 +173,6 @@ in {
|
|||
'';
|
||||
};
|
||||
|
||||
qemuPackage = mkOption {
|
||||
type = types.package;
|
||||
default = pkgs.qemu;
|
||||
defaultText = literalExpression "pkgs.qemu";
|
||||
description = ''
|
||||
Qemu package to use with libvirt.
|
||||
`pkgs.qemu` can emulate alien architectures (e.g. aarch64 on x86)
|
||||
`pkgs.qemu_kvm` saves disk space allowing to emulate only host architectures.
|
||||
'';
|
||||
};
|
||||
|
||||
extraConfig = mkOption {
|
||||
type = types.lines;
|
||||
default = "";
|
||||
|
@ -76,39 +182,6 @@ in {
|
|||
'';
|
||||
};
|
||||
|
||||
qemuRunAsRoot = mkOption {
|
||||
type = types.bool;
|
||||
default = true;
|
||||
description = ''
|
||||
If true, libvirtd runs qemu as root.
|
||||
If false, libvirtd runs qemu as unprivileged user qemu-libvirtd.
|
||||
Changing this option to false may cause file permission issues
|
||||
for existing guests. To fix these, manually change ownership
|
||||
of affected files in /var/lib/libvirt/qemu to qemu-libvirtd.
|
||||
'';
|
||||
};
|
||||
|
||||
qemuVerbatimConfig = mkOption {
|
||||
type = types.lines;
|
||||
default = ''
|
||||
namespaces = []
|
||||
'';
|
||||
description = ''
|
||||
Contents written to the qemu configuration file, qemu.conf.
|
||||
Make sure to include a proper namespace configuration when
|
||||
supplying custom configuration.
|
||||
'';
|
||||
};
|
||||
|
||||
qemuOvmf = mkOption {
|
||||
type = types.bool;
|
||||
default = true;
|
||||
description = ''
|
||||
Allows libvirtd to take advantage of OVMF when creating new
|
||||
QEMU VMs with UEFI boot.
|
||||
'';
|
||||
};
|
||||
|
||||
extraOptions = mkOption {
|
||||
type = types.listOf types.str;
|
||||
default = [ ];
|
||||
|
@ -119,7 +192,7 @@ in {
|
|||
};
|
||||
|
||||
onBoot = mkOption {
|
||||
type = types.enum ["start" "ignore" ];
|
||||
type = types.enum [ "start" "ignore" ];
|
||||
default = "start";
|
||||
description = ''
|
||||
Specifies the action to be done to / on the guests when the host boots.
|
||||
|
@ -131,7 +204,7 @@ in {
|
|||
};
|
||||
|
||||
onShutdown = mkOption {
|
||||
type = types.enum ["shutdown" "suspend" ];
|
||||
type = types.enum [ "shutdown" "suspend" ];
|
||||
default = "suspend";
|
||||
description = ''
|
||||
When shutting down / restarting the host what method should
|
||||
|
@ -149,6 +222,13 @@ in {
|
|||
'';
|
||||
};
|
||||
|
||||
qemu = mkOption {
|
||||
type = qemuModule;
|
||||
default = { };
|
||||
description = ''
|
||||
QEMU related options.
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
|
||||
|
@ -161,13 +241,19 @@ in {
|
|||
assertion = config.security.polkit.enable;
|
||||
message = "The libvirtd module currently requires Polkit to be enabled ('security.polkit.enable = true').";
|
||||
}
|
||||
{
|
||||
assertion = builtins.elem "fd" cfg.qemu.ovmf.package.outputs;
|
||||
message = "The option 'virtualisation.libvirtd.qemuOvmfPackage' needs a package that has an 'fd' output.";
|
||||
}
|
||||
];
|
||||
|
||||
environment = {
|
||||
# this file is expected in /etc/qemu and not sysconfdir (/var/lib)
|
||||
etc."qemu/bridge.conf".text = lib.concatMapStringsSep "\n" (e:
|
||||
"allow ${e}") cfg.allowedBridges;
|
||||
systemPackages = with pkgs; [ libressl.nc iptables cfg.package cfg.qemuPackage ];
|
||||
etc."qemu/bridge.conf".text = lib.concatMapStringsSep "\n"
|
||||
(e:
|
||||
"allow ${e}")
|
||||
cfg.allowedBridges;
|
||||
systemPackages = with pkgs; [ libressl.nc iptables cfg.package cfg.qemu.package ];
|
||||
etc.ethertypes.source = "${pkgs.ebtables}/etc/ethertypes";
|
||||
};
|
||||
|
||||
|
@ -209,17 +295,17 @@ in {
|
|||
cp -f ${qemuConfigFile} /var/lib/${dirName}/qemu.conf
|
||||
|
||||
# stable (not GC'able as in /nix/store) paths for using in <emulator> section of xml configs
|
||||
for emulator in ${cfg.package}/libexec/libvirt_lxc ${cfg.qemuPackage}/bin/qemu-kvm ${cfg.qemuPackage}/bin/qemu-system-*; do
|
||||
for emulator in ${cfg.package}/libexec/libvirt_lxc ${cfg.qemu.package}/bin/qemu-kvm ${cfg.qemu.package}/bin/qemu-system-*; do
|
||||
ln -s --force "$emulator" /run/${dirName}/nix-emulators/
|
||||
done
|
||||
|
||||
for helper in libexec/qemu-bridge-helper bin/qemu-pr-helper; do
|
||||
ln -s --force ${cfg.qemuPackage}/$helper /run/${dirName}/nix-helpers/
|
||||
ln -s --force ${cfg.qemu.package}/$helper /run/${dirName}/nix-helpers/
|
||||
done
|
||||
|
||||
${optionalString cfg.qemuOvmf ''
|
||||
ln -s --force ${pkgs.OVMF.fd}/FV/${ovmfFilePrefix}_CODE.fd /run/${dirName}/nix-ovmf/
|
||||
ln -s --force ${pkgs.OVMF.fd}/FV/${ovmfFilePrefix}_VARS.fd /run/${dirName}/nix-ovmf/
|
||||
${optionalString cfg.qemu.ovmf.enable ''
|
||||
ln -s --force ${cfg.qemu.ovmf.package.fd}/FV/${ovmfFilePrefix}_CODE.fd /run/${dirName}/nix-ovmf/
|
||||
ln -s --force ${cfg.qemu.ovmf.package.fd}/FV/${ovmfFilePrefix}_VARS.fd /run/${dirName}/nix-ovmf/
|
||||
''}
|
||||
'';
|
||||
|
||||
|
@ -235,15 +321,20 @@ in {
|
|||
systemd.services.libvirtd = {
|
||||
requires = [ "libvirtd-config.service" ];
|
||||
after = [ "libvirtd-config.service" ]
|
||||
++ optional vswitch.enable "ovs-vswitchd.service";
|
||||
++ optional vswitch.enable "ovs-vswitchd.service";
|
||||
|
||||
environment.LIBVIRTD_ARGS = escapeShellArgs (
|
||||
[ "--config" configFile
|
||||
"--timeout" "120" # from ${libvirt}/var/lib/sysconfig/libvirtd
|
||||
] ++ cfg.extraOptions);
|
||||
[
|
||||
"--config"
|
||||
configFile
|
||||
"--timeout"
|
||||
"120" # from ${libvirt}/var/lib/sysconfig/libvirtd
|
||||
] ++ cfg.extraOptions
|
||||
);
|
||||
|
||||
path = [ cfg.qemuPackage ] # libvirtd requires qemu-img to manage disk images
|
||||
++ optional vswitch.enable vswitch.package;
|
||||
path = [ cfg.qemu.package ] # libvirtd requires qemu-img to manage disk images
|
||||
++ optional vswitch.enable vswitch.package
|
||||
++ optional cfg.qemu.swtpm.enable cfg.qemu.swtpm.package;
|
||||
|
||||
serviceConfig = {
|
||||
Type = "notify";
|
||||
|
|
|
@ -311,6 +311,7 @@ in
|
|||
nitter = handleTest ./nitter.nix {};
|
||||
nix-serve = handleTest ./nix-ssh-serve.nix {};
|
||||
nix-ssh-serve = handleTest ./nix-ssh-serve.nix {};
|
||||
nixops = handleTest ./nixops/default.nix {};
|
||||
nixos-generate-config = handleTest ./nixos-generate-config.nix {};
|
||||
node-red = handleTest ./node-red.nix {};
|
||||
nomad = handleTest ./nomad.nix {};
|
||||
|
@ -375,6 +376,7 @@ in
|
|||
prosody = handleTest ./xmpp/prosody.nix {};
|
||||
prosodyMysql = handleTest ./xmpp/prosody-mysql.nix {};
|
||||
proxy = handleTest ./proxy.nix {};
|
||||
prowlarr = handleTest ./prowlarr.nix {};
|
||||
pt2-clone = handleTest ./pt2-clone.nix {};
|
||||
qboot = handleTestOn ["x86_64-linux" "i686-linux"] ./qboot.nix {};
|
||||
quorum = handleTest ./quorum.nix {};
|
||||
|
|
|
@ -383,5 +383,18 @@ import ./make-test-python.nix ({ pkgs, ... }: {
|
|||
docker.succeed(
|
||||
"tar -tf ${examples.exportBash} | grep '\./bin/bash' > /dev/null"
|
||||
)
|
||||
|
||||
with subtest("Ensure bare paths in contents are loaded correctly"):
|
||||
docker.succeed(
|
||||
"docker load --input='${examples.build-image-with-path}'",
|
||||
"docker run --rm build-image-with-path bash -c '[[ -e /hello.txt ]]'",
|
||||
"docker rmi build-image-with-path",
|
||||
)
|
||||
docker.succeed(
|
||||
"${examples.layered-image-with-path} | docker load",
|
||||
"docker run --rm layered-image-with-path bash -c '[[ -e /hello.txt ]]'",
|
||||
"docker rmi layered-image-with-path",
|
||||
)
|
||||
|
||||
'';
|
||||
})
|
||||
|
|
|
@ -25,6 +25,21 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : {
|
|||
services.xserver.desktopManager.gnome.debug = true;
|
||||
services.xserver.displayManager.defaultSession = "gnome-xorg";
|
||||
|
||||
systemd.user.services = {
|
||||
"org.gnome.Shell@x11" = {
|
||||
serviceConfig = {
|
||||
ExecStart = [
|
||||
# Clear the list before overriding it.
|
||||
""
|
||||
# Eval API is now internal so Shell needs to run in unsafe mode.
|
||||
# TODO: improve test driver so that it supports openqa-like manipulation
|
||||
# that would allow us to drop this mess.
|
||||
"${pkgs.gnome.gnome-shell}/bin/gnome-shell --unsafe-mode"
|
||||
];
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
virtualisation.memorySize = 1024;
|
||||
};
|
||||
|
||||
|
|
|
@ -30,6 +30,21 @@ import ./make-test-python.nix ({ pkgs, lib, ...} : {
|
|||
})
|
||||
];
|
||||
|
||||
systemd.user.services = {
|
||||
"org.gnome.Shell@wayland" = {
|
||||
serviceConfig = {
|
||||
ExecStart = [
|
||||
# Clear the list before overriding it.
|
||||
""
|
||||
# Eval API is now internal so Shell needs to run in unsafe mode.
|
||||
# TODO: improve test driver so that it supports openqa-like manipulation
|
||||
# that would allow us to drop this mess.
|
||||
"${pkgs.gnome.gnome-shell}/bin/gnome-shell --unsafe-mode"
|
||||
];
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
virtualisation.memorySize = 1024;
|
||||
};
|
||||
|
||||
|
|
nixos/tests/iscsi-multipath-root.nix (new file, 267 lines)
@@ -0,0 +1,267 @@
|
|||
import ./make-test-python.nix (
|
||||
{ pkgs, lib, ... }:
|
||||
let
|
||||
initiatorName = "iqn.2020-08.org.linux-iscsi.initiatorhost:example";
|
||||
targetName = "iqn.2003-01.org.linux-iscsi.target.x8664:sn.acf8fd9c23af";
|
||||
in
|
||||
{
|
||||
name = "iscsi";
|
||||
meta = {
|
||||
maintainers = pkgs.lib.teams.deshaw.members;
|
||||
};
|
||||
|
||||
nodes = {
|
||||
target = { config, pkgs, lib, ... }: {
|
||||
virtualisation.vlans = [ 1 2 ];
|
||||
services.target = {
|
||||
enable = true;
|
||||
config = {
|
||||
fabric_modules = [ ];
|
||||
storage_objects = [
|
||||
{
|
||||
dev = "/dev/vdb";
|
||||
name = "test";
|
||||
plugin = "block";
|
||||
write_back = true;
|
||||
wwn = "92b17c3f-6b40-4168-b082-ceeb7b495522";
|
||||
}
|
||||
];
|
||||
targets = [
|
||||
{
|
||||
fabric = "iscsi";
|
||||
tpgs = [
|
||||
{
|
||||
enable = true;
|
||||
attributes = {
|
||||
authentication = 0;
|
||||
generate_node_acls = 1;
|
||||
};
|
||||
luns = [
|
||||
{
|
||||
alias = "94dfe06967";
|
||||
alua_tg_pt_gp_name = "default_tg_pt_gp";
|
||||
index = 0;
|
||||
storage_object = "/backstores/block/test";
|
||||
}
|
||||
];
|
||||
node_acls = [
|
||||
{
|
||||
mapped_luns = [
|
||||
{
|
||||
alias = "d42f5bdf8a";
|
||||
index = 0;
|
||||
tpg_lun = 0;
|
||||
write_protect = false;
|
||||
}
|
||||
];
|
||||
node_wwn = initiatorName;
|
||||
}
|
||||
];
|
||||
portals = [
|
||||
{
|
||||
ip_address = "0.0.0.0";
|
||||
iser = false;
|
||||
offload = false;
|
||||
port = 3260;
|
||||
}
|
||||
];
|
||||
tag = 1;
|
||||
}
|
||||
];
|
||||
wwn = targetName;
|
||||
}
|
||||
];
|
||||
};
|
||||
};
|
||||
|
||||
networking.firewall.allowedTCPPorts = [ 3260 ];
|
||||
networking.firewall.allowedUDPPorts = [ 3260 ];
|
||||
|
||||
virtualisation.memorySize = 2048;
|
||||
virtualisation.emptyDiskImages = [ 2048 ];
|
||||
};
|
||||
|
||||
initiatorAuto = { nodes, config, pkgs, ... }: {
|
||||
virtualisation.vlans = [ 1 2 ];
|
||||
|
||||
services.multipath = {
|
||||
enable = true;
|
||||
defaults = ''
|
||||
find_multipaths yes
|
||||
user_friendly_names yes
|
||||
'';
|
||||
pathGroups = [
|
||||
{
|
||||
alias = 123456;
|
||||
wwid = "3600140592b17c3f6b404168b082ceeb7";
|
||||
}
|
||||
];
|
||||
};
|
||||
|
||||
services.openiscsi = {
|
||||
enable = true;
|
||||
enableAutoLoginOut = true;
|
||||
discoverPortal = "target";
|
||||
name = initiatorName;
|
||||
};
|
||||
|
||||
environment.systemPackages = with pkgs; [
|
||||
xfsprogs
|
||||
];
|
||||
|
||||
environment.etc."initiator-root-disk-closure".source = nodes.initiatorRootDisk.config.system.build.toplevel;
|
||||
|
||||
nix.binaryCaches = lib.mkForce [ ];
|
||||
nix.extraOptions = ''
|
||||
hashed-mirrors =
|
||||
connect-timeout = 1
|
||||
'';
|
||||
};
|
||||
|
||||
initiatorRootDisk = { config, pkgs, modulesPath, lib, ... }: {
|
||||
boot.initrd.network.enable = true;
|
||||
boot.loader.grub.enable = false;
|
||||
|
||||
boot.kernelParams = lib.mkOverride 5 (
|
||||
[
|
||||
"boot.shell_on_fail"
|
||||
"console=tty1"
|
||||
"ip=192.168.1.1:::255.255.255.0::ens9:none"
|
||||
"ip=192.168.2.1:::255.255.255.0::ens10:none"
|
||||
]
|
||||
);
|
||||
|
||||
# defaults to true, puts some code in the initrd that tries to mount an overlayfs on /nix/store
|
||||
virtualisation.writableStore = false;
|
||||
virtualisation.vlans = [ 1 2 ];
|
||||
|
||||
services.multipath = {
|
||||
enable = true;
|
||||
defaults = ''
|
||||
find_multipaths yes
|
||||
user_friendly_names yes
|
||||
'';
|
||||
pathGroups = [
|
||||
{
|
||||
alias = 123456;
|
||||
wwid = "3600140592b17c3f6b404168b082ceeb7";
|
||||
}
|
||||
];
|
||||
};
|
||||
|
||||
fileSystems = lib.mkOverride 5 {
|
||||
"/" = {
|
||||
fsType = "xfs";
|
||||
device = "/dev/mapper/123456";
|
||||
options = [ "_netdev" ];
|
||||
};
|
||||
};
|
||||
|
||||
boot.initrd.extraFiles."etc/multipath/wwids".source = pkgs.writeText "wwids" "/3600140592b17c3f6b404168b082ceeb7/";
|
||||
|
||||
boot.iscsi-initiator = {
|
||||
discoverPortal = "target";
|
||||
name = initiatorName;
|
||||
target = targetName;
|
||||
extraIscsiCommands = ''
|
||||
iscsiadm -m discovery -o update -t sendtargets -p 192.168.2.3 --login
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
};
|
||||
|
||||
testScript = { nodes, ... }: ''
|
||||
target.start()
|
||||
target.wait_for_unit("iscsi-target.service")
|
||||
|
||||
initiatorAuto.start()
|
||||
|
||||
initiatorAuto.wait_for_unit("iscsid.service")
|
||||
initiatorAuto.wait_for_unit("iscsi.service")
|
||||
initiatorAuto.get_unit_info("iscsi")
|
||||
|
||||
# Expecting this to fail since we should already know about 192.168.1.3
|
||||
initiatorAuto.fail("iscsiadm -m discovery -o update -t sendtargets -p 192.168.1.3 --login")
|
||||
# Expecting this to succeed since we don't yet know about 192.168.2.3
|
||||
initiatorAuto.succeed("iscsiadm -m discovery -o update -t sendtargets -p 192.168.2.3 --login")
|
||||
|
||||
# /dev/sda is provided by iscsi on target
|
||||
initiatorAuto.succeed("set -x; while ! test -e /dev/sda; do sleep 1; done")
|
||||
|
||||
initiatorAuto.succeed("mkfs.xfs /dev/sda")
|
||||
initiatorAuto.succeed("mkdir /mnt")
|
||||
|
||||
# Start by verifying /dev/sda and /dev/sdb are both the same disk
|
||||
initiatorAuto.succeed("mount /dev/sda /mnt")
|
||||
initiatorAuto.succeed("touch /mnt/hi")
|
||||
initiatorAuto.succeed("umount /mnt")
|
||||
|
||||
initiatorAuto.succeed("mount /dev/sdb /mnt")
|
||||
initiatorAuto.succeed("test -e /mnt/hi")
|
||||
initiatorAuto.succeed("umount /mnt")
|
||||
|
||||
initiatorAuto.succeed("systemctl restart multipathd")
|
||||
initiatorAuto.succeed("multipath -ll | systemd-cat")
|
||||
|
||||
# Install our RootDisk machine to 123456, the alias to the device that multipath is now managing
|
||||
initiatorAuto.succeed("mount /dev/mapper/123456 /mnt")
|
||||
initiatorAuto.succeed("mkdir -p /mnt/etc/{multipath,iscsi}")
|
||||
initiatorAuto.succeed("cp -r /etc/multipath/wwids /mnt/etc/multipath/wwids")
|
||||
initiatorAuto.succeed("cp -r /etc/iscsi/{nodes,send_targets} /mnt/etc/iscsi")
|
||||
initiatorAuto.succeed(
|
||||
"nixos-install --no-bootloader --no-root-passwd --system /etc/initiator-root-disk-closure"
|
||||
)
|
||||
initiatorAuto.succeed("umount /mnt")
|
||||
initiatorAuto.shutdown()
|
||||
|
||||
initiatorRootDisk.start()
|
||||
initiatorRootDisk.wait_for_unit("multi-user.target")
|
||||
initiatorRootDisk.wait_for_unit("iscsid")
|
||||
|
||||
# Log in over both nodes
|
||||
initiatorRootDisk.fail("iscsiadm -m discovery -o update -t sendtargets -p 192.168.1.3 --login")
|
||||
initiatorRootDisk.fail("iscsiadm -m discovery -o update -t sendtargets -p 192.168.2.3 --login")
|
||||
initiatorRootDisk.succeed("systemctl restart multipathd")
|
||||
initiatorRootDisk.succeed("multipath -ll | systemd-cat")
|
||||
|
||||
# Verify we can write and sync the root disk
|
||||
initiatorRootDisk.succeed("mkdir /scratch")
|
||||
initiatorRootDisk.succeed("touch /scratch/both-up")
|
||||
initiatorRootDisk.succeed("sync /scratch")
|
||||
|
||||
# Verify we can write to the root with ens9 (sda, 192.168.1.3) down
|
||||
initiatorRootDisk.succeed("ip link set ens9 down")
|
||||
initiatorRootDisk.succeed("touch /scratch/ens9-down")
|
||||
initiatorRootDisk.succeed("sync /scratch")
|
||||
initiatorRootDisk.succeed("ip link set ens9 up")
|
||||
|
||||
# todo: better way to wait until multipath notices the link is back
|
||||
initiatorRootDisk.succeed("sleep 5")
|
||||
initiatorRootDisk.succeed("touch /scratch/both-down")
|
||||
initiatorRootDisk.succeed("sync /scratch")
|
||||
|
||||
# Verify we can write to the root with ens10 (sdb, 192.168.2.3) down
|
||||
initiatorRootDisk.succeed("ip link set ens10 down")
|
||||
initiatorRootDisk.succeed("touch /scratch/ens10-down")
|
||||
initiatorRootDisk.succeed("sync /scratch")
|
||||
initiatorRootDisk.succeed("ip link set ens10 up")
|
||||
initiatorRootDisk.succeed("touch /scratch/ens10-down")
|
||||
initiatorRootDisk.succeed("sync /scratch")
|
||||
|
||||
initiatorRootDisk.succeed("ip link set ens9 up")
|
||||
initiatorRootDisk.succeed("ip link set ens10 up")
|
||||
initiatorRootDisk.shutdown()
|
||||
|
||||
# Verify we can boot with the target's eth1 down, forcing
|
||||
# it to multipath via the second link
|
||||
target.succeed("ip link set eth1 down")
|
||||
initiatorRootDisk.start()
|
||||
initiatorRootDisk.wait_for_unit("multi-user.target")
|
||||
initiatorRootDisk.wait_for_unit("iscsid")
|
||||
initiatorRootDisk.succeed("test -e /scratch/both-up")
|
||||
'';
|
||||
}
|
||||
)
|
||||
|
||||
|
|
@ -380,12 +380,57 @@ let
|
|||
router.wait_until_succeeds("ping -c 1 192.168.1.3")
|
||||
'';
|
||||
};
|
||||
fou = {
|
||||
name = "foo-over-udp";
|
||||
nodes.machine = { ... }: {
|
||||
virtualisation.vlans = [ 1 ];
|
||||
networking = {
|
||||
useNetworkd = networkd;
|
||||
useDHCP = false;
|
||||
interfaces.eth1.ipv4.addresses = mkOverride 0
|
||||
[ { address = "192.168.1.1"; prefixLength = 24; } ];
|
||||
fooOverUDP = {
|
||||
fou1 = { port = 9001; };
|
||||
fou2 = { port = 9002; protocol = 41; };
|
||||
fou3 = mkIf (!networkd)
|
||||
{ port = 9003; local.address = "192.168.1.1"; };
|
||||
fou4 = mkIf (!networkd)
|
||||
{ port = 9004; local = { address = "192.168.1.1"; dev = "eth1"; }; };
|
||||
};
|
||||
};
|
||||
systemd.services = {
|
||||
fou3-fou-encap.after = optional (!networkd) "network-addresses-eth1.service";
|
||||
};
|
||||
};
|
||||
testScript = { ... }:
|
||||
''
|
||||
import json
|
||||
|
||||
machine.wait_for_unit("network.target")
|
||||
fous = json.loads(machine.succeed("ip -json fou show"))
|
||||
assert {"port": 9001, "gue": None, "family": "inet"} in fous, "fou1 exists"
|
||||
assert {"port": 9002, "ipproto": 41, "family": "inet"} in fous, "fou2 exists"
|
||||
'' + optionalString (!networkd) ''
|
||||
assert {
|
||||
"port": 9003,
|
||||
"gue": None,
|
||||
"family": "inet",
|
||||
"local": "192.168.1.1",
|
||||
} in fous, "fou3 exists"
|
||||
assert {
|
||||
"port": 9004,
|
||||
"gue": None,
|
||||
"family": "inet",
|
||||
"local": "192.168.1.1",
|
||||
"dev": "eth1",
|
||||
} in fous, "fou4 exists"
|
||||
'';
|
||||
};
|
||||
sit = let
|
||||
node = { address4, remote, address6 }: { pkgs, ... }: with pkgs.lib; {
|
||||
virtualisation.vlans = [ 1 ];
|
||||
networking = {
|
||||
useNetworkd = networkd;
|
||||
firewall.enable = false;
|
||||
useDHCP = false;
|
||||
sits.sit = {
|
||||
inherit remote;
|
||||
|
@ -400,8 +445,30 @@ let
|
|||
};
|
||||
in {
|
||||
name = "Sit";
|
||||
nodes.client1 = node { address4 = "192.168.1.1"; remote = "192.168.1.2"; address6 = "fc00::1"; };
|
||||
nodes.client2 = node { address4 = "192.168.1.2"; remote = "192.168.1.1"; address6 = "fc00::2"; };
|
||||
# note on firewalling: the two nodes are explicitly asymmetric.
|
||||
# client1 sends SIT packets in UDP, but accepts only proto-41 incoming.
|
||||
# client2 does the reverse, sending in proto-41 and accepting only UDP incoming.
|
||||
# that way we'll notice when either SIT itself or FOU breaks.
|
||||
nodes.client1 = args@{ pkgs, ... }:
|
||||
mkMerge [
|
||||
(node { address4 = "192.168.1.1"; remote = "192.168.1.2"; address6 = "fc00::1"; } args)
|
||||
{
|
||||
networking = {
|
||||
firewall.extraCommands = "iptables -A INPUT -p 41 -j ACCEPT";
|
||||
sits.sit.encapsulation = { type = "fou"; port = 9001; };
|
||||
};
|
||||
}
|
||||
];
|
||||
nodes.client2 = args@{ pkgs, ... }:
|
||||
mkMerge [
|
||||
(node { address4 = "192.168.1.2"; remote = "192.168.1.1"; address6 = "fc00::2"; } args)
|
||||
{
|
||||
networking = {
|
||||
firewall.allowedUDPPorts = [ 9001 ];
|
||||
fooOverUDP.fou1 = { port = 9001; protocol = 41; };
|
||||
};
|
||||
}
|
||||
];
|
||||
testScript = { ... }:
|
||||
''
|
||||
start_all()
|
||||
|
|
|
@ -33,12 +33,17 @@ in {
|
|||
in {
|
||||
networking.firewall.allowedTCPPorts = [ 80 ];
|
||||
|
||||
systemd.tmpfiles.rules = [
|
||||
"d /var/lib/nextcloud-data 0750 nextcloud nginx - -"
|
||||
];
|
||||
|
||||
services.nextcloud = {
|
||||
enable = true;
|
||||
datadir = "/var/lib/nextcloud-data";
|
||||
hostName = "nextcloud";
|
||||
config = {
|
||||
# Don't inherit adminuser since "root" is supposed to be the default
|
||||
inherit adminpass;
|
||||
adminpassFile = "${pkgs.writeText "adminpass" adminpass}"; # Don't try this at home!
|
||||
dbtableprefix = "nixos_";
|
||||
};
|
||||
package = pkgs.${"nextcloud" + (toString nextcloudVersion)};
|
||||
|
@ -98,6 +103,7 @@ in {
|
|||
"${withRcloneEnv} ${copySharedFile}"
|
||||
)
|
||||
client.wait_for_unit("multi-user.target")
|
||||
nextcloud.succeed("test -f /var/lib/nextcloud-data/data/root/files/test-shared-file")
|
||||
client.succeed(
|
||||
"${withRcloneEnv} ${diffSharedFile}"
|
||||
)
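Outside the test harness, a user-facing sketch of the new `datadir` option looks roughly like the node above: the directory must exist and be owned appropriately before Nextcloud starts (the hostName here is illustrative):

```nix
{
  systemd.tmpfiles.rules = [
    "d /var/lib/nextcloud-data 0750 nextcloud nginx - -"
  ];
  services.nextcloud = {
    enable = true;
    hostName = "cloud.example.org";
    datadir = "/var/lib/nextcloud-data";
  };
}
```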
|
||||
|
|
|
@ -32,9 +32,9 @@ in {
|
|||
dbuser = "nextcloud";
|
||||
dbhost = "127.0.0.1";
|
||||
dbport = 3306;
|
||||
dbpass = "hunter2";
|
||||
dbpassFile = "${pkgs.writeText "dbpass" "hunter2" }";
|
||||
# Don't inherit adminuser since "root" is supposed to be the default
|
||||
inherit adminpass;
|
||||
adminpassFile = "${pkgs.writeText "adminpass" adminpass}"; # Don't try this at home!
|
||||
};
|
||||
};
|
||||
|
||||
|
|
nixos/tests/nixops/default.nix (new file, 115 lines)
@@ -0,0 +1,115 @@
|
|||
{ pkgs, ... }:
|
||||
let
|
||||
inherit (pkgs) lib;
|
||||
|
||||
tests = {
|
||||
# TODO: uncomment stable
|
||||
# - Blocked on https://github.com/NixOS/nixpkgs/issues/138584 which has a
|
||||
# PR in staging: https://github.com/NixOS/nixpkgs/pull/139986
|
||||
# - Alternatively, blocked on a NixOps 2 release
|
||||
# https://github.com/NixOS/nixops/issues/1242
|
||||
# stable = testsLegacyNetwork { nixopsPkg = pkgs.nixops; };
|
||||
unstable = testsForPackage { nixopsPkg = pkgs.nixopsUnstable; };
|
||||
|
||||
# inherit testsForPackage;
|
||||
};
|
||||
|
||||
testsForPackage = lib.makeOverridable (args: lib.recurseIntoAttrs {
|
||||
legacyNetwork = testLegacyNetwork args;
|
||||
});
|
||||
|
||||
testLegacyNetwork = { nixopsPkg }: pkgs.nixosTest ({
|
||||
nodes = {
|
||||
deployer = { config, lib, nodes, pkgs, ... }: {
|
||||
imports = [ ../../modules/installer/cd-dvd/channel.nix ];
|
||||
environment.systemPackages = [ nixopsPkg ];
|
||||
nix.binaryCaches = lib.mkForce [ ];
|
||||
users.users.person.isNormalUser = true;
|
||||
virtualisation.writableStore = true;
|
||||
virtualisation.memorySize = 1024 /*MiB*/;
|
||||
virtualisation.pathsInNixDB = [
|
||||
pkgs.hello
|
||||
pkgs.figlet
|
||||
|
||||
# This includes build dependencies all the way down. Not efficient,
|
||||
# but we do need build deps to an *arbitrary* depth, which is hard to
|
||||
# determine.
|
||||
(allDrvOutputs nodes.server.config.system.build.toplevel)
|
||||
];
|
||||
};
|
||||
server = { lib, ... }: {
|
||||
imports = [ ./legacy/base-configuration.nix ];
|
||||
};
|
||||
};
|
||||
|
||||
testScript = { nodes }:
|
||||
let
|
||||
deployerSetup = pkgs.writeScript "deployerSetup" ''
|
||||
#!${pkgs.runtimeShell}
|
||||
set -eux -o pipefail
|
||||
cp --no-preserve=mode -r ${./legacy} unicorn
|
||||
cp --no-preserve=mode ${../ssh-keys.nix} unicorn/ssh-keys.nix
|
||||
mkdir -p ~/.ssh
|
||||
cp ${snakeOilPrivateKey} ~/.ssh/id_ed25519
|
||||
chmod 0400 ~/.ssh/id_ed25519
|
||||
'';
|
||||
serverNetworkJSON = pkgs.writeText "server-network.json"
|
||||
(builtins.toJSON nodes.server.config.system.build.networkConfig);
|
||||
in
|
||||
''
|
||||
import shlex
|
||||
|
||||
def deployer_do(cmd):
|
||||
cmd = shlex.quote(cmd)
|
||||
return deployer.succeed(f"su person -l -c {cmd} &>/dev/console")
|
||||
|
||||
start_all()
|
||||
|
||||
deployer_do("cat /etc/hosts")
|
||||
|
||||
deployer_do("${deployerSetup}")
|
||||
deployer_do("cp ${serverNetworkJSON} unicorn/server-network.json")
|
||||
|
||||
# Establish that ssh works, regardless of nixops
|
||||
# Easy way to accept the server host key too.
|
||||
server.wait_for_open_port(22)
|
||||
deployer.wait_for_unit("network.target")
|
||||
|
||||
# Put newlines on console, to flush the console reader's line buffer
|
||||
# in case nixops' last output did not end in a newline, as is the case
|
||||
# with a status line (if implemented?)
|
||||
deployer.succeed("while sleep 60s; do echo [60s passed] >/dev/console; done &")
|
||||
|
||||
deployer_do("cd ~/unicorn; ssh -oStrictHostKeyChecking=accept-new root@server echo hi")
|
||||
|
||||
# Create and deploy
|
||||
deployer_do("cd ~/unicorn; nixops create")
|
||||
|
||||
deployer_do("cd ~/unicorn; nixops deploy --confirm")
|
||||
|
||||
deployer_do("cd ~/unicorn; nixops ssh server 'hello | figlet'")
|
||||
'';
|
||||
});
|
||||
|
||||
inherit (import ../ssh-keys.nix pkgs) snakeOilPrivateKey snakeOilPublicKey;
|
||||
|
||||
/*
|
||||
Return a store path with a closure containing everything including
|
||||
derivations and all build dependency outputs, all the way down.
|
||||
*/
|
||||
allDrvOutputs = pkg:
|
||||
let name = lib.strings.sanitizeDerivationName "allDrvOutputs-${pkg.pname or pkg.name or "unknown"}";
|
||||
in
|
||||
pkgs.runCommand name { refs = pkgs.writeReferencesToFile pkg.drvPath; } ''
|
||||
touch $out
|
||||
while read ref; do
|
||||
case $ref in
|
||||
*.drv)
|
||||
cat $ref >>$out
|
||||
;;
|
||||
esac
|
||||
done <$refs
|
||||
'';
|
||||
|
||||
in
|
||||
tests
|
nixos/tests/nixops/legacy/base-configuration.nix (new file, 31 lines)
@@ -0,0 +1,31 @@
|
|||
{ lib, modulesPath, pkgs, ... }:
|
||||
let
|
||||
ssh-keys =
|
||||
if builtins.pathExists ../../ssh-keys.nix
|
||||
then # Outside sandbox
|
||||
../../ssh-keys.nix
|
||||
else # In sandbox
|
||||
./ssh-keys.nix;
|
||||
|
||||
inherit (import ssh-keys pkgs)
|
||||
snakeOilPrivateKey snakeOilPublicKey;
|
||||
in
|
||||
{
|
||||
imports = [
|
||||
(modulesPath + "/virtualisation/qemu-vm.nix")
|
||||
(modulesPath + "/testing/test-instrumentation.nix")
|
||||
];
|
||||
virtualisation.writableStore = true;
|
||||
nix.binaryCaches = lib.mkForce [ ];
|
||||
virtualisation.graphics = false;
|
||||
documentation.enable = false;
|
||||
services.qemuGuest.enable = true;
|
||||
boot.loader.grub.enable = false;
|
||||
|
||||
services.openssh.enable = true;
|
||||
users.users.root.openssh.authorizedKeys.keys = [
|
||||
snakeOilPublicKey
|
||||
];
|
||||
security.pam.services.sshd.limits =
|
||||
[{ domain = "*"; item = "memlock"; type = "-"; value = 1024; }];
|
||||
}
|
nixos/tests/nixops/legacy/nixops.nix (new file, 15 lines)
@@ -0,0 +1,15 @@
|
|||
{
|
||||
network = {
|
||||
description = "Legacy Network using <nixpkgs> and legacy state.";
|
||||
# NB this is not really what makes it a legacy network; lack of flakes is.
|
||||
storage.legacy = { };
|
||||
};
|
||||
server = { lib, pkgs, ... }: {
|
||||
deployment.targetEnv = "none";
|
||||
imports = [
|
||||
./base-configuration.nix
|
||||
(lib.modules.importJSON ./server-network.json)
|
||||
];
|
||||
environment.systemPackages = [ pkgs.hello pkgs.figlet ];
|
||||
};
|
||||
}
|
|
@ -12,7 +12,7 @@ import ./make-test-python.nix ({ pkgs, ...} :
|
|||
imports = [ ./common/user-account.nix ];
|
||||
services.xserver.enable = true;
|
||||
services.xserver.displayManager.sddm.enable = true;
|
||||
services.xserver.displayManager.defaultSession = "plasma5";
|
||||
services.xserver.displayManager.defaultSession = "plasma";
|
||||
services.xserver.desktopManager.plasma5.enable = true;
|
||||
services.xserver.displayManager.autoLogin = {
|
||||
enable = true;
|
||||
|
|
nixos/tests/prowlarr.nix (new file, 18 lines)
@@ -0,0 +1,18 @@
|
|||
import ./make-test-python.nix ({ lib, ... }:
|
||||
|
||||
with lib;
|
||||
|
||||
{
|
||||
name = "prowlarr";
|
||||
meta.maintainers = with maintainers; [ jdreaver ];
|
||||
|
||||
nodes.machine =
|
||||
{ pkgs, ... }:
|
||||
{ services.prowlarr.enable = true; };
|
||||
|
||||
testScript = ''
|
||||
machine.wait_for_unit("prowlarr.service")
|
||||
machine.wait_for_open_port("9696")
|
||||
machine.succeed("curl --fail http://localhost:9696/")
|
||||
'';
|
||||
})
|
|
@ -20,6 +20,7 @@ import ./make-test-python.nix ({ pkgs, ... }:
|
|||
server =
|
||||
{ ... }:
|
||||
{ services.samba.enable = true;
|
||||
services.samba.openFirewall = true;
|
||||
services.samba.shares.public =
|
||||
{ path = "/public";
|
||||
"read only" = true;
|
||||
|
@ -27,8 +28,6 @@ import ./make-test-python.nix ({ pkgs, ... }:
|
|||
"guest ok" = "yes";
|
||||
comment = "Public samba share.";
|
||||
};
|
||||
networking.firewall.allowedTCPPorts = [ 139 445 ];
|
||||
networking.firewall.allowedUDPPorts = [ 137 138 ];
|
||||
};
|
||||
};
|
||||
|
||||
|
|
|
@@ -7,15 +7,224 @@ import ./make-test-python.nix ({ pkgs, ...} : {
};

nodes = {
machine = { ... }: {
machine = { config, pkgs, lib, ... }: {
environment.systemPackages = [ pkgs.socat ]; # for the socket activation stuff
users.mutableUsers = false;

specialisation = {
# A system with a simple socket-activated unit
simple-socket.configuration = {
systemd.services.socket-activated.serviceConfig = {
ExecStart = pkgs.writeScript "socket-test.py" /* python */ ''
#!${pkgs.python3}/bin/python3

from socketserver import TCPServer, StreamRequestHandler
import socket

class Handler(StreamRequestHandler):
def handle(self):
self.wfile.write("hello".encode("utf-8"))

class Server(TCPServer):
def __init__(self, server_address, handler_cls):
# Invoke base but omit bind/listen steps (performed by systemd activation!)
TCPServer.__init__(
self, server_address, handler_cls, bind_and_activate=False)
# Override socket
self.socket = socket.fromfd(3, self.address_family, self.socket_type)

if __name__ == "__main__":
server = Server(("localhost", 1234), Handler)
server.serve_forever()
'';
};
systemd.sockets.socket-activated = {
wantedBy = [ "sockets.target" ];
listenStreams = [ "/run/test.sock" ];
socketConfig.SocketMode = lib.mkDefault "0777";
};
};

# The same system but the socket is modified
modified-socket.configuration = {
imports = [ config.specialisation.simple-socket.configuration ];
systemd.sockets.socket-activated.socketConfig.SocketMode = "0666";
};

# The same system but the service is modified
modified-service.configuration = {
imports = [ config.specialisation.simple-socket.configuration ];
systemd.services.socket-activated.serviceConfig.X-Test = "test";
};

# The same system but both service and socket are modified
modified-service-and-socket.configuration = {
imports = [ config.specialisation.simple-socket.configuration ];
systemd.services.socket-activated.serviceConfig.X-Test = "some_value";
systemd.sockets.socket-activated.socketConfig.SocketMode = "0444";
};

# A system with a socket-activated service and some simple services
service-and-socket.configuration = {
imports = [ config.specialisation.simple-socket.configuration ];
systemd.services.simple-service = {
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.coreutils}/bin/true";
};
};

systemd.services.simple-restart-service = {
stopIfChanged = false;
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.coreutils}/bin/true";
};
};

systemd.services.simple-reload-service = {
reloadIfChanged = true;
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.coreutils}/bin/true";
ExecReload = "${pkgs.coreutils}/bin/true";
};
};

systemd.services.no-restart-service = {
restartIfChanged = false;
wantedBy = [ "multi-user.target" ];
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.coreutils}/bin/true";
};
};
};

# The same system but with an activation script that restarts all services
restart-and-reload-by-activation-script.configuration = {
imports = [ config.specialisation.service-and-socket.configuration ];
system.activationScripts.restart-and-reload-test = {
supportsDryActivation = true;
deps = [];
text = ''
if [ "$NIXOS_ACTION" = dry-activate ]; then
f=/run/nixos/dry-activation-restart-list
else
f=/run/nixos/activation-restart-list
fi
cat <<EOF >> "$f"
simple-service.service
simple-restart-service.service
simple-reload-service.service
no-restart-service.service
socket-activated.service
EOF
'';
};
};

# A system with a timer
with-timer.configuration = {
systemd.timers.test-timer = {
wantedBy = [ "timers.target" ];
timerConfig.OnCalendar = "@1395716396"; # chosen by fair dice roll
};
systemd.services.test-timer = {
serviceConfig = {
Type = "oneshot";
ExecStart = "${pkgs.coreutils}/bin/true";
};
};
};

# The same system but with another time
with-timer-modified.configuration = {
imports = [ config.specialisation.with-timer.configuration ];
systemd.timers.test-timer.timerConfig.OnCalendar = lib.mkForce "Fri 2012-11-23 16:00:00";
};

# A system with a systemd mount
with-mount.configuration = {
systemd.mounts = [
{
description = "Testmount";
what = "tmpfs";
type = "tmpfs";
where = "/testmount";
options = "size=1M";
wantedBy = [ "local-fs.target" ];
}
];
};

# The same system but with another time
with-mount-modified.configuration = {
systemd.mounts = [
{
description = "Testmount";
what = "tmpfs";
type = "tmpfs";
where = "/testmount";
options = "size=10M";
wantedBy = [ "local-fs.target" ];
}
];
};

# A system with a path unit
with-path.configuration = {
systemd.paths.test-watch = {
wantedBy = [ "paths.target" ];
pathConfig.PathExists = "/testpath";
};
systemd.services.test-watch = {
serviceConfig = {
Type = "oneshot";
ExecStart = "${pkgs.coreutils}/bin/touch /testpath-modified";
};
};
};

# The same system but watching another file
with-path-modified.configuration = {
imports = [ config.specialisation.with-path.configuration ];
systemd.paths.test-watch.pathConfig.PathExists = lib.mkForce "/testpath2";
};

# A system with a slice
with-slice.configuration = {
systemd.slices.testslice.sliceConfig.MemoryMax = "1"; # don't allow memory allocation
systemd.services.testservice = {
serviceConfig = {
Type = "oneshot";
RemainAfterExit = true;
ExecStart = "${pkgs.coreutils}/bin/true";
Slice = "testslice.slice";
};
};
};

# The same system but the slice allows to allocate memory
with-slice-non-crashing.configuration = {
imports = [ config.specialisation.with-slice.configuration ];
systemd.slices.testslice.sliceConfig.MemoryMax = lib.mkForce null;
};
};
};
other = { ... }: {
users.mutableUsers = true;
};
};

testScript = {nodes, ...}: let
testScript = { nodes, ... }: let
originalSystem = nodes.machine.config.system.build.toplevel;
otherSystem = nodes.other.config.system.build.toplevel;

@@ -27,12 +236,182 @@ import ./make-test-python.nix ({ pkgs, ...} : {
set -o pipefail
exec env -i "$@" | tee /dev/stderr
'';
in ''
in /* python */ ''
def switch_to_specialisation(name, action="test"):
out = machine.succeed(f"${originalSystem}/specialisation/{name}/bin/switch-to-configuration {action} 2>&1")
assert_lacks(out, "switch-to-configuration line") # Perl warnings
return out

def assert_contains(haystack, needle):
if needle not in haystack:
print("The haystack that will cause the following exception is:")
print("---")
print(haystack)
print("---")
raise Exception(f"Expected string '{needle}' was not found")

def assert_lacks(haystack, needle):
if needle in haystack:
print("The haystack that will cause the following exception is:")
print("---")
print(haystack, end="")
print("---")
raise Exception(f"Unexpected string '{needle}' was found")


machine.succeed(
"${stderrRunner} ${originalSystem}/bin/switch-to-configuration test"
)
machine.succeed(
"${stderrRunner} ${otherSystem}/bin/switch-to-configuration test"
)

with subtest("systemd sockets"):
machine.succeed("${originalSystem}/bin/switch-to-configuration test")

# Simple socket is created
out = switch_to_specialisation("simple-socket")
assert_lacks(out, "stopping the following units:")
# not checking for reload because dbus gets reloaded
assert_lacks(out, "restarting the following units:")
assert_lacks(out, "\nstarting the following units:")
assert_contains(out, "the following new units were started: socket-activated.socket\n")
assert_lacks(out, "as well:")
machine.succeed("[ $(stat -c%a /run/test.sock) = 777 ]")

# Changing the socket restarts it
out = switch_to_specialisation("modified-socket")
assert_lacks(out, "stopping the following units:")
#assert_lacks(out, "reloading the following units:")
assert_contains(out, "restarting the following units: socket-activated.socket\n")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
machine.succeed("[ $(stat -c%a /run/test.sock) = 666 ]") # change was applied

# The unit is properly activated when the socket is accessed
if machine.succeed("socat - UNIX-CONNECT:/run/test.sock") != "hello":
raise Exception("Socket was not properly activated")

# Changing the socket restarts it and ignores the active service
out = switch_to_specialisation("simple-socket")
assert_contains(out, "stopping the following units: socket-activated.service\n")
assert_lacks(out, "reloading the following units:")
assert_contains(out, "restarting the following units: socket-activated.socket\n")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
machine.succeed("[ $(stat -c%a /run/test.sock) = 777 ]") # change was applied

# Changing the service does nothing when the service is not active
out = switch_to_specialisation("modified-service")
assert_lacks(out, "stopping the following units:")
assert_lacks(out, "reloading the following units:")
assert_lacks(out, "restarting the following units:")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")

# Activating the service and modifying it stops it but leaves the socket untouched
machine.succeed("socat - UNIX-CONNECT:/run/test.sock")
out = switch_to_specialisation("simple-socket")
assert_contains(out, "stopping the following units: socket-activated.service\n")
assert_lacks(out, "reloading the following units:")
assert_lacks(out, "restarting the following units:")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")

# Activating the service and both the service and the socket stops the service and restarts the socket
machine.succeed("socat - UNIX-CONNECT:/run/test.sock")
out = switch_to_specialisation("modified-service-and-socket")
assert_contains(out, "stopping the following units: socket-activated.service\n")
assert_lacks(out, "reloading the following units:")
assert_contains(out, "restarting the following units: socket-activated.socket\n")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")

with subtest("restart and reload by activation file"):
out = switch_to_specialisation("service-and-socket")
# Switch to a system where the example services get restarted
# by the activation script
out = switch_to_specialisation("restart-and-reload-by-activation-script")
assert_lacks(out, "stopping the following units:")
assert_contains(out, "stopping the following units as well: simple-service.service, socket-activated.service\n")
assert_contains(out, "reloading the following units: simple-reload-service.service\n")
assert_contains(out, "restarting the following units: simple-restart-service.service\n")
assert_contains(out, "\nstarting the following units: simple-service.service")

# The same, but in dry mode
switch_to_specialisation("service-and-socket")
out = switch_to_specialisation("restart-and-reload-by-activation-script", action="dry-activate")
assert_lacks(out, "would stop the following units:")
assert_contains(out, "would stop the following units as well: simple-service.service, socket-activated.service\n")
assert_contains(out, "would reload the following units: simple-reload-service.service\n")
assert_contains(out, "would restart the following units: simple-restart-service.service\n")
assert_contains(out, "\nwould start the following units: simple-service.service")

with subtest("mounts"):
switch_to_specialisation("with-mount")
out = machine.succeed("mount | grep 'on /testmount'")
assert_contains(out, "size=1024k")

out = switch_to_specialisation("with-mount-modified")
assert_lacks(out, "stopping the following units:")
assert_contains(out, "reloading the following units: testmount.mount\n")
assert_lacks(out, "restarting the following units:")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
# It changed
out = machine.succeed("mount | grep 'on /testmount'")
assert_contains(out, "size=10240k")

with subtest("timers"):
switch_to_specialisation("with-timer")
out = machine.succeed("systemctl show test-timer.timer")
assert_contains(out, "OnCalendar=2014-03-25 02:59:56 UTC")

out = switch_to_specialisation("with-timer-modified")
assert_lacks(out, "stopping the following units:")
assert_lacks(out, "reloading the following units:")
assert_contains(out, "restarting the following units: test-timer.timer\n")
assert_lacks(out, "\nstarting the following units:")
assert_lacks(out, "the following new units were started:")
assert_lacks(out, "as well:")
# It changed
out = machine.succeed("systemctl show test-timer.timer")
assert_contains(out, "OnCalendar=Fri 2012-11-23 16:00:00")

with subtest("paths"):
switch_to_specialisation("with-path")
machine.fail("test -f /testpath-modified")

# touch the file, unit should be triggered
machine.succeed("touch /testpath")
machine.wait_until_succeeds("test -f /testpath-modified")

machine.succeed("rm /testpath /testpath-modified")
switch_to_specialisation("with-path-modified")

machine.succeed("touch /testpath")
machine.fail("test -f /testpath-modified")
machine.succeed("touch /testpath2")
machine.wait_until_succeeds("test -f /testpath-modified")

# This test ensures that changes to slice configuration get applied.
# We test this by having a slice that allows no memory allocation at
# all and starting a service within it. If the service crashes, the slice
# is applied and if we modify the slice to allow memory allocation, the
# service should successfully start.
with subtest("slices"):
machine.succeed("echo 0 > /proc/sys/vm/panic_on_oom") # allow OOMing
out = switch_to_specialisation("with-slice")
machine.fail("systemctl start testservice.service")
out = switch_to_specialisation("with-slice-non-crashing")
machine.succeed("systemctl start testservice.service")
machine.succeed("echo 1 > /proc/sys/vm/panic_on_oom") # disallow OOMing

'';
})

@@ -14,9 +14,6 @@

stdenv.mkDerivation rec {
pname = "csound";
# When updating, please check if https://github.com/csound/csound/issues/1078
# has been fixed in the new version so we can use the normal fluidsynth
# version and remove fluidsynth 1.x from nixpkgs again.
version = "6.16.2";

hardeningDisable = [ "format" ];

@@ -16,6 +16,7 @@
, lilv
, lsp-plugins
, lv2
, mda_lv2
, meson
, ninja
, nlohmann_json
@@ -25,20 +26,20 @@
, rnnoise
, rubberband
, speexdsp
, wrapGAppsHook
, wrapGAppsHook4
, zam-plugins
, zita-convolver
}:

stdenv.mkDerivation rec {
pname = "easyeffects";
version = "6.0.3";
version = "6.1.3";

src = fetchFromGitHub {
owner = "wwmm";
repo = "easyeffects";
rev = "v${version}";
sha256 = "sha256-GzqPC/m/HMthLMamhJ4EXX6fxZYscdX1QmXgqHOPEcg=";
sha256 = "sha256-1UfeqPJxY4YT98UdqTZtG+QUBOZlKfK+7WbszhO22A0=";
};

nativeBuildInputs = [
@@ -48,7 +49,7 @@ stdenv.mkDerivation rec {
ninja
pkg-config
python3
wrapGAppsHook
wrapGAppsHook4
];

buildInputs = [
@@ -74,17 +75,20 @@ stdenv.mkDerivation rec {
postPatch = ''
chmod +x meson_post_install.py
patchShebangs meson_post_install.py
# https://github.com/wwmm/easyeffects/pull/1205
substituteInPlace meson_post_install.py --replace "gtk-update-icon-cache" "gtk4-update-icon-cache"
'';

preFixup =
let
lv2Plugins = [
calf # limiter, compressor exciter, bass enhancer and others
lsp-plugins # delay
calf # compressor exciter, bass enhancer and others
lsp-plugins # delay, limiter, multiband compressor
mda_lv2 # loudness
zam-plugins # maximizer
];
ladspaPlugins = [
rubberband # pitch shifting
zam-plugins # maximizer
];
in
''

@@ -1,13 +1,13 @@
{ stdenv, lib, fetchFromGitHub, faust2jaqt, faust2lv2 }:
stdenv.mkDerivation rec {
pname = "faustPhysicalModeling";
version = "2.30.5";
version = "2.33.1";

src = fetchFromGitHub {
owner = "grame-cncm";
repo = "faust";
rev = version;
sha256 = "sha256-hfpMeUhv6FC9lnPCfdWnAFCaKiteplyrS/o3Lf7cQY4=";
sha256 = "sha256-gzkfLfNhJHg/jEhf/RQDhHnXxn3UI15eDZfutKt3yGk=";
};

buildInputs = [ faust2jaqt faust2lv2 ];

@@ -24,7 +24,7 @@
, lrdf
, lv2
, pkg-config
, python2
, python3
, sassc
, serd
, sord
@@ -63,7 +63,7 @@ stdenv.mkDerivation rec {
hicolor-icon-theme
intltool
pkg-config
python2
python3
wafHook
wrapGAppsHook
];

pkgs/applications/audio/hushboard/default.nix (new file, 73 lines)
@@ -0,0 +1,73 @@
{ lib
, buildPythonApplication
, fetchFromGitHub
, gobject-introspection
, gtk3
, libappindicator
, libpulseaudio
, librsvg
, pycairo
, pygobject3
, six
, wrapGAppsHook
, xlib
}:

buildPythonApplication {
pname = "hushboard";
version = "unstable-2021-03-17";

src = fetchFromGitHub {
owner = "stuartlangridge";
repo = "hushboard";
rev = "c16611c539be111891116a737b02c5fb359ad1fc";
sha256 = "06jav6j0bsxhawrq31cnls8zpf80fpwk0cak5s82js6wl4vw2582";
};

nativeBuildInputs = [
wrapGAppsHook
];

buildInputs = [
gobject-introspection
gtk3
libappindicator
libpulseaudio
];

propagatedBuildInputs = [
pycairo
pygobject3
six
xlib
];

postPatch = ''
substituteInPlace hushboard/_pulsectl.py \
--replace "ctypes.util.find_library('libpulse') or 'libpulse.so.0'" "'${libpulseaudio}/lib/libpulse.so.0'"
substituteInPlace snap/gui/hushboard.desktop \
--replace "\''${SNAP}/hushboard/icons/hushboard.svg" "hushboard"
'';

postInstall = ''
# Fix tray icon, see e.g. https://github.com/NixOS/nixpkgs/pull/43421
wrapProgram $out/bin/hushboard \
--set GDK_PIXBUF_MODULE_FILE "${librsvg.out}/lib/gdk-pixbuf-2.0/2.10.0/loaders.cache"

mkdir -p $out/share/applications $out/share/icons/hicolor/{scalable,512x512}/apps
cp snap/gui/hushboard.desktop $out/share/applications
cp hushboard/icons/hushboard.svg $out/share/icons/hicolor/scalable/apps
cp hushboard-512.png $out/share/icons/hicolor/512x512/apps/hushboard.png
'';

# There are no tests
doCheck = false;

meta = with lib; {
homepage = "https://kryogenix.org/code/hushboard/";
license = licenses.mit;
description = "Mute your microphone while typing";
platforms = platforms.linux;
maintainers = with maintainers; [ sersorrel ];
};
}

pkgs/applications/audio/in-formant/default.nix (new file, 45 lines)
@@ -0,0 +1,45 @@
{ stdenv, cmake, lib, fetchFromGitHub, qt5, fftw, libtorch-bin, portaudio, eigen
, xorg, pkg-config, autoPatchelfHook, soxr
}:

stdenv.mkDerivation rec {
pname = "in-formant";
version = "2021-06-30";

# no Qt6 yet, so we're stuck in the last Qt5-supporting commit: https://github.com/NixOS/nixpkgs/issues/108008
src = fetchFromGitHub {
owner = "in-formant";
repo = "in-formant";
rev = "e28e628cf5ff0949a7b046d220cc884f6035f31a";
sha256 = "sha256-YvtV0wGUNmI/+GGxrIfTk/l8tqUsWgc/LAI17X+AWGI=";
fetchSubmodules = true;
};

nativeBuildInputs = [ cmake pkg-config qt5.wrapQtAppsHook autoPatchelfHook ];

buildInputs = [
qt5.qtbase
qt5.qtquickcontrols
qt5.qtquickcontrols2
qt5.qtcharts
fftw
libtorch-bin
portaudio
eigen
xorg.libxcb
soxr
];

installPhase = ''
mkdir -p $out/bin
cp in-formant $out/bin
'';

meta = with lib; {
description = "A real-time pitch and formant tracking software";
homepage = "https://github.com/in-formant/in-formant";
license = licenses.asl20;
platforms = platforms.linux;
maintainers = with maintainers; [ ckie ];
};
}

File diff suppressed because it is too large
@@ -4,17 +4,16 @@

rustPlatform.buildRustPackage rec {
pname = "librespot";
version = "0.1.6";
version = "0.3.0";

src = fetchFromGitHub {
owner = "librespot-org";
repo = "librespot";
rev = "v${version}";
sha256 = "153i9n3qwmmwc29f62cz8nbqrlry16iygvibm1sdnvpf0s6wk5f3";
sha256 = "0n7h690gplpp47gdj038g6ncgwr7wvwfkg00cbrbvxhv7kzqqa1f";
};

cargoPatches = [ ./cargo-lock.patch ];
cargoSha256 = "11d64rpq4b5rdxk5wx0hhzgc6mvs6h2br0w3kfncfklp67vn3v4v";
cargoSha256 = "0qakvpxvn84ppgs3qlsfan4flqkmjcgs698w25jasx9ymiv8wc3s";

cargoBuildFlags = with lib; [
"--no-default-features"

@@ -1,5 +1,5 @@
{ lib, fetchFromGitHub, cmake, pkg-config, alsa-lib ? null, fftwFloat, fltk13
, fluidsynth_1 ? null, lame ? null, libgig ? null, libjack2 ? null, libpulseaudio ? null
, fluidsynth ? null, lame ? null, libgig ? null, libjack2 ? null, libpulseaudio ? null
, libsamplerate, libsoundio ? null, libsndfile, libvorbis ? null, portaudio ? null
, qtbase, qtx11extras, qttools, SDL ? null, mkDerivation }:

@@ -21,7 +21,7 @@ mkDerivation rec {
alsa-lib
fftwFloat
fltk13
fluidsynth_1
fluidsynth
lame
libgig
libjack2

@@ -7,9 +7,9 @@ stdenv.mkDerivation rec {
version = "1.3.0.1";

src = fetchFromGitHub {
rev = version;
repo = "mimic1";
owner = "MycroftAI";
repo = "mimic1";
rev = version;
sha256 = "1agwgby9ql8r3x5rd1rgx3xp9y4cdg4pi3kqlz3vanv9na8nf3id";
};

@@ -1,29 +1,56 @@
{ lib, python3Packages, mopidy }:
{ lib
, fetchFromGitHub
, python3
, mopidy
}:

python3Packages.buildPythonApplication rec {
python3.pkgs.buildPythonApplication rec {
pname = "mopidy-youtube";
version = "3.4";

src = python3Packages.fetchPypi {
inherit version;
pname = "Mopidy-YouTube";
sha256 = "sha256-996MNByMcKq1woDGK6jsmAHS9TOoBrwSGgPmcShvTRw=";
disabled = python3.pythonOlder "3.7";

src = fetchFromGitHub {
owner = "natumbri";
repo = pname;
rev = "v${version}";
sha256 = "0lm6nn926qkrwzvj64yracdixfrnv5zk243msjskrnlzkhgk01rk";
};

postPatch = "sed s/bs4/beautifulsoup4/ -i setup.cfg";

propagatedBuildInputs = with python3Packages; [
propagatedBuildInputs = with python3.pkgs; [
beautifulsoup4
cachetools
pykka
requests
youtube-dl
ytmusicapi
] ++ [ mopidy ];
] ++ [
mopidy
];

doCheck = false;
checkInputs = with python3.pkgs; [
vcrpy
pytestCheckHook
];

disabledTests = [
# Test requires a YouTube API key
"test_get_default_config"
];

disabledTestPaths = [
# Fails with an import error
"tests/test_backend.py"
];

pythonImportsCheck = [
"mopidy_youtube"
];

meta = with lib; {
description = "Mopidy extension for playing music from YouTube";
homepage = "https://github.com/natumbri/mopidy-youtube";
license = licenses.asl20;
maintainers = [ maintainers.spwhitt ];
maintainers = with maintainers; [ spwhitt ];
};
}

@@ -1,6 +1,7 @@
{ lib
, python3
, fetchFromGitHub
, substituteAll
, appstream-glib
, desktop-file-utils
, gettext
@@ -13,12 +14,13 @@
, meson
, ninja
, pkg-config
, pulseaudio
, wrapGAppsHook
}:

python3.pkgs.buildPythonApplication rec {
pname = "mousai";
version = "0.4.2";
version = "0.6.6";

format = "other";

@@ -26,9 +28,16 @@ python3.pkgs.buildPythonApplication rec {
owner = "SeaDve";
repo = "Mousai";
rev = "v${version}";
sha256 = "sha256-zH++GGFIz3oxkKOYB4zhY6yL3vENEXxtrv8mZZ+41kU=";
sha256 = "sha256-nCbFVFg+nVF8BOBfdzQVgdTRXR5UF18PJFC266yTFwg=";
};

patches = [
(substituteAll {
src = ./paths.patch;
pactl = "${lib.getBin pulseaudio}/bin/pactl";
})
];

postPatch = ''
patchShebangs build-aux/meson
'';
@@ -53,6 +62,7 @@ python3.pkgs.buildPythonApplication rec {
gtk4
libadwaita
librsvg
pulseaudio
];

propagatedBuildInputs = with python3.pkgs; [

pkgs/applications/audio/mousai/paths.patch (new file, 13 lines)
@@ -0,0 +1,13 @@
diff --git a/src/backend/utils.py b/src/backend/utils.py
index cebc009..0087c09 100644
--- a/src/backend/utils.py
+++ b/src/backend/utils.py
@@ -79,7 +79,7 @@ class Utils:
@staticmethod
def get_default_audio_sources():
pactl_output = subprocess.run(
- ['/usr/bin/pactl', 'info'],
+ ['@pactl@', 'info'],
stdout=subprocess.PIPE,
text=True
).stdout.splitlines()

@@ -8,7 +8,7 @@ let
src = fetchurl {
url = "https://plexamp.plex.tv/plexamp.plex.tv/desktop/Plexamp-${version}.AppImage";
name="${pname}-${version}.AppImage";
sha512 = "n+ZFfKYUx6silpH4bGNRdh5JJPchjKNzFLAhZQPecK2DkmygY35/ZYUNSBioqxuGKax+I/mY5podmQ5iD95ohQ==";
sha512 = "jKuuM1vQANGYE2W0OGl+35mB1ve5K/xPcBTk2O1azPRBDlRVU0DHRSQy2T71kwhxES1ASRt91qAV/dATk6oUkw==";
};

appimageContents = appimageTools.extractType2 {

@@ -13,13 +13,13 @@

mkDerivation rec {
pname = "ptcollab";
version = "0.4.3";
version = "0.5.0";

src = fetchFromGitHub {
owner = "yuxshao";
repo = "ptcollab";
rev = "v${version}";
sha256 = "sha256-bFFWPl7yaTwCKz7/f9Vk6mg0roUnig0dFERS4IE4R7g=";
sha256 = "sha256-sN3O8m+ib6Chb/RXTFbNWW6PnrolCHpmC/avRX93AH4=";
};

nativeBuildInputs = [ qmake pkg-config ];

@@ -17,12 +17,14 @@

stdenv.mkDerivation rec {
pname = "reaper";
version = "6.29";
version = "6.38";

src = fetchurl {
url = "https://www.reaper.fm/files/${lib.versions.major version}.x/reaper${builtins.replaceStrings ["."] [""] version}_linux_${stdenv.targetPlatform.qemuArch}.tar.xz";
hash = if stdenv.isx86_64 then "sha256-DOul6J2Y7szy4+Q4SeO0uG6PSuU+MELE7ky8W3mSpTQ="
else "sha256-67iTi6bFlbQtyCjnPIjK8K/3aV+zaCsWBRCWmgYonM4=";
url = "https://www.reaper.fm/files/${lib.versions.major version}.x/reaper${builtins.replaceStrings ["."] [""] version}_linux_${stdenv.hostPlatform.qemuArch}.tar.xz";
hash = {
x86_64-linux = "sha256-K5EnrmzP8pyW9dR1fbMzkPzpS6aHm8JF1+m3afnH4rU=";
aarch64-linux = "sha256-6wNWDXjQNyfU2l9Xi9JtmAuoKtHuIY5cvNMjYkwh2Sk=";
}.${stdenv.hostPlatform.system};
};

nativeBuildInputs = [
@@ -76,6 +78,6 @@ stdenv.mkDerivation rec {
homepage = "https://www.reaper.fm/";
license = licenses.unfree;
platforms = [ "x86_64-linux" "aarch64-linux" ];
maintainers = with maintainers; [ jfrankenau ilian ];
maintainers = with maintainers; [ jfrankenau ilian orivej ];
};
}

@@ -15,13 +15,13 @@ in

stdenv.mkDerivation rec {
pname = "btcpayserver";
version = "1.2.3";
version = "1.2.4";

src = fetchFromGitHub {
owner = pname;
repo = pname;
rev = "v${version}";
sha256 = "sha256-6ktlnbYb+pOXwl52QmnqDsPlXaiF1ghjQg1yfznulqo=";
sha256 = "sha256-vjNJ08twsJ036TTFF6srOGshDpP7ZwWCGN0XjrtFT/g=";
};

nativeBuildInputs = [ dotnetSdk dotnetPackages.Nuget makeWrapper ];

@@ -7,13 +7,13 @@
with lib;
stdenv.mkDerivation rec {
name = "dogecoin" + (toString (optional (!withGui) "d")) + "-" + version;
version = "1.14.3";
version = "1.14.4";

src = fetchFromGitHub {
owner = "dogecoin";
repo = "dogecoin";
rev = "v${version}";
sha256 = "sha256-kozUnIislQDtgjeesYHKu4sB1j9juqaWvyax+Lb/0pc=";
sha256 = "sha256-uITX5DSyC/m0ynwCkkbGgUj8kMuNgnsNo8H8RQSGPEA=";
};

nativeBuildInputs = [ pkg-config autoreconfHook ];

@@ -9,16 +9,16 @@

rustPlatform.buildRustPackage rec {
pname = "electrs";
version = "0.9.0";
version = "0.9.1";

src = fetchFromGitHub {
owner = "romanz";
repo = pname;
rev = "v${version}";
sha256 = "04dqbn2nfzllxfcn3v9vkfy2hn2syihijr575621r1pj65pcgf8y";
hash = "sha256-GDO8iGntQncvdJiDMBJk9GrGF9JToasbLRzju3S0TS0=";
};

cargoSha256 = "0hl8q62lankrab8gq9vxmkn68drs0hw5pk0q6aiq8fxsb63dzsw0";
cargoHash = "sha256-Ms785+3Z4xEUW8FRRu1FIHk7HSWYLBThKlJDFjW6j0I=";

# needed for librocksdb-sys
nativeBuildInputs = [ llvmPackages.clang ];

pkgs/applications/blockchains/electrs/update.sh (new executable file, 39 lines)
@@ -0,0 +1,39 @@
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p coreutils curl jq git gnupg common-updater-scripts
set -euo pipefail

# Fetch latest release, GPG-verify the tag, update derivation

scriptDir=$(cd "${BASH_SOURCE[0]%/*}" && pwd)
nixpkgs=$(realpath "$scriptDir"/../../../..)

oldVersion=$(nix-instantiate --eval -E "(import \"$nixpkgs\" { config = {}; overlays = []; }).electrs.version" | tr -d '"')
version=$(curl -s --show-error "https://api.github.com/repos/romanz/electrs/releases/latest" | jq -r '.tag_name' | tail -c +2)

if [[ $version == $oldVersion ]]; then
echo "Already at latest version $version"
exit 0
fi
echo "New version: $version"

tmpdir=$(mktemp -d /tmp/electrs-verify-gpg.XXX)
repo=$tmpdir/repo
trap "rm -rf $tmpdir" EXIT

git clone --depth 1 --branch v${version} -c advice.detachedHead=false https://github.com/romanz/electrs $repo

export GNUPGHOME=$tmpdir
echo
echo "Fetching romanz's key"
gpg --keyserver hkps://keys.openpgp.org --recv-keys 15c8c3574ae4f1e25f3f35c587cae5fa46917cbb 2> /dev/null
echo
echo "Verifying commit"
git -C $repo verify-tag v${version}

rm -rf $repo/.git
hash=$(nix hash path $repo)

(cd "$nixpkgs" && update-source-version electrs "$version" "$hash")
sed -i 's|cargoHash = .*|cargoHash = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";|' "$scriptDir/default.nix"
echo
echo "electrs: $oldVersion -> $version"
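The script deliberately resets `cargoHash` to an all-A placeholder; the next build then fails with a fixed-output hash mismatch that reports the real value. A rough sketch of the intended workflow, assuming a nixpkgs checkout (the grep pattern is only illustrative):

```sh
# Run the updater; its nix-shell shebang pulls in curl, jq, git, gnupg and
# common-updater-scripts, so no extra environment setup is needed.
./pkgs/applications/blockchains/electrs/update.sh

# Rebuild once to learn the real cargoHash from the hash-mismatch error,
# then copy the reported "got:" value into default.nix.
nix-build -A electrs 2>&1 | grep -A 2 "hash mismatch"
```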
@@ -5,16 +5,16 @@

buildGoModule rec {
pname = "lightning-pool";
version = "0.5.0-alpha";
version = "0.5.1-alpha";

src = fetchFromGitHub {
owner = "lightninglabs";
repo = "pool";
rev = "v${version}";
sha256 = "0i8qkxnrx3a89aw3v0mx7przlldl8kc0ng6g1m435366y6nzdarb";
sha256 = "147s0p4arfxl2akzm267p8zfy6hgssym5rwxv78kp8i39mfinpkn";
};

vendorSha256 = "04v2788w8l734n5xz6fwjbwkqlbk8q77nwncjpn7890mw75yd3rn";
vendorSha256 = "0zd3bwqi0hnk0562x9hd62cwjw1xj386m83jagg41kzz0cpcr7zl";

subPackages = [ "cmd/pool" "cmd/poold" ];

Some files were not shown because too many files have changed in this diff.