Merge branch 'master' into iosevka-buildnpmpackage

commit b7f9535f82
802 changed files with 19270 additions and 18060 deletions

.github/CODEOWNERS (vendored, 4 changes)
@@ -134,7 +134,9 @@
/pkgs/development/ruby-modules @marsam

# Rust
/pkgs/development/compilers/rust @Mic92 @LnL7 @zowoq
/pkgs/development/compilers/rust @Mic92 @LnL7 @zowoq @winterqt @figsoda
/pkgs/build-support/rust @zowoq @winterqt @figsoda
/doc/languages-frameworks/rust.section.md @zowoq @winterqt @figsoda

# C compilers
/pkgs/development/compilers/gcc @matthewbauer
@@ -1,6 +1,19 @@
# Testers {#chap-testers}
This chapter describes several testing builders which are available in the `testers` namespace.

## `hasPkgConfigModule` {#tester-hasPkgConfigModule}

Checks whether a package exposes a certain `pkg-config` module.

Example:

```nix
passthru.tests.pkg-config = testers.hasPkgConfigModule {
  package = finalAttrs.finalPackage;
  moduleName = "libfoo";
};
```

## `testVersion` {#tester-testVersion}

Checks that the command output contains the specified version.
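For orientation, a minimal sketch of how `testVersion` is typically wired into `passthru.tests`; the command and version values below are illustrative assumptions, not part of this commit:

```nix
passthru.tests.version = testers.testVersion {
  package = finalAttrs.finalPackage;
  # Hypothetical values; by default the command is "<pname> --version"
  # and the expected version is the package's own version attribute.
  command = "libfoo --version";
  version = finalAttrs.version;
};
```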
@@ -27,7 +27,7 @@ If the build succeeds, the manual will be in `./result/share/doc/nixpkgs/manual.

As per [RFC 0072](https://github.com/NixOS/rfcs/pull/72), all new documentation content should be written in [CommonMark](https://commonmark.org/) Markdown dialect.

Additional syntax extensions are available, though not all extensions can be used in NixOS option documentation. The following extensions are currently used:
Additional syntax extensions are available, all of which can be used in NixOS option documentation. The following extensions are currently used:

- []{#ssec-contributing-markup-anchors}
  Explicitly defined **anchors** on headings, to allow linking to sections. These should be always used, to ensure the anchors can be linked even when the heading text changes, and to prevent conflicts between [automatically assigned identifiers](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/auto_identifiers.md).

@@ -38,6 +38,10 @@ Additional syntax extensions are available, though not all extensions can be used
## Syntax {#sec-contributing-markup}
  ```

  ::: {.note}
  NixOS option documentation does not support headings in general.
  :::

- []{#ssec-contributing-markup-anchors-inline}
  **Inline anchors**, which allow linking to arbitrary places in the text (e.g. individual list items, sentences…).

@@ -67,10 +71,6 @@ Additional syntax extensions are available, though not all extensions can be used

  This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point). Though, the feature originates from [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage) with slightly different syntax.

  ::: {.note}
  Inline roles are available for option documentation.
  :::

- []{#ssec-contributing-markup-admonitions}
  **Admonitions**, set off from the text to bring attention to something.

@@ -96,10 +96,6 @@ Additional syntax extensions are available, though not all extensions can be used
  - [`tip`](https://tdg.docbook.org/tdg/5.0/tip.html)
  - [`warning`](https://tdg.docbook.org/tdg/5.0/warning.html)

  ::: {.note}
  Admonitions are available for option documentation.
  :::

- []{#ssec-contributing-markup-definition-lists}
  [**Definition lists**](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/definition_lists.md), for defining a group of terms:
@@ -32,6 +32,7 @@
<xi:include href="octave.section.xml" />
<xi:include href="perl.section.xml" />
<xi:include href="php.section.xml" />
<xi:include href="pkg-config.section.xml" />
<xi:include href="python.section.xml" />
<xi:include href="qt.section.xml" />
<xi:include href="r.section.xml" />

doc/languages-frameworks/pkg-config.section.md (new file, 9 lines)
@@ -0,0 +1,9 @@
# pkg-config {#sec-pkg-config}

*pkg-config* is a unified interface for declaring and querying built C/C++ libraries.

Nixpkgs provides a couple of facilities for working with this tool.

- A [setup hook](#setup-hook-pkg-config) bundled with the `pkg-config` package, to bring a derivation's declared build inputs into the environment.
- The [`validatePkgConfig` setup hook](https://nixos.org/manual/nixpkgs/stable/#validatepkgconfig), for packages that provide pkg-config modules.
- The `defaultPkgConfigPackages` package set: a set of aliases, named after the modules they provide. This is meant to be used by language-to-nix integrations. Hand-written packages should use the normal Nixpkgs attribute name instead.
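A rough sketch of the first bullet in practice; the package (`libexample`) and the dependency (`zlib`) are placeholders, not taken from this commit:

```nix
stdenv.mkDerivation {
  pname = "libexample";  # hypothetical package
  version = "1.0";
  src = ./.;
  # The pkg-config setup hook makes the .pc files of buildInputs
  # discoverable through PKG_CONFIG_PATH during the build.
  nativeBuildInputs = [ pkg-config ];
  buildInputs = [ zlib ];
}
```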
@@ -253,7 +253,7 @@ The propagated equivalent of `depsTargetTarget`. This is prefixed for the same r

#### `NIX_DEBUG` {#var-stdenv-NIX_DEBUG}

A natural number indicating how much information to log. If set to 1 or higher, `stdenv` will print moderate debugging information during the build. In particular, the `gcc` and `ld` wrapper scripts will print out the complete command line passed to the wrapped tools. If set to 6 or higher, the `stdenv` setup script will be run with `set -x` tracing. If set to 7 or higher, the `gcc` and `ld` wrapper scripts will also be run with `set -x` tracing.
A number between 0 and 7 indicating how much information to log. If set to 1 or higher, `stdenv` will print moderate debugging information during the build. In particular, the `gcc` and `ld` wrapper scripts will print out the complete command line passed to the wrapped tools. If set to 6 or higher, the `stdenv` setup script will be run with `set -x` tracing. If set to 7 or higher, the `gcc` and `ld` wrapper scripts will also be run with `set -x` tracing.

### Attributes affecting build properties {#attributes-affecting-build-properties}
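To make the `NIX_DEBUG` values above concrete, a minimal sketch of setting it on a derivation (the package itself is hypothetical):

```nix
stdenv.mkDerivation {
  pname = "nix-debug-demo";  # hypothetical package
  version = "0.1";
  src = ./.;
  # 1 or higher prints moderate debugging information; 6 also runs the
  # stdenv setup script with `set -x`; 7 traces the gcc/ld wrappers too.
  NIX_DEBUG = 6;
}
```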
@@ -168,7 +168,7 @@ rec {
     ] { a.b.c = 0; }
     => { a = { b = { d = 1; }; }; x = { y = "xy"; }; }

     Type: updateManyAttrsByPath :: [{ path :: [String], update :: (Any -> Any) }] -> AttrSet -> AttrSet
     Type: updateManyAttrsByPath :: [{ path :: [String]; update :: (Any -> Any); }] -> AttrSet -> AttrSet
  */
  updateManyAttrsByPath = let
    # When recursing into attributes, instead of updating the `path` of each

@@ -414,7 +414,7 @@ rec {
       => { name = "some"; value = 6; }

     Type:
       nameValuePair :: String -> Any -> { name :: String, value :: Any }
       nameValuePair :: String -> Any -> { name :: String; value :: Any; }
  */
  nameValuePair =
    # Attribute name

@@ -449,7 +449,7 @@ rec {
       => { foo_x = "bar-a"; foo_y = "bar-b"; }

     Type:
       mapAttrs' :: (String -> Any -> { name = String; value = Any }) -> AttrSet -> AttrSet
       mapAttrs' :: (String -> Any -> { name :: String; value :: Any; }) -> AttrSet -> AttrSet
  */
  mapAttrs' =
    # A function, given an attribute's name and value, returns a new `nameValuePair`.

@@ -649,7 +649,7 @@ rec {

     Example:
       zipAttrsWith (name: values: values) [{a = "x";} {a = "y"; b = "z";}]
       => { a = ["x" "y"]; b = ["z"] }
       => { a = ["x" "y"]; b = ["z"]; }

     Type:
       zipAttrsWith :: (String -> [ Any ] -> Any) -> [ AttrSet ] -> AttrSet

@@ -664,7 +664,7 @@ rec {

     Example:
       zipAttrs [{a = "x";} {a = "y"; b = "z";}]
       => { a = ["x" "y"]; b = ["z"] }
       => { a = ["x" "y"]; b = ["z"]; }

     Type:
       zipAttrs :: [ AttrSet ] -> AttrSet
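As a small usage sketch consistent with the `updateManyAttrsByPath` signature above (the values are chosen purely for illustration):

```nix
lib.updateManyAttrsByPath [
  { path = [ "a" "b" ]; update = old: old + 1; }
  { path = [ "x" "y" ]; update = old: old + "y"; }
] { a.b = 0; x.y = "x"; }
# => { a = { b = 1; }; x = { y = "xy"; }; }
```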
@@ -252,7 +252,8 @@ rec {
      outputsList = map makeOutput outputs;

      drv' = (lib.head outputsList).value;
    in lib.deepSeq drv' drv';
    in if drv == null then null else
      lib.deepSeq drv' drv';

  /* Make a set of packages with a common scope. All packages called
     with the provided `callPackage` will be evaluated with the same
@@ -306,7 +306,7 @@ rec {
  /* Splits the elements of a list in two lists, `right` and
     `wrong`, depending on the evaluation of a predicate.

     Type: (a -> bool) -> [a] -> { right :: [a], wrong :: [a] }
     Type: (a -> bool) -> [a] -> { right :: [a]; wrong :: [a]; }

     Example:
       partition (x: x > 2) [ 5 1 2 3 4 ]

@@ -374,7 +374,7 @@ rec {
  /* Merges two lists of the same size together. If the sizes aren't the same
     the merging stops at the shortest.

     Type: zipLists :: [a] -> [b] -> [{ fst :: a, snd :: b}]
     Type: zipLists :: [a] -> [b] -> [{ fst :: a; snd :: b; }]

     Example:
       zipLists [ 1 2 ] [ "a" "b" ]
lib/meta.nix (11 changes)

@@ -76,20 +76,19 @@ rec {

       1. (legacy) a system string.

       2. (modern) a pattern for the platform `parsed` field.
       2. (modern) a pattern for the entire platform structure (see `lib.systems.inspect.platformPatterns`).

       3. (functional) a predicate function returning a boolean.
       3. (modern) a pattern for the platform `parsed` field (see `lib.systems.inspect.patterns`).

     We can inject these into a pattern for the whole of a structured platform,
     and then match that.
  */
  platformMatch = platform: elem:
    if builtins.isFunction elem
    then elem platform
    else let
  platformMatch = platform: elem: let
      pattern =
        if builtins.isString elem
        then { system = elem; }
        else if elem?parsed
        then elem
        else { parsed = elem; };
    in lib.matchAttrs pattern platform;
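To ground the accepted forms listed above, a hedged sketch of a `meta.platforms` list that mixes a legacy system string with a pattern for the `parsed` field (the package carrying this metadata is hypothetical):

```nix
{
  meta.platforms = [
    "x86_64-linux"                  # 1. (legacy) system string
    { cpu = { family = "arm"; }; }  # 3. (modern) pattern for the `parsed` field
  ];
}
```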
@@ -114,7 +114,7 @@ rec {

     You can omit the default path if the name of the option is also an attribute path in nixpkgs.

     Type: mkPackageOption :: pkgs -> string -> { default :: [string], example :: null | string | [string] } -> option
     Type: mkPackageOption :: pkgs -> string -> { default :: [string]; example :: null | string | [string]; } -> option

     Example:
       mkPackageOption pkgs "hello" { }

@@ -201,7 +201,7 @@ rec {

  /* Extracts values of all "value" keys of the given list.

     Type: getValues :: [ { value :: a } ] -> [a]
     Type: getValues :: [ { value :: a; } ] -> [a]

     Example:
       getValues [ { value = 1; } { value = 2; } ] // => [ 1 2 ]

@@ -211,7 +211,7 @@ rec {

  /* Extracts values of all "file" keys of the given list

     Type: getFiles :: [ { file :: a } ] -> [a]
     Type: getFiles :: [ { file :: a; } ] -> [a]

     Example:
       getFiles [ { file = "file1"; } { file = "file2"; } ] // => [ "file1" "file2" ]
@@ -7,6 +7,7 @@ let abis_ = abis; in
let abis = lib.mapAttrs (_: abi: builtins.removeAttrs abi [ "assertions" ]) abis_; in

rec {
  # these patterns are to be matched against {host,build,target}Platform.parsed
  patterns = rec {
    isi686 = { cpu = cpuTypes.i686; };
    isx86_32 = { cpu = { family = "x86"; bits = 32; }; };

@@ -81,8 +82,13 @@ rec {
    isMusl = with abis; map (a: { abi = a; }) [ musl musleabi musleabihf muslabin32 muslabi64 ];
    isUClibc = with abis; map (a: { abi = a; }) [ uclibc uclibceabi uclibceabihf ];

    isEfi = map (family: { cpu.family = family; })
      [ "x86" "arm" "aarch64" "riscv" ];
    isEfi = [
      { cpu = { family = "arm"; version = "6"; }; }
      { cpu = { family = "arm"; version = "7"; }; }
      { cpu = { family = "arm"; version = "8"; }; }
      { cpu = { family = "riscv"; }; }
      { cpu = { family = "x86"; }; }
    ];
  };

  matchAnyAttrs = patterns:

@@ -90,4 +96,13 @@ rec {
    else matchAttrs patterns;

  predicates = mapAttrs (_: matchAnyAttrs) patterns;

  # these patterns are to be matched against the entire
  # {host,build,target}Platform structure; they include a `parsed={}` marker so
  # that `lib.meta.availableOn` can distinguish them from the patterns which
  # apply only to the `parsed` field.
  platformPatterns = mapAttrs (_: p: { parsed = {}; } // p) {
    isStatic = { isStatic = true; };
  };
}
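The comment above notes that these whole-platform patterns exist so `lib.meta.availableOn` can match them; a minimal sketch of that interaction, with a purely illustrative package override:

```nix
let
  # Hypothetical package that only claims support for static platforms.
  onlyStatic = pkgs.hello.overrideAttrs (old: {
    meta = old.meta // {
      platforms = [ lib.systems.inspect.platformPatterns.isStatic ];
    };
  });
# Expected to evaluate to true, since pkgsStatic's host platform has isStatic = true.
in lib.meta.availableOn pkgsStatic.stdenv.hostPlatform onlyStatic
```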
@ -2530,6 +2530,12 @@
|
|||
githubId = 89596;
|
||||
name = "Florian Friesdorf";
|
||||
};
|
||||
ChaosAttractor = {
|
||||
email = "lostattractor@gmail.com";
|
||||
github = "LostAttractor";
|
||||
githubId = 46527539;
|
||||
name = "ChaosAttractor";
|
||||
};
|
||||
chekoopa = {
|
||||
email = "chekoopa@mail.ru";
|
||||
github = "chekoopa";
|
||||
|
@ -4236,6 +4242,12 @@
|
|||
githubId = 103082;
|
||||
name = "Ed Brindley";
|
||||
};
|
||||
eliandoran = {
|
||||
email = "contact@eliandoran.me";
|
||||
name = "Elian Doran";
|
||||
github = "eliandoran";
|
||||
githubId = 21236836;
|
||||
};
|
||||
elizagamedev = {
|
||||
email = "eliza@eliza.sh";
|
||||
github = "elizagamedev";
|
||||
|
@ -5339,6 +5351,12 @@
|
|||
githubId = 60962839;
|
||||
name = "Mazen Zahr";
|
||||
};
|
||||
gkleen = {
|
||||
name = "Gregor Kleen";
|
||||
email = "xpnfr@bouncy.email";
|
||||
github = "gkleen";
|
||||
githubId = 20089782;
|
||||
};
|
||||
gleber = {
|
||||
email = "gleber.p@gmail.com";
|
||||
github = "gleber";
|
||||
|
@ -13990,6 +14008,13 @@
|
|||
githubId = 2666479;
|
||||
name = "Y Nguyen";
|
||||
};
|
||||
superherointj = {
|
||||
name = "Sérgio Marcelo";
|
||||
email = "sergiomarcelo+nixpkgs@ya.ru";
|
||||
matrix = "@superherointj:matrix.org";
|
||||
github = "superherointj";
|
||||
githubId = 5861043;
|
||||
};
|
||||
SuperSandro2000 = {
|
||||
email = "sandro.jaeckel@gmail.com";
|
||||
matrix = "@sandro:supersandro.de";
|
||||
|
@ -14124,6 +14149,15 @@
|
|||
githubId = 5991987;
|
||||
name = "Alexander Sosedkin";
|
||||
};
|
||||
t4ccer = {
|
||||
email = "t4ccer@gmail.com";
|
||||
github = "t4ccer";
|
||||
githubId = 64430288;
|
||||
name = "Tomasz Maciosowski";
|
||||
keys = [{
|
||||
fingerprint = "6866 981C 4992 4D64 D154 E1AC 19E5 A2D8 B1E4 3F19";
|
||||
}];
|
||||
};
|
||||
tadeokondrak = {
|
||||
email = "me@tadeo.ca";
|
||||
github = "tadeokondrak";
|
||||
|
|
|
@@ -698,9 +698,11 @@ with lib.maintainers; {

  rust = {
    members = [
      andir
      figsoda
      lnl7
      mic92
      tjni
      winter
      zowoq
    ];
    scope = "Maintain the Rust compiler toolchain and nixpkgs integration.";
@@ -170,6 +170,6 @@ Packages
```

The latter option definition changes the default PostgreSQL package
used by NixOS's PostgreSQL service to 10.x. For more information on
used by NixOS's PostgreSQL service to 14.x. For more information on
packages, including how to add new ones, see
[](#sec-custom-packages).
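The option definition the paragraph refers to looks roughly like this in a NixOS configuration (a minimal sketch):

```nix
{
  services.postgresql.enable = true;
  # Pin the service to the 14.x series instead of the module default.
  services.postgresql.package = pkgs.postgresql_14;
}
```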
@@ -68,12 +68,15 @@ let

  sources = lib.sourceFilesBySuffices ./. [".xml"];

  modulesDoc = builtins.toFile "modules.xml" ''
    <section xmlns:xi="http://www.w3.org/2001/XInclude" id="modules">
      ${(lib.concatMapStrings (path: ''
        <xi:include href="${path}" />
      '') (lib.catAttrs "value" config.meta.doc))}
    </section>
  modulesDoc = runCommand "modules.xml" {
    nativeBuildInputs = [ pkgs.nixos-render-docs ];
  } ''
    nixos-render-docs manual docbook \
      --manpage-urls ${pkgs.path + "/doc/manpage-urls.json"} \
      "$out" \
      --section \
      --section-id modules \
      --chapters ${lib.concatMapStrings (p: "${p.value} ") config.meta.doc}
  '';

  generatedSources = runCommand "generated-docbook" {} ''
@@ -23,7 +23,7 @@ file.

  meta = {
    maintainers = with lib.maintainers; [ ericsagnes ];
    doc = ./default.xml;
    doc = ./default.md;
    buildDocsInSandbox = true;
  };
}

@@ -31,7 +31,9 @@ file.

- `maintainers` contains a list of the module maintainers.

- `doc` points to a valid DocBook file containing the module
- `doc` points to a valid [Nixpkgs-flavored CommonMark](
  https://nixos.org/manual/nixpkgs/unstable/#sec-contributing-markup
  ) file containing the module
  documentation. Its contents is automatically added to
  [](#ch-configuration). Changes to a module documentation have to
  be checked to not break building the NixOS manual:

@@ -40,26 +42,6 @@ file.
  $ nix-build nixos/release.nix -A manual.x86_64-linux
  ```

  This file should *not* usually be written by hand. Instead it is preferred
  to write documentation using CommonMark and converting it to DocBook
  using pandoc. The simplest documentation can be converted using just

  ```ShellSession
  $ pandoc doc.md -t docbook --top-level-division=chapter -f markdown+smart > doc.xml
  ```

  More elaborate documentation may wish to add one or more of the pandoc
  filters used to build the remainder of the manual, for example the GNOME
  desktop uses

  ```ShellSession
  $ pandoc gnome.md -t docbook --top-level-division=chapter \
    --extract-media=media -f markdown+smart \
    --lua-filter ../../../../../doc/build-aux/pandoc-filters/myst-reader/roles.lua \
    --lua-filter ../../../../../doc/build-aux/pandoc-filters/docbook-writer/rst-roles.lua \
    > gnome.xml
  ```

- `buildDocsInSandbox` indicates whether the option documentation for the
  module can be built in a derivation sandbox. This option is currently only
  honored for modules shipped by nixpkgs. User modules and modules taken from
@ -221,7 +221,7 @@ services.postgresql.package = pkgs.postgresql_14;
|
|||
</programlisting>
|
||||
<para>
|
||||
The latter option definition changes the default PostgreSQL
|
||||
package used by NixOS’s PostgreSQL service to 10.x. For more
|
||||
package used by NixOS’s PostgreSQL service to 14.x. For more
|
||||
information on packages, including how to add new ones, see
|
||||
<xref linkend="sec-custom-packages" />.
|
||||
</para>
|
||||
|
|
|
@ -28,7 +28,7 @@
|
|||
|
||||
meta = {
|
||||
maintainers = with lib.maintainers; [ ericsagnes ];
|
||||
doc = ./default.xml;
|
||||
doc = ./default.md;
|
||||
buildDocsInSandbox = true;
|
||||
};
|
||||
}
|
||||
|
@ -42,35 +42,16 @@
|
|||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>doc</literal> points to a valid DocBook file containing
|
||||
the module documentation. Its contents is automatically added to
|
||||
<literal>doc</literal> points to a valid
|
||||
<link xlink:href="https://nixos.org/manual/nixpkgs/unstable/#sec-contributing-markup">Nixpkgs-flavored
|
||||
CommonMark</link> file containing the module documentation. Its
|
||||
contents is automatically added to
|
||||
<xref linkend="ch-configuration" />. Changes to a module
|
||||
documentation have to be checked to not break building the NixOS
|
||||
manual:
|
||||
</para>
|
||||
<programlisting>
|
||||
$ nix-build nixos/release.nix -A manual.x86_64-linux
|
||||
</programlisting>
|
||||
<para>
|
||||
This file should <emphasis>not</emphasis> usually be written by
|
||||
hand. Instead it is preferred to write documentation using
|
||||
CommonMark and converting it to CommonMark using pandoc. The
|
||||
simplest documentation can be converted using just
|
||||
</para>
|
||||
<programlisting>
|
||||
$ pandoc doc.md -t docbook --top-level-division=chapter -f markdown+smart > doc.xml
|
||||
</programlisting>
|
||||
<para>
|
||||
More elaborate documentation may wish to add one or more of the
|
||||
pandoc filters used to build the remainder of the manual, for
|
||||
example the GNOME desktop uses
|
||||
</para>
|
||||
<programlisting>
|
||||
$ pandoc gnome.md -t docbook --top-level-division=chapter \
|
||||
--extract-media=media -f markdown+smart \
|
||||
--lua-filter ../../../../../doc/build-aux/pandoc-filters/myst-reader/roles.lua \
|
||||
--lua-filter ../../../../../doc/build-aux/pandoc-filters/docbook-writer/rst-roles.lua \
|
||||
> gnome.xml
|
||||
</programlisting>
|
||||
</listitem>
|
||||
<listitem>
|
||||
|
|
|
@ -351,6 +351,12 @@
|
|||
relying on this should provide their own implementation.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Calling <literal>makeSetupHook</literal> without passing a
|
||||
<literal>name</literal> argument is deprecated.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Qt 5.12 and 5.14 have been removed, as the corresponding
|
||||
|
@ -413,6 +419,17 @@
|
|||
https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The iputils package, which is installed by default, no longer
|
||||
provides the <literal>ninfod</literal>,
|
||||
<literal>rarpd</literal> and <literal>rdisc</literal> tools.
|
||||
See
|
||||
<link xlink:href="https://github.com/iputils/iputils/releases/tag/20221126">upstream’s
|
||||
release notes</link> for more details and available
|
||||
replacements.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="sec-release-23.05-notable-changes">
|
||||
|
@ -702,6 +719,13 @@
|
|||
<literal>hipcc</literal>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>services.nginx.recommendedProxySettings</literal> now
|
||||
removes the <literal>Connection</literal> header preventing
|
||||
clients from closing backend connections.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Resilio sync secret keys can now be provided using a secrets
|
||||
|
@ -766,15 +790,6 @@
|
|||
been fixed to allow more than one plugin in the path.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
A new option was added to the virtualisation module that
|
||||
enables specifying explicitly named network interfaces in QEMU
|
||||
VMs. The existing <literal>virtualisation.vlans</literal> is
|
||||
still supported for cases where the name of the network
|
||||
interface is irrelevant.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
</section>
|
||||
|
|
|
@ -50,21 +50,3 @@ for mf in ${MD_FILES[*]}; do
|
|||
done
|
||||
|
||||
popd
|
||||
|
||||
# now handle module chapters. we'll need extra checks to ensure that we don't process
|
||||
# markdown files we're not interested in, so we'll require an x.nix file for ever x.md
|
||||
# that we'll convert to xml.
|
||||
pushd "$DIR/../../modules"
|
||||
|
||||
mapfile -t MD_FILES < <(find . -type f -regex '.*\.md$')
|
||||
|
||||
for mf in ${MD_FILES[*]}; do
|
||||
[ -f "${mf%.md}.nix" ] || continue
|
||||
|
||||
pandoc --top-level-division=chapter "$mf" "${pandoc_flags[@]}" -o "${mf%.md}.xml"
|
||||
sed -i -e '1 i <!-- Do not edit this file directly, edit its companion .md instead\
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->' \
|
||||
"${mf%.md}.xml"
|
||||
done
|
||||
|
||||
popd
|
||||
|
|
|
@ -87,6 +87,8 @@ In addition to numerous new and upgraded packages, this release has the followin
|
|||
|
||||
- The EC2 image module previously detected and activated swap-formatted instance store devices and partitions in stage-1 (initramfs). This behaviour has been removed. Users relying on this should provide their own implementation.
|
||||
|
||||
- Calling `makeSetupHook` without passing a `name` argument is deprecated.
|
||||
|
||||
- Qt 5.12 and 5.14 have been removed, as the corresponding branches have been EOL upstream for a long time. This affected under 10 packages in nixpkgs, largely unmaintained upstream as well, however, out-of-tree package expressions may need to be updated manually.
|
||||
|
||||
- The [services.wordpress.sites.<name>.plugins](#opt-services.wordpress.sites._name_.plugins) and [services.wordpress.sites.<name>.themes](#opt-services.wordpress.sites._name_.themes) options have been converted from sets to attribute sets to allow for consumers to specify explicit install paths via attribute name.
|
||||
|
@ -101,6 +103,11 @@ In addition to numerous new and upgraded packages, this release has the followin
|
|||
|
||||
- .NET 5.0 was removed due to being end-of-life, use a newer, supported .NET version - https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core
|
||||
|
||||
- The iputils package, which is installed by default, no longer provides the
|
||||
`ninfod`, `rarpd` and `rdisc` tools. See
|
||||
[upstream's release notes](https://github.com/iputils/iputils/releases/tag/20221126)
|
||||
for more details and available replacements.
|
||||
|
||||
## Other Notable Changes {#sec-release-23.05-notable-changes}
|
||||
|
||||
<!-- To avoid merge conflicts, consider adding your item at an arbitrary place in the list instead. -->
|
||||
|
@ -176,6 +183,8 @@ In addition to numerous new and upgraded packages, this release has the followin
|
|||
|
||||
- `hip` has been separated into `hip`, `hip-common` and `hipcc`.
|
||||
|
||||
- `services.nginx.recommendedProxySettings` now removes the `Connection` header preventing clients from closing backend connections.
|
||||
|
||||
- Resilio sync secret keys can now be provided using a secrets file at runtime, preventing these secrets from ending up in the Nix store.
|
||||
|
||||
- The `firewall` and `nat` module now has a nftables based implementation. Enable `networking.nftables` to use it.
|
||||
|
@ -191,5 +200,3 @@ In addition to numerous new and upgraded packages, this release has the followin
|
|||
- `nixos-version` now accepts `--configuration-revision` to display more information about the current generation revision
|
||||
|
||||
- The option `services.nomad.extraSettingsPlugins` has been fixed to allow more than one plugin in the path.
|
||||
|
||||
- A new option was added to the virtualisation module that enables specifying explicitly named network interfaces in QEMU VMs. The existing `virtualisation.vlans` is still supported for cases where the name of the network interface is irrelevant.
|
||||
|
|
|
@ -148,42 +148,19 @@ in rec {
|
|||
'';
|
||||
|
||||
optionsDocBook = pkgs.runCommand "options-docbook.xml" {
|
||||
MANPAGE_URLS = pkgs.path + "/doc/manpage-urls.json";
|
||||
OTD_DOCUMENT_TYPE = documentType;
|
||||
OTD_VARIABLE_LIST_ID = variablelistId;
|
||||
OTD_OPTION_ID_PREFIX = optionIdPrefix;
|
||||
OTD_REVISION = revision;
|
||||
|
||||
nativeBuildInputs = [
|
||||
(let
|
||||
# python3Minimal can't be overridden with packages on Darwin, due to a missing framework.
|
||||
# Instead of modifying stdenv, we take the easy way out, since most people on Darwin will
|
||||
# just be hacking on the Nixpkgs manual (which also uses make-options-doc).
|
||||
python = if pkgs.stdenv.isDarwin then pkgs.python3 else pkgs.python3Minimal;
|
||||
self = (python.override {
|
||||
inherit self;
|
||||
includeSiteCustomize = true;
|
||||
});
|
||||
in self.withPackages (p:
|
||||
let
|
||||
# TODO add our own small test suite when rendering is split out into a new tool
|
||||
markdown-it-py = p.markdown-it-py.override {
|
||||
disableTests = true;
|
||||
};
|
||||
mdit-py-plugins = p.mdit-py-plugins.override {
|
||||
inherit markdown-it-py;
|
||||
disableTests = true;
|
||||
};
|
||||
in [
|
||||
markdown-it-py
|
||||
mdit-py-plugins
|
||||
]))
|
||||
pkgs.nixos-render-docs
|
||||
];
|
||||
} ''
|
||||
python ${./optionsToDocbook.py} \
|
||||
nixos-render-docs options docbook \
|
||||
--manpage-urls ${pkgs.path + "/doc/manpage-urls.json"} \
|
||||
--revision ${lib.escapeShellArg revision} \
|
||||
--document-type ${lib.escapeShellArg documentType} \
|
||||
--varlist-id ${lib.escapeShellArg variablelistId} \
|
||||
--id-prefix ${lib.escapeShellArg optionIdPrefix} \
|
||||
${lib.optionalString markdownByDefault "--markdown-by-default"} \
|
||||
${optionsJSON}/share/doc/nixos/options.json \
|
||||
> options.xml
|
||||
options.xml
|
||||
|
||||
if grep /nixpkgs/nixos/modules options.xml; then
|
||||
echo "The manual appears to depend on the location of Nixpkgs, which is bad"
|
||||
|
|
|
@ -1,343 +0,0 @@
|
|||
import collections
|
||||
import json
|
||||
import os
|
||||
import sys
|
||||
from typing import Any, Dict, List
|
||||
from collections.abc import MutableMapping, Sequence
|
||||
import inspect
|
||||
|
||||
# for MD conversion
|
||||
import markdown_it
|
||||
import markdown_it.renderer
|
||||
from markdown_it.token import Token
|
||||
from markdown_it.utils import OptionsDict
|
||||
from mdit_py_plugins.container import container_plugin
|
||||
from mdit_py_plugins.deflist import deflist_plugin
|
||||
from mdit_py_plugins.myst_role import myst_role_plugin
|
||||
from xml.sax.saxutils import escape, quoteattr
|
||||
|
||||
manpage_urls = json.load(open(os.getenv('MANPAGE_URLS')))
|
||||
|
||||
class Renderer(markdown_it.renderer.RendererProtocol):
|
||||
__output__ = "docbook"
|
||||
def __init__(self, parser=None):
|
||||
self.rules = {
|
||||
k: v
|
||||
for k, v in inspect.getmembers(self, predicate=inspect.ismethod)
|
||||
if not (k.startswith("render") or k.startswith("_"))
|
||||
} | {
|
||||
"container_{.note}_open": self._note_open,
|
||||
"container_{.note}_close": self._note_close,
|
||||
"container_{.important}_open": self._important_open,
|
||||
"container_{.important}_close": self._important_close,
|
||||
"container_{.warning}_open": self._warning_open,
|
||||
"container_{.warning}_close": self._warning_close,
|
||||
}
|
||||
def render(self, tokens: Sequence[Token], options: OptionsDict, env: MutableMapping) -> str:
|
||||
assert '-link-tag-stack' not in env
|
||||
env['-link-tag-stack'] = []
|
||||
assert '-deflist-stack' not in env
|
||||
env['-deflist-stack'] = []
|
||||
def do_one(i, token):
|
||||
if token.type == "inline":
|
||||
assert token.children is not None
|
||||
return self.renderInline(token.children, options, env)
|
||||
elif token.type in self.rules:
|
||||
return self.rules[token.type](tokens[i], tokens, i, options, env)
|
||||
else:
|
||||
raise NotImplementedError("md token not supported yet", token)
|
||||
return "".join(map(lambda arg: do_one(*arg), enumerate(tokens)))
|
||||
def renderInline(self, tokens: Sequence[Token], options: OptionsDict, env: MutableMapping) -> str:
|
||||
# HACK to support docbook links and xrefs. link handling is only necessary because the docbook
|
||||
# manpage stylesheet converts - in urls to a mathematical minus, which may be somewhat incorrect.
|
||||
for i, token in enumerate(tokens):
|
||||
if token.type != 'link_open':
|
||||
continue
|
||||
token.tag = 'link'
|
||||
# turn [](#foo) into xrefs
|
||||
if token.attrs['href'][0:1] == '#' and tokens[i + 1].type == 'link_close':
|
||||
token.tag = "xref"
|
||||
# turn <x> into links without contents
|
||||
if tokens[i + 1].type == 'text' and tokens[i + 1].content == token.attrs['href']:
|
||||
tokens[i + 1].content = ''
|
||||
|
||||
def do_one(i, token):
|
||||
if token.type in self.rules:
|
||||
return self.rules[token.type](tokens[i], tokens, i, options, env)
|
||||
else:
|
||||
raise NotImplementedError("md node not supported yet", token)
|
||||
return "".join(map(lambda arg: do_one(*arg), enumerate(tokens)))
|
||||
|
||||
def text(self, token, tokens, i, options, env):
|
||||
return escape(token.content)
|
||||
def paragraph_open(self, token, tokens, i, options, env):
|
||||
return "<para>"
|
||||
def paragraph_close(self, token, tokens, i, options, env):
|
||||
return "</para>"
|
||||
def hardbreak(self, token, tokens, i, options, env):
|
||||
return "<literallayout>\n</literallayout>"
|
||||
def softbreak(self, token, tokens, i, options, env):
|
||||
# should check options.breaks() and emit hard break if so
|
||||
return "\n"
|
||||
def code_inline(self, token, tokens, i, options, env):
|
||||
return f"<literal>{escape(token.content)}</literal>"
|
||||
def code_block(self, token, tokens, i, options, env):
|
||||
return f"<programlisting>{escape(token.content)}</programlisting>"
|
||||
def link_open(self, token, tokens, i, options, env):
|
||||
env['-link-tag-stack'].append(token.tag)
|
||||
(attr, start) = ('linkend', 1) if token.attrs['href'][0] == '#' else ('xlink:href', 0)
|
||||
return f"<{token.tag} {attr}={quoteattr(token.attrs['href'][start:])}>"
|
||||
def link_close(self, token, tokens, i, options, env):
|
||||
return f"</{env['-link-tag-stack'].pop()}>"
|
||||
def list_item_open(self, token, tokens, i, options, env):
|
||||
return "<listitem>"
|
||||
def list_item_close(self, token, tokens, i, options, env):
|
||||
return "</listitem>\n"
|
||||
# HACK open and close para for docbook change size. remove soon.
|
||||
def bullet_list_open(self, token, tokens, i, options, env):
|
||||
return "<para><itemizedlist>\n"
|
||||
def bullet_list_close(self, token, tokens, i, options, env):
|
||||
return "\n</itemizedlist></para>"
|
||||
def em_open(self, token, tokens, i, options, env):
|
||||
return "<emphasis>"
|
||||
def em_close(self, token, tokens, i, options, env):
|
||||
return "</emphasis>"
|
||||
def strong_open(self, token, tokens, i, options, env):
|
||||
return "<emphasis role=\"strong\">"
|
||||
def strong_close(self, token, tokens, i, options, env):
|
||||
return "</emphasis>"
|
||||
def fence(self, token, tokens, i, options, env):
|
||||
info = f" language={quoteattr(token.info)}" if token.info != "" else ""
|
||||
return f"<programlisting{info}>{escape(token.content)}</programlisting>"
|
||||
def blockquote_open(self, token, tokens, i, options, env):
|
||||
return "<para><blockquote>"
|
||||
def blockquote_close(self, token, tokens, i, options, env):
|
||||
return "</blockquote></para>"
|
||||
def _note_open(self, token, tokens, i, options, env):
|
||||
return "<para><note>"
|
||||
def _note_close(self, token, tokens, i, options, env):
|
||||
return "</note></para>"
|
||||
def _important_open(self, token, tokens, i, options, env):
|
||||
return "<para><important>"
|
||||
def _important_close(self, token, tokens, i, options, env):
|
||||
return "</important></para>"
|
||||
def _warning_open(self, token, tokens, i, options, env):
|
||||
return "<para><warning>"
|
||||
def _warning_close(self, token, tokens, i, options, env):
|
||||
return "</warning></para>"
|
||||
# markdown-it emits tokens based on the html syntax tree, but docbook is
|
||||
# slightly different. html has <dl>{<dt/>{<dd/>}}</dl>,
|
||||
# docbook has <variablelist>{<varlistentry><term/><listitem/></varlistentry>}<variablelist>
|
||||
# we have to reject multiple definitions for the same term for time being.
|
||||
def dl_open(self, token, tokens, i, options, env):
|
||||
env['-deflist-stack'].append({})
|
||||
return "<para><variablelist>"
|
||||
def dl_close(self, token, tokens, i, options, env):
|
||||
env['-deflist-stack'].pop()
|
||||
return "</variablelist></para>"
|
||||
def dt_open(self, token, tokens, i, options, env):
|
||||
env['-deflist-stack'][-1]['has-dd'] = False
|
||||
return "<varlistentry><term>"
|
||||
def dt_close(self, token, tokens, i, options, env):
|
||||
return "</term>"
|
||||
def dd_open(self, token, tokens, i, options, env):
|
||||
if env['-deflist-stack'][-1]['has-dd']:
|
||||
raise Exception("multiple definitions per term not supported")
|
||||
env['-deflist-stack'][-1]['has-dd'] = True
|
||||
return "<listitem>"
|
||||
def dd_close(self, token, tokens, i, options, env):
|
||||
return "</listitem></varlistentry>"
|
||||
def myst_role(self, token, tokens, i, options, env):
|
||||
if token.meta['name'] == 'command':
|
||||
return f"<command>{escape(token.content)}</command>"
|
||||
if token.meta['name'] == 'file':
|
||||
return f"<filename>{escape(token.content)}</filename>"
|
||||
if token.meta['name'] == 'var':
|
||||
return f"<varname>{escape(token.content)}</varname>"
|
||||
if token.meta['name'] == 'env':
|
||||
return f"<envar>{escape(token.content)}</envar>"
|
||||
if token.meta['name'] == 'option':
|
||||
return f"<option>{escape(token.content)}</option>"
|
||||
if token.meta['name'] == 'manpage':
|
||||
[page, section] = [ s.strip() for s in token.content.rsplit('(', 1) ]
|
||||
section = section[:-1]
|
||||
man = f"{page}({section})"
|
||||
title = f"<refentrytitle>{escape(page)}</refentrytitle>"
|
||||
vol = f"<manvolnum>{escape(section)}</manvolnum>"
|
||||
ref = f"<citerefentry>{title}{vol}</citerefentry>"
|
||||
if man in manpage_urls:
|
||||
return f"<link xlink:href={quoteattr(manpage_urls[man])}>{ref}</link>"
|
||||
else:
|
||||
return ref
|
||||
raise NotImplementedError("md node not supported yet", token)
|
||||
|
||||
md = (
|
||||
markdown_it.MarkdownIt(renderer_cls=Renderer)
|
||||
# TODO maybe fork the plugin and have only a single rule for all?
|
||||
.use(container_plugin, name="{.note}")
|
||||
.use(container_plugin, name="{.important}")
|
||||
.use(container_plugin, name="{.warning}")
|
||||
.use(deflist_plugin)
|
||||
.use(myst_role_plugin)
|
||||
)
|
||||
|
||||
# converts in-place!
|
||||
def convertMD(options: Dict[str, Any]) -> str:
|
||||
def optionIs(option: Dict[str, Any], key: str, typ: str) -> bool:
|
||||
if key not in option: return False
|
||||
if type(option[key]) != dict: return False
|
||||
if '_type' not in option[key]: return False
|
||||
return option[key]['_type'] == typ
|
||||
|
||||
def convertCode(name: str, option: Dict[str, Any], key: str):
|
||||
if optionIs(option, key, 'literalMD'):
|
||||
option[key] = md.render(f"*{key.capitalize()}:*\n{option[key]['text']}")
|
||||
elif optionIs(option, key, 'literalExpression'):
|
||||
code = option[key]['text']
|
||||
# for multi-line code blocks we only have to count ` runs at the beginning
|
||||
# of a line, but this is much easier.
|
||||
multiline = '\n' in code
|
||||
longest, current = (0, 0)
|
||||
for c in code:
|
||||
current = current + 1 if c == '`' else 0
|
||||
longest = max(current, longest)
|
||||
# inline literals need a space to separate ticks from content, code blocks
|
||||
# need newlines. inline literals need one extra tick, code blocks need three.
|
||||
ticks, sep = ('`' * (longest + (3 if multiline else 1)), '\n' if multiline else ' ')
|
||||
code = f"{ticks}{sep}{code}{sep}{ticks}"
|
||||
option[key] = md.render(f"*{key.capitalize()}:*\n{code}")
|
||||
elif optionIs(option, key, 'literalDocBook'):
|
||||
option[key] = f"<para><emphasis>{key.capitalize()}:</emphasis> {option[key]['text']}</para>"
|
||||
elif key in option:
|
||||
raise Exception(f"{name} {key} has unrecognized type", option[key])
|
||||
|
||||
for (name, option) in options.items():
|
||||
try:
|
||||
if optionIs(option, 'description', 'mdDoc'):
|
||||
option['description'] = md.render(option['description']['text'])
|
||||
elif markdownByDefault:
|
||||
option['description'] = md.render(option['description'])
|
||||
else:
|
||||
option['description'] = ("<nixos:option-description><para>" +
|
||||
option['description'] +
|
||||
"</para></nixos:option-description>")
|
||||
|
||||
convertCode(name, option, 'example')
|
||||
convertCode(name, option, 'default')
|
||||
|
||||
if 'relatedPackages' in option:
|
||||
option['relatedPackages'] = md.render(option['relatedPackages'])
|
||||
except Exception as e:
|
||||
raise Exception(f"Failed to render option {name}") from e
|
||||
|
||||
return options
|
||||
|
||||
id_translate_table = {
|
||||
ord('*'): ord('_'),
|
||||
ord('<'): ord('_'),
|
||||
ord(' '): ord('_'),
|
||||
ord('>'): ord('_'),
|
||||
ord('['): ord('_'),
|
||||
ord(']'): ord('_'),
|
||||
ord(':'): ord('_'),
|
||||
ord('"'): ord('_'),
|
||||
}
|
||||
|
||||
def need_env(n):
|
||||
if n not in os.environ:
|
||||
raise RuntimeError("required environment variable not set", n)
|
||||
return os.environ[n]
|
||||
|
||||
OTD_REVISION = need_env('OTD_REVISION')
|
||||
OTD_DOCUMENT_TYPE = need_env('OTD_DOCUMENT_TYPE')
|
||||
OTD_VARIABLE_LIST_ID = need_env('OTD_VARIABLE_LIST_ID')
|
||||
OTD_OPTION_ID_PREFIX = need_env('OTD_OPTION_ID_PREFIX')
|
||||
|
||||
def print_decl_def(header, locs):
|
||||
print(f"""<para><emphasis>{header}:</emphasis></para>""")
|
||||
print(f"""<simplelist>""")
|
||||
for loc in locs:
|
||||
# locations can be either plain strings (specific to nixpkgs), or attrsets
|
||||
# { name = "foo/bar.nix"; url = "https://github.com/....."; }
|
||||
if isinstance(loc, str):
|
||||
# Hyperlink the filename either to the NixOS github
|
||||
# repository (if it’s a module and we have a revision number),
|
||||
# or to the local filesystem.
|
||||
if not loc.startswith('/'):
|
||||
if OTD_REVISION == 'local':
|
||||
href = f"https://github.com/NixOS/nixpkgs/blob/master/{loc}"
|
||||
else:
|
||||
href = f"https://github.com/NixOS/nixpkgs/blob/{OTD_REVISION}/{loc}"
|
||||
else:
|
||||
href = f"file://{loc}"
|
||||
# Print the filename and make it user-friendly by replacing the
|
||||
# /nix/store/<hash> prefix by the default location of nixos
|
||||
# sources.
|
||||
if not loc.startswith('/'):
|
||||
name = f"<nixpkgs/{loc}>"
|
||||
elif loc.contains('nixops') and loc.contains('/nix/'):
|
||||
name = f"<nixops/{loc[loc.find('/nix/') + 5:]}>"
|
||||
else:
|
||||
name = loc
|
||||
print(f"""<member><filename xlink:href={quoteattr(href)}>""")
|
||||
print(escape(name))
|
||||
print(f"""</filename></member>""")
|
||||
else:
|
||||
href = f" xlink:href={quoteattr(loc['url'])}" if 'url' in loc else ""
|
||||
print(f"""<member><filename{href}>{escape(loc['name'])}</filename></member>""")
|
||||
print(f"""</simplelist>""")
|
||||
|
||||
markdownByDefault = False
|
||||
optOffset = 0
|
||||
for arg in sys.argv[1:]:
|
||||
if arg == "--markdown-by-default":
|
||||
optOffset += 1
|
||||
markdownByDefault = True
|
||||
|
||||
options = convertMD(json.load(open(sys.argv[1 + optOffset], 'r')))
|
||||
|
||||
keys = list(options.keys())
|
||||
keys.sort(key=lambda opt: [ (0 if p.startswith("enable") else 1 if p.startswith("package") else 2, p)
|
||||
for p in options[opt]['loc'] ])
|
||||
|
||||
print(f"""<?xml version="1.0" encoding="UTF-8"?>""")
|
||||
if OTD_DOCUMENT_TYPE == 'appendix':
|
||||
print("""<appendix xmlns="http://docbook.org/ns/docbook" xml:id="appendix-configuration-options">""")
|
||||
print(""" <title>Configuration Options</title>""")
|
||||
print(f"""<variablelist xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
xmlns:nixos="tag:nixos.org"
|
||||
xmlns="http://docbook.org/ns/docbook"
|
||||
xml:id="{OTD_VARIABLE_LIST_ID}">""")
|
||||
|
||||
for name in keys:
|
||||
opt = options[name]
|
||||
id = OTD_OPTION_ID_PREFIX + name.translate(id_translate_table)
|
||||
print(f"""<varlistentry>""")
|
||||
# NOTE adding extra spaces here introduces spaces into xref link expansions
|
||||
print(f"""<term xlink:href={quoteattr("#" + id)} xml:id={quoteattr(id)}>""", end='')
|
||||
print(f"""<option>{escape(name)}</option>""", end='')
|
||||
print(f"""</term>""")
|
||||
print(f"""<listitem>""")
|
||||
print(opt['description'])
|
||||
if typ := opt.get('type'):
|
||||
ro = " <emphasis>(read only)</emphasis>" if opt.get('readOnly', False) else ""
|
||||
print(f"""<para><emphasis>Type:</emphasis> {escape(typ)}{ro}</para>""")
|
||||
if default := opt.get('default'):
|
||||
print(default)
|
||||
if example := opt.get('example'):
|
||||
print(example)
|
||||
if related := opt.get('relatedPackages'):
|
||||
print(f"""<para>""")
|
||||
print(f""" <emphasis>Related packages:</emphasis>""")
|
||||
print(f"""</para>""")
|
||||
print(related)
|
||||
if decl := opt.get('declarations'):
|
||||
print_decl_def("Declared by", decl)
|
||||
if defs := opt.get('definitions'):
|
||||
print_decl_def("Defined by", defs)
|
||||
print(f"""</listitem>""")
|
||||
print(f"""</varlistentry>""")
|
||||
|
||||
print("""</variablelist>""")
|
||||
if OTD_DOCUMENT_TYPE == 'appendix':
|
||||
print("""</appendix>""")
|
|
@ -12,9 +12,7 @@ let
|
|||
};
|
||||
|
||||
|
||||
vlans = map (m: (
|
||||
m.virtualisation.vlans ++
|
||||
(lib.mapAttrsToList (_: v: v.vlan) m.virtualisation.interfaces))) (lib.attrValues config.nodes);
|
||||
vlans = map (m: m.virtualisation.vlans) (lib.attrValues config.nodes);
|
||||
vms = map (m: m.system.build.vm) (lib.attrValues config.nodes);
|
||||
|
||||
nodeHostNames =
|
||||
|
|
|
@ -18,40 +18,24 @@ let
|
|||
|
||||
networkModule = { config, nodes, pkgs, ... }:
|
||||
let
|
||||
qemu-common = import ../qemu-common.nix { inherit lib pkgs; };
|
||||
|
||||
# Convert legacy VLANs to named interfaces and merge with explicit interfaces.
|
||||
vlansNumbered = forEach (zipLists config.virtualisation.vlans (range 1 255)) (v: {
|
||||
name = "eth${toString v.snd}";
|
||||
vlan = v.fst;
|
||||
assignIP = true;
|
||||
});
|
||||
explicitInterfaces = lib.mapAttrsToList (n: v: v // { name = n; }) config.virtualisation.interfaces;
|
||||
interfaces = vlansNumbered ++ explicitInterfaces;
|
||||
interfacesNumbered = zipLists interfaces (range 1 255);
|
||||
|
||||
# Automatically assign IP addresses to requested interfaces.
|
||||
assignIPs = lib.filter (i: i.assignIP) interfaces;
|
||||
ipInterfaces = forEach assignIPs (i:
|
||||
nameValuePair i.name { ipv4.addresses =
|
||||
[ { address = "192.168.${toString i.vlan}.${toString config.virtualisation.test.nodeNumber}";
|
||||
interfacesNumbered = zipLists config.virtualisation.vlans (range 1 255);
|
||||
interfaces = forEach interfacesNumbered ({ fst, snd }:
|
||||
nameValuePair "eth${toString snd}" {
|
||||
ipv4.addresses =
|
||||
[{
|
||||
address = "192.168.${toString fst}.${toString config.virtualisation.test.nodeNumber}";
|
||||
prefixLength = 24;
|
||||
}];
|
||||
});
|
||||
|
||||
qemuOptions = lib.flatten (forEach interfacesNumbered ({ fst, snd }:
|
||||
qemu-common.qemuNICFlags snd fst.vlan config.virtualisation.test.nodeNumber));
|
||||
udevRules = forEach interfacesNumbered ({ fst, snd }:
|
||||
"SUBSYSTEM==\"net\",ACTION==\"add\",ATTR{address}==\"${qemu-common.qemuNicMac fst.vlan config.virtualisation.test.nodeNumber}\",NAME=\"${fst.name}\"");
|
||||
|
||||
networkConfig =
|
||||
{
|
||||
networking.hostName = mkDefault config.virtualisation.test.nodeName;
|
||||
|
||||
networking.interfaces = listToAttrs ipInterfaces;
|
||||
networking.interfaces = listToAttrs interfaces;
|
||||
|
||||
networking.primaryIPAddress =
|
||||
optionalString (ipInterfaces != [ ]) (head (head ipInterfaces).value.ipv4.addresses).address;
|
||||
optionalString (interfaces != [ ]) (head (head interfaces).value.ipv4.addresses).address;
|
||||
|
||||
# Put the IP addresses of all VMs in this machine's
|
||||
# /etc/hosts file. If a machine has multiple
|
||||
|
@ -67,13 +51,16 @@ let
|
|||
"${config.networking.hostName}.${config.networking.domain} " +
|
||||
"${config.networking.hostName}\n"));
|
||||
|
||||
virtualisation.qemu.options = qemuOptions;
|
||||
boot.initrd.services.udev.rules = concatMapStrings (x: x + "\n") udevRules;
|
||||
virtualisation.qemu.options =
|
||||
let qemu-common = import ../qemu-common.nix { inherit lib pkgs; };
|
||||
in
|
||||
flip concatMap interfacesNumbered
|
||||
({ fst, snd }: qemu-common.qemuNICFlags snd fst config.virtualisation.test.nodeNumber);
|
||||
};
|
||||
|
||||
in
|
||||
{
|
||||
key = "network-interfaces";
|
||||
key = "ip-address";
|
||||
config = networkConfig // {
|
||||
# Expose the networkConfig items for tests like nixops
|
||||
# that need to recreate the network config.
|
||||
|
|
|
@@ -90,7 +90,7 @@ let
        only has an effect if {option}`uid` is
        {option}`null`, in which case it determines whether
        the user's UID is allocated in the range for system users
        (below 500) or in the range for normal users (starting at
        (below 1000) or in the range for normal users (starting at
        1000).
        Exactly one of `isNormalUser` and
        `isSystemUser` must be true.

@@ -677,7 +677,7 @@ in {
        {
          assertion = let
            xor = a: b: a && !b || b && !a;
            isEffectivelySystemUser = user.isSystemUser || (user.uid != null && user.uid < 500);
            isEffectivelySystemUser = user.isSystemUser || (user.uid != null && user.uid < 1000);
          in xor isEffectivelySystemUser user.isNormalUser;
          message = ''
            Exactly one of users.users.${user.name}.isSystemUser and users.users.${user.name}.isNormalUser must be set.
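A hedged example of the two user kinds this assertion distinguishes (the user names are placeholders):

```nix
{
  # Normal interactive user: UID allocated at or above 1000.
  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };
  # Service account: UID allocated below 1000.
  users.users.mydaemon = {
    isSystemUser = true;
    group = "mydaemon";
  };
  users.groups.mydaemon = { };
}
```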
@ -66,7 +66,7 @@ in
|
|||
|
||||
meta = {
|
||||
maintainers = with lib.maintainers; [ ericsagnes ];
|
||||
doc = ./default.xml;
|
||||
doc = ./default.md;
|
||||
};
|
||||
|
||||
}
|
||||
|
|
|
@ -1,275 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-input-methods">
|
||||
<title>Input Methods</title>
|
||||
<para>
|
||||
Input methods are an operating system component that allows any
|
||||
data, such as keyboard strokes or mouse movements, to be received as
|
||||
input. In this way users can enter characters and symbols not found
|
||||
on their input devices. Using an input method is obligatory for any
|
||||
language that has more graphemes than there are keys on the
|
||||
keyboard.
|
||||
</para>
|
||||
<para>
|
||||
The following input methods are available in NixOS:
|
||||
</para>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>
|
||||
IBus: The intelligent input bus.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Fcitx: A customizable lightweight input method.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Nabi: A Korean input method based on XIM.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Uim: The universal input method, is a library with a XIM bridge.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Hime: An extremely easy-to-use input method framework.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Kime: Korean IME
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<section xml:id="module-services-input-methods-ibus">
|
||||
<title>IBus</title>
|
||||
<para>
|
||||
IBus is an Intelligent Input Bus. It provides full featured and
|
||||
user friendly input method user interface.
|
||||
</para>
|
||||
<para>
|
||||
The following snippet can be used to configure IBus:
|
||||
</para>
|
||||
<programlisting>
|
||||
i18n.inputMethod = {
|
||||
enabled = "ibus";
|
||||
ibus.engines = with pkgs.ibus-engines; [ anthy hangul mozc ];
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
<literal>i18n.inputMethod.ibus.engines</literal> is optional and
|
||||
can be used to add extra IBus engines.
|
||||
</para>
|
||||
<para>
|
||||
Available extra IBus engines are:
|
||||
</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
Anthy (<literal>ibus-engines.anthy</literal>): Anthy is a
|
||||
system for Japanese input method. It converts Hiragana text to
|
||||
Kana Kanji mixed text.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Hangul (<literal>ibus-engines.hangul</literal>): Korean input
|
||||
method.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
m17n (<literal>ibus-engines.m17n</literal>): m17n is an input
|
||||
method that uses input methods and corresponding icons in the
|
||||
m17n database.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
mozc (<literal>ibus-engines.mozc</literal>): A Japanese input
|
||||
method from Google.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Table (<literal>ibus-engines.table</literal>): An input method
|
||||
that load tables of input methods.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
table-others (<literal>ibus-engines.table-others</literal>):
|
||||
Various table-based input methods. To use this, and any other
|
||||
table-based input methods, it must appear in the list of
|
||||
engines along with <literal>table</literal>. For example:
|
||||
</para>
|
||||
<programlisting>
|
||||
ibus.engines = with pkgs.ibus-engines; [ table table-others ];
|
||||
</programlisting>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>
|
||||
To use any input method, the package must be added in the
|
||||
configuration, as shown above, and also (after running
|
||||
<literal>nixos-rebuild</literal>) the input method must be added
|
||||
from IBus’ preference dialog.
|
||||
</para>
|
||||
<section xml:id="module-services-input-methods-troubleshooting">
|
||||
<title>Troubleshooting</title>
|
||||
<para>
|
||||
If IBus works in some applications but not others, a likely
|
||||
cause of this is that IBus is depending on a different version
|
||||
of <literal>glib</literal> to what the applications are
|
||||
depending on. This can be checked by running
|
||||
<literal>nix-store -q --requisites <path> | grep glib</literal>,
|
||||
where <literal><path></literal> is the path of either IBus
|
||||
or an application in the Nix store. The <literal>glib</literal>
|
||||
packages must match exactly. If they do not, uninstalling and
|
||||
reinstalling the application is a likely fix.
|
||||
</para>
|
||||
</section>
|
||||
</section>
|
||||
<section xml:id="module-services-input-methods-fcitx">
|
||||
<title>Fcitx</title>
|
||||
<para>
|
||||
Fcitx is an input method framework with extension support. It has
|
||||
three built-in Input Method Engine, Pinyin, QuWei and Table-based
|
||||
input methods.
|
||||
</para>
|
||||
<para>
|
||||
The following snippet can be used to configure Fcitx:
|
||||
</para>
|
||||
<programlisting>
|
||||
i18n.inputMethod = {
|
||||
enabled = "fcitx";
|
||||
fcitx.engines = with pkgs.fcitx-engines; [ mozc hangul m17n ];
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
<literal>i18n.inputMethod.fcitx.engines</literal> is optional and
|
||||
can be used to add extra Fcitx engines.
|
||||
</para>
|
||||
<para>
|
||||
Available extra Fcitx engines are:
|
||||
</para>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>
|
||||
Anthy (<literal>fcitx-engines.anthy</literal>): Anthy is a
|
||||
system for Japanese input method. It converts Hiragana text to
|
||||
Kana Kanji mixed text.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Chewing (<literal>fcitx-engines.chewing</literal>): Chewing is
|
||||
an intelligent Zhuyin input method. It is one of the most
|
||||
popular input methods among Traditional Chinese Unix users.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Hangul (<literal>fcitx-engines.hangul</literal>): Korean input
|
||||
method.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Unikey (<literal>fcitx-engines.unikey</literal>): Vietnamese
|
||||
input method.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
m17n (<literal>fcitx-engines.m17n</literal>): m17n is an input
|
||||
method that uses input methods and corresponding icons in the
|
||||
m17n database.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
mozc (<literal>fcitx-engines.mozc</literal>): A Japanese input
|
||||
method from Google.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
table-others (<literal>fcitx-engines.table-others</literal>):
|
||||
Various table-based input methods.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="module-services-input-methods-nabi">
|
||||
<title>Nabi</title>
|
||||
<para>
|
||||
Nabi is an easy to use Korean X input method. It allows you to
|
||||
enter phonetic Korean characters (hangul) and pictographic Korean
|
||||
characters (hanja).
|
||||
</para>
|
||||
<para>
|
||||
The following snippet can be used to configure Nabi:
|
||||
</para>
|
||||
<programlisting>
|
||||
i18n.inputMethod = {
|
||||
enabled = "nabi";
|
||||
};
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-input-methods-uim">
|
||||
<title>Uim</title>
|
||||
<para>
|
||||
Uim (short for <quote>universal input method</quote>) is a
|
||||
multilingual input method framework. Applications can use it
|
||||
through so-called bridges.
|
||||
</para>
|
||||
<para>
|
||||
The following snippet can be used to configure uim:
|
||||
</para>
|
||||
<programlisting>
|
||||
i18n.inputMethod = {
|
||||
enabled = "uim";
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
Note: The <xref linkend="opt-i18n.inputMethod.uim.toolbar" />
|
||||
option can be used to choose uim toolbar.
|
||||
</para>
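<para>
A minimal sketch of such a configuration; the toolbar value shown
here is only an assumed example, so consult the option's
documentation for the values it actually accepts:
</para>
<programlisting>
i18n.inputMethod = {
  enabled = "uim";
  uim.toolbar = "gtk";
};
</programlisting>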
|
||||
</section>
|
||||
<section xml:id="module-services-input-methods-hime">
|
||||
<title>Hime</title>
|
||||
<para>
|
||||
Hime is an extremely easy-to-use input method framework. It is
|
||||
lightweight, stable, powerful and supports many commonly used
|
||||
input methods, including Cangjie, Zhuyin, Dayi, Rank, Shrimp,
|
||||
Greek, Korean Pinyin, Latin Alphabet, etc…
|
||||
</para>
|
||||
<para>
|
||||
The following snippet can be used to configure Hime:
|
||||
</para>
|
||||
<programlisting>
|
||||
i18n.inputMethod = {
|
||||
enabled = "hime";
|
||||
};
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-input-methods-kime">
|
||||
<title>Kime</title>
|
||||
<para>
|
||||
Kime is a Korean IME. It is written in Rust and provides simple,
safe and fast Korean typing.
|
||||
</para>
|
||||
<para>
|
||||
The following snippet can be used to configure Kime:
|
||||
</para>
|
||||
<programlisting>
|
||||
i18n.inputMethod = {
|
||||
enabled = "kime";
|
||||
};
|
||||
</programlisting>
|
||||
</section>
|
||||
</chapter>
|
|
@@ -47,7 +47,7 @@ in
|
|||
doc = mkOption {
|
||||
type = docFile;
|
||||
internal = true;
|
||||
example = "./meta.chapter.xml";
|
||||
example = "./meta.chapter.md";
|
||||
description = lib.mdDoc ''
|
||||
Documentation prologue for the set of options of each module. This
|
||||
option should be defined at most once per module.
|
||||
|
|
|
@@ -33,7 +33,7 @@ in
|
|||
};
|
||||
|
||||
meta = {
|
||||
doc = ./default.xml;
|
||||
doc = ./default.md;
|
||||
maintainers = with lib.maintainers; [ vidbina ];
|
||||
};
|
||||
}
|
||||
|
|
|
@@ -1,70 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-programs-digitalbitbox">
|
||||
<title>Digital Bitbox</title>
|
||||
<para>
|
||||
Digital Bitbox is a hardware wallet and second-factor authenticator.
|
||||
</para>
|
||||
<para>
|
||||
The <literal>digitalbitbox</literal> programs module may be
|
||||
installed by setting <literal>programs.digitalbitbox</literal> to
|
||||
<literal>true</literal> in a manner similar to
|
||||
</para>
|
||||
<programlisting>
|
||||
programs.digitalbitbox.enable = true;
|
||||
</programlisting>
|
||||
<para>
|
||||
and bundles the <literal>digitalbitbox</literal> package (see
|
||||
<xref linkend="sec-digitalbitbox-package" />), which contains the
|
||||
<literal>dbb-app</literal> and <literal>dbb-cli</literal> binaries,
|
||||
along with the hardware module (see
|
||||
<xref linkend="sec-digitalbitbox-hardware-module" />) which sets up
|
||||
the necessary udev rules to access the device.
|
||||
</para>
|
||||
<para>
|
||||
Enabling the digitalbitbox module is pretty much the easiest way to
|
||||
get a Digital Bitbox device working on your system.
|
||||
</para>
|
||||
<para>
|
||||
For more information, see
|
||||
<link xlink:href="https://digitalbitbox.com/start_linux">https://digitalbitbox.com/start_linux</link>.
|
||||
</para>
|
||||
<section xml:id="sec-digitalbitbox-package">
|
||||
<title>Package</title>
|
||||
<para>
|
||||
The binaries, <literal>dbb-app</literal> (a GUI tool) and
|
||||
<literal>dbb-cli</literal> (a CLI tool), are available through the
|
||||
<literal>digitalbitbox</literal> package which could be installed
|
||||
as follows:
|
||||
</para>
|
||||
<programlisting>
|
||||
environment.systemPackages = [
|
||||
pkgs.digitalbitbox
|
||||
];
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="sec-digitalbitbox-hardware-module">
|
||||
<title>Hardware</title>
|
||||
<para>
|
||||
The digitalbitbox hardware package enables the udev rules for
|
||||
Digital Bitbox devices and may be installed as follows:
|
||||
</para>
|
||||
<programlisting>
|
||||
hardware.digitalbitbox.enable = true;
|
||||
</programlisting>
|
||||
<para>
|
||||
In order to alter the udev rules, one may provide different values
|
||||
for the <literal>udevRule51</literal> and
|
||||
<literal>udevRule52</literal> attributes by means of overriding as
|
||||
follows:
|
||||
</para>
|
||||
<programlisting>
|
||||
programs.digitalbitbox = {
|
||||
enable = true;
|
||||
package = pkgs.digitalbitbox.override {
|
||||
udevRule51 = "something else";
|
||||
};
|
||||
};
|
||||
</programlisting>
|
||||
</section>
|
||||
</chapter>
|
|
@@ -8,7 +8,7 @@ in
|
|||
{
|
||||
meta = {
|
||||
maintainers = pkgs.plotinus.meta.maintainers;
|
||||
doc = ./plotinus.xml;
|
||||
doc = ./plotinus.md;
|
||||
};
|
||||
|
||||
###### interface
|
||||
|
|
|
@@ -1,30 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-program-plotinus">
|
||||
<title>Plotinus</title>
|
||||
<para>
|
||||
<emphasis>Source:</emphasis>
|
||||
<filename>modules/programs/plotinus.nix</filename>
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Upstream documentation:</emphasis>
|
||||
<link xlink:href="https://github.com/p-e-w/plotinus">https://github.com/p-e-w/plotinus</link>
|
||||
</para>
|
||||
<para>
|
||||
Plotinus is a searchable command palette in every modern GTK
|
||||
application.
|
||||
</para>
|
||||
<para>
|
||||
When in a GTK 3 application and Plotinus is enabled, you can press
|
||||
<literal>Ctrl+Shift+P</literal> to open the command palette. The
|
||||
command palette provides a searchable list of all menu items in
|
||||
the application.
|
||||
</para>
|
||||
<para>
|
||||
To enable Plotinus, add the following to your
|
||||
<filename>configuration.nix</filename>:
|
||||
</para>
|
||||
<programlisting>
|
||||
programs.plotinus.enable = true;
|
||||
</programlisting>
|
||||
</chapter>
|
|
@@ -142,5 +142,5 @@ in
|
|||
|
||||
};
|
||||
|
||||
meta.doc = ./oh-my-zsh.xml;
|
||||
meta.doc = ./oh-my-zsh.md;
|
||||
}
|
||||
|
|
|
@@ -1,154 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-programs-zsh-ohmyzsh">
|
||||
<title>Oh my ZSH</title>
|
||||
<para>
|
||||
<link xlink:href="https://ohmyz.sh/"><literal>oh-my-zsh</literal></link>
|
||||
is a framework to manage your
|
||||
<link xlink:href="https://www.zsh.org/">ZSH</link> configuration
|
||||
including completion scripts for several CLI tools or custom prompt
|
||||
themes.
|
||||
</para>
|
||||
<section xml:id="module-programs-oh-my-zsh-usage">
|
||||
<title>Basic usage</title>
|
||||
<para>
|
||||
The module uses the <literal>oh-my-zsh</literal> package with all
|
||||
available features. The initial setup using Nix expressions is
|
||||
fairly similar to the configuration format of
|
||||
<literal>oh-my-zsh</literal>.
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
programs.zsh.ohMyZsh = {
|
||||
enable = true;
|
||||
plugins = [ "git" "python" "man" ];
|
||||
theme = "agnoster";
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
For a detailed explanation of these arguments please refer to the
|
||||
<link xlink:href="https://github.com/robbyrussell/oh-my-zsh/wiki"><literal>oh-my-zsh</literal>
|
||||
docs</link>.
|
||||
</para>
|
||||
<para>
|
||||
The expression generates the needed configuration and writes it
|
||||
into your <literal>/etc/zshrc</literal>.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-programs-oh-my-zsh-additions">
|
||||
<title>Custom additions</title>
|
||||
<para>
|
||||
Sometimes third-party or custom scripts such as a modified theme
|
||||
may be needed. <literal>oh-my-zsh</literal> provides the
|
||||
<link xlink:href="https://github.com/robbyrussell/oh-my-zsh/wiki/Customization#overriding-internals"><literal>ZSH_CUSTOM</literal></link>
|
||||
environment variable for this which points to a directory with
|
||||
additional scripts.
|
||||
</para>
|
||||
<para>
|
||||
The module can do this as well:
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
programs.zsh.ohMyZsh.custom = "~/path/to/custom/scripts";
|
||||
}
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-programs-oh-my-zsh-environments">
|
||||
<title>Custom environments</title>
|
||||
<para>
|
||||
There are several extensions for <literal>oh-my-zsh</literal>
|
||||
packaged in <literal>nixpkgs</literal>. One of them is
|
||||
<link xlink:href="https://github.com/spwhitt/nix-zsh-completions">nix-zsh-completions</link>
|
||||
which bundles completion scripts and a plugin for
|
||||
<literal>oh-my-zsh</literal>.
|
||||
</para>
|
||||
<para>
|
||||
Rather than using a single mutable path for
|
||||
<literal>ZSH_CUSTOM</literal>, it’s also possible to generate this
|
||||
path from a list of Nix packages:
|
||||
</para>
|
||||
<programlisting>
|
||||
{ pkgs, ... }:
|
||||
{
|
||||
programs.zsh.ohMyZsh.customPkgs = [
|
||||
pkgs.nix-zsh-completions
|
||||
# and even more...
|
||||
];
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
Internally a single store path will be created using
|
||||
<literal>buildEnv</literal>. Please refer to the docs of
|
||||
<link xlink:href="https://nixos.org/nixpkgs/manual/#sec-building-environment"><literal>buildEnv</literal></link>
|
||||
for further reference.
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Please keep in mind that this is not compatible with
|
||||
<literal>programs.zsh.ohMyZsh.custom</literal> as it requires an
|
||||
immutable store path while <literal>custom</literal> shall remain
|
||||
mutable! An evaluation failure will be thrown if both
|
||||
<literal>custom</literal> and <literal>customPkgs</literal> are
|
||||
set.</emphasis>
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-programs-oh-my-zsh-packaging-customizations">
|
||||
<title>Package your own customizations</title>
|
||||
<para>
|
||||
If third-party customizations (e.g. new themes) are supposed to be
|
||||
added to <literal>oh-my-zsh</literal> there are several pitfalls
|
||||
to keep in mind:
|
||||
</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
To comply with the default structure of <literal>ZSH</literal>
|
||||
the entire output needs to be written to
|
||||
<literal>$out/share/zsh</literal>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Completion scripts are supposed to be stored at
|
||||
<literal>$out/share/zsh/site-functions</literal>. This
|
||||
directory is part of the
|
||||
<link xlink:href="http://zsh.sourceforge.net/Doc/Release/Functions.html"><literal>fpath</literal></link>
|
||||
and the package should be compatible with pure
|
||||
<literal>ZSH</literal> setups. The module will automatically
|
||||
link the contents of <literal>site-functions</literal> to the
|
||||
completions directory in the proper store path.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>plugins</literal> directory needs the structure
|
||||
<literal>pluginname/pluginname.plugin.zsh</literal> as
|
||||
structured in the
|
||||
<link xlink:href="https://github.com/robbyrussell/oh-my-zsh/tree/91b771914bc7c43dd7c7a43b586c5de2c225ceb7/plugins">upstream
|
||||
repo.</link>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>
|
||||
A derivation for <literal>oh-my-zsh</literal> may look like this:
|
||||
</para>
|
||||
<programlisting>
|
||||
{ stdenv, fetchFromGitHub }:
|
||||
|
||||
stdenv.mkDerivation rec {
|
||||
name = "exemplary-zsh-customization-${version}";
|
||||
version = "1.0.0";
|
||||
src = fetchFromGitHub {
|
||||
# path to the upstream repository
|
||||
};
|
||||
|
||||
dontBuild = true;
|
||||
installPhase = ''
|
||||
mkdir -p $out/share/zsh/site-functions
|
||||
cp {themes,plugins} $out/share/zsh
|
||||
cp completions $out/share/zsh/site-functions
|
||||
'';
|
||||
}
|
||||
</programlisting>
|
||||
</section>
|
||||
</chapter>
|
|
@@ -916,6 +916,6 @@ in {
|
|||
|
||||
meta = {
|
||||
maintainers = lib.teams.acme.members;
|
||||
doc = ./default.xml;
|
||||
doc = ./default.md;
|
||||
};
|
||||
}
|
||||
|
|
|
@@ -1,395 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-security-acme">
|
||||
<title>SSL/TLS Certificates with ACME</title>
|
||||
<para>
|
||||
NixOS supports automatic domain validation & certificate
|
||||
retrieval and renewal using the ACME protocol. Any provider can be
|
||||
used, but by default NixOS uses Let’s Encrypt. The alternative ACME
|
||||
client
|
||||
<link xlink:href="https://go-acme.github.io/lego/">lego</link> is
|
||||
used under the hood.
|
||||
</para>
|
||||
<para>
|
||||
Automatic cert validation and configuration for Apache and Nginx
|
||||
virtual hosts is included in NixOS, however if you would like to
|
||||
generate a wildcard cert or you are not using a web server you will
|
||||
have to configure DNS based validation.
|
||||
</para>
|
||||
<section xml:id="module-security-acme-prerequisites">
|
||||
<title>Prerequisites</title>
|
||||
<para>
|
||||
To use the ACME module, you must accept the provider’s terms of
|
||||
service by setting
|
||||
<xref linkend="opt-security.acme.acceptTerms" /> to
|
||||
<literal>true</literal>. The Let’s Encrypt ToS can be found
|
||||
<link xlink:href="https://letsencrypt.org/repository/">here</link>.
|
||||
</para>
|
||||
<para>
|
||||
You must also set an email address to be used when creating
|
||||
accounts with Let’s Encrypt. You can set this for all certs with
|
||||
<xref linkend="opt-security.acme.defaults.email" /> and/or on a
|
||||
per-cert basis with
|
||||
<xref linkend="opt-security.acme.certs._name_.email" />. This
|
||||
address is only used for registration and renewal reminders, and
|
||||
cannot be used to administer the certificates in any way.
|
||||
</para>
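<para>
Together, these two prerequisites can be expressed as follows (the
email address is a placeholder):
</para>
<programlisting>
security.acme.acceptTerms = true;
security.acme.defaults.email = "admin+acme@example.com";
</programlisting>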
|
||||
<para>
|
||||
Alternatively, you can use a different ACME server by changing the
|
||||
<xref linkend="opt-security.acme.defaults.server" /> option to a
|
||||
provider of your choosing, or just change the server for one cert
|
||||
with <xref linkend="opt-security.acme.certs._name_.server" />.
|
||||
</para>
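<para>
As a sketch, pointing all certificates at the Let's Encrypt staging
environment would look like the following; the directory URL is
given as commonly documented and should be verified against Let's
Encrypt's own documentation:
</para>
<programlisting>
security.acme.defaults.server = "https://acme-staging-v02.api.letsencrypt.org/directory";
</programlisting>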
|
||||
<para>
|
||||
You will need an HTTP server or DNS server for verification. For
|
||||
HTTP, the server must have a webroot defined that can serve
|
||||
<filename>.well-known/acme-challenge</filename>. This directory
|
||||
must be writeable by the user that will run the ACME client. For
|
||||
DNS, you must set up credentials with your provider/server for use
|
||||
with lego.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-nginx">
|
||||
<title>Using ACME certificates in Nginx</title>
|
||||
<para>
|
||||
NixOS supports fetching ACME certificates for you by setting
|
||||
<literal>enableACME = true;</literal> in a virtualHost config. We
|
||||
first create self-signed placeholder certificates in place of the
|
||||
real ACME certs. The placeholder certs are overwritten when the
|
||||
ACME certs arrive. For <literal>foo.example.com</literal> the
|
||||
config would look like this:
|
||||
</para>
|
||||
<programlisting>
|
||||
security.acme.acceptTerms = true;
|
||||
security.acme.defaults.email = "admin+acme@example.com";
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
virtualHosts = {
|
||||
"foo.example.com" = {
|
||||
forceSSL = true;
|
||||
enableACME = true;
|
||||
# All serverAliases will be added as extra domain names on the certificate.
|
||||
serverAliases = [ "bar.example.com" ];
|
||||
locations."/" = {
|
||||
root = "/var/www";
|
||||
};
|
||||
};
|
||||
|
||||
# We can also add a different vhost and reuse the same certificate
|
||||
# but we have to append extraDomainNames manually beforehand:
|
||||
# security.acme.certs."foo.example.com".extraDomainNames = [ "baz.example.com" ];
|
||||
"baz.example.com" = {
|
||||
forceSSL = true;
|
||||
useACMEHost = "foo.example.com";
|
||||
locations."/" = {
|
||||
root = "/var/www";
|
||||
};
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-httpd">
|
||||
<title>Using ACME certificates in Apache/httpd</title>
|
||||
<para>
|
||||
Using ACME certificates with Apache virtual hosts is identical to
|
||||
using them with Nginx. The attribute names are all the same, just
|
||||
replace <quote>nginx</quote> with <quote>httpd</quote> where
|
||||
appropriate.
|
||||
</para>
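<para>
A minimal sketch for Apache, mirroring the Nginx example above; the
host name and <literal>adminAddr</literal> are placeholders:
</para>
<programlisting>
security.acme.acceptTerms = true;
security.acme.defaults.email = "admin+acme@example.com";
services.httpd = {
  enable = true;
  adminAddr = "webmaster@example.com";
  virtualHosts."foo.example.com" = {
    forceSSL = true;
    enableACME = true;
  };
};
</programlisting>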
|
||||
</section>
|
||||
<section xml:id="module-security-acme-configuring">
|
||||
<title>Manual configuration of HTTP-01 validation</title>
|
||||
<para>
|
||||
First off you will need to set up a virtual host to serve the
|
||||
challenges. This example uses a vhost called
|
||||
<literal>certs.example.com</literal>, with the intent that you
|
||||
will generate certs for all your vhosts and redirect everyone to
|
||||
HTTPS.
|
||||
</para>
|
||||
<programlisting>
|
||||
security.acme.acceptTerms = true;
|
||||
security.acme.defaults.email = "admin+acme@example.com";
|
||||
|
||||
# /var/lib/acme/.challenges must be writable by the ACME user
|
||||
# and readable by the Nginx user. The easiest way to achieve
|
||||
# this is to add the Nginx user to the ACME group.
|
||||
users.users.nginx.extraGroups = [ "acme" ];
|
||||
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
virtualHosts = {
|
||||
"acmechallenge.example.com" = {
|
||||
# Catchall vhost, will redirect users to HTTPS for all vhosts
|
||||
serverAliases = [ "*.example.com" ];
|
||||
locations."/.well-known/acme-challenge" = {
|
||||
root = "/var/lib/acme/.challenges";
|
||||
};
|
||||
locations."/" = {
|
||||
return = "301 https://$host$request_uri";
|
||||
};
|
||||
};
|
||||
};
|
||||
}
|
||||
# Alternative config for Apache
|
||||
users.users.wwwrun.extraGroups = [ "acme" ];
|
||||
services.httpd = {
|
||||
enable = true;
|
||||
virtualHosts = {
|
||||
"acmechallenge.example.com" = {
|
||||
# Catchall vhost, will redirect users to HTTPS for all vhosts
|
||||
serverAliases = [ "*.example.com" ];
|
||||
# /var/lib/acme/.challenges must be writable by the ACME user and readable by the Apache user.
|
||||
# By default, this is the case.
|
||||
documentRoot = "/var/lib/acme/.challenges";
|
||||
extraConfig = ''
|
||||
RewriteEngine On
|
||||
RewriteCond %{HTTPS} off
|
||||
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge [NC]
|
||||
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301]
|
||||
'';
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
Now you need to configure ACME to generate a certificate.
|
||||
</para>
|
||||
<programlisting>
|
||||
security.acme.certs."foo.example.com" = {
|
||||
webroot = "/var/lib/acme/.challenges";
|
||||
email = "foo@example.com";
|
||||
# Ensure that the web server you use can read the generated certs
|
||||
# Take a look at the group option for the web server you choose.
|
||||
group = "nginx";
|
||||
# Since we have a wildcard vhost to handle port 80,
|
||||
# we can generate certs for anything!
|
||||
# Just make sure your DNS resolves them.
|
||||
extraDomainNames = [ "mail.example.com" ];
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
The private key <filename>key.pem</filename> and certificate
|
||||
<filename>fullchain.pem</filename> will be put into
|
||||
<filename>/var/lib/acme/foo.example.com</filename>.
|
||||
</para>
|
||||
<para>
|
||||
Refer to <xref linkend="ch-options" /> for all available
|
||||
configuration options for the
|
||||
<link linkend="opt-security.acme.certs">security.acme</link>
|
||||
module.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-config-dns">
|
||||
<title>Configuring ACME for DNS validation</title>
|
||||
<para>
|
||||
This is useful if you want to generate a wildcard certificate,
|
||||
since ACME servers will only hand out wildcard certs over DNS
|
||||
validation. There are a number of supported DNS providers and
|
||||
servers you can utilise, see the
|
||||
<link xlink:href="https://go-acme.github.io/lego/dns/">lego
|
||||
docs</link> for provider/server specific configuration values. For
|
||||
the sake of these docs, we will provide a fully self-hosted
|
||||
example using bind.
|
||||
</para>
|
||||
<programlisting>
|
||||
services.bind = {
|
||||
enable = true;
|
||||
extraConfig = ''
|
||||
include "/var/lib/secrets/dnskeys.conf";
|
||||
'';
|
||||
zones = [
|
||||
rec {
|
||||
name = "example.com";
|
||||
file = "/var/db/bind/${name}";
|
||||
master = true;
|
||||
extraConfig = "allow-update { key rfc2136key.example.com.; };";
|
||||
}
|
||||
];
|
||||
}
|
||||
|
||||
# Now we can configure ACME
|
||||
security.acme.acceptTerms = true;
|
||||
security.acme.defaults.email = "admin+acme@example.com";
|
||||
security.acme.certs."example.com" = {
|
||||
domain = "*.example.com";
|
||||
dnsProvider = "rfc2136";
|
||||
credentialsFile = "/var/lib/secrets/certs.secret";
|
||||
# We don't need to wait for propagation since this is a local DNS server
|
||||
dnsPropagationCheck = false;
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
The <filename>dnskeys.conf</filename> and
|
||||
<filename>certs.secret</filename> must be kept secure and thus you
|
||||
should not keep their contents in your Nix config. Instead,
|
||||
generate them one time with a systemd service:
|
||||
</para>
|
||||
<programlisting>
|
||||
systemd.services.dns-rfc2136-conf = {
|
||||
requiredBy = ["acme-example.com.service" "bind.service"];
|
||||
before = ["acme-example.com.service" "bind.service"];
|
||||
unitConfig = {
|
||||
ConditionPathExists = "!/var/lib/secrets/dnskeys.conf";
|
||||
};
|
||||
serviceConfig = {
|
||||
Type = "oneshot";
|
||||
UMask = 0077;
|
||||
};
|
||||
path = [ pkgs.bind ];
|
||||
script = ''
|
||||
mkdir -p /var/lib/secrets
|
||||
chmod 755 /var/lib/secrets
|
||||
tsig-keygen rfc2136key.example.com > /var/lib/secrets/dnskeys.conf
|
||||
chown named:root /var/lib/secrets/dnskeys.conf
|
||||
chmod 400 /var/lib/secrets/dnskeys.conf
|
||||
|
||||
# extract secret value from the dnskeys.conf
|
||||
while read x y; do if [ "$x" = "secret" ]; then secret="''${y:1:''${#y}-3}"; fi; done < /var/lib/secrets/dnskeys.conf
|
||||
|
||||
cat > /var/lib/secrets/certs.secret << EOF
|
||||
RFC2136_NAMESERVER='127.0.0.1:53'
|
||||
RFC2136_TSIG_ALGORITHM='hmac-sha256.'
|
||||
RFC2136_TSIG_KEY='rfc2136key.example.com'
|
||||
RFC2136_TSIG_SECRET='$secret'
|
||||
EOF
|
||||
chmod 400 /var/lib/secrets/certs.secret
|
||||
'';
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
Now you’re all set to generate certs! You should monitor the first
|
||||
invocation by running
|
||||
<literal>systemctl start acme-example.com.service & journalctl -fu acme-example.com.service</literal>
|
||||
and watching its log output.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-config-dns-with-vhosts">
|
||||
<title>Using DNS validation with web server virtual hosts</title>
|
||||
<para>
|
||||
It is possible to use DNS-01 validation with all certificates,
|
||||
including those automatically configured via the Nginx/Apache
|
||||
<link linkend="opt-services.nginx.virtualHosts._name_.enableACME"><literal>enableACME</literal></link>
|
||||
option. This configuration pattern is fully supported and part of
|
||||
the module’s test suite for Nginx + Apache.
|
||||
</para>
|
||||
<para>
|
||||
You must follow the guide above on configuring DNS-01 validation
|
||||
first, however instead of setting the options for one certificate
|
||||
(e.g.
|
||||
<xref linkend="opt-security.acme.certs._name_.dnsProvider" />) you
|
||||
will set them as defaults (e.g.
|
||||
<xref linkend="opt-security.acme.defaults.dnsProvider" />).
|
||||
</para>
|
||||
<programlisting>
|
||||
# Configure ACME appropriately
|
||||
security.acme.acceptTerms = true;
|
||||
security.acme.defaults.email = "admin+acme@example.com";
|
||||
security.acme.defaults = {
|
||||
dnsProvider = "rfc2136";
|
||||
credentialsFile = "/var/lib/secrets/certs.secret";
|
||||
# We don't need to wait for propagation since this is a local DNS server
|
||||
dnsPropagationCheck = false;
|
||||
};
|
||||
|
||||
# For each virtual host you would like to use DNS-01 validation with,
|
||||
# set acmeRoot = null
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
virtualHosts = {
|
||||
"foo.example.com" = {
|
||||
enableACME = true;
|
||||
acmeRoot = null;
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
And that’s it! Next time your configuration is rebuilt, or when
|
||||
you add a new virtualHost, it will be DNS-01 validated.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-root-owned">
|
||||
<title>Using ACME with services demanding root owned
|
||||
certificates</title>
|
||||
<para>
|
||||
Some services refuse to start if the configured certificate files
|
||||
are not owned by root. PostgreSQL and OpenSMTPD are examples of
|
||||
these. There is no way to change the user the ACME module uses (it
|
||||
will always be <literal>acme</literal>), however you can use
|
||||
systemd’s <literal>LoadCredential</literal> feature to resolve
|
||||
this elegantly. Below is an example configuration for OpenSMTPD,
|
||||
but this pattern can be applied to any service.
|
||||
</para>
|
||||
<programlisting>
|
||||
# Configure ACME however you like (DNS or HTTP validation), adding
|
||||
# the following configuration for the relevant certificate.
|
||||
# Note: You cannot use `systemctl reload` here as that would mean
|
||||
# the LoadCredential configuration below would be skipped and
|
||||
# the service would continue to use old certificates.
|
||||
security.acme.certs."mail.example.com".postRun = ''
|
||||
systemctl restart opensmtpd
|
||||
'';
|
||||
|
||||
# Now you must augment OpenSMTPD's systemd service to load
|
||||
# the certificate files.
|
||||
systemd.services.opensmtpd.requires = ["acme-finished-mail.example.com.target"];
|
||||
systemd.services.opensmtpd.serviceConfig.LoadCredential = let
|
||||
certDir = config.security.acme.certs."mail.example.com".directory;
|
||||
in [
|
||||
"cert.pem:${certDir}/cert.pem"
|
||||
"key.pem:${certDir}/key.pem"
|
||||
];
|
||||
|
||||
# Finally, configure OpenSMTPD to use these certs.
|
||||
services.opensmtpd = let
|
||||
credsDir = "/run/credentials/opensmtpd.service";
|
||||
in {
|
||||
enable = true;
|
||||
setSendmail = false;
|
||||
serverConfiguration = ''
|
||||
pki mail.example.com cert "${credsDir}/cert.pem"
|
||||
pki mail.example.com key "${credsDir}/key.pem"
|
||||
listen on localhost tls pki mail.example.com
|
||||
action act1 relay host smtp://127.0.0.1:10027
|
||||
match for local action act1
|
||||
'';
|
||||
};
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-regenerate">
|
||||
<title>Regenerating certificates</title>
|
||||
<para>
|
||||
Should you need to regenerate a particular certificate in a hurry,
|
||||
such as when a vulnerability is found in Let’s Encrypt, there is
|
||||
now a convenient mechanism for doing so. Running
|
||||
<literal>systemctl clean --what=state acme-example.com.service</literal>
|
||||
will remove all certificate files and the account data for the
|
||||
given domain, allowing you to then
|
||||
<literal>systemctl start acme-example.com.service</literal> to
|
||||
generate fresh ones.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-security-acme-fix-jws">
|
||||
<title>Fixing JWS Verification error</title>
|
||||
<para>
|
||||
It is possible that your account credentials file may become
|
||||
corrupt and need to be regenerated. In this scenario lego will
|
||||
produce the error <literal>JWS verification error</literal>. The
|
||||
solution is to simply delete the associated accounts file and
|
||||
re-run the affected service(s).
|
||||
</para>
|
||||
<programlisting>
|
||||
# Find the accounts folder for the certificate
|
||||
systemctl cat acme-example.com.service | grep -Po 'accounts/[^:]*'
|
||||
export accountdir="$(!!)"
|
||||
# Move this folder to some place else
|
||||
mv /var/lib/acme/.lego/$accountdir{,.bak}
|
||||
# Recreate the folder using systemd-tmpfiles
|
||||
systemd-tmpfiles --create
|
||||
# Get a new account and reissue certificates
|
||||
# Note: Do this for all certs that share the same account email address
|
||||
systemctl start acme-example.com.service
|
||||
</programlisting>
|
||||
</section>
|
||||
</chapter>
|
|
@@ -42,7 +42,7 @@ in {
|
|||
environment.ROON_DATAROOT = "/var/lib/${name}";
|
||||
|
||||
serviceConfig = {
|
||||
ExecStart = "${pkgs.roon-bridge}/start.sh";
|
||||
ExecStart = "${pkgs.roon-bridge}/bin/RoonBridge";
|
||||
LimitNOFILE = 8192;
|
||||
User = cfg.user;
|
||||
Group = cfg.group;
|
||||
|
|
|
@@ -226,7 +226,7 @@ let
|
|||
|
||||
in {
|
||||
meta.maintainers = with maintainers; [ dotlambda ];
|
||||
meta.doc = ./borgbackup.xml;
|
||||
meta.doc = ./borgbackup.md;
|
||||
|
||||
###### interface
|
||||
|
||||
|
|
|
@@ -1,215 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-borgbase">
|
||||
<title>BorgBackup</title>
|
||||
<para>
|
||||
<emphasis>Source:</emphasis>
|
||||
<filename>modules/services/backup/borgbackup.nix</filename>
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Upstream documentation:</emphasis>
|
||||
<link xlink:href="https://borgbackup.readthedocs.io/">https://borgbackup.readthedocs.io/</link>
|
||||
</para>
|
||||
<para>
|
||||
<link xlink:href="https://www.borgbackup.org/">BorgBackup</link>
|
||||
(short: Borg) is a deduplicating backup program. Optionally, it
|
||||
supports compression and authenticated encryption.
|
||||
</para>
|
||||
<para>
|
||||
The main goal of Borg is to provide an efficient and secure way to
|
||||
backup data. The data deduplication technique used makes Borg
|
||||
suitable for daily backups since only changes are stored. The
|
||||
authenticated encryption technique makes it suitable for backups to
|
||||
not fully trusted targets.
|
||||
</para>
|
||||
<section xml:id="module-services-backup-borgbackup-configuring">
|
||||
<title>Configuring</title>
|
||||
<para>
|
||||
A complete list of options for the BorgBackup module may be found
|
||||
<link linkend="opt-services.borgbackup.jobs">here</link>.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="opt-services-backup-borgbackup-local-directory">
|
||||
<title>Basic usage for a local backup</title>
|
||||
<para>
|
||||
A very basic configuration for backing up to a locally accessible
|
||||
directory is:
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
opt.services.borgbackup.jobs = {
|
||||
{ rootBackup = {
|
||||
paths = "/";
|
||||
exclude = [ "/nix" "/path/to/local/repo" ];
|
||||
repo = "/path/to/local/repo";
|
||||
doInit = true;
|
||||
encryption = {
|
||||
mode = "repokey";
|
||||
passphrase = "secret";
|
||||
};
|
||||
compression = "auto,lzma";
|
||||
startAt = "weekly";
|
||||
};
|
||||
}
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
<warning>
|
||||
<para>
|
||||
If you do not want the passphrase to be stored in the
|
||||
world-readable Nix store, use passCommand. You find an example
|
||||
below.
|
||||
</para>
|
||||
</warning>
|
||||
</section>
|
||||
<section xml:id="opt-services-backup-create-server">
|
||||
<title>Create a borg backup server</title>
|
||||
<para>
|
||||
You should use a different SSH key for each repository you write
|
||||
to, because the specified keys are restricted to running borg
|
||||
serve and can only access this single repository. You will need the
contents of the generated public key file.
|
||||
</para>
|
||||
<programlisting>
|
||||
# sudo ssh-keygen -N '' -t ed25519 -f /run/keys/id_ed25519_my_borg_repo
|
||||
# cat /run/keys/id_ed25519_my_borg_repo
|
||||
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID78zmOyA+5uPG4Ot0hfAy+sLDPU1L4AiIoRYEIVbbQ/ root@nixos
|
||||
</programlisting>
|
||||
<para>
|
||||
Add the following snippet to your NixOS configuration:
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
services.borgbackup.repos = {
|
||||
my_borg_repo = {
|
||||
authorizedKeys = [
|
||||
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID78zmOyA+5uPG4Ot0hfAy+sLDPU1L4AiIoRYEIVbbQ/ root@nixos"
|
||||
] ;
|
||||
path = "/var/lib/my_borg_repo" ;
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="opt-services-backup-borgbackup-remote-server">
|
||||
<title>Backup to the borg repository server</title>
|
||||
<para>
|
||||
The following NixOS snippet creates an hourly backup to the
|
||||
service (on the host nixos) as created in the section above. We
|
||||
assume that you have stored a secret passphrase in the file
<filename>/run/keys/borgbackup_passphrase</filename>, which should
be accessible only by root.
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
services.borgbackup.jobs = {
|
||||
backupToLocalServer = {
|
||||
paths = [ "/etc/nixos" ];
|
||||
doInit = true;
|
||||
repo = "borg@nixos:." ;
|
||||
encryption = {
|
||||
mode = "repokey-blake2";
|
||||
passCommand = "cat /run/keys/borgbackup_passphrase";
|
||||
};
|
||||
environment = { BORG_RSH = "ssh -i /run/keys/id_ed25519_my_borg_repo"; };
|
||||
compression = "auto,lzma";
|
||||
startAt = "hourly";
|
||||
};
|
||||
};
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
The following few commands (run as root) let you test your backup.
|
||||
</para>
|
||||
<programlisting>
|
||||
> nixos-rebuild switch
|
||||
...restarting the following units: polkit.service
|
||||
> systemctl restart borgbackup-job-backupToLocalServer
|
||||
> sleep 10
|
||||
> systemctl restart borgbackup-job-backupToLocalServer
|
||||
> export BORG_PASSPHRASE=topSecrect
|
||||
> borg list --rsh='ssh -i /run/keys/id_ed25519_my_borg_repo' borg@nixos:.
|
||||
nixos-backupToLocalServer-2020-03-30T21:46:17 Mon, 2020-03-30 21:46:19 [84feb97710954931ca384182f5f3cb90665f35cef214760abd7350fb064786ac]
|
||||
nixos-backupToLocalServer-2020-03-30T21:46:30 Mon, 2020-03-30 21:46:32 [e77321694ecd160ca2228611747c6ad1be177d6e0d894538898de7a2621b6e68]
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="opt-services-backup-borgbackup-borgbase">
|
||||
<title>Backup to a hosting service</title>
|
||||
<para>
|
||||
Several companies offer
|
||||
<link xlink:href="https://www.borgbackup.org/support/commercial.html">(paid)
|
||||
hosting services</link> for Borg repositories.
|
||||
</para>
|
||||
<para>
|
||||
To back up your home directory to borgbase you have to:
|
||||
</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
Generate an SSH key without a password, to access the remote
|
||||
server. E.g.
|
||||
</para>
|
||||
<programlisting>
|
||||
sudo ssh-keygen -N '' -t ed25519 -f /run/keys/id_ed25519_borgbase
|
||||
</programlisting>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Create the repository on the server by following the
|
||||
instructions for your hosting server.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Initialize the repository on the server, e.g.
|
||||
</para>
|
||||
<programlisting>
|
||||
sudo borg init --encryption=repokey-blake2 \
|
||||
-rsh "ssh -i /run/keys/id_ed25519_borgbase" \
|
||||
zzz2aaaaa@zzz2aaaaa.repo.borgbase.com:repo
|
||||
</programlisting>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Add it to your NixOS configuration, e.g.
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
services.borgbackup.jobs = {
|
||||
my_Remote_Backup = {
|
||||
paths = [ "/" ];
|
||||
exclude = [ "/nix" "'**/.cache'" ];
|
||||
repo = "zzz2aaaaa@zzz2aaaaa.repo.borgbase.com:repo";
|
||||
encryption = {
|
||||
mode = "repokey-blake2";
|
||||
passCommand = "cat /run/keys/borgbackup_passphrase";
|
||||
};
|
||||
environment = { BORG_RSH = "ssh -i /run/keys/id_ed25519_borgbase"; };
|
||||
compression = "auto,lzma";
|
||||
startAt = "daily";
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="opt-services-backup-borgbackup-vorta">
|
||||
<title>Vorta backup client for the desktop</title>
|
||||
<para>
|
||||
Vorta is a backup client for macOS and Linux desktops. It
|
||||
integrates the mighty BorgBackup with your desktop environment to
|
||||
protect your data from disk failure, ransomware and theft.
|
||||
</para>
|
||||
<para>
|
||||
It can be installed in NixOS e.g. by adding
|
||||
<literal>pkgs.vorta</literal> to
|
||||
<xref linkend="opt-environment.systemPackages" />.
|
||||
</para>
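<para>
A minimal sketch of that addition:
</para>
<programlisting>
environment.systemPackages = [ pkgs.vorta ];
</programlisting>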
|
||||
<para>
|
||||
Details about using Vorta can be found under
|
||||
<link xlink:href="https://vorta.borgbase.com/usage">https://vorta.borgbase.com</link>
|
||||
.
|
||||
</para>
|
||||
</section>
|
||||
</chapter>
|
|
@@ -424,6 +424,6 @@ in
|
|||
};
|
||||
};
|
||||
|
||||
meta.doc = ./foundationdb.xml;
|
||||
meta.doc = ./foundationdb.md;
|
||||
meta.maintainers = with lib.maintainers; [ thoughtpolice ];
|
||||
}
|
||||
|
|
|
@@ -1,425 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-foundationdb">
|
||||
<title>FoundationDB</title>
|
||||
<para>
|
||||
<emphasis>Source:</emphasis>
|
||||
<filename>modules/services/databases/foundationdb.nix</filename>
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Upstream documentation:</emphasis>
|
||||
<link xlink:href="https://apple.github.io/foundationdb/">https://apple.github.io/foundationdb/</link>
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Maintainer:</emphasis> Austin Seipp
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Available version(s):</emphasis> 5.1.x, 5.2.x, 6.0.x
|
||||
</para>
|
||||
<para>
|
||||
FoundationDB (or <quote>FDB</quote>) is an open source, distributed,
|
||||
transactional key-value store.
|
||||
</para>
|
||||
<section xml:id="module-services-foundationdb-configuring">
|
||||
<title>Configuring and basic setup</title>
|
||||
<para>
|
||||
To enable FoundationDB, add the following to your
|
||||
<filename>configuration.nix</filename>:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.foundationdb.enable = true;
|
||||
services.foundationdb.package = pkgs.foundationdb52; # FoundationDB 5.2.x
|
||||
</programlisting>
|
||||
<para>
|
||||
The <option>services.foundationdb.package</option> option is
|
||||
required, and must always be specified. Due to the fact that
|
||||
FoundationDB network protocols and on-disk storage formats may
|
||||
change between (major) versions, and upgrades must be explicitly
|
||||
handled by the user, you must always manually specify this
|
||||
yourself so that the NixOS module will use the proper version.
|
||||
Note that minor, bugfix releases are always compatible.
|
||||
</para>
|
||||
<para>
|
||||
After running <command>nixos-rebuild</command>, you can verify
|
||||
whether FoundationDB is running by executing
|
||||
<command>fdbcli</command> (which is added to
|
||||
<option>environment.systemPackages</option>):
|
||||
</para>
|
||||
<programlisting>
|
||||
$ sudo -u foundationdb fdbcli
|
||||
Using cluster file `/etc/foundationdb/fdb.cluster'.
|
||||
|
||||
The database is available.
|
||||
|
||||
Welcome to the fdbcli. For help, type `help'.
|
||||
fdb> status
|
||||
|
||||
Using cluster file `/etc/foundationdb/fdb.cluster'.
|
||||
|
||||
Configuration:
|
||||
Redundancy mode - single
|
||||
Storage engine - memory
|
||||
Coordinators - 1
|
||||
|
||||
Cluster:
|
||||
FoundationDB processes - 1
|
||||
Machines - 1
|
||||
Memory availability - 5.4 GB per process on machine with least available
|
||||
Fault Tolerance - 0 machines
|
||||
Server time - 04/20/18 15:21:14
|
||||
|
||||
...
|
||||
|
||||
fdb>
|
||||
</programlisting>
|
||||
<para>
|
||||
You can also write programs using the available client libraries.
|
||||
For example, the following Python program can be run in order to
|
||||
grab the cluster status, as a quick example. (This example uses
|
||||
<command>nix-shell</command> shebang support to automatically
|
||||
supply the necessary Python modules).
|
||||
</para>
|
||||
<programlisting>
|
||||
a@link> cat fdb-status.py
|
||||
#! /usr/bin/env nix-shell
|
||||
#! nix-shell -i python -p python pythonPackages.foundationdb52
|
||||
|
||||
import fdb
|
||||
import json
|
||||
|
||||
def main():
|
||||
fdb.api_version(520)
|
||||
db = fdb.open()
|
||||
|
||||
@fdb.transactional
|
||||
def get_status(tr):
|
||||
return str(tr['\xff\xff/status/json'])
|
||||
|
||||
obj = json.loads(get_status(db))
|
||||
print('FoundationDB available: %s' % obj['client']['database_status']['available'])
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
a@link> chmod +x fdb-status.py
|
||||
a@link> ./fdb-status.py
|
||||
FoundationDB available: True
|
||||
a@link>
|
||||
</programlisting>
|
||||
<para>
|
||||
FoundationDB is run under the <command>foundationdb</command> user
|
||||
and group by default, but this may be changed in the NixOS
|
||||
configuration. The systemd unit
|
||||
<command>foundationdb.service</command> controls the
|
||||
<command>fdbmonitor</command> process.
|
||||
</para>
|
||||
<para>
|
||||
By default, the NixOS module for FoundationDB creates a single
|
||||
SSD-storage based database for development and basic usage. This
|
||||
storage engine is designed for SSDs and will perform poorly on
|
||||
HDDs; however it can handle far more data than the alternative
|
||||
<quote>memory</quote> engine and is a better default choice for
|
||||
most deployments. (Note that you can change the storage backend
|
||||
on-the-fly for a given FoundationDB cluster using
|
||||
<command>fdbcli</command>.)
|
||||
</para>
|
||||
<para>
|
||||
Furthermore, only 1 server process and 1 backup agent are started
|
||||
in the default configuration. See below for more on scaling to
|
||||
increase this.
|
||||
</para>
|
||||
<para>
|
||||
FoundationDB stores all data for all server processes under
|
||||
<filename>/var/lib/foundationdb</filename>. You can override this
|
||||
using <option>services.foundationdb.dataDir</option>, e.g.
|
||||
</para>
|
||||
<programlisting>
|
||||
services.foundationdb.dataDir = "/data/fdb";
|
||||
</programlisting>
|
||||
<para>
|
||||
Similarly, logs are stored under
|
||||
<filename>/var/log/foundationdb</filename> by default, and there
|
||||
is a corresponding <option>services.foundationdb.logDir</option>
|
||||
as well.
|
||||
</para>
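<para>
For example (the path is a placeholder):
</para>
<programlisting>
services.foundationdb.logDir = "/data/fdb-logs";
</programlisting>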
|
||||
</section>
|
||||
<section xml:id="module-services-foundationdb-scaling">
|
||||
<title>Scaling processes and backup agents</title>
|
||||
<para>
|
||||
Scaling the number of server processes is quite easy; simply
|
||||
specify <option>services.foundationdb.serverProcesses</option> to
|
||||
be the number of FoundationDB worker processes that should be
|
||||
started on the machine.
|
||||
</para>
|
||||
<para>
|
||||
FoundationDB worker processes typically require 4GB of RAM
|
||||
per-process at minimum for good performance, so this option is set
|
||||
to 1 by default since the maximum amount of RAM is unknown. You’re
|
||||
advised to abide by this restriction, so pick a number of
|
||||
processes so that each has 4GB or more.
|
||||
</para>
|
||||
<para>
|
||||
A similar option exists in order to scale backup agent processes,
|
||||
<option>services.foundationdb.backupProcesses</option>. Backup
|
||||
agents are not as performance/RAM sensitive, so feel free to
|
||||
experiment with the number of available backup processes.
|
||||
</para>
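<para>
A small sketch combining both options; the counts are arbitrary
placeholders and should be sized to the machine's RAM as described
above:
</para>
<programlisting>
services.foundationdb.serverProcesses = 4;
services.foundationdb.backupProcesses = 2;
</programlisting>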
|
||||
</section>
|
||||
<section xml:id="module-services-foundationdb-clustering">
|
||||
<title>Clustering</title>
|
||||
<para>
|
||||
FoundationDB on NixOS works similarly to other Linux systems, so
|
||||
this section will be brief. Please refer to the full FoundationDB
|
||||
documentation for more on clustering.
|
||||
</para>
|
||||
<para>
|
||||
FoundationDB organizes clusters using a set of
|
||||
<emphasis>coordinators</emphasis>, which are just
|
||||
specially-designated worker processes. By default, every
|
||||
installation of FoundationDB on NixOS will start as its own
|
||||
individual cluster, with a single coordinator: the first worker
|
||||
process on <command>localhost</command>.
|
||||
</para>
|
||||
<para>
|
||||
Coordinators are specified globally using the
|
||||
<command>/etc/foundationdb/fdb.cluster</command> file, which all
|
||||
servers and client applications will use to find and join
|
||||
coordinators. Note that this file <emphasis>can not</emphasis> be
|
||||
managed by NixOS so easily: FoundationDB is designed so that it
|
||||
will rewrite the file at runtime for all clients and nodes when
|
||||
cluster coordinators change, with clients transparently handling
|
||||
this without intervention. It is fundamentally a mutable file, and
|
||||
you should not try to manage it in any way in NixOS.
|
||||
</para>
|
||||
<para>
|
||||
When dealing with a cluster, there are two main things you want to
|
||||
do:
|
||||
</para>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>
|
||||
Add a node to the cluster for storage/compute.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Promote an ordinary worker to a coordinator.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>
|
||||
A node must already be a member of the cluster in order to
|
||||
properly be promoted to a coordinator, so you must always add it
|
||||
first if you wish to promote it.
|
||||
</para>
|
||||
<para>
|
||||
To add a machine to a FoundationDB cluster:
|
||||
</para>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>
|
||||
Choose one of the servers to start as the initial coordinator.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Copy the <command>/etc/foundationdb/fdb.cluster</command> file
|
||||
from this server to all the other servers. Restart
|
||||
FoundationDB on all of these other servers, so they join the
|
||||
cluster.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
All of these servers are now connected and working together in
|
||||
the cluster, under the chosen coordinator.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>
|
||||
At this point, you can add as many nodes as you want by just
|
||||
repeating the above steps. By default there will still be a single
|
||||
coordinator: you can use <command>fdbcli</command> to change this
|
||||
and add new coordinators.
|
||||
</para>
|
||||
<para>
|
||||
As a convenience, FoundationDB can automatically assign
|
||||
coordinators based on the redundancy mode you wish to achieve for
|
||||
the cluster. Once all the nodes have been joined, simply set the
|
||||
replication policy, and then issue the
|
||||
<command>coordinators auto</command> command.
|
||||
</para>
|
||||
<para>
|
||||
For example, assuming we have 3 nodes available, we can enable
|
||||
double redundancy mode, then auto-select coordinators. For double
|
||||
redundancy, 3 coordinators is ideal: therefore FoundationDB will
|
||||
make <emphasis>every</emphasis> node a coordinator automatically:
|
||||
</para>
|
||||
<programlisting>
|
||||
fdbcli> configure double ssd
|
||||
fdbcli> coordinators auto
|
||||
</programlisting>
|
||||
<para>
|
||||
This will transparently update all the servers within seconds, and
|
||||
appropriately rewrite the <command>fdb.cluster</command> file, as
|
||||
well as informing all client processes to do the same.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-foundationdb-connectivity">
|
||||
<title>Client connectivity</title>
|
||||
<para>
|
||||
By default, all clients must use the current
|
||||
<command>fdb.cluster</command> file to access a given FoundationDB
|
||||
cluster. This file is located by default in
|
||||
<command>/etc/foundationdb/fdb.cluster</command> on all machines
|
||||
with the FoundationDB service enabled, so you may copy the active
|
||||
one from your cluster to a new node in order to connect, if it is
|
||||
not part of the cluster.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-foundationdb-authorization">
|
||||
<title>Client authorization and TLS</title>
|
||||
<para>
|
||||
By default, any user who can connect to a FoundationDB process
|
||||
with the correct cluster configuration can access anything.
|
||||
FoundationDB uses a pluggable design to transport security, and
|
||||
out of the box it supports a LibreSSL-based plugin for TLS
|
||||
support. This plugin not only does in-flight encryption, but also
|
||||
performs client authorization based on the given endpoint’s
|
||||
certificate chain. For example, a FoundationDB server may be
|
||||
configured to only accept client connections over TLS, where the
|
||||
client TLS certificate is from organization <emphasis>Acme
|
||||
Co</emphasis> in the <emphasis>Research and Development</emphasis>
|
||||
unit.
|
||||
</para>
|
||||
<para>
|
||||
Configuring TLS with FoundationDB is done using the
|
||||
<option>services.foundationdb.tls</option> options in order to
|
||||
control the peer verification string, as well as the certificate
|
||||
and its private key.
|
||||
</para>
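<para>
A hedged sketch of such a configuration; the option names, file
paths and the peer verification string below are assumptions for
illustration only, so check the module's option listing before
relying on them:
</para>
<programlisting>
services.foundationdb.tls = {
  certificate = "/etc/foundationdb/fdb.pem";
  key = "/etc/foundationdb/private.pem";
  allowedPeers = "Check.Valid=1,Check.Unexpired=1";
};
</programlisting>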
|
||||
<para>
|
||||
Note that the certificate and its private key must be accessible
|
||||
to the FoundationDB user account that the server runs under. These
|
||||
files are also NOT managed by NixOS, as putting them into the
|
||||
store may reveal private information.
|
||||
</para>
|
||||
<para>
|
||||
After you have a key and certificate file in place, it is not
|
||||
enough to simply set the NixOS module options – you must also
|
||||
configure the <command>fdb.cluster</command> file to specify that
|
||||
a given set of coordinators use TLS. This is as simple as adding
|
||||
the suffix <command>:tls</command> to your cluster coordinator
|
||||
configuration, after the port number. For example, assuming you
|
||||
have a coordinator on localhost with the default configuration,
|
||||
simply specifying:
|
||||
</para>
|
||||
<programlisting>
|
||||
XXXXXX:XXXXXX@127.0.0.1:4500:tls
|
||||
</programlisting>
|
||||
<para>
|
||||
will configure all clients and server processes to use TLS from
|
||||
now on.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-foundationdb-disaster-recovery">
|
||||
<title>Backups and Disaster Recovery</title>
|
||||
<para>
|
||||
The usual rules for doing FoundationDB backups apply on NixOS as
|
||||
written in the FoundationDB manual. However, one important
|
||||
difference is the security profile for NixOS: by default, the
|
||||
<command>foundationdb</command> systemd unit uses <emphasis>Linux
|
||||
namespaces</emphasis> to restrict write access to the system,
|
||||
except for the log directory, data directory, and the
|
||||
<command>/etc/foundationdb/</command> directory. This is enforced
|
||||
by default and cannot be disabled.
|
||||
</para>
|
||||
<para>
|
||||
However, a side effect of this is that the
|
||||
<command>fdbbackup</command> command doesn’t work properly for
|
||||
local filesystem backups: FoundationDB uses a server process
|
||||
alongside the database processes to perform backups and copy the
|
||||
backups to the filesystem. As a result, this process is put under
|
||||
the restricted namespaces above: the backup process can only write
|
||||
to a limited number of paths.
|
||||
</para>
|
||||
<para>
|
||||
In order to allow flexible backup locations on local disks, the
|
||||
FoundationDB NixOS module supports a
|
||||
<option>services.foundationdb.extraReadWritePaths</option> option.
|
||||
This option takes a list of paths, and adds them to the systemd
|
||||
unit, allowing the processes inside the service to write (and
|
||||
read) the specified directories.
|
||||
</para>
|
||||
<para>
|
||||
For example, to create backups in
|
||||
<command>/opt/fdb-backups</command>, first set up the paths in the
|
||||
module options:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.foundationdb.extraReadWritePaths = [ "/opt/fdb-backups" ];
|
||||
</programlisting>
|
||||
<para>
|
||||
Restart the FoundationDB service, and it will now be able to write
|
||||
to this directory (even if it does not yet exist.) Note: this path
|
||||
<emphasis>must</emphasis> exist before restarting the unit.
|
||||
Otherwise, systemd will not include it in the private FoundationDB
|
||||
namespace (and it will not add it dynamically at runtime).
|
||||
</para>
|
||||
<para>
|
||||
You can now perform a backup:
|
||||
</para>
|
||||
<programlisting>
|
||||
$ sudo -u foundationdb fdbbackup start -t default -d file:///opt/fdb-backups
|
||||
$ sudo -u foundationdb fdbbackup status -t default
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-foundationdb-limitations">
|
||||
<title>Known limitations</title>
|
||||
<para>
|
||||
The FoundationDB setup for NixOS should currently be considered
|
||||
beta. FoundationDB is not new software, but the NixOS compilation
|
||||
and integration has only undergone fairly basic testing of all the
|
||||
available functionality.
|
||||
</para>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>
|
||||
There is no way to specify individual parameters for
|
||||
individual <command>fdbserver</command> processes. Currently,
|
||||
all server processes inherit all the global
|
||||
<command>fdbmonitor</command> settings.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Ruby bindings are not currently installed.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Go bindings are not currently installed.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="module-services-foundationdb-options">
|
||||
<title>Options</title>
|
||||
<para>
|
||||
NixOS’s FoundationDB module allows you to configure all of the
|
||||
most relevant configuration options for
|
||||
<command>fdbmonitor</command>, matching it quite closely. A
|
||||
complete list of options for the FoundationDB module may be found
|
||||
<link linkend="opt-services.foundationdb.enable">here</link>. You
|
||||
should also read the FoundationDB documentation as well.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-foundationdb-full-docs">
|
||||
<title>Full documentation</title>
|
||||
<para>
|
||||
FoundationDB is a complex piece of software, and requires careful
|
||||
administration to properly use. Full documentation for
|
||||
administration can be found here:
|
||||
<link xlink:href="https://apple.github.io/foundationdb/">https://apple.github.io/foundationdb/</link>.
|
||||
</para>
|
||||
</section>
|
||||
</chapter>
|
|
@@ -585,6 +585,6 @@ in
|
|||
|
||||
};
|
||||
|
||||
meta.doc = ./postgresql.xml;
|
||||
meta.doc = ./postgresql.md;
|
||||
meta.maintainers = with lib.maintainers; [ thoughtpolice danbst ];
|
||||
}
|
||||
|
|
|
@@ -1,250 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-postgresql">
|
||||
<title>PostgreSQL</title>
|
||||
<para>
|
||||
<emphasis>Source:</emphasis>
|
||||
<filename>modules/services/databases/postgresql.nix</filename>
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Upstream documentation:</emphasis>
|
||||
<link xlink:href="http://www.postgresql.org/docs/">http://www.postgresql.org/docs/</link>
|
||||
</para>
|
||||
<para>
|
||||
PostgreSQL is an advanced, free relational database.
|
||||
</para>
|
||||
<section xml:id="module-services-postgres-configuring">
|
||||
<title>Configuring</title>
|
||||
<para>
|
||||
To enable PostgreSQL, add the following to your
|
||||
<filename>configuration.nix</filename>:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.postgresql.enable = true;
|
||||
services.postgresql.package = pkgs.postgresql_11;
|
||||
</programlisting>
|
||||
<para>
|
||||
Note that you are required to specify the desired version of
|
||||
PostgreSQL (e.g. <literal>pkgs.postgresql_11</literal>). Since
|
||||
upgrading your PostgreSQL version requires a database dump and
|
||||
reload (see below), NixOS cannot provide a default value for
|
||||
<xref linkend="opt-services.postgresql.package" /> such as the
|
||||
most recent release of PostgreSQL.
|
||||
</para>
|
||||
<para>
|
||||
By default, PostgreSQL stores its databases in
|
||||
<filename>/var/lib/postgresql/$psqlSchema</filename>. You can
|
||||
override this using
|
||||
<xref linkend="opt-services.postgresql.dataDir" />, e.g.
|
||||
</para>
|
||||
<programlisting>
|
||||
services.postgresql.dataDir = "/data/postgresql";
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-postgres-upgrading">
|
||||
<title>Upgrading</title>
|
||||
<note>
|
||||
<para>
|
||||
The steps below demonstrate how to upgrade from an older version
|
||||
to <literal>pkgs.postgresql_13</literal>. These instructions are
|
||||
also applicable to other versions.
|
||||
</para>
|
||||
</note>
|
||||
<para>
Major PostgreSQL upgrades require downtime and a few imperative
steps, because each major release changes the internal, on-disk
state of the databases. For this reason, NixOS places the state
into <filename>/var/lib/postgresql/<version></filename>, where
each <literal>version</literal> can be obtained like this:
</para>
|
||||
<programlisting>
|
||||
$ nix-instantiate --eval -A postgresql_13.psqlSchema
|
||||
"13"
|
||||
</programlisting>
|
||||
<para>
|
||||
For an upgrade, a script like this can be used to simplify the
|
||||
process:
|
||||
</para>
|
||||
<programlisting>
|
||||
{ config, pkgs, ... }:
|
||||
{
|
||||
environment.systemPackages = [
|
||||
(let
|
||||
# XXX specify the postgresql package you'd like to upgrade to.
|
||||
# Do not forget to list the extensions you need.
|
||||
newPostgres = pkgs.postgresql_13.withPackages (pp: [
|
||||
# pp.plv8
|
||||
]);
|
||||
in pkgs.writeScriptBin "upgrade-pg-cluster" ''
|
||||
set -eux
|
||||
# XXX it's perhaps advisable to stop all services that depend on postgresql
|
||||
systemctl stop postgresql
|
||||
|
||||
export NEWDATA="/var/lib/postgresql/${newPostgres.psqlSchema}"
|
||||
|
||||
export NEWBIN="${newPostgres}/bin"
|
||||
|
||||
export OLDDATA="${config.services.postgresql.dataDir}"
|
||||
export OLDBIN="${config.services.postgresql.package}/bin"
|
||||
|
||||
install -d -m 0700 -o postgres -g postgres "$NEWDATA"
|
||||
cd "$NEWDATA"
|
||||
sudo -u postgres $NEWBIN/initdb -D "$NEWDATA"
|
||||
|
||||
sudo -u postgres $NEWBIN/pg_upgrade \
|
||||
--old-datadir "$OLDDATA" --new-datadir "$NEWDATA" \
|
||||
--old-bindir $OLDBIN --new-bindir $NEWBIN \
|
||||
"$@"
|
||||
'')
|
||||
];
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
The upgrade process is:
|
||||
</para>
|
||||
<orderedlist numeration="arabic">
|
||||
<listitem>
|
||||
<para>
Rebuild the NixOS configuration with the snippet above added to
your <filename>configuration.nix</filename>. Alternatively, add it
to a separate file and reference that file in the
<literal>imports</literal> list.
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Login as root (<literal>sudo su -</literal>)
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
Run <literal>upgrade-pg-cluster</literal>. It will stop the old
PostgreSQL instance, initialize a new cluster and migrate the old
data to it. You may supply arguments like
<literal>--jobs 4</literal> and <literal>--link</literal> to
speed up the migration. See
<link xlink:href="https://www.postgresql.org/docs/current/pgupgrade.html">https://www.postgresql.org/docs/current/pgupgrade.html</link>
for details.
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
Change the PostgreSQL package in your NixOS configuration to the
one you upgraded to, via
<xref linkend="opt-services.postgresql.package" /> (see the sketch
after this list), and rebuild NixOS. This should start the new
PostgreSQL using the upgraded data directory, along with all the
services you stopped during the upgrade.
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
After the upgrade it’s advisable to analyze the new cluster.
|
||||
</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
For PostgreSQL ≥ 14, use the <literal>vacuumdb</literal>
command printed by the upgrade script.
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
For PostgreSQL < 14, run (as
|
||||
<literal>su -l postgres</literal> in the
|
||||
<xref linkend="opt-services.postgresql.dataDir" />, in
|
||||
this example <filename>/var/lib/postgresql/13</filename>):
|
||||
</para>
|
||||
<programlisting>
|
||||
$ ./analyze_new_cluster.sh
|
||||
</programlisting>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<warning>
|
||||
<para>
|
||||
The next step removes the old state-directory!
|
||||
</para>
|
||||
</warning>
|
||||
<programlisting>
|
||||
$ ./delete_old_cluster.sh
|
||||
</programlisting>
|
||||
</listitem>
|
||||
</orderedlist>
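<para>
For step 4 above, the change amounts to a one-line sketch like the
following (using <literal>pkgs.postgresql_13</literal>, the example
target version from the note at the top of this section):
</para>
<programlisting>
services.postgresql.package = pkgs.postgresql_13;
</programlisting>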
|
||||
</section>
|
||||
<section xml:id="module-services-postgres-options">
|
||||
<title>Options</title>
|
||||
<para>
|
||||
A complete list of options for the PostgreSQL module may be found
|
||||
<link linkend="opt-services.postgresql.enable">here</link>.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-postgres-plugins">
|
||||
<title>Plugins</title>
|
||||
<para>
The plugin collection for each PostgreSQL version can be accessed
with <literal>.pkgs</literal>. For example, for the
<literal>pkgs.postgresql_11</literal> package, its plugin
collection is accessed as
<literal>pkgs.postgresql_11.pkgs</literal>:
</para>
|
||||
<programlisting>
|
||||
$ nix repl '<nixpkgs>'
|
||||
|
||||
Loading '<nixpkgs>'...
|
||||
Added 10574 variables.
|
||||
|
||||
nix-repl> postgresql_11.pkgs.<TAB><TAB>
|
||||
postgresql_11.pkgs.cstore_fdw postgresql_11.pkgs.pg_repack
|
||||
postgresql_11.pkgs.pg_auto_failover postgresql_11.pkgs.pg_safeupdate
|
||||
postgresql_11.pkgs.pg_bigm postgresql_11.pkgs.pg_similarity
|
||||
postgresql_11.pkgs.pg_cron postgresql_11.pkgs.pg_topn
|
||||
postgresql_11.pkgs.pg_hll postgresql_11.pkgs.pgjwt
|
||||
postgresql_11.pkgs.pg_partman postgresql_11.pkgs.pgroonga
|
||||
...
|
||||
</programlisting>
|
||||
<para>
|
||||
To add plugins via NixOS configuration, set
|
||||
<literal>services.postgresql.extraPlugins</literal>:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.postgresql.package = pkgs.postgresql_11;
|
||||
services.postgresql.extraPlugins = with pkgs.postgresql_11.pkgs; [
|
||||
pg_repack
|
||||
postgis
|
||||
];
|
||||
</programlisting>
|
||||
<para>
You can build a custom PostgreSQL-with-plugins (to be used outside
of NixOS) using the <literal>.withPackages</literal> function. For
example, creating a custom PostgreSQL package in an overlay can
look like this:
</para>
|
||||
<programlisting>
|
||||
self: super: {
|
||||
postgresql_custom = self.postgresql_11.withPackages (ps: [
|
||||
ps.pg_repack
|
||||
ps.postgis
|
||||
]);
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
Here’s a recipe on how to override a particular plugin through an
|
||||
overlay:
|
||||
</para>
|
||||
<programlisting>
|
||||
self: super: {
|
||||
postgresql_11 = super.postgresql_11.override { this = self.postgresql_11; } // {
|
||||
pkgs = super.postgresql_11.pkgs // {
|
||||
pg_repack = super.postgresql_11.pkgs.pg_repack.overrideAttrs (_: {
|
||||
name = "pg_repack-v20181024";
|
||||
src = self.fetchzip {
|
||||
url = "https://github.com/reorg/pg_repack/archive/923fa2f3c709a506e111cc963034bf2fd127aa00.tar.gz";
|
||||
sha256 = "17k6hq9xaax87yz79j773qyigm4fwk8z4zh5cyp6z0sxnwfqxxw5";
|
||||
};
|
||||
});
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
</section>
|
||||
</chapter>
|
|
@ -7,7 +7,7 @@ let
  cfg = config.services.flatpak;
in {
  meta = {
    doc = ./flatpak.xml;
    doc = ./flatpak.md;
    maintainers = pkgs.flatpak.meta.maintainers;
  };

@ -1,59 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-flatpak">
|
||||
<title>Flatpak</title>
|
||||
<para>
|
||||
<emphasis>Source:</emphasis>
|
||||
<filename>modules/services/desktop/flatpak.nix</filename>
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Upstream documentation:</emphasis>
|
||||
<link xlink:href="https://github.com/flatpak/flatpak/wiki">https://github.com/flatpak/flatpak/wiki</link>
|
||||
</para>
|
||||
<para>
|
||||
Flatpak is a system for building, distributing, and running
|
||||
sandboxed desktop applications on Linux.
|
||||
</para>
|
||||
<para>
|
||||
To enable Flatpak, add the following to your
|
||||
<filename>configuration.nix</filename>:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.flatpak.enable = true;
|
||||
</programlisting>
|
||||
<para>
|
||||
For the sandboxed apps to work correctly, desktop integration
|
||||
portals need to be installed. If you run GNOME, this will be handled
|
||||
automatically for you; in other cases, you will need to add
|
||||
something like the following to your
|
||||
<filename>configuration.nix</filename>:
|
||||
</para>
|
||||
<programlisting>
|
||||
xdg.portal.extraPortals = [ pkgs.xdg-desktop-portal-gtk ];
|
||||
</programlisting>
|
||||
<para>
|
||||
Then, you will need to add a repository, for example,
|
||||
<link xlink:href="https://github.com/flatpak/flatpak/wiki">Flathub</link>,
|
||||
either using the following commands:
|
||||
</para>
|
||||
<programlisting>
|
||||
$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
|
||||
$ flatpak update
|
||||
</programlisting>
|
||||
<para>
|
||||
or by opening the
|
||||
<link xlink:href="https://flathub.org/repo/flathub.flatpakrepo">repository
|
||||
file</link> in GNOME Software.
|
||||
</para>
|
||||
<para>
|
||||
Finally, you can search and install programs:
|
||||
</para>
|
||||
<programlisting>
|
||||
$ flatpak search bustle
|
||||
$ flatpak install flathub org.freedesktop.Bustle
|
||||
$ flatpak run org.freedesktop.Bustle
|
||||
</programlisting>
|
||||
<para>
Again, GNOME Software offers a graphical interface for these tasks.
</para>
|
||||
</chapter>
|
|
@ -35,5 +35,20 @@
|
|||
}
|
||||
],
|
||||
"filter.properties": {},
|
||||
"stream.properties": {}
|
||||
"stream.properties": {},
|
||||
"alsa.properties": {},
|
||||
"alsa.rules": [
|
||||
{
|
||||
"matches": [
|
||||
{
|
||||
"application.process.binary": "resolve"
|
||||
}
|
||||
],
|
||||
"actions": {
|
||||
"update-props": {
|
||||
"alsa.buffer-bytes": 131072
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
|
|
@ -58,6 +58,18 @@
|
|||
"node.passive": true
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"matches": [
|
||||
{
|
||||
"client.name": "Mixxx"
|
||||
}
|
||||
],
|
||||
"actions": {
|
||||
"update-props": {
|
||||
"jack.merge-monitor": false
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
|
|
|
@ -42,7 +42,7 @@ let
in {

  meta = {
    maintainers = teams.freedesktop.members;
    maintainers = teams.freedesktop.members ++ [ lib.maintainers.k900 ];
    # uses attributes of the linked package
    buildDocsInSandbox = false;
  };

@ -11,7 +11,7 @@ let
in {
  meta = {
    maintainers = pkgs.blackfire.meta.maintainers;
    doc = ./blackfire.xml;
    doc = ./blackfire.md;
  };

  options = {

@ -1,61 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-blackfire">
|
||||
<title>Blackfire profiler</title>
|
||||
<para>
|
||||
<emphasis>Source:</emphasis>
|
||||
<filename>modules/services/development/blackfire.nix</filename>
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Upstream documentation:</emphasis>
|
||||
<link xlink:href="https://blackfire.io/docs/introduction">https://blackfire.io/docs/introduction</link>
|
||||
</para>
|
||||
<para>
|
||||
<link xlink:href="https://blackfire.io">Blackfire</link> is a
|
||||
proprietary tool for profiling applications. There are several
|
||||
languages supported by the product but currently only PHP support is
|
||||
packaged in Nixpkgs. The back-end consists of a module that is
|
||||
loaded into the language runtime (called <emphasis>probe</emphasis>)
|
||||
and a service (<emphasis>agent</emphasis>) that the probe connects
|
||||
to and that sends the profiles to the server.
|
||||
</para>
|
||||
<para>
|
||||
To use it, you will need to enable the agent and the probe on your
|
||||
server. The exact method will depend on the way you use PHP but here
|
||||
is an example of NixOS configuration for PHP-FPM:
|
||||
</para>
|
||||
<programlisting>
|
||||
let
|
||||
php = pkgs.php.withExtensions ({ enabled, all }: enabled ++ (with all; [
|
||||
blackfire
|
||||
]));
|
||||
in {
|
||||
# Enable the probe extension for PHP-FPM.
|
||||
services.phpfpm = {
|
||||
phpPackage = php;
|
||||
};
|
||||
|
||||
# Enable and configure the agent.
|
||||
services.blackfire-agent = {
|
||||
enable = true;
|
||||
settings = {
|
||||
# You will need to get credentials at https://blackfire.io/my/settings/credentials
|
||||
# You can also use other options described in https://blackfire.io/docs/up-and-running/configuration/agent
|
||||
server-id = "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX";
|
||||
server-token = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX";
|
||||
};
|
||||
};
|
||||
|
||||
# Make the agent run on start-up.
|
||||
# (WantedBy= from the upstream unit not respected: https://github.com/NixOS/nixpkgs/issues/81138)
|
||||
# Alternately, you can start it manually with `systemctl start blackfire-agent`.
|
||||
systemd.services.blackfire-agent.wantedBy = [ "phpfpm-foo.service" ];
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
On your developer machine, you will also want to install
|
||||
<link xlink:href="https://blackfire.io/docs/up-and-running/installation#install-a-profiling-client">the
|
||||
client</link> (see <literal>blackfire</literal> package) or the
|
||||
browser extension to actually trigger the profiling.
|
||||
</para>
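<para>
On NixOS, one way to make the client available (a minimal sketch;
the package is simply the <literal>blackfire</literal> attribute
mentioned above) is to add it to your system packages:
</para>
<programlisting>
environment.systemPackages = [ pkgs.blackfire ];
</programlisting>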
|
||||
</chapter>
|
|
@ -99,5 +99,5 @@ in
    environment.variables.EDITOR = mkIf cfg.defaultEditor (mkOverride 900 "${editorScript}/bin/emacseditor");
  };

  meta.doc = ./emacs.xml;
  meta.doc = ./emacs.md;
}

@ -1,490 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-emacs">
|
||||
<title>Emacs</title>
|
||||
<para>
|
||||
<link xlink:href="https://www.gnu.org/software/emacs/">Emacs</link>
|
||||
is an extensible, customizable, self-documenting real-time display
|
||||
editor — and more. At its core is an interpreter for Emacs Lisp, a
|
||||
dialect of the Lisp programming language with extensions to support
|
||||
text editing.
|
||||
</para>
|
||||
<para>
|
||||
Emacs runs within a graphical desktop environment using the X Window
|
||||
System, but works equally well on a text terminal. Under macOS, a
|
||||
<quote>Mac port</quote> edition is available, which uses Apple’s
|
||||
native GUI frameworks.
|
||||
</para>
|
||||
<para>
|
||||
Nixpkgs provides a superior environment for running Emacs. It’s
|
||||
simple to create custom builds by overriding the default packages.
|
||||
Chaotic collections of Emacs Lisp code and extensions can be brought
|
||||
under control using declarative package management. NixOS even
|
||||
provides a <command>systemd</command> user service for automatically
|
||||
starting the Emacs daemon.
|
||||
</para>
|
||||
<section xml:id="module-services-emacs-installing">
|
||||
<title>Installing Emacs</title>
|
||||
<para>
|
||||
Emacs can be installed in the normal way for Nix (see
|
||||
<xref linkend="sec-package-management" />). In addition, a NixOS
|
||||
<emphasis>service</emphasis> can be enabled.
|
||||
</para>
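<para>
For example (a minimal sketch of the <quote>normal way</quote>,
using the plain <varname>emacs</varname> attribute described
below), a system-wide installation on NixOS can be as simple as:
</para>
<programlisting>
environment.systemPackages = [ pkgs.emacs ];
</programlisting>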
|
||||
<section xml:id="module-services-emacs-releases">
|
||||
<title>The Different Releases of Emacs</title>
|
||||
<para>
|
||||
Nixpkgs defines several basic Emacs packages. The following are
|
||||
attributes belonging to the <varname>pkgs</varname> set:
|
||||
</para>
|
||||
<variablelist spacing="compact">
|
||||
<varlistentry>
|
||||
<term>
|
||||
<varname>emacs</varname>
|
||||
</term>
|
||||
<listitem>
|
||||
<para>
|
||||
The latest stable version of Emacs using the
|
||||
<link xlink:href="http://www.gtk.org">GTK 2</link> widget
|
||||
toolkit.
|
||||
</para>
|
||||
</listitem>
|
||||
</varlistentry>
|
||||
<varlistentry>
|
||||
<term>
|
||||
<varname>emacs-nox</varname>
|
||||
</term>
|
||||
<listitem>
|
||||
<para>
|
||||
Emacs built without any dependency on X11 libraries.
|
||||
</para>
|
||||
</listitem>
|
||||
</varlistentry>
|
||||
<varlistentry>
|
||||
<term>
|
||||
<varname>emacsMacport</varname>
|
||||
</term>
|
||||
<listitem>
|
||||
<para>
|
||||
Emacs with the <quote>Mac port</quote> patches, providing
|
||||
a more native look and feel under macOS.
|
||||
</para>
|
||||
</listitem>
|
||||
</varlistentry>
|
||||
</variablelist>
|
||||
<para>
|
||||
If those aren’t suitable, then the following imitation Emacs
|
||||
editors are also available in Nixpkgs:
|
||||
<link xlink:href="https://www.gnu.org/software/zile/">Zile</link>,
|
||||
<link xlink:href="http://homepage.boetes.org/software/mg/">mg</link>,
|
||||
<link xlink:href="http://yi-editor.github.io/">Yi</link>,
|
||||
<link xlink:href="https://joe-editor.sourceforge.io/">jmacs</link>.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-emacs-adding-packages">
|
||||
<title>Adding Packages to Emacs</title>
|
||||
<para>
|
||||
Emacs includes an entire ecosystem of functionality beyond text
|
||||
editing, including a project planner, mail and news reader,
|
||||
debugger interface, calendar, and more.
|
||||
</para>
|
||||
<para>
Most extensions are obtained through the Emacs packaging system
(<filename>package.el</filename>) from the
<link xlink:href="https://elpa.gnu.org/">Emacs Lisp Package
Archive (ELPA)</link>,
<link xlink:href="https://melpa.org/">MELPA</link>,
<link xlink:href="https://stable.melpa.org/">MELPA
Stable</link>, and
<link xlink:href="http://orgmode.org/elpa.html">Org ELPA</link>.
Nixpkgs is regularly updated to mirror all these archives.
</para>
|
||||
<para>
|
||||
Under NixOS, you can continue to use
|
||||
<literal>package-list-packages</literal> and
|
||||
<literal>package-install</literal> to install packages. You can
|
||||
also declare the set of Emacs packages you need using the
|
||||
derivations from Nixpkgs. The rest of this section discusses
|
||||
declarative installation of Emacs packages through nixpkgs.
|
||||
</para>
|
||||
<para>
|
||||
The first step to declare the list of packages you want in your
|
||||
Emacs installation is to create a dedicated derivation. This can
|
||||
be done in a dedicated <filename>emacs.nix</filename> file such
|
||||
as:
|
||||
</para>
|
||||
<para>
|
||||
<anchor xml:id="ex-emacsNix" />
|
||||
</para>
|
||||
<programlisting language="nix">
|
||||
/*
|
||||
This is a nix expression to build Emacs and some Emacs packages I like
|
||||
from source on any distribution where Nix is installed. This will install
|
||||
all the dependencies from the nixpkgs repository and build the binary files
|
||||
without interfering with the host distribution.
|
||||
|
||||
To build the project, type the following from the current directory:
|
||||
|
||||
$ nix-build emacs.nix
|
||||
|
||||
To run the newly compiled executable:
|
||||
|
||||
$ ./result/bin/emacs
|
||||
*/
|
||||
|
||||
# The first non-comment line in this file indicates that
|
||||
# the whole file represents a function.
|
||||
{ pkgs ? import <nixpkgs> {} }:
|
||||
|
||||
let
|
||||
# The let expression below defines a myEmacs binding pointing to the
|
||||
# current stable version of Emacs. This binding is here to separate
|
||||
# the choice of the Emacs binary from the specification of the
|
||||
# required packages.
|
||||
myEmacs = pkgs.emacs;
|
||||
# This generates an emacsWithPackages function. It takes a single
|
||||
# argument: a function from a package set to a list of packages
|
||||
# (the packages that will be available in Emacs).
|
||||
emacsWithPackages = (pkgs.emacsPackagesFor myEmacs).emacsWithPackages;
|
||||
in
|
||||
# The rest of the file specifies the list of packages to install. In the
|
||||
# example, two packages (magit and zerodark-theme) are taken from
|
||||
# MELPA stable.
|
||||
emacsWithPackages (epkgs: (with epkgs.melpaStablePackages; [
|
||||
magit # ; Integrate git <C-x g>
|
||||
zerodark-theme # ; Nicolas' theme
|
||||
])
|
||||
# Two packages (undo-tree and zoom-frm) are taken from MELPA.
|
||||
++ (with epkgs.melpaPackages; [
|
||||
undo-tree # ; <C-x u> to show the undo tree
|
||||
zoom-frm # ; increase/decrease font size for all buffers %lt;C-x C-+>
|
||||
])
|
||||
# Three packages are taken from GNU ELPA.
|
||||
++ (with epkgs.elpaPackages; [
|
||||
auctex # ; LaTeX mode
|
||||
beacon # ; highlight my cursor when scrolling
|
||||
nameless # ; hide current package name everywhere in elisp code
|
||||
])
|
||||
# notmuch is taken from a nixpkgs derivation which contains an Emacs mode.
|
||||
++ [
|
||||
pkgs.notmuch # From main packages set
|
||||
])
|
||||
</programlisting>
|
||||
<para>
|
||||
The result of this configuration will be an
|
||||
<command>emacs</command> command which launches Emacs with all
|
||||
of your chosen packages in the <varname>load-path</varname>.
|
||||
</para>
|
||||
<para>
|
||||
You can check that it works by executing this in a terminal:
|
||||
</para>
|
||||
<programlisting>
|
||||
$ nix-build emacs.nix
|
||||
$ ./result/bin/emacs -q
|
||||
</programlisting>
|
||||
<para>
|
||||
and then typing <literal>M-x package-initialize</literal>. Check
|
||||
that you can use all the packages you want in this Emacs
|
||||
instance. For example, try switching to the zerodark theme
|
||||
through
|
||||
<literal>M-x load-theme <RET> zerodark <RET> y</literal>.
|
||||
</para>
|
||||
<tip>
|
||||
<para>
|
||||
A few popular extensions worth checking out are: auctex,
|
||||
company, edit-server, flycheck, helm, iedit, magit,
|
||||
multiple-cursors, projectile, and yasnippet.
|
||||
</para>
|
||||
</tip>
|
||||
<para>
|
||||
The list of available packages in the various ELPA repositories
|
||||
can be seen with the following commands:
|
||||
<anchor xml:id="module-services-emacs-querying-packages" />
|
||||
</para>
|
||||
<programlisting>
|
||||
nix-env -f "<nixpkgs>" -qaP -A emacs.pkgs.elpaPackages
|
||||
nix-env -f "<nixpkgs>" -qaP -A emacs.pkgs.melpaPackages
|
||||
nix-env -f "<nixpkgs>" -qaP -A emacs.pkgs.melpaStablePackages
|
||||
nix-env -f "<nixpkgs>" -qaP -A emacs.pkgs.orgPackages
|
||||
</programlisting>
|
||||
<para>
|
||||
If you are on NixOS, you can install this particular Emacs for
|
||||
all users by adding it to the list of system packages (see
|
||||
<xref linkend="sec-declarative-package-mgmt" />). Simply modify
|
||||
your file <filename>configuration.nix</filename> to make it
|
||||
contain:
|
||||
<anchor xml:id="module-services-emacs-configuration-nix" />
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
environment.systemPackages = [
|
||||
# [...]
|
||||
(import /path/to/emacs.nix { inherit pkgs; })
|
||||
];
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
In this case, the next <command>nixos-rebuild switch</command>
|
||||
will take care of adding your <command>emacs</command> to the
|
||||
<varname>PATH</varname> environment variable (see
|
||||
<xref linkend="sec-changing-config" />).
|
||||
</para>
|
||||
<para>
|
||||
If you are not on NixOS or want to install this particular Emacs
|
||||
only for yourself, you can do so by adding it to your
|
||||
<filename>~/.config/nixpkgs/config.nix</filename> (see
|
||||
<link xlink:href="https://nixos.org/nixpkgs/manual/#sec-modify-via-packageOverrides">Nixpkgs
|
||||
manual</link>):
|
||||
<anchor xml:id="module-services-emacs-config-nix" />
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
packageOverrides = super: let self = super.pkgs; in {
|
||||
myemacs = import /path/to/emacs.nix { pkgs = self; };
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
In this case, the next
|
||||
<literal>nix-env -f '<nixpkgs>' -iA myemacs</literal> will
|
||||
take care of adding your emacs to the <varname>PATH</varname>
|
||||
environment variable.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-emacs-advanced">
|
||||
<title>Advanced Emacs Configuration</title>
|
||||
<para>
|
||||
If you want, you can tweak the Emacs package itself from your
|
||||
<filename>emacs.nix</filename>. For example, if you want to have
|
||||
a GTK 3-based Emacs instead of the default GTK 2-based binary
|
||||
and remove the automatically generated
|
||||
<filename>emacs.desktop</filename> (useful if you only use
|
||||
<command>emacsclient</command>), you can change your file
|
||||
<filename>emacs.nix</filename> in this way:
|
||||
</para>
|
||||
<para>
|
||||
<anchor xml:id="ex-emacsGtk3Nix" />
|
||||
</para>
|
||||
<programlisting>
|
||||
{ pkgs ? import <nixpkgs> {} }:
|
||||
let
|
||||
myEmacs = (pkgs.emacs.override {
|
||||
# Use gtk3 instead of the default gtk2
|
||||
withGTK3 = true;
|
||||
withGTK2 = false;
|
||||
}).overrideAttrs (attrs: {
|
||||
# I don't want emacs.desktop file because I only use
|
||||
# emacsclient.
|
||||
postInstall = (attrs.postInstall or "") + ''
|
||||
rm $out/share/applications/emacs.desktop
|
||||
'';
|
||||
});
|
||||
in [...]
|
||||
</programlisting>
|
||||
<para>
After building this file as shown in
<link linkend="ex-emacsNix">the example above</link>, you will
get a GTK 3-based Emacs binary pre-loaded with your favorite
packages.
</para>
|
||||
</section>
|
||||
</section>
|
||||
<section xml:id="module-services-emacs-running">
|
||||
<title>Running Emacs as a Service</title>
|
||||
<para>
|
||||
NixOS provides an optional <command>systemd</command> service
|
||||
which launches
|
||||
<link xlink:href="https://www.gnu.org/software/emacs/manual/html_node/emacs/Emacs-Server.html">Emacs
|
||||
daemon</link> with the user’s login session.
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Source:</emphasis>
|
||||
<filename>modules/services/editors/emacs.nix</filename>
|
||||
</para>
|
||||
<section xml:id="module-services-emacs-enabling">
|
||||
<title>Enabling the Service</title>
|
||||
<para>
|
||||
To install and enable the <command>systemd</command> user
|
||||
service for Emacs daemon, add the following to your
|
||||
<filename>configuration.nix</filename>:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.emacs.enable = true;
|
||||
services.emacs.package = import /home/cassou/.emacs.d { pkgs = pkgs; };
|
||||
</programlisting>
|
||||
<para>
|
||||
The <varname>services.emacs.package</varname> option allows a
|
||||
custom derivation to be used, for example, one created by
|
||||
<literal>emacsWithPackages</literal>.
|
||||
</para>
|
||||
<para>
|
||||
Ensure that the Emacs server is enabled for your user’s Emacs
|
||||
configuration, either by customizing the
|
||||
<varname>server-mode</varname> variable, or by adding
|
||||
<literal>(server-start)</literal> to
|
||||
<filename>~/.emacs.d/init.el</filename>.
|
||||
</para>
|
||||
<para>
|
||||
To start the daemon, execute the following:
|
||||
</para>
|
||||
<programlisting>
|
||||
$ nixos-rebuild switch # to activate the new configuration.nix
|
||||
$ systemctl --user daemon-reload # to force systemd reload
|
||||
$ systemctl --user start emacs.service # to start the Emacs daemon
|
||||
</programlisting>
|
||||
<para>
|
||||
The server should now be ready to serve Emacs clients.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-emacs-starting-client">
|
||||
<title>Starting the client</title>
|
||||
<para>
|
||||
Ensure that the emacs server is enabled, either by customizing
|
||||
the <varname>server-mode</varname> variable, or by adding
|
||||
<literal>(server-start)</literal> to
|
||||
<filename>~/.emacs</filename>.
|
||||
</para>
|
||||
<para>
|
||||
To connect to the emacs daemon, run one of the following:
|
||||
</para>
|
||||
<programlisting>
|
||||
emacsclient FILENAME
|
||||
emacsclient --create-frame # opens a new frame (window)
|
||||
emacsclient --create-frame --tty # opens a new frame on the current terminal
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-emacs-editor-variable">
|
||||
<title>Configuring the <varname>EDITOR</varname> variable</title>
|
||||
<para>
|
||||
If <xref linkend="opt-services.emacs.defaultEditor" /> is
|
||||
<literal>true</literal>, the <varname>EDITOR</varname> variable
|
||||
will be set to a wrapper script which launches
|
||||
<command>emacsclient</command>.
|
||||
</para>
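<para>
A minimal sketch of enabling this behaviour alongside the service
(both options are described in this chapter) would be:
</para>
<programlisting>
services.emacs.enable = true;
services.emacs.defaultEditor = true;
</programlisting>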
|
||||
<para>
|
||||
Any setting of <varname>EDITOR</varname> in the shell config
|
||||
files will override
|
||||
<varname>services.emacs.defaultEditor</varname>. To make sure
|
||||
<varname>EDITOR</varname> refers to the Emacs wrapper script,
|
||||
remove any existing <varname>EDITOR</varname> assignment from
|
||||
<filename>.profile</filename>, <filename>.bashrc</filename>,
|
||||
<filename>.zshenv</filename> or any other shell config file.
|
||||
</para>
|
||||
<para>
|
||||
If you have formed certain bad habits when editing files, these
|
||||
can be corrected with a shell alias to the wrapper script:
|
||||
</para>
|
||||
<programlisting>
|
||||
alias vi=$EDITOR
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-emacs-per-user">
|
||||
<title>Per-User Enabling of the Service</title>
|
||||
<para>
|
||||
In general, <command>systemd</command> user services are
|
||||
globally enabled by symlinks in
|
||||
<filename>/etc/systemd/user</filename>. In the case where Emacs
|
||||
daemon is not wanted for all users, it is possible to install
|
||||
the service but not globally enable it:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.emacs.enable = false;
|
||||
services.emacs.install = true;
|
||||
</programlisting>
|
||||
<para>
|
||||
To enable the <command>systemd</command> user service for just
|
||||
the currently logged in user, run:
|
||||
</para>
|
||||
<programlisting>
|
||||
systemctl --user enable emacs
|
||||
</programlisting>
|
||||
<para>
|
||||
This will add the symlink
|
||||
<filename>~/.config/systemd/user/emacs.service</filename>.
|
||||
</para>
|
||||
</section>
|
||||
</section>
|
||||
<section xml:id="module-services-emacs-configuring">
|
||||
<title>Configuring Emacs</title>
|
||||
<para>
|
||||
The Emacs init file should be changed to load the extension
|
||||
packages at startup:
|
||||
<anchor xml:id="module-services-emacs-package-initialisation" />
|
||||
</para>
|
||||
<programlisting>
|
||||
(require 'package)
|
||||
|
||||
;; optional. makes unpure packages archives unavailable
|
||||
(setq package-archives nil)
|
||||
|
||||
(setq package-enable-at-startup nil)
|
||||
(package-initialize)
|
||||
</programlisting>
|
||||
<para>
|
||||
After the declarative emacs package configuration has been tested,
|
||||
previously downloaded packages can be cleaned up by removing
|
||||
<filename>~/.emacs.d/elpa</filename> (do make a backup first, in
|
||||
case you forgot a package).
|
||||
</para>
|
||||
<section xml:id="module-services-emacs-major-mode">
|
||||
<title>A Major Mode for Nix Expressions</title>
|
||||
<para>
|
||||
Of interest may be <varname>melpaPackages.nix-mode</varname>,
|
||||
which provides syntax highlighting for the Nix language. This is
|
||||
particularly convenient if you regularly edit Nix files.
|
||||
</para>
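<para>
A sketch of how this could be added to the
<filename>emacs.nix</filename> example from earlier in this
chapter (assuming <varname>nix-mode</varname> is taken from the
MELPA package set, as its name above suggests), appended to the
package list passed to <literal>emacsWithPackages</literal>:
</para>
<programlisting>
++ (with epkgs.melpaPackages; [
  nix-mode # ; major mode with syntax highlighting for Nix expressions
])
</programlisting>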
|
||||
</section>
|
||||
<section xml:id="module-services-emacs-man-pages">
|
||||
<title>Accessing man pages</title>
|
||||
<para>
|
||||
You can use <literal>woman</literal> to get completion of all
|
||||
available man pages. For example, type
|
||||
<literal>M-x woman <RET> nixos-rebuild <RET>.</literal>
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="sec-emacs-docbook-xml">
|
||||
<title>Editing DocBook 5 XML Documents</title>
|
||||
<para>
|
||||
Emacs includes
|
||||
<link xlink:href="https://www.gnu.org/software/emacs/manual/html_node/nxml-mode/Introduction.html">nXML</link>,
|
||||
a major-mode for validating and editing XML documents. When
|
||||
editing DocBook 5.0 documents, such as
|
||||
<link linkend="book-nixos-manual">this one</link>, nXML needs to
|
||||
be configured with the relevant schema, which is not included.
|
||||
</para>
|
||||
<para>
|
||||
To install the DocBook 5.0 schemas, either add
|
||||
<varname>pkgs.docbook5</varname> to
|
||||
<xref linkend="opt-environment.systemPackages" />
|
||||
(<link linkend="sec-declarative-package-mgmt">NixOS</link>), or
|
||||
run <literal>nix-env -f '<nixpkgs>' -iA docbook5</literal>
|
||||
(<link linkend="sec-ad-hoc-packages">Nix</link>).
|
||||
</para>
|
||||
<para>
|
||||
Then customize the variable
|
||||
<varname>rng-schema-locating-files</varname> to include
|
||||
<filename>~/.emacs.d/schemas.xml</filename> and put the
|
||||
following text into that file:
|
||||
<anchor xml:id="ex-emacs-docbook-xml" />
|
||||
</para>
|
||||
<programlisting language="xml">
|
||||
<?xml version="1.0"?>
|
||||
<!--
|
||||
To let emacs find this file, evaluate:
|
||||
(add-to-list 'rng-schema-locating-files "~/.emacs.d/schemas.xml")
|
||||
-->
|
||||
<locatingRules xmlns="http://thaiopensource.com/ns/locating-rules/1.0">
|
||||
<!--
|
||||
Use this variation if pkgs.docbook5 is added to environment.systemPackages
|
||||
-->
|
||||
<namespace ns="http://docbook.org/ns/docbook"
|
||||
uri="/run/current-system/sw/share/xml/docbook-5.0/rng/docbookxi.rnc"/>
|
||||
<!--
|
||||
Use this variation if installing schema with "nix-env -iA pkgs.docbook5".
|
||||
<namespace ns="http://docbook.org/ns/docbook"
|
||||
uri="../.nix-profile/share/xml/docbook-5.0/rng/docbookxi.rnc"/>
|
||||
-->
|
||||
</locatingRules>
|
||||
</programlisting>
|
||||
</section>
|
||||
</section>
|
||||
</chapter>
|
|
@ -71,6 +71,29 @@ in
|
|||
};
|
||||
description = lib.mdDoc "Set configuration for system-wide bluetooth (/etc/bluetooth/main.conf).";
|
||||
};
|
||||
|
||||
input = mkOption {
|
||||
type = cfgFmt.type;
|
||||
default = { };
|
||||
example = {
|
||||
General = {
|
||||
IdleTimeout = 30;
|
||||
ClassicBondedOnly = true;
|
||||
};
|
||||
};
|
||||
description = lib.mdDoc "Set configuration for the input service (/etc/bluetooth/input.conf).";
|
||||
};
|
||||
|
||||
network = mkOption {
|
||||
type = cfgFmt.type;
|
||||
default = { };
|
||||
example = {
|
||||
General = {
|
||||
DisableSecurity = true;
|
||||
};
|
||||
};
|
||||
description = lib.mdDoc "Set configuration for the network service (/etc/bluetooth/network.conf).";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
|
@ -80,6 +103,10 @@ in
|
|||
environment.systemPackages = [ package ]
|
||||
++ optional cfg.hsphfpd.enable pkgs.hsphfpd;
|
||||
|
||||
environment.etc."bluetooth/input.conf".source =
|
||||
cfgFmt.generate "input.conf" cfg.input;
|
||||
environment.etc."bluetooth/network.conf".source =
|
||||
cfgFmt.generate "network.conf" cfg.network;
|
||||
environment.etc."bluetooth/main.conf".source =
|
||||
cfgFmt.generate "main.conf" (recursiveUpdate defaults cfg.settings);
|
||||
services.udev.packages = [ package ];
|
||||
|
|
|
@ -8,7 +8,7 @@ in {
  ### docs

  meta = {
    doc = ./trezord.xml;
    doc = ./trezord.md;
  };

  ### interface

@ -1,29 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="trezor">
|
||||
<title>Trezor</title>
|
||||
<para>
|
||||
Trezor is an open-source cryptocurrency hardware wallet and security
|
||||
token allowing secure storage of private keys.
|
||||
</para>
|
||||
<para>
It offers advanced features such as U2F two-factor authorization, SSH
login through the
<link xlink:href="https://wiki.trezor.io/Apps:SSH_agent">Trezor SSH
agent</link>,
<link xlink:href="https://wiki.trezor.io/GPG">GPG</link> and a
<link xlink:href="https://wiki.trezor.io/Trezor_Password_Manager">password
manager</link>. For more information, guides and documentation, see
<link xlink:href="https://wiki.trezor.io">https://wiki.trezor.io</link>.
</para>
|
||||
<para>
|
||||
To enable Trezor support, add the following to your
|
||||
<filename>configuration.nix</filename>:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.trezord.enable = true;
|
||||
</programlisting>
|
||||
<para>
|
||||
This will add all necessary udev rules and start Trezor Bridge.
|
||||
</para>
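<para>
After rebuilding, you can check that the bridge is running; the
command below assumes the systemd unit is named after the
<literal>trezord</literal> service, which is an assumption made
here rather than something stated above:
</para>
<programlisting>
$ systemctl status trezord.service
</programlisting>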
|
||||
</chapter>
|
|
@ -642,7 +642,7 @@ in {

  meta = {
    maintainers = with lib.maintainers; [ lheckemann qyliss ma27 ];
    doc = ./mailman.xml;
    doc = ./mailman.md;
  };

}

@ -1,112 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-mailman">
|
||||
<title>Mailman</title>
|
||||
<para>
|
||||
<link xlink:href="https://www.list.org">Mailman</link> is free
|
||||
software for managing electronic mail discussion and e-newsletter
|
||||
lists. Mailman and its web interface can be configured using the
|
||||
corresponding NixOS module. Note that this service is best used with
|
||||
an existing, securely configured Postfix setup, as it does not
|
||||
automatically configure this.
|
||||
</para>
|
||||
<section xml:id="module-services-mailman-basic-usage">
|
||||
<title>Basic usage with Postfix</title>
|
||||
<para>
|
||||
For a basic configuration with Postfix as the MTA, the following
|
||||
settings are suggested:
|
||||
</para>
|
||||
<programlisting>
|
||||
{ config, ... }: {
|
||||
services.postfix = {
|
||||
enable = true;
|
||||
relayDomains = ["hash:/var/lib/mailman/data/postfix_domains"];
|
||||
sslCert = config.security.acme.certs."lists.example.org".directory + "/full.pem";
|
||||
sslKey = config.security.acme.certs."lists.example.org".directory + "/key.pem";
|
||||
config = {
|
||||
transport_maps = ["hash:/var/lib/mailman/data/postfix_lmtp"];
|
||||
local_recipient_maps = ["hash:/var/lib/mailman/data/postfix_lmtp"];
|
||||
};
|
||||
};
|
||||
services.mailman = {
|
||||
enable = true;
|
||||
serve.enable = true;
|
||||
hyperkitty.enable = true;
|
||||
webHosts = ["lists.example.org"];
|
||||
siteOwner = "mailman@example.org";
|
||||
};
|
||||
services.nginx.virtualHosts."lists.example.org".enableACME = true;
|
||||
networking.firewall.allowedTCPPorts = [ 25 80 443 ];
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
DNS records will also be required:
|
||||
</para>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>AAAA</literal> and <literal>A</literal> records
|
||||
pointing to the host in question, in order for browsers to be
|
||||
able to discover the address of the web server;
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
An <literal>MX</literal> record pointing to a domain name at
|
||||
which the host is reachable, in order for other mail servers
|
||||
to be able to deliver emails to the mailing lists it hosts.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>
|
||||
After this has been done and appropriate DNS records have been set
|
||||
up, the Postorius mailing list manager and the Hyperkitty archive
|
||||
browser will be available at https://lists.example.org/. Note that
|
||||
this setup is not sufficient to deliver emails to most email
|
||||
providers nor to avoid spam – a number of additional measures for
|
||||
authenticating incoming and outgoing mails, such as SPF, DMARC and
|
||||
DKIM are necessary, but outside the scope of the Mailman module.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-mailman-other-mtas">
|
||||
<title>Using with other MTAs</title>
|
||||
<para>
Mailman also supports other MTAs, though with a little bit more
configuration. For example, to use Mailman with Exim, you can use
the following settings:
</para>
|
||||
<programlisting>
|
||||
{ config, ... }: {
|
||||
services = {
|
||||
mailman = {
|
||||
enable = true;
|
||||
siteOwner = "mailman@example.org";
|
||||
enablePostfix = false;
|
||||
settings.mta = {
|
||||
incoming = "mailman.mta.exim4.LMTP";
|
||||
outgoing = "mailman.mta.deliver.deliver";
|
||||
lmtp_host = "localhost";
|
||||
lmtp_port = "8024";
|
||||
smtp_host = "localhost";
|
||||
smtp_port = "25";
|
||||
configuration = "python:mailman.config.exim4";
|
||||
};
|
||||
};
|
||||
exim = {
|
||||
enable = true;
|
||||
# You can configure Exim in a separate file to reduce configuration.nix clutter
|
||||
config = builtins.readFile ./exim.conf;
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
<para>
The Exim configuration needs some special additions to work with
Mailman. Currently NixOS can’t manage the Exim configuration with
such granularity. Please refer to the
<link xlink:href="https://mailman.readthedocs.io/en/latest/src/mailman/docs/mta.html">Mailman
documentation</link> for more information on configuring Mailman to
work with Exim.
</para>
|
||||
</section>
|
||||
</chapter>
|
|
@ -236,7 +236,7 @@ in
  };

  meta = {
    doc = ./mjolnir.xml;
    doc = ./mjolnir.md;
    maintainers = with maintainers; [ jojosch ];
  };
}

@ -1,148 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-mjolnir">
|
||||
<title>Mjolnir (Matrix Moderation Tool)</title>
|
||||
<para>
|
||||
This chapter will show you how to set up your own, self-hosted
|
||||
<link xlink:href="https://github.com/matrix-org/mjolnir">Mjolnir</link>
|
||||
instance.
|
||||
</para>
|
||||
<para>
|
||||
As an all-in-one moderation tool, it can protect your server from
|
||||
malicious invites, spam messages, and whatever else you don’t want.
|
||||
In addition to server-level protection, Mjolnir is great for
|
||||
communities wanting to protect their rooms without having to use
|
||||
their personal accounts for moderation.
|
||||
</para>
|
||||
<para>
|
||||
The bot by default includes support for bans, redactions, anti-spam,
|
||||
server ACLs, room directory changes, room alias transfers, account
|
||||
deactivation, room shutdown, and more.
|
||||
</para>
|
||||
<para>
|
||||
See the
|
||||
<link xlink:href="https://github.com/matrix-org/mjolnir#readme">README</link>
|
||||
page and the
|
||||
<link xlink:href="https://github.com/matrix-org/mjolnir/blob/main/docs/moderators.md">Moderator’s
|
||||
guide</link> for additional instructions on how to setup and use
|
||||
Mjolnir.
|
||||
</para>
|
||||
<para>
|
||||
For <link linkend="opt-services.mjolnir.settings">additional
|
||||
settings</link> see
|
||||
<link xlink:href="https://github.com/matrix-org/mjolnir/blob/main/config/default.yaml">the
|
||||
default configuration</link>.
|
||||
</para>
|
||||
<section xml:id="module-services-mjolnir-setup">
|
||||
<title>Mjolnir Setup</title>
|
||||
<para>
|
||||
First create a new Room which will be used as a management room
|
||||
for Mjolnir. In this room, Mjolnir will log possible errors and
|
||||
debugging information. You’ll need to set this Room-ID in
|
||||
<link linkend="opt-services.mjolnir.managementRoom">services.mjolnir.managementRoom</link>.
|
||||
</para>
|
||||
<para>
|
||||
Next, create a new user for Mjolnir on your homeserver, if not
|
||||
present already.
|
||||
</para>
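<para>
If your homeserver is the NixOS-managed Synapse described in the
Matrix chapter, one way to create such a user (a sketch assuming
registration via a shared secret, exactly as shown there) is:
</para>
<programlisting>
$ nix-shell -p matrix-synapse
$ register_new_matrix_user -k your-registration-shared-secret http://localhost:8008
</programlisting>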
|
||||
<para>
|
||||
The Mjolnir Matrix user expects to be free of any rate limiting.
|
||||
See
|
||||
<link xlink:href="https://github.com/matrix-org/synapse/issues/6286">Synapse
|
||||
#6286</link> for an example on how to achieve this.
|
||||
</para>
|
||||
<para>
|
||||
If you want Mjolnir to be able to deactivate users, move room
|
||||
aliases, shutdown rooms, etc. you’ll need to make the Mjolnir user
|
||||
a Matrix server admin.
|
||||
</para>
|
||||
<para>
|
||||
Now invite the Mjolnir user to the management room.
|
||||
</para>
|
||||
<para>
|
||||
It is recommended to use
|
||||
<link xlink:href="https://github.com/matrix-org/pantalaimon">Pantalaimon</link>,
|
||||
so your management room can be encrypted. This also applies if you
|
||||
are looking to moderate an encrypted room.
|
||||
</para>
|
||||
<para>
|
||||
To enable the Pantalaimon E2E Proxy for mjolnir, enable
|
||||
<link linkend="opt-services.mjolnir.pantalaimon.enable">services.mjolnir.pantalaimon</link>.
|
||||
This will autoconfigure a new Pantalaimon instance, which will
|
||||
connect to the homeserver set in
|
||||
<link linkend="opt-services.mjolnir.homeserverUrl">services.mjolnir.homeserverUrl</link>
|
||||
and Mjolnir itself will be configured to connect to the new
|
||||
Pantalaimon instance.
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
services.mjolnir = {
|
||||
enable = true;
|
||||
homeserverUrl = "https://matrix.domain.tld";
|
||||
pantalaimon = {
|
||||
enable = true;
|
||||
username = "mjolnir";
|
||||
passwordFile = "/run/secrets/mjolnir-password";
|
||||
};
|
||||
protectedRooms = [
|
||||
"https://matrix.to/#/!xxx:domain.tld"
|
||||
];
|
||||
managementRoom = "!yyy:domain.tld";
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
<section xml:id="module-services-mjolnir-setup-ems">
|
||||
<title>Element Matrix Services (EMS)</title>
|
||||
<para>
|
||||
If you are using a managed
|
||||
<link xlink:href="https://ems.element.io/"><quote>Element Matrix
|
||||
Services (EMS)</quote></link> server, you will need to consent
|
||||
to the terms and conditions. Upon startup, an error log entry
|
||||
with a URL to the consent page will be generated.
|
||||
</para>
|
||||
</section>
|
||||
</section>
|
||||
<section xml:id="module-services-mjolnir-matrix-synapse-antispam">
|
||||
<title>Synapse Antispam Module</title>
|
||||
<para>
|
||||
A Synapse module is also available to apply the same rulesets the
|
||||
bot uses across an entire homeserver.
|
||||
</para>
|
||||
<para>
|
||||
To use the Antispam Module, add
|
||||
<literal>matrix-synapse-plugins.matrix-synapse-mjolnir-antispam</literal>
|
||||
to the Synapse plugin list and enable the
|
||||
<literal>mjolnir.Module</literal> module.
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
services.matrix-synapse = {
|
||||
plugins = with pkgs; [
|
||||
matrix-synapse-plugins.matrix-synapse-mjolnir-antispam
|
||||
];
|
||||
extraConfig = ''
|
||||
modules:
|
||||
- module: mjolnir.Module
|
||||
config:
|
||||
# Prevent servers/users in the ban lists from inviting users on this
|
||||
# server to rooms. Default true.
|
||||
block_invites: true
|
||||
# Flag messages sent by servers/users in the ban lists as spam. Currently
|
||||
# this means that spammy messages will appear as empty to users. Default
|
||||
# false.
|
||||
block_messages: false
|
||||
# Remove users from the user directory search by filtering matrix IDs and
|
||||
# display names by the entries in the user ban list. Default false.
|
||||
block_usernames: false
|
||||
# The room IDs of the ban lists to honour. Unlike other parts of Mjolnir,
|
||||
# this list cannot be room aliases or permalinks. This server is expected
|
||||
# to already be joined to the room - Mjolnir will not automatically join
|
||||
# these rooms.
|
||||
ban_lists:
|
||||
- "!roomid:example.org"
|
||||
'';
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
</section>
|
||||
</chapter>
|
|
@ -801,7 +801,7 @@ in {

  meta = {
    buildDocsInSandbox = false;
    doc = ./synapse.xml;
    doc = ./synapse.md;
    maintainers = teams.matrix.members;
  };

@ -1,263 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-matrix">
|
||||
<title>Matrix</title>
|
||||
<para>
|
||||
<link xlink:href="https://matrix.org/">Matrix</link> is an open
|
||||
standard for interoperable, decentralised, real-time communication
|
||||
over IP. It can be used to power Instant Messaging, VoIP/WebRTC
|
||||
signalling, Internet of Things communication - or anywhere you need
|
||||
a standard HTTP API for publishing and subscribing to data whilst
|
||||
tracking the conversation history.
|
||||
</para>
|
||||
<para>
|
||||
This chapter will show you how to set up your own, self-hosted
|
||||
Matrix homeserver using the Synapse reference homeserver, and how to
|
||||
serve your own copy of the Element web client. See the
|
||||
<link xlink:href="https://matrix.org/docs/projects/try-matrix-now.html">Try
|
||||
Matrix Now!</link> overview page for links to Element Apps for
|
||||
Android and iOS, desktop clients, as well as bridges to other
|
||||
networks and other projects around Matrix.
|
||||
</para>
|
||||
<section xml:id="module-services-matrix-synapse">
|
||||
<title>Synapse Homeserver</title>
|
||||
<para>
|
||||
<link xlink:href="https://github.com/matrix-org/synapse">Synapse</link>
|
||||
is the reference homeserver implementation of Matrix from the core
|
||||
development team at matrix.org. The following configuration
|
||||
example will set up a synapse server for the
|
||||
<literal>example.org</literal> domain, served from the host
|
||||
<literal>myhostname.example.org</literal>. For more information,
|
||||
please refer to the
|
||||
<link xlink:href="https://matrix-org.github.io/synapse/latest/setup/installation.html">installation
|
||||
instructions of Synapse</link> .
|
||||
</para>
|
||||
<programlisting>
|
||||
{ pkgs, lib, config, ... }:
|
||||
let
|
||||
fqdn = "${config.networking.hostName}.${config.networking.domain}";
|
||||
clientConfig = {
|
||||
"m.homeserver".base_url = "https://${fqdn}";
|
||||
"m.identity_server" = {};
|
||||
};
|
||||
serverConfig."m.server" = "${config.services.matrix-synapse.settings.server_name}:443";
|
||||
mkWellKnown = data: ''
|
||||
add_header Content-Type application/json;
|
||||
add_header Access-Control-Allow-Origin *;
|
||||
return 200 '${builtins.toJSON data}';
|
||||
'';
|
||||
in {
|
||||
networking.hostName = "myhostname";
|
||||
networking.domain = "example.org";
|
||||
networking.firewall.allowedTCPPorts = [ 80 443 ];
|
||||
|
||||
services.postgresql.enable = true;
|
||||
services.postgresql.initialScript = pkgs.writeText "synapse-init.sql" ''
|
||||
CREATE ROLE "matrix-synapse" WITH LOGIN PASSWORD 'synapse';
|
||||
CREATE DATABASE "matrix-synapse" WITH OWNER "matrix-synapse"
|
||||
TEMPLATE template0
|
||||
LC_COLLATE = "C"
|
||||
LC_CTYPE = "C";
|
||||
'';
|
||||
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
recommendedTlsSettings = true;
|
||||
recommendedOptimisation = true;
|
||||
recommendedGzipSettings = true;
|
||||
recommendedProxySettings = true;
|
||||
virtualHosts = {
|
||||
# If the A and AAAA DNS records on example.org do not point on the same host as the
|
||||
# records for myhostname.example.org, you can easily move the /.well-known
|
||||
# virtualHost section of the code to the host that is serving example.org, while
|
||||
# the rest stays on myhostname.example.org with no other changes required.
|
||||
# This pattern also allows to seamlessly move the homeserver from
|
||||
# myhostname.example.org to myotherhost.example.org by only changing the
|
||||
# /.well-known redirection target.
|
||||
"${config.networking.domain}" = {
|
||||
enableACME = true;
|
||||
forceSSL = true;
|
||||
# This section is not needed if the server_name of matrix-synapse is equal to
|
||||
# the domain (i.e. example.org from @foo:example.org) and the federation port
|
||||
# is 8448.
|
||||
# Further reference can be found in the docs about delegation under
|
||||
# https://matrix-org.github.io/synapse/latest/delegate.html
|
||||
locations."= /.well-known/matrix/server".extraConfig = mkWellKnown serverConfig;
|
||||
# This is usually needed for homeserver discovery (from e.g. other Matrix clients).
|
||||
# Further reference can be found in the upstream docs at
|
||||
# https://spec.matrix.org/latest/client-server-api/#getwell-knownmatrixclient
|
||||
locations."= /.well-known/matrix/client".extraConfig = mkWellKnown clientConfig;
|
||||
};
|
||||
"${fqdn}" = {
|
||||
enableACME = true;
|
||||
forceSSL = true;
|
||||
# It's also possible to do a redirect here or something else, this vhost is not
|
||||
# needed for Matrix. It's recommended though to *not put* element
|
||||
# here, see also the section about Element.
|
||||
locations."/".extraConfig = ''
|
||||
return 404;
|
||||
'';
|
||||
# Forward all Matrix API calls to the synapse Matrix homeserver. A trailing slash
|
||||
# *must not* be used here.
|
||||
locations."/_matrix".proxyPass = "http://[::1]:8008";
|
||||
# Forward requests for e.g. SSO and password-resets.
|
||||
locations."/_synapse/client".proxyPass = "http://[::1]:8008";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
services.matrix-synapse = {
|
||||
enable = true;
|
||||
settings.server_name = config.networking.domain;
|
||||
settings.listeners = [
|
||||
{ port = 8008;
|
||||
bind_addresses = [ "::1" ];
|
||||
type = "http";
|
||||
tls = false;
|
||||
x_forwarded = true;
|
||||
resources = [ {
|
||||
names = [ "client" "federation" ];
|
||||
compress = true;
|
||||
} ];
|
||||
}
|
||||
];
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-matrix-register-users">
|
||||
<title>Registering Matrix users</title>
|
||||
<para>
If you want to run a server with public registration by anybody,
you can then enable
<literal>services.matrix-synapse.settings.enable_registration = true;</literal>.
Otherwise, you can generate a registration secret with
<command>pwgen -s 64 1</command> and set it with
<xref linkend="opt-services.matrix-synapse.settings.registration_shared_secret" />.
To create a new user or admin, run the following after you have
set the secret and have rebuilt NixOS:
</para>
|
||||
<programlisting>
|
||||
$ nix-shell -p matrix-synapse
|
||||
$ register_new_matrix_user -k your-registration-shared-secret http://localhost:8008
|
||||
New user localpart: your-username
|
||||
Password:
|
||||
Confirm password:
|
||||
Make admin [no]:
|
||||
Success!
|
||||
</programlisting>
|
||||
<para>
|
||||
In the example, this would create a user with the Matrix
|
||||
Identifier <literal>@your-username:example.org</literal>.
|
||||
</para>
|
||||
<warning>
|
||||
<para>
|
||||
When using
|
||||
<xref linkend="opt-services.matrix-synapse.settings.registration_shared_secret" />,
|
||||
the secret will end up in the world-readable store. Instead it’s
|
||||
recommended to deploy the secret in an additional file like
|
||||
this:
|
||||
</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
Create a file with the following contents:
|
||||
</para>
|
||||
<programlisting>
|
||||
registration_shared_secret: your-very-secret-secret
|
||||
</programlisting>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Deploy the file with a secret-manager such as
|
||||
<link xlink:href="https://nixops.readthedocs.io/en/latest/overview.html#managing-keys"><option>deployment.keys</option></link>
|
||||
from
|
||||
<citerefentry><refentrytitle>nixops</refentrytitle><manvolnum>1</manvolnum></citerefentry>
|
||||
or
|
||||
<link xlink:href="https://github.com/Mic92/sops-nix/">sops-nix</link>
|
||||
to e.g.
|
||||
<filename>/run/secrets/matrix-shared-secret</filename> and
|
||||
ensure that it’s readable by
|
||||
<literal>matrix-synapse</literal>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Include the file like this in your configuration:
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
services.matrix-synapse.extraConfigFiles = [
|
||||
"/run/secrets/matrix-shared-secret"
|
||||
];
|
||||
}
|
||||
</programlisting>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</warning>
|
||||
<note>
|
||||
<para>
|
||||
It’s also possible to use alternative authentication mechanisms
|
||||
such as
|
||||
<link xlink:href="https://github.com/matrix-org/matrix-synapse-ldap3">LDAP
|
||||
(via <literal>matrix-synapse-ldap3</literal>)</link> or
|
||||
<link xlink:href="https://matrix-org.github.io/synapse/latest/openid.html">OpenID</link>.
|
||||
</para>
|
||||
</note>
|
||||
</section>
|
||||
<section xml:id="module-services-matrix-element-web">
|
||||
<title>Element (formerly known as Riot) Web Client</title>
|
||||
<para>
|
||||
<link xlink:href="https://github.com/vector-im/riot-web/">Element
|
||||
Web</link> is the reference web client for Matrix and developed by
|
||||
the core team at matrix.org. Element was formerly known as
|
||||
Riot.im, see the
|
||||
<link xlink:href="https://element.io/blog/welcome-to-element/">Element
|
||||
introductory blog post</link> for more information. The following
|
||||
snippet can optionally be added to the configuration above to complete the
|
||||
synapse installation with a web client served at
|
||||
<literal>https://element.myhostname.example.org</literal> and
|
||||
<literal>https://element.example.org</literal>. Alternatively, you
|
||||
can use the hosted copy at
|
||||
<link xlink:href="https://app.element.io/">https://app.element.io/</link>,
|
||||
or use other web clients or native client applications. Due to the
|
||||
<literal>/.well-known</literal> URLs set up above, many
|
||||
clients should fill in the required connection details
|
||||
automatically when you enter your Matrix Identifier. See
|
||||
<link xlink:href="https://matrix.org/docs/projects/try-matrix-now.html">Try
|
||||
Matrix Now!</link> for a list of existing clients and their
|
||||
supported featureset.
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
services.nginx.virtualHosts."element.${fqdn}" = {
|
||||
enableACME = true;
|
||||
forceSSL = true;
|
||||
serverAliases = [
|
||||
"element.${config.networking.domain}"
|
||||
];
|
||||
|
||||
root = pkgs.element-web.override {
|
||||
conf = {
|
||||
default_server_config = clientConfig; # see `clientConfig` from the snippet above.
|
||||
};
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
<note>
|
||||
<para>
|
||||
The Element developers do not recommend running Element and your
|
||||
Matrix homeserver on the same fully-qualified domain name for
|
||||
security reasons. In the example, this means that you should not
|
||||
reuse the <literal>myhostname.example.org</literal> virtualHost
|
||||
to also serve Element, but instead serve it on a different
|
||||
subdomain, like <literal>element.example.org</literal> in the
|
||||
example. See the
|
||||
<link xlink:href="https://github.com/vector-im/element-web/tree/v1.10.0#important-security-notes">Element
|
||||
Important Security Notes</link> for more information on this
|
||||
subject.
|
||||
</para>
|
||||
</note>
|
||||
</section>
|
||||
</chapter>
|
|
@ -1504,6 +1504,6 @@ in {
|
|||
|
||||
};
|
||||
|
||||
meta.doc = ./gitlab.xml;
|
||||
meta.doc = ./gitlab.md;
|
||||
|
||||
}
|
||||
|
|
|
@ -1,143 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-gitlab">
|
||||
<title>GitLab</title>
|
||||
<para>
|
||||
GitLab is a feature-rich git hosting service.
|
||||
</para>
|
||||
<section xml:id="module-services-gitlab-prerequisites">
|
||||
<title>Prerequisites</title>
|
||||
<para>
|
||||
The <literal>gitlab</literal> service exposes only a Unix socket
|
||||
at <literal>/run/gitlab/gitlab-workhorse.socket</literal>. You
|
||||
need to configure a webserver to proxy HTTP requests to the
|
||||
socket.
|
||||
</para>
|
||||
<para>
|
||||
For instance, the following configuration could be used to use
|
||||
nginx as frontend proxy:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
recommendedGzipSettings = true;
|
||||
recommendedOptimisation = true;
|
||||
recommendedProxySettings = true;
|
||||
recommendedTlsSettings = true;
|
||||
virtualHosts."git.example.com" = {
|
||||
enableACME = true;
|
||||
forceSSL = true;
|
||||
locations."/".proxyPass = "http://unix:/run/gitlab/gitlab-workhorse.socket";
|
||||
};
|
||||
};
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-gitlab-configuring">
|
||||
<title>Configuring</title>
|
||||
<para>
|
||||
GitLab depends on both PostgreSQL and Redis and will automatically
|
||||
enable both services. In the case of PostgreSQL, a database and a
|
||||
role will be created.
|
||||
</para>
|
||||
<para>
|
||||
The default state dir is <literal>/var/gitlab/state</literal>.
|
||||
This is where all data like the repositories and uploads will be
|
||||
stored.
|
||||
</para>
|
||||
<para>
|
||||
A basic configuration with some custom settings could look like
|
||||
this:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.gitlab = {
|
||||
enable = true;
|
||||
databasePasswordFile = "/var/keys/gitlab/db_password";
|
||||
initialRootPasswordFile = "/var/keys/gitlab/root_password";
|
||||
https = true;
|
||||
host = "git.example.com";
|
||||
port = 443;
|
||||
user = "git";
|
||||
group = "git";
|
||||
smtp = {
|
||||
enable = true;
|
||||
address = "localhost";
|
||||
port = 25;
|
||||
};
|
||||
secrets = {
|
||||
dbFile = "/var/keys/gitlab/db";
|
||||
secretFile = "/var/keys/gitlab/secret";
|
||||
otpFile = "/var/keys/gitlab/otp";
|
||||
jwsFile = "/var/keys/gitlab/jws";
|
||||
};
|
||||
extraConfig = {
|
||||
gitlab = {
|
||||
email_from = "gitlab-no-reply@example.com";
|
||||
email_display_name = "Example GitLab";
|
||||
email_reply_to = "gitlab-no-reply@example.com";
|
||||
default_projects_features = { builds = false; };
|
||||
};
|
||||
};
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
If you’re setting up a new GitLab instance, generate new secrets.
|
||||
You can, for instance, use
|
||||
<literal>tr -dc A-Za-z0-9 < /dev/urandom | head -c 128 > /var/keys/gitlab/db</literal>
|
||||
to generate a new db secret. Make sure the files can be read by,
|
||||
and only by, the user specified by
|
||||
<link linkend="opt-services.gitlab.user">services.gitlab.user</link>.
|
||||
GitLab encrypts sensitive data stored in the database. If you’re
|
||||
restoring an existing GitLab instance, you must specify the
|
||||
secrets from <literal>config/secrets.yml</literal> located
|
||||
in your GitLab state folder.
|
||||
</para>
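<para>
The remaining secret files can be generated in a similar way (a sketch; note that the JSON Web Signature key referenced by <literal>jwsFile</literal> is expected to be an RSA private key rather than a random string):
</para>
<programlisting>
$ tr -dc A-Za-z0-9 < /dev/urandom | head -c 128 > /var/keys/gitlab/secret
$ tr -dc A-Za-z0-9 < /dev/urandom | head -c 128 > /var/keys/gitlab/otp
$ openssl genrsa 2048 > /var/keys/gitlab/jws
</programlisting>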
|
||||
<para>
|
||||
When <literal>incoming_mail.enabled</literal> is set to
|
||||
<literal>true</literal> in
|
||||
<link linkend="opt-services.gitlab.extraConfig">extraConfig</link>
|
||||
an additional service called <literal>gitlab-mailroom</literal> is
|
||||
enabled for fetching incoming mail.
|
||||
</para>
|
||||
<para>
|
||||
Refer to <xref linkend="ch-options" /> for all available
|
||||
configuration options for the
|
||||
<link linkend="opt-services.gitlab.enable">services.gitlab</link>
|
||||
module.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-gitlab-maintenance">
|
||||
<title>Maintenance</title>
|
||||
<section xml:id="module-services-gitlab-maintenance-backups">
|
||||
<title>Backups</title>
|
||||
<para>
|
||||
Backups can be configured with the options in
|
||||
<link linkend="opt-services.gitlab.backup.keepTime">services.gitlab.backup</link>.
|
||||
Use the
|
||||
<link linkend="opt-services.gitlab.backup.startAt">services.gitlab.backup.startAt</link>
|
||||
option to configure regular backups.
|
||||
</para>
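<para>
For example, the following configuration (a sketch; adjust the schedule and retention to your needs) runs a backup every night at 03:00 and keeps each backup for 48 hours:
</para>
<programlisting>
services.gitlab.backup = {
  startAt = "03:00";
  keepTime = 48;
};
</programlisting>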
|
||||
<para>
|
||||
To run a manual backup, start the
|
||||
<literal>gitlab-backup</literal> service:
|
||||
</para>
|
||||
<programlisting>
|
||||
$ systemctl start gitlab-backup.service
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-gitlab-maintenance-rake">
|
||||
<title>Rake tasks</title>
|
||||
<para>
|
||||
You can run GitLab’s rake tasks with
|
||||
<literal>gitlab-rake</literal> which will be available on the
|
||||
system when GitLab is enabled. You will have to run the command
|
||||
as the user that you configured to run GitLab with.
|
||||
</para>
|
||||
<para>
|
||||
A list of all available rake tasks can be obtained by running:
|
||||
</para>
|
||||
<programlisting>
|
||||
$ sudo -u git -H gitlab-rake -T
|
||||
</programlisting>
|
||||
</section>
|
||||
</section>
|
||||
</chapter>
|
|
@ -226,9 +226,26 @@ in
|
|||
|
||||
# Auto-migrate on first run or if the package has changed
|
||||
versionFile="${cfg.dataDir}/src-version"
|
||||
if [[ $(cat "$versionFile" 2>/dev/null) != ${pkg} ]]; then
|
||||
version=$(cat "$versionFile" 2>/dev/null || echo 0)
|
||||
|
||||
if [[ $version != ${pkg.version} ]]; then
|
||||
${pkg}/bin/paperless-ngx migrate
|
||||
echo ${pkg} > "$versionFile"
|
||||
|
||||
# Parse old version string format for backwards compatibility
|
||||
version=$(echo "$version" | grep -ohP '[^-]+$')
|
||||
|
||||
versionLessThan() {
|
||||
target=$1
|
||||
[[ $({ echo "$version"; echo "$target"; } | sort -V | head -1) != "$target" ]]
|
||||
}
|
||||
|
||||
if versionLessThan 1.12.0; then
|
||||
# Reindex documents as mentioned in https://github.com/paperless-ngx/paperless-ngx/releases/tag/v1.12.1
|
||||
echo "Reindexing documents, to allow searching old comments. Required after the 1.12.x upgrade."
|
||||
${pkg}/bin/paperless-ngx document_index reindex
|
||||
fi
|
||||
|
||||
echo ${pkg.version} > "$versionFile"
|
||||
fi
|
||||
''
|
||||
+ optionalString (cfg.passwordFile != null) ''
|
||||
|
|
|
@ -1390,6 +1390,6 @@ in
|
|||
'')
|
||||
];
|
||||
|
||||
meta.doc = ./default.xml;
|
||||
meta.doc = ./default.md;
|
||||
meta.maintainers = with maintainers; [ tomberek ];
|
||||
}
|
||||
|
|
|
@ -1,113 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-sourcehut">
|
||||
<title>Sourcehut</title>
|
||||
<para>
|
||||
<link xlink:href="https://sr.ht/">Sourcehut</link> is an
|
||||
open-source, self-hostable software development platform. The server
|
||||
setup can be automated using
|
||||
<link linkend="opt-services.sourcehut.enable">services.sourcehut</link>.
|
||||
</para>
|
||||
<section xml:id="module-services-sourcehut-basic-usage">
|
||||
<title>Basic usage</title>
|
||||
<para>
|
||||
Sourcehut is a Python and Go based set of applications. This NixOS
|
||||
module also provides basic configuration integrating Sourcehut
|
||||
into locally running <literal>services.nginx</literal>,
|
||||
<literal>services.redis.servers.sourcehut</literal>,
|
||||
<literal>services.postfix</literal> and
|
||||
<literal>services.postgresql</literal> services.
|
||||
</para>
|
||||
<para>
|
||||
A very basic configuration may look like this:
|
||||
</para>
|
||||
<programlisting>
|
||||
{ pkgs, ... }:
|
||||
let
|
||||
fqdn =
|
||||
let
|
||||
join = hostName: domain: hostName + optionalString (domain != null) ".${domain}";
|
||||
in join config.networking.hostName config.networking.domain;
|
||||
in {
|
||||
|
||||
networking = {
|
||||
hostName = "srht";
|
||||
domain = "tld";
|
||||
firewall.allowedTCPPorts = [ 22 80 443 ];
|
||||
};
|
||||
|
||||
services.sourcehut = {
|
||||
enable = true;
|
||||
git.enable = true;
|
||||
man.enable = true;
|
||||
meta.enable = true;
|
||||
nginx.enable = true;
|
||||
postfix.enable = true;
|
||||
postgresql.enable = true;
|
||||
redis.enable = true;
|
||||
settings = {
|
||||
"sr.ht" = {
|
||||
environment = "production";
|
||||
global-domain = fqdn;
|
||||
origin = "https://${fqdn}";
|
||||
# Produce keys with srht-keygen from sourcehut.coresrht.
|
||||
network-key = "/run/keys/path/to/network-key";
|
||||
service-key = "/run/keys/path/to/service-key";
|
||||
};
|
||||
webhooks.private-key= "/run/keys/path/to/webhook-key";
|
||||
};
|
||||
};
|
||||
|
||||
security.acme.certs."${fqdn}".extraDomainNames = [
|
||||
"meta.${fqdn}"
|
||||
"man.${fqdn}"
|
||||
"git.${fqdn}"
|
||||
];
|
||||
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
# only recommendedProxySettings are strictly required, but the rest make sense as well.
|
||||
recommendedTlsSettings = true;
|
||||
recommendedOptimisation = true;
|
||||
recommendedGzipSettings = true;
|
||||
recommendedProxySettings = true;
|
||||
|
||||
# Settings to setup what certificates are used for which endpoint.
|
||||
virtualHosts = {
|
||||
"${fqdn}".enableACME = true;
|
||||
"meta.${fqdn}".useACMEHost = fqdn:
|
||||
"man.${fqdn}".useACMEHost = fqdn:
|
||||
"git.${fqdn}".useACMEHost = fqdn:
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
The <literal>hostName</literal> option is used internally to
|
||||
configure the nginx reverse-proxy. The <literal>settings</literal>
|
||||
attribute set is used by the configuration generator and the
|
||||
result is placed in <literal>/etc/sr.ht/config.ini</literal>.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-sourcehut-configuration">
|
||||
<title>Configuration</title>
|
||||
<para>
|
||||
All configuration parameters are also stored in
|
||||
<literal>/etc/sr.ht/config.ini</literal> which is generated by the
|
||||
module and linked from the store to ensure that all values from
|
||||
<literal>config.ini</literal> can be modified by the module.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-sourcehut-httpd">
|
||||
<title>Using an alternative webserver as reverse-proxy (e.g.
|
||||
<literal>httpd</literal>)</title>
|
||||
<para>
|
||||
By default, <literal>nginx</literal> is used as reverse-proxy for
|
||||
<literal>sourcehut</literal>. However, it’s possible to use e.g.
|
||||
<literal>httpd</literal> by explicitly disabling
|
||||
<literal>nginx</literal> using
|
||||
<xref linkend="opt-services.nginx.enable" /> and fixing the
|
||||
<literal>settings</literal>.
|
||||
</para>
|
||||
</section>
|
||||
</chapter>
|
|
@ -566,5 +566,5 @@ in {
|
|||
})
|
||||
];
|
||||
|
||||
meta.doc = ./default.xml;
|
||||
meta.doc = ./default.md;
|
||||
}
|
||||
|
|
|
@ -1,130 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-taskserver">
|
||||
<title>Taskserver</title>
|
||||
<para>
|
||||
Taskserver is the server component of
|
||||
<link xlink:href="https://taskwarrior.org/">Taskwarrior</link>, a
|
||||
free and open source todo list application.
|
||||
</para>
|
||||
<para>
|
||||
<emphasis>Upstream documentation:</emphasis>
|
||||
<link xlink:href="https://taskwarrior.org/docs/#taskd">https://taskwarrior.org/docs/#taskd</link>
|
||||
</para>
|
||||
<section xml:id="module-services-taskserver-configuration">
|
||||
<title>Configuration</title>
|
||||
<para>
|
||||
Taskserver does all of its authentication via TLS using client
|
||||
certificates, so you either need to roll your own CA or purchase a
|
||||
certificate from a known CA, which allows creation of client
|
||||
certificates. These certificates are usually advertised as
|
||||
<quote>server certificates</quote>.
|
||||
</para>
|
||||
<para>
|
||||
So in order to make it easier to handle your own CA, there is a
|
||||
helper tool called <command>nixos-taskserver</command> which
|
||||
manages the custom CA along with Taskserver organisations, users
|
||||
and groups.
|
||||
</para>
|
||||
<para>
|
||||
While the client certificates in Taskserver only authenticate
|
||||
whether a user is allowed to connect, every user has its own UUID
|
||||
which identifies it as an entity.
|
||||
</para>
|
||||
<para>
|
||||
With <command>nixos-taskserver</command> the client certificate is
|
||||
created along with the UUID of the user, so it handles all of the
|
||||
credentials needed in order to setup the Taskwarrior client to
|
||||
work with a Taskserver.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-taskserver-nixos-taskserver-tool">
|
||||
<title>The nixos-taskserver tool</title>
|
||||
<para>
|
||||
Because Taskserver by default only provides scripts to setup users
|
||||
imperatively, the <command>nixos-taskserver</command> tool is used
|
||||
for addition and deletion of organisations along with users and
|
||||
groups defined by
|
||||
<xref linkend="opt-services.taskserver.organisations" /> as
|
||||
well as for imperative setup.
|
||||
</para>
|
||||
<para>
|
||||
The tool is designed to not interfere if the command is used to
|
||||
manually set up some organisations, users or groups.
|
||||
</para>
|
||||
<para>
|
||||
For example if you add a new organisation using
|
||||
<command>nixos-taskserver org add foo</command>, the organisation
|
||||
is neither modified nor deleted, no matter what you define in
|
||||
<option>services.taskserver.organisations</option>, even if you’re
|
||||
adding the same organisation in that option.
|
||||
</para>
|
||||
<para>
|
||||
The tool is modelled to imitate the official
|
||||
<command>taskd</command> command, documentation for each
|
||||
subcommand can be shown by using the <option>--help</option>
|
||||
switch.
|
||||
</para>
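<para>
For instance, the documentation for the user-related subcommands can be shown like this:
</para>
<programlisting>
$ nixos-taskserver user --help
</programlisting>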
|
||||
</section>
|
||||
<section xml:id="module-services-taskserver-declarative-ca-management">
|
||||
<title>Declarative/automatic CA management</title>
|
||||
<para>
|
||||
Everything is done according to what you specify in the module
|
||||
options, however in order to set up a Taskwarrior client for
|
||||
synchronisation with a Taskserver instance, you have to transfer
|
||||
the keys and certificates to the client machine.
|
||||
</para>
|
||||
<para>
|
||||
This is done using
|
||||
<command>nixos-taskserver user export $orgname $username</command>
|
||||
which prints a shell script fragment to stdout that can
|
||||
either be used verbatim or adjusted to import the user on the
|
||||
client machine.
|
||||
</para>
|
||||
<para>
|
||||
For example, let’s say you have the following configuration:
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
services.taskserver.enable = true;
|
||||
services.taskserver.fqdn = "server";
|
||||
services.taskserver.listenHost = "::";
|
||||
services.taskserver.organisations.my-company.users = [ "alice" ];
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
This creates an organisation called <literal>my-company</literal>
|
||||
with the user <literal>alice</literal>.
|
||||
</para>
|
||||
<para>
|
||||
Now in order to import the <literal>alice</literal> user to
|
||||
another machine <literal>alicebox</literal>, all we need to do is
|
||||
something like this:
|
||||
</para>
|
||||
<programlisting>
|
||||
$ ssh server nixos-taskserver user export my-company alice | sh
|
||||
</programlisting>
|
||||
<para>
|
||||
Of course, if no SSH daemon is available on the server you can
|
||||
also copy & paste it directly into a shell.
|
||||
</para>
|
||||
<para>
|
||||
After this step the user should be set up and you can start
|
||||
synchronising your tasks for the first time with
|
||||
<command>task sync init</command> on <literal>alicebox</literal>.
|
||||
</para>
|
||||
<para>
|
||||
Subsequent synchronisation requests merely require the command
|
||||
<command>task sync</command> after that stage.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-taskserver-manual-ca-management">
|
||||
<title>Manual CA management</title>
|
||||
<para>
|
||||
If you set any options within
|
||||
<link linkend="opt-services.taskserver.pki.manual.ca.cert">service.taskserver.pki.manual</link>.*,
|
||||
<command>nixos-taskserver</command> won’t issue certificates, but
|
||||
you can still use it for adding or removing user accounts.
|
||||
</para>
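<para>
A sketch of such a manual PKI configuration, assuming the certificate and key files already exist on the host:
</para>
<programlisting>
{
  services.taskserver.pki.manual = {
    ca.cert = "/etc/taskserver/ca.cert.pem";
    server.cert = "/etc/taskserver/server.cert.pem";
    server.key = "/etc/taskserver/server.key.pem";
    server.crl = "/etc/taskserver/server.crl.pem";
  };
}
</programlisting>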
|
||||
</section>
|
||||
</chapter>
|
|
@ -59,5 +59,5 @@ in
|
|||
};
|
||||
};
|
||||
|
||||
meta.doc = ./weechat.xml;
|
||||
meta.doc = ./weechat.md;
|
||||
}
|
||||
|
|
|
@ -1,63 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-weechat">
|
||||
<title>WeeChat</title>
|
||||
<para>
|
||||
<link xlink:href="https://weechat.org/">WeeChat</link> is a fast and
|
||||
extensible IRC client.
|
||||
</para>
|
||||
<section xml:id="module-services-weechat-basic-usage">
|
||||
<title>Basic Usage</title>
|
||||
<para>
|
||||
By default, the module creates a
|
||||
<link xlink:href="https://www.freedesktop.org/wiki/Software/systemd/"><literal>systemd</literal></link>
|
||||
unit which runs the chat client in a detached
|
||||
<link xlink:href="https://www.gnu.org/software/screen/"><literal>screen</literal></link>
|
||||
session.
|
||||
</para>
|
||||
<para>
|
||||
This can be done by enabling the <literal>weechat</literal>
|
||||
service:
|
||||
</para>
|
||||
<programlisting>
|
||||
{ ... }:
|
||||
|
||||
{
|
||||
services.weechat.enable = true;
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
The service is managed by a dedicated user named
|
||||
<literal>weechat</literal> in the state directory
|
||||
<literal>/var/lib/weechat</literal>.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-weechat-reattach">
|
||||
<title>Re-attaching to WeeChat</title>
|
||||
<para>
|
||||
WeeChat runs in a screen session owned by a dedicated user. To
|
||||
explicitly allow another user to attach to this session, the
|
||||
<literal>screenrc</literal> needs to be tweaked by adding
|
||||
<link xlink:href="https://www.gnu.org/software/screen/manual/html_node/Multiuser.html#Multiuser">multiuser</link>
|
||||
support:
|
||||
</para>
|
||||
<programlisting>
|
||||
{
|
||||
programs.screen.screenrc = ''
|
||||
multiuser on
|
||||
acladd normal_user
|
||||
'';
|
||||
}
|
||||
</programlisting>
|
||||
<para>
|
||||
Now, the session can be re-attached like this:
|
||||
</para>
|
||||
<programlisting>
|
||||
screen -x weechat/weechat-screen
|
||||
</programlisting>
|
||||
<para>
|
||||
<emphasis>The session name can be changed using
|
||||
<link xlink:href="options.html#opt-services.weechat.sessionName">services.weechat.sessionName.</link></emphasis>
|
||||
</para>
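<para>
For example (a sketch):
</para>
<programlisting>
{
  services.weechat.sessionName = "irc";
}
</programlisting>
<para>
With this setting the session would be re-attached with <command>screen -x weechat/irc</command> instead.
</para>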
|
||||
</section>
|
||||
</chapter>
|
|
@ -539,6 +539,6 @@ in
|
|||
};
|
||||
};
|
||||
|
||||
meta.doc = ./parsedmarc.xml;
|
||||
meta.doc = ./parsedmarc.md;
|
||||
meta.maintainers = [ lib.maintainers.talyz ];
|
||||
}
|
||||
|
|
|
@ -1,126 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-parsedmarc">
|
||||
<title>parsedmarc</title>
|
||||
<para>
|
||||
<link xlink:href="https://domainaware.github.io/parsedmarc/">parsedmarc</link>
|
||||
is a service which parses incoming
|
||||
<link xlink:href="https://dmarc.org/">DMARC</link> reports and
|
||||
stores or sends them to a downstream service for further analysis.
|
||||
In combination with Elasticsearch, Grafana and the included Grafana
|
||||
dashboard, it provides a handy overview of DMARC reports over time.
|
||||
</para>
|
||||
<section xml:id="module-services-parsedmarc-basic-usage">
|
||||
<title>Basic usage</title>
|
||||
<para>
|
||||
A very minimal setup which reads incoming reports from an external
|
||||
email address and saves them to a local Elasticsearch instance
|
||||
looks like this:
|
||||
</para>
|
||||
<programlisting language="nix">
|
||||
services.parsedmarc = {
|
||||
enable = true;
|
||||
settings.imap = {
|
||||
host = "imap.example.com";
|
||||
user = "alice@example.com";
|
||||
password = "/path/to/imap_password_file";
|
||||
};
|
||||
provision.geoIp = false; # Not recommended!
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
Note that GeoIP provisioning is disabled in the example for
|
||||
simplicity, but should be turned on for fully functional reports.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-parsedmarc-local-mail">
|
||||
<title>Local mail</title>
|
||||
<para>
|
||||
Instead of watching an external inbox, a local inbox can be
|
||||
automatically provisioned. The recipient’s name is by default set
|
||||
to <literal>dmarc</literal>, but can be configured in
|
||||
<link xlink:href="options.html#opt-services.parsedmarc.provision.localMail.recipientName">services.parsedmarc.provision.localMail.recipientName</link>.
|
||||
You need to add an MX record pointing to the host. More
|
||||
concretely: for the example to work, an MX record needs to be set
|
||||
up for <literal>monitoring.example.com</literal> and the complete
|
||||
email address that should be configured in the domain’s dmarc
|
||||
policy is <literal>dmarc@monitoring.example.com</literal>.
|
||||
</para>
|
||||
<programlisting language="nix">
|
||||
services.parsedmarc = {
|
||||
enable = true;
|
||||
provision = {
|
||||
localMail = {
|
||||
enable = true;
|
||||
hostname = "monitoring.example.com";
|
||||
};
|
||||
geoIp = false; # Not recommended!
|
||||
};
|
||||
};
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-parsedmarc-grafana-geoip">
|
||||
<title>Grafana and GeoIP</title>
|
||||
<para>
|
||||
The reports can be visualized and summarized with parsedmarc’s
|
||||
official Grafana dashboard. For all views to work, and for the
|
||||
data to be complete, GeoIP databases are also required. The
|
||||
following example shows a basic deployment where the provisioned
|
||||
Elasticsearch instance is automatically added as a Grafana
|
||||
datasource, and the dashboard is added to Grafana as well.
|
||||
</para>
|
||||
<programlisting language="nix">
|
||||
services.parsedmarc = {
|
||||
enable = true;
|
||||
provision = {
|
||||
localMail = {
|
||||
enable = true;
|
||||
hostname = url;
|
||||
};
|
||||
grafana = {
|
||||
datasource = true;
|
||||
dashboard = true;
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
# Not required, but recommended for full functionality
|
||||
services.geoipupdate = {
|
||||
settings = {
|
||||
AccountID = 000000;
|
||||
LicenseKey = "/path/to/license_key_file";
|
||||
};
|
||||
};
|
||||
|
||||
services.grafana = {
|
||||
enable = true;
|
||||
addr = "0.0.0.0";
|
||||
domain = url;
|
||||
rootUrl = "https://" + url;
|
||||
protocol = "socket";
|
||||
security = {
|
||||
adminUser = "admin";
|
||||
adminPasswordFile = "/path/to/admin_password_file";
|
||||
secretKeyFile = "/path/to/secret_key_file";
|
||||
};
|
||||
};
|
||||
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
recommendedTlsSettings = true;
|
||||
recommendedOptimisation = true;
|
||||
recommendedGzipSettings = true;
|
||||
recommendedProxySettings = true;
|
||||
upstreams.grafana.servers."unix:/${config.services.grafana.socket}" = {};
|
||||
virtualHosts.${url} = {
|
||||
root = config.services.grafana.staticRootPath;
|
||||
enableACME = true;
|
||||
forceSSL = true;
|
||||
locations."/".tryFiles = "$uri @grafana";
|
||||
locations."@grafana".proxyPass = "http://grafana";
|
||||
};
|
||||
};
|
||||
users.users.nginx.extraGroups = [ "grafana" ];
|
||||
</programlisting>
|
||||
</section>
|
||||
</chapter>
|
|
@ -323,7 +323,7 @@ in
|
|||
);
|
||||
|
||||
meta = {
|
||||
doc = ./exporters.xml;
|
||||
doc = ./exporters.md;
|
||||
maintainers = [ maintainers.willibutz ];
|
||||
};
|
||||
}
|
||||
|
|
|
@ -1,245 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-prometheus-exporters">
|
||||
<title>Prometheus exporters</title>
|
||||
<para>
|
||||
Prometheus exporters provide metrics for the
|
||||
<link xlink:href="https://prometheus.io">prometheus monitoring
|
||||
system</link>.
|
||||
</para>
|
||||
<section xml:id="module-services-prometheus-exporters-configuration">
|
||||
<title>Configuration</title>
|
||||
<para>
|
||||
One of the most common exporters is the
|
||||
<link xlink:href="https://github.com/prometheus/node_exporter">node
|
||||
exporter</link>, it provides hardware and OS metrics from the host
|
||||
it’s running on. The exporter could be configured as follows:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.prometheus.exporters.node = {
|
||||
enable = true;
|
||||
port = 9100;
|
||||
enabledCollectors = [
|
||||
"logind"
|
||||
"systemd"
|
||||
];
|
||||
disabledCollectors = [
|
||||
"textfile"
|
||||
];
|
||||
openFirewall = true;
|
||||
firewallFilter = "-i br0 -p tcp -m tcp --dport 9100";
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
It should now serve all metrics from the collectors that are
|
||||
explicitly enabled and the ones that are
|
||||
<link xlink:href="https://github.com/prometheus/node_exporter#enabled-by-default">enabled
|
||||
by default</link>, via HTTP under <literal>/metrics</literal>. In
|
||||
this example the firewall should just allow incoming connections
|
||||
to the exporter’s port on the bridge interface
|
||||
<literal>br0</literal> (this would have to be configured
|
||||
separately of course). For more information about configuration
|
||||
see <literal>man configuration.nix</literal> or search through the
|
||||
<link xlink:href="https://nixos.org/nixos/options.html#prometheus.exporters">available
|
||||
options</link>.
|
||||
</para>
|
||||
<para>
|
||||
Prometheus can now be configured to consume the metrics produced
|
||||
by the exporter:
|
||||
</para>
|
||||
<programlisting>
|
||||
services.prometheus = {
|
||||
# ...
|
||||
|
||||
scrapeConfigs = [
|
||||
{
|
||||
job_name = "node";
|
||||
static_configs = [{
|
||||
targets = [ "localhost:${toString config.services.prometheus.exporters.node.port}" ];
|
||||
}];
|
||||
}
|
||||
];
|
||||
|
||||
# ...
|
||||
}
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-prometheus-exporters-new-exporter">
|
||||
<title>Adding a new exporter</title>
|
||||
<para>
|
||||
To add a new exporter, it has to be packaged first (see
|
||||
<literal>nixpkgs/pkgs/servers/monitoring/prometheus/</literal> for
|
||||
examples), then a module can be added. The postfix exporter is
|
||||
used in this example:
|
||||
</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
Some default options for all exporters are provided by
|
||||
<literal>nixpkgs/nixos/modules/services/monitoring/prometheus/exporters.nix</literal>:
|
||||
</para>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>enable</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>port</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>listenAddress</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>extraFlags</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>openFirewall</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>firewallFilter</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>user</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>group</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
As there is already a package available, the module can now be
|
||||
added. This is accomplished by adding a new file to the
|
||||
<literal>nixos/modules/services/monitoring/prometheus/exporters/</literal>
|
||||
directory, which will be called postfix.nix and contains all
|
||||
exporter specific options and configuration:
|
||||
</para>
|
||||
<programlisting>
|
||||
# nixpkgs/nixos/modules/services/prometheus/exporters/postfix.nix
|
||||
{ config, lib, pkgs, options }:
|
||||
|
||||
with lib;
|
||||
|
||||
let
|
||||
# for convenience we define cfg here
|
||||
cfg = config.services.prometheus.exporters.postfix;
|
||||
in
|
||||
{
|
||||
port = 9154; # The postfix exporter listens on this port by default
|
||||
|
||||
# `extraOpts` is an attribute set which contains additional options
|
||||
# (and optional overrides for default options).
|
||||
# Note that this attribute is optional.
|
||||
extraOpts = {
|
||||
telemetryPath = mkOption {
|
||||
type = types.str;
|
||||
default = "/metrics";
|
||||
description = ''
|
||||
Path under which to expose metrics.
|
||||
'';
|
||||
};
|
||||
logfilePath = mkOption {
|
||||
type = types.path;
|
||||
default = /var/log/postfix_exporter_input.log;
|
||||
example = /var/log/mail.log;
|
||||
description = ''
|
||||
Path where Postfix writes log entries.
|
||||
This file will be truncated by this exporter!
|
||||
'';
|
||||
};
|
||||
showqPath = mkOption {
|
||||
type = types.path;
|
||||
default = /var/spool/postfix/public/showq;
|
||||
example = /var/lib/postfix/queue/public/showq;
|
||||
description = ''
|
||||
Path at which Postfix places its showq socket.
|
||||
'';
|
||||
};
|
||||
};
|
||||
|
||||
# `serviceOpts` is an attribute set which contains configuration
|
||||
# for the exporter's systemd service. One of
|
||||
# `serviceOpts.script` and `serviceOpts.serviceConfig.ExecStart`
|
||||
# has to be specified here. This will be merged with the default
|
||||
# service configuration.
|
||||
# Note that by default 'DynamicUser' is 'true'.
|
||||
serviceOpts = {
|
||||
serviceConfig = {
|
||||
DynamicUser = false;
|
||||
ExecStart = ''
|
||||
${pkgs.prometheus-postfix-exporter}/bin/postfix_exporter \
|
||||
--web.listen-address ${cfg.listenAddress}:${toString cfg.port} \
|
||||
--web.telemetry-path ${cfg.telemetryPath} \
|
||||
${concatStringsSep " \\\n " cfg.extraFlags}
|
||||
'';
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
This should already be enough for the postfix exporter.
|
||||
Additionally one could now add assertions and conditional
|
||||
default values. This can be done in the
|
||||
<quote>meta-module</quote> that combines all exporter
|
||||
definitions and generates the submodules:
|
||||
<literal>nixpkgs/nixos/modules/services/prometheus/exporters.nix</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</section>
|
||||
<section xml:id="module-services-prometheus-exporters-update-exporter-module">
|
||||
<title>Updating an exporter module</title>
|
||||
<para>
|
||||
Should an exporter option change at some point, it is possible to
|
||||
add information about the change to the exporter definition
|
||||
similar to <literal>nixpkgs/nixos/modules/rename.nix</literal>:
|
||||
</para>
|
||||
<programlisting>
|
||||
{ config, lib, pkgs, options }:
|
||||
|
||||
with lib;
|
||||
|
||||
let
|
||||
cfg = config.services.prometheus.exporters.nginx;
|
||||
in
|
||||
{
|
||||
port = 9113;
|
||||
extraOpts = {
|
||||
# additional module options
|
||||
# ...
|
||||
};
|
||||
serviceOpts = {
|
||||
# service configuration
|
||||
# ...
|
||||
};
|
||||
imports = [
|
||||
# 'services.prometheus.exporters.nginx.telemetryEndpoint' -> 'services.prometheus.exporters.nginx.telemetryPath'
|
||||
(mkRenamedOptionModule [ "telemetryEndpoint" ] [ "telemetryPath" ])
|
||||
|
||||
# removed option 'services.prometheus.exporters.nginx.insecure'
|
||||
(mkRemovedOptionModule [ "insecure" ] ''
|
||||
This option was replaced by 'prometheus.exporters.nginx.sslVerify' which defaults to true.
|
||||
'')
|
||||
({ options.warnings = options.warnings; })
|
||||
];
|
||||
}
|
||||
</programlisting>
|
||||
</section>
|
||||
</chapter>
|
|
@ -95,5 +95,5 @@ in
|
|||
users.groups.litestream = {};
|
||||
};
|
||||
|
||||
meta.doc = ./default.xml;
|
||||
meta.doc = ./default.md;
|
||||
}
|
||||
|
|
|
@ -1,62 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-litestream">
|
||||
<title>Litestream</title>
|
||||
<para>
|
||||
<link xlink:href="https://litestream.io/">Litestream</link> is a
|
||||
standalone streaming replication tool for SQLite.
|
||||
</para>
|
||||
<section xml:id="module-services-litestream-configuration">
|
||||
<title>Configuration</title>
|
||||
<para>
|
||||
The Litestream service is managed by a dedicated user named
|
||||
<literal>litestream</literal> which needs permission to the
|
||||
database file. Here’s an example config which gives required
|
||||
permissions to access
|
||||
<link linkend="opt-services.grafana.settings.database.path">grafana
|
||||
database</link>:
|
||||
</para>
|
||||
<programlisting>
|
||||
{ pkgs, ... }:
|
||||
{
|
||||
users.users.litestream.extraGroups = [ "grafana" ];
|
||||
|
||||
systemd.services.grafana.serviceConfig.ExecStartPost = "+" + pkgs.writeShellScript "grant-grafana-permissions" ''
|
||||
timeout=10
|
||||
|
||||
while [ ! -f /var/lib/grafana/data/grafana.db ];
|
||||
do
|
||||
if [ "$timeout" == 0 ]; then
|
||||
echo "ERROR: Timeout while waiting for /var/lib/grafana/data/grafana.db."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
sleep 1
|
||||
|
||||
((timeout--))
|
||||
done
|
||||
|
||||
find /var/lib/grafana -type d -exec chmod -v 775 {} \;
|
||||
find /var/lib/grafana -type f -exec chmod -v 660 {} \;
|
||||
'';
|
||||
|
||||
services.litestream = {
|
||||
enable = true;
|
||||
|
||||
environmentFile = "/run/secrets/litestream";
|
||||
|
||||
settings = {
|
||||
dbs = [
|
||||
{
|
||||
path = "/var/lib/grafana/data/grafana.db";
|
||||
replicas = [{
|
||||
url = "s3://mybkt.litestream.io/grafana";
|
||||
}];
|
||||
}
|
||||
];
|
||||
};
|
||||
};
|
||||
}
|
||||
</programlisting>
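<para>
The <literal>environmentFile</literal> referenced above typically provides the replica credentials (a sketch; the variable names are Litestream’s documented environment variables):
</para>
<programlisting>
LITESTREAM_ACCESS_KEY_ID=AKIAxxxxxxxxxxxxxxxx
LITESTREAM_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
</programlisting>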
|
||||
</section>
|
||||
</chapter>
|
|
@ -311,6 +311,6 @@ in
|
|||
|
||||
meta = {
|
||||
maintainers = with lib.maintainers; [ pennae ];
|
||||
doc = ./firefox-syncserver.xml;
|
||||
doc = ./firefox-syncserver.md;
|
||||
};
|
||||
}
|
||||
|
|
|
@ -1,79 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-firefox-syncserver">
|
||||
<title>Firefox Sync server</title>
|
||||
<para>
|
||||
A storage server for Firefox Sync that you can easily host yourself.
|
||||
</para>
|
||||
<section xml:id="module-services-firefox-syncserver-quickstart">
|
||||
<title>Quickstart</title>
|
||||
<para>
|
||||
The absolute minimal configuration for the sync server looks like
|
||||
this:
|
||||
</para>
|
||||
<programlisting language="nix">
|
||||
services.mysql.package = pkgs.mariadb;
|
||||
|
||||
services.firefox-syncserver = {
|
||||
enable = true;
|
||||
secrets = builtins.toFile "sync-secrets" ''
|
||||
SYNC_MASTER_SECRET=this-secret-is-actually-leaked-to-/nix/store
|
||||
'';
|
||||
singleNode = {
|
||||
enable = true;
|
||||
hostname = "localhost";
|
||||
url = "http://localhost:5000";
|
||||
};
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
This will start a sync server that is only accessible locally.
|
||||
Once the service is running you can navigate to
|
||||
<literal>about:config</literal> in your Firefox profile and set
|
||||
<literal>identity.sync.tokenserver.uri</literal> to
|
||||
<literal>http://localhost:5000/1.0/sync/1.5</literal>. Your
|
||||
browser will now use your local sync server for data storage.
|
||||
</para>
|
||||
<warning>
|
||||
<para>
|
||||
This configuration should never be used in production. It is not
|
||||
encrypted and stores its secrets in a world-readable location.
|
||||
</para>
|
||||
</warning>
|
||||
</section>
|
||||
<section xml:id="module-services-firefox-syncserver-configuration">
|
||||
<title>More detailed setup</title>
|
||||
<para>
|
||||
The <literal>firefox-syncserver</literal> service provides a
|
||||
number of options to make setting up a small deployment easier.
|
||||
These are grouped under the <literal>singleNode</literal> element
|
||||
of the option tree and allow simple configuration of the most
|
||||
important parameters.
|
||||
</para>
|
||||
<para>
|
||||
Single node setup is split into two kinds of options: those that
|
||||
affect the sync server itself, and those that affect its
|
||||
surroundings. Options that affect the sync server are
|
||||
<literal>capacity</literal>, which configures how many accounts
|
||||
may be active on this instance, and <literal>url</literal>, which
|
||||
holds the URL under which the sync server can be accessed. The
|
||||
<literal>url</literal> can be configured automatically when using
|
||||
nginx.
|
||||
</para>
|
||||
<para>
|
||||
Options that affect the surroundings of the sync server are
|
||||
<literal>enableNginx</literal>, <literal>enableTLS</literal> and
|
||||
<literal>hostname</literal>. If <literal>enableNginx</literal> is
|
||||
set the sync server module will automatically add an nginx virtual
|
||||
host to the system using <literal>hostname</literal> as the domain
|
||||
and set <literal>url</literal> accordingly. If
|
||||
<literal>enableTLS</literal> is set the module will also enable
|
||||
ACME certificates on the new virtual host and force all
|
||||
connections to be made via TLS.
|
||||
</para>
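<para>
Put together, a small public deployment could look like this (a sketch, assuming <literal>sync.example.com</literal> points at the machine and ACME is set up elsewhere in the configuration):
</para>
<programlisting language="nix">
services.firefox-syncserver = {
  enable = true;
  secrets = "/run/secrets/firefox-syncserver";
  singleNode = {
    enable = true;
    capacity = 5;
    hostname = "sync.example.com";
    enableNginx = true;
    enableTLS = true;
  };
};
</programlisting>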
|
||||
<para>
|
||||
For actual deployment it is also recommended to store the
|
||||
<literal>secrets</literal> file in a secure location.
|
||||
</para>
|
||||
</section>
|
||||
</chapter>
|
|
@ -671,6 +671,6 @@ in
|
|||
|
||||
meta = {
|
||||
maintainers = with lib.maintainers; [ pennae ];
|
||||
doc = ./mosquitto.xml;
|
||||
doc = ./mosquitto.md;
|
||||
};
|
||||
}
|
||||
|
|
|
@ -1,149 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-mosquitto">
|
||||
<title>Mosquitto</title>
|
||||
<para>
|
||||
Mosquitto is an MQTT broker often used for IoT or home automation
|
||||
data transport.
|
||||
</para>
|
||||
<section xml:id="module-services-mosquitto-quickstart">
|
||||
<title>Quickstart</title>
|
||||
<para>
|
||||
A minimal configuration for Mosquitto is
|
||||
</para>
|
||||
<programlisting language="nix">
|
||||
services.mosquitto = {
|
||||
enable = true;
|
||||
listeners = [ {
|
||||
acl = [ "pattern readwrite #" ];
|
||||
omitPasswordAuth = true;
|
||||
settings.allow_anonymous = true;
|
||||
} ];
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
This will start a broker on port 1883, listening on all interfaces
|
||||
of the machine, allowing read/write access to all topics to any
|
||||
user without password requirements.
|
||||
</para>
|
||||
<para>
|
||||
User authentication can be configured with the
|
||||
<literal>users</literal> key of listeners. A config that gives
|
||||
full read access to a user <literal>monitor</literal> and
|
||||
restricted write access to a user <literal>service</literal> could
|
||||
look like
|
||||
</para>
|
||||
<programlisting language="nix">
|
||||
services.mosquitto = {
|
||||
enable = true;
|
||||
listeners = [ {
|
||||
users = {
|
||||
monitor = {
|
||||
acl = [ "read #" ];
|
||||
password = "monitor";
|
||||
};
|
||||
service = {
|
||||
acl = [ "write service/#" ];
|
||||
password = "service";
|
||||
};
|
||||
};
|
||||
} ];
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
TLS authentication is configured by setting TLS-related options of
|
||||
the listener:
|
||||
</para>
|
||||
<programlisting language="nix">
|
||||
services.mosquitto = {
|
||||
enable = true;
|
||||
listeners = [ {
|
||||
port = 8883; # port change is not required, but helpful to avoid mistakes
|
||||
# ...
|
||||
settings = {
|
||||
cafile = "/path/to/mqtt.ca.pem";
|
||||
certfile = "/path/to/mqtt.pem";
|
||||
keyfile = "/path/to/mqtt.key";
|
||||
};
|
||||
} ];
};
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-mosquitto-config">
|
||||
<title>Configuration</title>
|
||||
<para>
|
||||
The Mosquitto configuration has four distinct types of settings:
|
||||
the global settings of the daemon, listeners, plugins, and
|
||||
bridges. Bridges and listeners are part of the global
|
||||
configuration, plugins are part of listeners. Users of the broker
|
||||
are configured as parts of listeners rather than globally,
|
||||
allowing configurations in which a given user is only allowed to
|
||||
log in to the broker using specific listeners (e.g. to configure an
|
||||
admin user with full access to all topics, but restricted to
|
||||
localhost).
|
||||
</para>
|
||||
<para>
|
||||
Almost all options of Mosquitto are available for configuration at
|
||||
their appropriate levels, some as NixOS options written in camel
|
||||
case, the remainders under <literal>settings</literal> with their
|
||||
exact names in the Mosquitto config file. The exceptions are
|
||||
<literal>acl_file</literal> (which is always set according to the
|
||||
<literal>acl</literal> attributes of a listener and its users) and
|
||||
<literal>per_listener_settings</literal> (which is always set to
|
||||
<literal>true</literal>).
|
||||
</para>
|
||||
<section xml:id="module-services-mosquitto-config-passwords">
|
||||
<title>Password authentication</title>
|
||||
<para>
|
||||
Mosquitto can be run in two modes, with a password file or
|
||||
without. Each listener has its own password file, and different
|
||||
listeners may use different password files. Password file
|
||||
generation can be disabled by setting
|
||||
<literal>omitPasswordAuth = true</literal> for a listener; in
|
||||
this case it is necessary to either set
|
||||
<literal>settings.allow_anonymous = true</literal> to allow all
|
||||
logins, or to configure other authentication methods like TLS
|
||||
client certificates with
|
||||
<literal>settings.use_identity_as_username = true</literal>.
|
||||
</para>
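<para>
For example, a listener that authenticates clients via TLS client certificates instead of passwords might look like this (a sketch, reusing the TLS files from the quickstart; <literal>require_certificate</literal> is a plain Mosquitto setting passed through verbatim):
</para>
<programlisting language="nix">
services.mosquitto.listeners = [ {
  omitPasswordAuth = true;
  settings = {
    allow_anonymous = false;
    use_identity_as_username = true;
    require_certificate = true;
    cafile = "/path/to/mqtt.ca.pem";
    certfile = "/path/to/mqtt.pem";
    keyfile = "/path/to/mqtt.key";
  };
} ];
</programlisting>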
|
||||
<para>
|
||||
The default is to generate a password file for each listener
|
||||
from the users configured to that listener. Users with no
|
||||
configured password will not be added to the password file and
|
||||
thus will not be able to use the broker.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-mosquitto-config-acl">
|
||||
<title>ACL format</title>
|
||||
<para>
|
||||
Every listener has a Mosquitto <literal>acl_file</literal>
|
||||
attached to it. This ACL is configured via two attributes of the
|
||||
config:
|
||||
</para>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>
|
||||
the <literal>acl</literal> attribute of the listener
|
||||
configures pattern ACL entries and topic ACL entries for
|
||||
anonymous users. Each entry must be prefixed with
|
||||
<literal>pattern</literal> or <literal>topic</literal> to
|
||||
distinguish between these two cases.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
the <literal>acl</literal> attribute of every user
|
||||
configured in the listener configures the ACL for that given
|
||||
user. Only topic ACLs are supported by Mosquitto in this
|
||||
setting, so no prefix is required or allowed.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>
|
||||
The default ACL for a listener is empty, disallowing all
|
||||
accesses from all clients. To configure a completely open ACL,
|
||||
set <literal>acl = [ "pattern readwrite #" ]</literal>
|
||||
in the listener.
|
||||
</para>
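<para>
For instance, a listener that combines an anonymous pattern ACL with a per-user topic ACL could be written as follows (a sketch):
</para>
<programlisting language="nix">
services.mosquitto.listeners = [ {
  acl = [ "pattern read #" ];
  users.sensor = {
    acl = [ "write sensors/#" ];
    password = "sensor";
  };
} ];
</programlisting>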
|
||||
</section>
|
||||
</section>
|
||||
</chapter>
|
|
@ -516,7 +516,6 @@ in {
|
|||
${optionalString (!isNull defaults) ''
|
||||
defaults {
|
||||
${indentLines 2 defaults}
|
||||
multipath_dir ${cfg.package}/lib/multipath
|
||||
}
|
||||
''}
|
||||
${optionalString (!isNull blacklist) ''
|
||||
|
|
|
@ -185,7 +185,7 @@ in
|
|||
ProtectSystem = "full";
|
||||
ProtectHome = true;
|
||||
PrivateTmp = true;
|
||||
PrivateDevices = true;
|
||||
PrivateDevices = false;
|
||||
PrivateUsers = false;
|
||||
ProtectHostname = true;
|
||||
ProtectClock = false;
|
||||
|
@ -203,7 +203,7 @@ in
|
|||
PrivateMounts = true;
|
||||
# System Call Filtering
|
||||
SystemCallArchitectures = "native";
|
||||
SystemCallFilter = [ "~@cpu-emulation @debug @keyring @mount @obsolete @privileged @resources" "@clock" "@setuid" "capset" "chown" ];
|
||||
SystemCallFilter = [ "~@cpu-emulation @debug @keyring @mount @obsolete @privileged @resources" "@clock" "@setuid" "capset" "chown" ] ++ lib.optional pkgs.stdenv.hostPlatform.isAarch64 "fchownat";
|
||||
};
|
||||
};
|
||||
};
|
||||
|
|
|
@ -90,6 +90,7 @@ let
|
|||
generateConfig = name: icfg:
|
||||
pkgs.writeText "config" ''
|
||||
interface=${name}
|
||||
${optionalString (icfg.protocol != null) "protocol=${icfg.protocol}"}
|
||||
${optionalString (icfg.user != null) "user=${icfg.user}"}
|
||||
${optionalString (icfg.passwordFile != null) "passwd-on-stdin"}
|
||||
${optionalString (icfg.certificate != null)
|
||||
|
|
|
@ -147,5 +147,5 @@ in {
|
|||
|
||||
};
|
||||
meta.maintainers = with lib.maintainers; [ ninjatrappeur ];
|
||||
meta.doc = ./pleroma.xml;
|
||||
meta.doc = ./pleroma.md;
|
||||
}
|
||||
|
|
|
@ -1,244 +0,0 @@
|
|||
<!-- Do not edit this file directly, edit its companion .md instead
|
||||
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
|
||||
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-pleroma">
|
||||
<title>Pleroma</title>
|
||||
<para>
|
||||
<link xlink:href="https://pleroma.social/">Pleroma</link> is a
|
||||
lightweight activity pub server.
|
||||
</para>
|
||||
<section xml:id="module-services-pleroma-generate-config">
|
||||
<title>Generating the Pleroma config</title>
|
||||
<para>
|
||||
The <literal>pleroma_ctl</literal> CLI utility will prompt you
|
||||
with some questions and generate an initial config file. This
|
||||
is an example of usage
|
||||
</para>
|
||||
<programlisting>
|
||||
$ mkdir tmp-pleroma
|
||||
$ cd tmp-pleroma
|
||||
$ nix-shell -p pleroma-otp
|
||||
$ pleroma_ctl instance gen --output config.exs --output-psql setup.psql
|
||||
</programlisting>
|
||||
<para>
|
||||
The <literal>config.exs</literal> file can be further customized
|
||||
following the instructions on the
|
||||
<link xlink:href="https://docs-develop.pleroma.social/backend/configuration/cheatsheet/">upstream
|
||||
documentation</link>. Many refinements can be applied also after
|
||||
the service is running.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-pleroma-initialize-db">
|
||||
<title>Initializing the database</title>
|
||||
<para>
|
||||
First, the PostgreSQL service must be enabled in the NixOS
|
||||
configuration
|
||||
</para>
|
||||
<programlisting>
|
||||
services.postgresql = {
|
||||
enable = true;
|
||||
package = pkgs.postgresql_13;
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
and activated with the usual
|
||||
</para>
|
||||
<programlisting>
|
||||
$ nixos-rebuild switch
|
||||
</programlisting>
|
||||
<para>
|
||||
Then you can create and seed the database, using the
|
||||
<literal>setup.psql</literal> file that you generated in the
|
||||
previous section, by running
|
||||
</para>
|
||||
<programlisting>
|
||||
$ sudo -u postgres psql -f setup.psql
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-pleroma-enable">
|
||||
<title>Enabling the Pleroma service locally</title>
|
||||
<para>
|
||||
In this section we will enable the Pleroma service only locally,
|
||||
so its configuration can be improved incrementally.
|
||||
</para>
|
||||
<para>
|
||||
This is an example of configuration, where
|
||||
<xref linkend="opt-services.pleroma.configs" /> option contains
|
||||
the content of the file <literal>config.exs</literal>, generated
|
||||
<link linkend="module-services-pleroma-generate-config">in the
|
||||
first section</link>, but with the secrets (database password,
|
||||
endpoint secret key, salts, etc.) removed. Removing secrets is
|
||||
important, because otherwise they will be stored publicly in the
|
||||
Nix store.
|
||||
</para>
|
||||
<programlisting>
|
||||
services.pleroma = {
|
||||
enable = true;
|
||||
secretConfigFile = "/var/lib/pleroma/secrets.exs";
|
||||
configs = [
|
||||
''
|
||||
import Config
|
||||
|
||||
config :pleroma, Pleroma.Web.Endpoint,
|
||||
url: [host: "pleroma.example.net", scheme: "https", port: 443],
|
||||
http: [ip: {127, 0, 0, 1}, port: 4000]
|
||||
|
||||
config :pleroma, :instance,
|
||||
name: "Test",
|
||||
email: "admin@example.net",
|
||||
notify_email: "admin@example.net",
|
||||
limit: 5000,
|
||||
registrations_open: true
|
||||
|
||||
config :pleroma, :media_proxy,
|
||||
enabled: false,
|
||||
redirect_on_failure: true
|
||||
|
||||
config :pleroma, Pleroma.Repo,
|
||||
adapter: Ecto.Adapters.Postgres,
|
||||
username: "pleroma",
|
||||
database: "pleroma",
|
||||
hostname: "localhost"
|
||||
|
||||
# Configure web push notifications
|
||||
config :web_push_encryption, :vapid_details,
|
||||
subject: "mailto:admin@example.net"
|
||||
|
||||
# ... TO CONTINUE ...
|
||||
''
|
||||
];
|
||||
};
|
||||
</programlisting>
|
||||
<para>
|
||||
Secrets must be moved into a file pointed by
|
||||
<xref linkend="opt-services.pleroma.secretConfigFile" />, in our
|
||||
case <literal>/var/lib/pleroma/secrets.exs</literal>. This file
|
||||
can be created copying the previously generated
|
||||
<literal>config.exs</literal> file and then removing all the
|
||||
settings, except the secrets. This is an example
|
||||
</para>
|
||||
<programlisting>
|
||||
# Pleroma instance passwords
|
||||
|
||||
import Config
|
||||
|
||||
config :pleroma, Pleroma.Web.Endpoint,
|
||||
secret_key_base: "<the secret generated by pleroma_ctl>",
|
||||
signing_salt: "<the secret generated by pleroma_ctl>"
|
||||
|
||||
config :pleroma, Pleroma.Repo,
|
||||
password: "<the secret generated by pleroma_ctl>"
|
||||
|
||||
# Configure web push notifications
|
||||
config :web_push_encryption, :vapid_details,
|
||||
public_key: "<the secret generated by pleroma_ctl>",
|
||||
private_key: "<the secret generated by pleroma_ctl>"
|
||||
|
||||
# ... TO CONTINUE ...
|
||||
</programlisting>
|
||||
<para>
|
||||
Note that the lines of the same configuration group are comma
|
||||
separated (i.e. all the lines end with a comma, except the last
|
||||
one), so when the lines with passwords are added or removed,
|
||||
commas must be adjusted accordingly.
|
||||
</para>
|
||||
<para>
|
||||
The service can be enabled with the usual
|
||||
</para>
|
||||
<programlisting>
|
||||
$ nixos-rebuild switch
|
||||
</programlisting>
|
||||
<para>
|
||||
The service is accessible only from the local
|
||||
<literal>127.0.0.1:4000</literal> port. It can be tested using
|
||||
port forwarding like this
|
||||
</para>
|
||||
<programlisting>
|
||||
$ ssh -L 4000:localhost:4000 myuser@example.net
|
||||
</programlisting>
|
||||
<para>
|
||||
and then accessing
|
||||
<link xlink:href="http://localhost:4000">http://localhost:4000</link>
|
||||
from a web browser.
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="module-services-pleroma-admin-user">
|
||||
<title>Creating the admin user</title>
|
||||
<para>
|
||||
After Pleroma service is running, all
|
||||
<link xlink:href="https://docs-develop.pleroma.social/">Pleroma
|
||||
administration utilities</link> can be used. In particular an
|
||||
admin user can be created with
|
||||
</para>
|
||||
<programlisting>
|
||||
$ pleroma_ctl user new <nickname> <email> --admin --moderator --password <password>
|
||||
</programlisting>
|
||||
</section>
|
||||
<section xml:id="module-services-pleroma-nginx">
|
||||
<title>Configuring Nginx</title>
|
||||
<para>
|
||||
In this configuration, Pleroma is listening only on the local port
|
||||
4000. Nginx can be configured as a Reverse Proxy, for forwarding
|
||||
requests from public ports to the Pleroma service. This is an
|
||||
example of configuration, using
|
||||
<link xlink:href="https://letsencrypt.org/">Let’s Encrypt</link>
|
||||
for the TLS certificates
|
||||
</para>
|
||||
<programlisting>
|
||||
security.acme = {
|
||||
email = "root@example.net";
|
||||
acceptTerms = true;
|
||||
};
|
||||
|
||||
services.nginx = {
|
||||
enable = true;
|
||||
addSSL = true;
|
||||
|
||||
recommendedTlsSettings = true;
|
||||
recommendedOptimisation = true;
|
||||
recommendedGzipSettings = true;
|
||||
|
||||
recommendedProxySettings = false;
|
||||
# NOTE: if enabled, the NixOS proxy optimizations will override the Pleroma
|
||||
# specific settings, and they will enter in conflict.
|
||||
|
||||
virtualHosts = {
|
||||
"pleroma.example.net" = {
|
||||
http2 = true;
|
||||
enableACME = true;
|
||||
forceSSL = true;
|
||||
|
||||
locations."/" = {
|
||||
proxyPass = "http://127.0.0.1:4000";
|
||||
|
||||
extraConfig = ''
|
||||
etag on;
|
||||
gzip on;
|
||||
|
||||
add_header 'Access-Control-Allow-Origin' '*' always;
|
||||
add_header 'Access-Control-Allow-Methods' 'POST, PUT, DELETE, GET, PATCH, OPTIONS' always;
|
||||
add_header 'Access-Control-Allow-Headers' 'Authorization, Content-Type, Idempotency-Key' always;
|
||||
add_header 'Access-Control-Expose-Headers' 'Link, X-RateLimit-Reset, X-RateLimit-Limit, X-RateLimit-Remaining, X-Request-Id' always;
|
||||
if ($request_method = OPTIONS) {
|
||||
return 204;
|
||||
}
|
||||
add_header X-XSS-Protection "1; mode=block";
|
||||
add_header X-Permitted-Cross-Domain-Policies none;
|
||||
add_header X-Frame-Options DENY;
|
||||
add_header X-Content-Type-Options nosniff;
|
||||
add_header Referrer-Policy same-origin;
|
||||
add_header X-Download-Options noopen;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "upgrade";
|
||||
proxy_set_header Host $host;
|
||||
|
||||
client_max_body_size 16m;
|
||||
# NOTE: increase if users need to upload very big files
|
||||
'';
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
</programlisting>
|
||||
</section>
|
||||
</chapter>
|
|
@@ -905,5 +905,5 @@ in
  };

  meta.doc = ./prosody.xml;
  meta.doc = ./prosody.md;
}
@@ -1,92 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-prosody">
<title>Prosody</title>
<para>
<link xlink:href="https://prosody.im/">Prosody</link> is an open-source, modern XMPP server.
</para>
<section xml:id="module-services-prosody-basic-usage">
<title>Basic usage</title>
<para>
A common struggle for most XMPP newcomers is to find the right set of XMPP Extensions (XEPs) to
set up. Forget to activate a few of those and your XMPP experience might turn into a nightmare!
</para>
<para>
The XMPP community tackles this problem by creating a meta-XEP listing a decent set of XEPs you
should implement. This meta-XEP is issued every year, the 2020 edition being
<link xlink:href="https://xmpp.org/extensions/xep-0423.html">XEP-0423</link>.
</para>
<para>
The NixOS Prosody module will implement most of these recommended XEPs out of the box. That
being said, two components still require some manual configuration: the
<link xlink:href="https://xmpp.org/extensions/xep-0045.html">Multi User Chat (MUC)</link> and the
<link xlink:href="https://xmpp.org/extensions/xep-0363.html">HTTP File Upload</link> ones. You’ll
need to create a DNS subdomain for each of those. The current convention is to name your MUC
endpoint <literal>conference.example.org</literal> and your HTTP upload domain
<literal>upload.example.org</literal>.
</para>
<para>
A good configuration to start with, including a
<link xlink:href="https://xmpp.org/extensions/xep-0045.html">Multi User Chat (MUC)</link>
endpoint as well as an
<link xlink:href="https://xmpp.org/extensions/xep-0363.html">HTTP File Upload</link> endpoint,
will look like this:
</para>
<programlisting>
services.prosody = {
  enable = true;
  admins = [ "root@example.org" ];
  ssl.cert = "/var/lib/acme/example.org/fullchain.pem";
  ssl.key = "/var/lib/acme/example.org/key.pem";
  virtualHosts."example.org" = {
    enabled = true;
    domain = "example.org";
    ssl.cert = "/var/lib/acme/example.org/fullchain.pem";
    ssl.key = "/var/lib/acme/example.org/key.pem";
  };
  muc = [ {
    domain = "conference.example.org";
  } ];
  uploadHttp = {
    domain = "upload.example.org";
  };
};
</programlisting>
</section>
<section xml:id="module-services-prosody-letsencrypt">
<title>Let’s Encrypt Configuration</title>
<para>
As you can see in the code snippet from the
<link linkend="module-services-prosody-basic-usage">previous section</link>, you’ll need a single
TLS certificate covering your main endpoint, the MUC one as well as the HTTP Upload one. We can
generate such a certificate by leveraging the ACME
<link linkend="opt-security.acme.certs._name_.extraDomainNames">extraDomainNames</link> module
option.
</para>
<para>
Provided the setup detailed in the previous section, you’ll need the following ACME configuration
to generate a TLS certificate for the three endpoints:
</para>
<programlisting>
security.acme = {
  email = "root@example.org";
  acceptTerms = true;
  certs = {
    "example.org" = {
      webroot = "/var/www/example.org";
      email = "root@example.org";
      extraDomainNames = [ "conference.example.org" "upload.example.org" ];
    };
  };
};
</programlisting>
</section>
</chapter>
@@ -121,11 +121,15 @@ let
  ''}

  # substitute environment variables
  ${pkgs.gawk}/bin/awk '{
    for(varname in ENVIRON)
      gsub("@"varname"@", ENVIRON[varname])
    print
  }' "${configFile}" > "${finalConfig}"
  if [ -f "${configFile}" ]; then
    ${pkgs.gawk}/bin/awk '{
      for(varname in ENVIRON)
        gsub("@"varname"@", ENVIRON[varname])
      print
    }' "${configFile}" > "${finalConfig}"
  else
    touch "${finalConfig}"
  fi

  iface_args="-s ${optionalString cfg.dbusControlled "-u"} -D${cfg.driver} ${configStr}"
@@ -193,7 +193,7 @@ in {
    environment.systemPackages = [ cfg.package ];
  });
  meta = {
    doc = ./yggdrasil.xml;
    doc = ./yggdrasil.md;
    maintainers = with lib.maintainers; [ gazally ehmry ];
  };
}
@@ -1,157 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-networking-yggdrasil">
<title>Yggdrasil</title>
<para>
<emphasis>Source:</emphasis> <filename>modules/services/networking/yggdrasil/default.nix</filename>
</para>
<para>
<emphasis>Upstream documentation:</emphasis> <link xlink:href="https://yggdrasil-network.github.io/">https://yggdrasil-network.github.io/</link>
</para>
<para>
Yggdrasil is an early-stage implementation of a fully end-to-end encrypted, self-arranging IPv6
network.
</para>
<section xml:id="module-services-networking-yggdrasil-configuration">
<title>Configuration</title>
<section xml:id="module-services-networking-yggdrasil-configuration-simple">
<title>Simple ephemeral node</title>
<para>
An annotated example of a simple configuration:
</para>
<programlisting>
{
  services.yggdrasil = {
    enable = true;
    persistentKeys = false;
    # The NixOS module will generate new keys and a new IPv6 address each time
    # it is started if persistentKeys is not enabled.

    settings = {
      Peers = [
        # Yggdrasil will automatically connect and "peer" with other nodes it
        # discovers via link-local multicast announcements. Unless this is the
        # case (it probably isn't) a node needs peers within the existing
        # network that it can tunnel to.
        "tcp://1.2.3.4:1024"
        "tcp://1.2.3.5:1024"
        # Public peers can be found at
        # https://github.com/yggdrasil-network/public-peers
      ];
    };
  };
}
</programlisting>
</section>
<section xml:id="module-services-networking-yggdrasil-configuration-prefix">
<title>Persistent node with prefix</title>
<para>
A node with a fixed address that announces a prefix:
</para>
<programlisting>
let
  address = "210:5217:69c0:9afc:1b95:b9f:8718:c3d2";
  prefix = "310:5217:69c0:9afc";
  # taken from the output of "yggdrasilctl getself".
in {

  services.yggdrasil = {
    enable = true;
    persistentKeys = true; # Maintain a fixed public key and IPv6 address.
    settings = {
      Peers = [ "tcp://1.2.3.4:1024" "tcp://1.2.3.5:1024" ];
      NodeInfo = {
        # This information is visible to the network.
        name = config.networking.hostName;
        location = "The North Pole";
      };
    };
  };

  boot.kernel.sysctl."net.ipv6.conf.all.forwarding" = 1;
  # Forward traffic under the prefix.

  networking.interfaces.${eth0}.ipv6.addresses = [{
    # Set a 300::/8 address on the local physical device.
    address = prefix + "::1";
    prefixLength = 64;
  }];

  services.radvd = {
    # Announce the 300::/8 prefix to eth0.
    enable = true;
    config = ''
      interface eth0
      {
        AdvSendAdvert on;
        prefix ${prefix}::/64 {
          AdvOnLink on;
          AdvAutonomous on;
        };
        route 200::/8 {};
      };
    '';
  };
}
</programlisting>
</section>
<section xml:id="module-services-networking-yggdrasil-configuration-container">
<title>Yggdrasil attached Container</title>
<para>
A NixOS container attached to the Yggdrasil network via a node running on the host:
</para>
<programlisting>
let
  yggPrefix64 = "310:5217:69c0:9afc";
  # Again, taken from the output of "yggdrasilctl getself".
in
{
  boot.kernel.sysctl."net.ipv6.conf.all.forwarding" = 1;
  # Enable IPv6 forwarding.

  networking = {
    bridges.br0.interfaces = [ ];
    # A bridge only to containers…

    interfaces.br0 = {
      # … configured with a prefix address.
      ipv6.addresses = [{
        address = "${yggPrefix64}::1";
        prefixLength = 64;
      }];
    };
  };

  containers.foo = {
    autoStart = true;
    privateNetwork = true;
    hostBridge = "br0";
    # Attach the container to the bridge only.
    config = { config, pkgs, ... }: {
      networking.interfaces.eth0.ipv6 = {
        addresses = [{
          # Configure a prefix address.
          address = "${yggPrefix64}::2";
          prefixLength = 64;
        }];
        routes = [{
          # Configure the prefix route.
          address = "200::";
          prefixLength = 7;
          via = "${yggPrefix64}::1";
        }];
      };

      services.httpd.enable = true;
      networking.firewall.allowedTCPPorts = [ 80 ];
    };
  };

}
</programlisting>
</section>
</section>
</chapter>
@@ -9,7 +9,7 @@ in
{

  meta.maintainers = with maintainers; [ Br1ght0ne happysalada ];
  meta.doc = ./meilisearch.xml;
  meta.doc = ./meilisearch.md;

  ###### interface

@@ -1,87 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-meilisearch">
<title>Meilisearch</title>
<para>
Meilisearch is a lightweight, fast and powerful search engine. Think Elasticsearch with a much
smaller footprint.
</para>
<section xml:id="module-services-meilisearch-quickstart">
<title>Quickstart</title>
<para>
The minimum needed to start Meilisearch is
</para>
<programlisting language="nix">
services.meilisearch.enable = true;
</programlisting>
<para>
This will start the HTTP server included with Meilisearch on port 7700.
</para>
<para>
Test it with <literal>curl -X GET 'http://localhost:7700/health'</literal>.
</para>
</section>
<section xml:id="module-services-meilisearch-usage">
<title>Usage</title>
<para>
You first need to add documents to an index before you can search for them.
</para>
<section xml:id="module-services-meilisearch-quickstart-add">
<title>Add documents to the <literal>movies</literal> index</title>
<para>
<literal>curl -X POST 'http://127.0.0.1:7700/indexes/movies/documents' --data '[{"id": "123", "title": "Superman"}, {"id": 234, "title": "Batman"}]'</literal>
</para>
</section>
<section xml:id="module-services-meilisearch-quickstart-search">
<title>Search documents in the <literal>movies</literal> index</title>
<para>
<literal>curl 'http://127.0.0.1:7700/indexes/movies/search' --data '{ "q": "botman" }'</literal>
(note the typo is intentional; it is there to demonstrate the typo-tolerant search capabilities)
</para>
</section>
</section>
<section xml:id="module-services-meilisearch-defaults">
<title>Defaults</title>
<itemizedlist>
<listitem>
<para>
The default NixOS package doesn’t come with the
<link xlink:href="https://docs.meilisearch.com/learn/getting_started/quick_start.html#search">dashboard</link>,
since building the dashboard requires downloading assets at compile time.
</para>
</listitem>
<listitem>
<para>
Anonymized analytics sent to Meilisearch are disabled by default.
</para>
</listitem>
<listitem>
<para>
The default deployment is development mode. It doesn’t require a secret master key, and all
routes are unprotected and accessible; see the sketch after this list for a production-style
setup.
</para>
</listitem>
</itemizedlist>
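<para>
A hypothetical production-style sketch of those defaults. This is only an illustration: the
option names <literal>environment</literal> and <literal>masterKeyEnvironmentFile</literal> are
assumptions to be checked against the module’s option list, and the key file path is a
placeholder.
</para>
<programlisting language="nix">
services.meilisearch = {
  enable = true;
  # Assumed option: switches off the unauthenticated development defaults.
  environment = "production";
  # Assumed option: file containing MEILI_MASTER_KEY=<your secret>, kept outside the Nix store.
  masterKeyEnvironmentFile = "/run/keys/meilisearch-master-key";
};
</programlisting>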
</section>
<section xml:id="module-services-meilisearch-missing">
<title>Missing</title>
<itemizedlist spacing="compact">
<listitem>
<para>
The snapshot feature is not yet configurable from the module; it’s just a matter of adding the
relevant environment variables, as sketched below.
</para>
</listitem>
</itemizedlist>
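<para>
A minimal sketch of that workaround, using the generic
<literal>systemd.services.&lt;name&gt;.environment</literal> mechanism. The Meilisearch variable
names and path are assumptions taken from upstream documentation, not from this module.
</para>
<programlisting language="nix">
# Hypothetical: pass snapshot-related environment variables straight to the service unit.
systemd.services.meilisearch.environment = {
  MEILI_SCHEDULE_SNAPSHOT = "true";                        # assumed upstream variable
  MEILI_SNAPSHOT_DIR = "/var/lib/meilisearch/snapshots";   # assumed path
};
</programlisting>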
</section>
</chapter>
@@ -19,6 +19,15 @@ in {
      '';
    };

    dataPermissions = mkOption {
      type = types.str;
      default = "0750";
      example = "0755";
      description = lib.mdDoc ''
        Unix Permissions in octal on the rtorrent directory.
      '';
    };

    downloadDir = mkOption {
      type = types.str;
      default = "${cfg.dataDir}/download";

@@ -205,7 +214,7 @@ in {
      };
    };

    tmpfiles.rules = [ "d '${cfg.dataDir}' 0750 ${cfg.user} ${cfg.group} -" ];
    tmpfiles.rules = [ "d '${cfg.dataDir}' ${cfg.dataPermissions} ${cfg.user} ${cfg.group} -" ];
  };
}
@@ -1082,5 +1082,5 @@ in {
  };

  meta.maintainers = with maintainers; [ mvs ];
  meta.doc = ./akkoma.xml;
  meta.doc = ./akkoma.md;
}
@@ -1,398 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-akkoma">
<title>Akkoma</title>
<para>
<link xlink:href="https://akkoma.dev/">Akkoma</link> is a lightweight ActivityPub microblogging
server forked from Pleroma.
</para>
<section xml:id="modules-services-akkoma-service-configuration">
<title>Service configuration</title>
<para>
The Elixir configuration file required by Akkoma is generated automatically from
<link xlink:href="options.html#opt-services.akkoma.config"><option>services.akkoma.config</option></link>.
Secrets must be included from external files outside of the Nix store by setting the
configuration option to an attribute set containing the attribute <option>_secret</option> – a
string pointing to the file containing the actual value of the option.
</para>
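<para>
For example, the database password might be supplied this way (a sketch only; the file path is a
placeholder):
</para>
<programlisting language="nix">
services.akkoma.config.":pleroma"."Pleroma.Repo" = {
  # The attribute set containing `_secret` makes Akkoma read the value from this file at
  # runtime, keeping it out of the world-readable Nix store.
  password._secret = "/var/lib/secrets/akkoma/db-password";
};
</programlisting>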
<para>
For the mandatory configuration settings these secrets will be generated automatically if the
referenced file does not exist during startup, unless disabled through
<link xlink:href="options.html#opt-services.akkoma.initSecrets"><option>services.akkoma.initSecrets</option></link>.
</para>
<para>
The following configuration binds Akkoma to the Unix socket <literal>/run/akkoma/socket</literal>,
expecting to be run behind an HTTP proxy on <literal>fediverse.example.com</literal>.
</para>
<programlisting language="nix">
services.akkoma.enable = true;
services.akkoma.config = {
  ":pleroma" = {
    ":instance" = {
      name = "My Akkoma instance";
      description = "More detailed description";
      email = "admin@example.com";
      registration_open = false;
    };

    "Pleroma.Web.Endpoint" = {
      url.host = "fediverse.example.com";
    };
  };
};
</programlisting>
<para>
Please refer to the
<link xlink:href="https://docs.akkoma.dev/stable/configuration/cheatsheet/">configuration cheat
sheet</link> for additional configuration options.
</para>
</section>
<section xml:id="modules-services-akkoma-user-management">
<title>User management</title>
<para>
After the Akkoma service is running, the administration utility can be used to
<link xlink:href="https://docs.akkoma.dev/stable/administration/CLI_tasks/user/">manage
users</link>. In particular an administrative user can be created with
</para>
<programlisting>
$ pleroma_ctl user new <nickname> <email> --admin --moderator --password <password>
</programlisting>
</section>
<section xml:id="modules-services-akkoma-proxy-configuration">
<title>Proxy configuration</title>
<para>
Although it is possible to expose Akkoma directly, it is common practice to operate it behind an
HTTP reverse proxy such as nginx.
</para>
<programlisting language="nix">
services.akkoma.nginx = {
  enableACME = true;
  forceSSL = true;
};

services.nginx = {
  enable = true;

  clientMaxBodySize = "16m";
  recommendedTlsSettings = true;
  recommendedOptimisation = true;
  recommendedGzipSettings = true;
};
</programlisting>
<para>
Please refer to <xref linkend="module-security-acme" /> for details on how to provision an
SSL/TLS certificate.
</para>
<section xml:id="modules-services-akkoma-media-proxy">
<title>Media proxy</title>
<para>
Without the media proxy function, Akkoma does not store any remote media like pictures or video
locally, and clients have to fetch them directly from the source server.
</para>
<programlisting language="nix">
# Enable nginx slice module distributed with Tengine
services.nginx.package = pkgs.tengine;

# Enable media proxy
services.akkoma.config.":pleroma".":media_proxy" = {
  enabled = true;
  proxy_opts.redirect_on_failure = true;
};

# Adjust the persistent cache size as needed:
# Assuming an average object size of 128 KiB, around 1 MiB
# of memory is required for the key zone per GiB of cache.
# Ensure that the cache directory exists and is writable by nginx.
services.nginx.commonHttpConfig = ''
  proxy_cache_path /var/cache/nginx/cache/akkoma-media-cache
    levels= keys_zone=akkoma_media_cache:16m max_size=16g
    inactive=1y use_temp_path=off;
'';

services.akkoma.nginx = {
  locations."/proxy" = {
    proxyPass = "http://unix:/run/akkoma/socket";

    extraConfig = ''
      proxy_cache akkoma_media_cache;

      # Cache objects in slices of 1 MiB
      slice 1m;
      proxy_cache_key $host$uri$is_args$args$slice_range;
      proxy_set_header Range $slice_range;

      # Decouple proxy and upstream responses
      proxy_buffering on;
      proxy_cache_lock on;
      proxy_ignore_client_abort on;

      # Default cache times for various responses
      proxy_cache_valid 200 1y;
      proxy_cache_valid 206 301 304 1h;

      # Allow serving of stale items
      proxy_cache_use_stale error timeout invalid_header updating;
    '';
  };
};
</programlisting>
<section xml:id="modules-services-akkoma-prefetch-remote-media">
<title>Prefetch remote media</title>
<para>
The following example enables the <literal>MediaProxyWarmingPolicy</literal> MRF policy which
automatically fetches all media associated with a post through the media proxy, as soon as the
post is received by the instance.
</para>
<programlisting language="nix">
services.akkoma.config.":pleroma".":mrf".policies =
  map (pkgs.formats.elixirConf { }).lib.mkRaw [
    "Pleroma.Web.ActivityPub.MRF.MediaProxyWarmingPolicy"
  ];
</programlisting>
</section>
<section xml:id="modules-services-akkoma-media-previews">
<title>Media previews</title>
<para>
Akkoma can generate previews for media.
</para>
<programlisting language="nix">
services.akkoma.config.":pleroma".":media_preview_proxy" = {
  enabled = true;
  thumbnail_max_width = 1920;
  thumbnail_max_height = 1080;
};
</programlisting>
</section>
</section>
</section>
<section xml:id="modules-services-akkoma-frontend-management">
<title>Frontend management</title>
<para>
Akkoma will be deployed with the <literal>pleroma-fe</literal> and <literal>admin-fe</literal>
frontends by default. These can be modified by setting
<link xlink:href="options.html#opt-services.akkoma.frontends"><option>services.akkoma.frontends</option></link>.
</para>
<para>
The following example overrides the primary frontend’s default configuration using a custom
derivation.
</para>
<programlisting language="nix">
services.akkoma.frontends.primary.package = pkgs.runCommand "pleroma-fe" {
  config = builtins.toJSON {
    expertLevel = 1;
    collapseMessageWithSubject = false;
    stopGifs = false;
    replyVisibility = "following";
    webPushHideIfCW = true;
    hideScopeNotice = true;
    renderMisskeyMarkdown = false;
    hideSiteFavicon = true;
    postContentType = "text/markdown";
    showNavShortcuts = false;
  };
  nativeBuildInputs = with pkgs; [ jq xorg.lndir ];
  passAsFile = [ "config" ];
} ''
  mkdir $out
  lndir ${pkgs.akkoma-frontends.pleroma-fe} $out

  rm $out/static/config.json
  jq -s add ${pkgs.akkoma-frontends.pleroma-fe}/static/config.json ${config} \
    >$out/static/config.json
'';
</programlisting>
</section>
<section xml:id="modules-services-akkoma-federation-policies">
<title>Federation policies</title>
<para>
Akkoma comes with a number of modules to police federation with other ActivityPub instances. The
most valuable for typical users is the
<link xlink:href="https://docs.akkoma.dev/stable/configuration/cheatsheet/#mrf_simple"><literal>:mrf_simple</literal></link>
module, which allows limiting federation based on instance hostnames.
</para>
<para>
This configuration snippet provides an example of how these can be used. Choosing an adequate
federation policy is not trivial and entails finding a balance between connectivity to the rest
of the fediverse and providing a pleasant experience to the users of an instance.
</para>
<programlisting language="nix">
services.akkoma.config.":pleroma" = with (pkgs.formats.elixirConf { }).lib; {
  ":mrf".policies = map mkRaw [
    "Pleroma.Web.ActivityPub.MRF.SimplePolicy"
  ];

  ":mrf_simple" = {
    # Tag all media as sensitive
    media_nsfw = mkMap {
      "nsfw.weird.kinky" = "Untagged NSFW content";
    };

    # Reject all activities except deletes
    reject = mkMap {
      "kiwifarms.cc" = "Persistent harassment of users, no moderation";
    };

    # Force posts to be visible by followers only
    followers_only = mkMap {
      "beta.birdsite.live" = "Avoid polluting timelines with Twitter posts";
    };
  };
};
</programlisting>
</section>
<section xml:id="modules-services-akkoma-upload-filters">
<title>Upload filters</title>
<para>
This example strips GPS and location metadata from uploads, deduplicates them and anonymises the
file name.
</para>
<programlisting language="nix">
services.akkoma.config.":pleroma"."Pleroma.Upload".filters =
  map (pkgs.formats.elixirConf { }).lib.mkRaw [
    "Pleroma.Upload.Filter.Exiftool"
    "Pleroma.Upload.Filter.Dedupe"
    "Pleroma.Upload.Filter.AnonymizeFilename"
  ];
</programlisting>
</section>
<section xml:id="modules-services-akkoma-migration-pleroma">
<title>Migration from Pleroma</title>
<para>
Pleroma instances can be migrated to Akkoma either by copying the database and upload data or by
pointing Akkoma to the existing data. The necessary database migrations are run automatically
during startup of the service.
</para>
<para>
The configuration has to be copy‐edited manually.
</para>
<para>
Depending on the size of the database, the initial migration may take a long time and exceed the
startup timeout of the system manager. To work around this issue one may adjust the startup
timeout <option>systemd.services.akkoma.serviceConfig.TimeoutStartSec</option> or simply run the
migrations manually:
</para>
<programlisting>
pleroma_ctl migrate
</programlisting>
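<para>
The former alternative is a one-line override (the one-hour value is only an illustration):
</para>
<programlisting language="nix">
# Give the initial Akkoma migration more time before systemd gives up on the unit.
systemd.services.akkoma.serviceConfig.TimeoutStartSec = "1h";
</programlisting>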
<section xml:id="modules-services-akkoma-migration-pleroma-copy">
<title>Copying data</title>
<para>
Copying the Pleroma data instead of re‐using it in place may permit easier reversion to Pleroma,
but allows the two data sets to diverge.
</para>
<para>
First disable Pleroma and then copy its database and upload data:
</para>
<programlisting>
# Create a copy of the database
nix-shell -p postgresql --run 'createdb -T pleroma akkoma'

# Copy upload data
mkdir /var/lib/akkoma
cp -R --reflink=auto /var/lib/pleroma/uploads /var/lib/akkoma/
</programlisting>
<para>
After the data has been copied, enable the Akkoma service and verify that the migration has been
successful. If no longer required, the original data may then be deleted:
</para>
<programlisting>
# Delete original database
nix-shell -p postgresql --run 'dropdb pleroma'

# Delete original Pleroma state
rm -r /var/lib/pleroma
</programlisting>
</section>
<section xml:id="modules-services-akkoma-migration-pleroma-reuse">
<title>Re‐using data</title>
<para>
To re‐use the Pleroma data in place, disable Pleroma and enable Akkoma, pointing it to the
Pleroma database and upload directory.
</para>
<programlisting language="nix">
# Adjust these settings according to the database name and upload directory path used by Pleroma
services.akkoma.config.":pleroma"."Pleroma.Repo".database = "pleroma";
services.akkoma.config.":pleroma".":instance".upload_dir = "/var/lib/pleroma/uploads";
</programlisting>
<para>
Please keep in mind that after the Akkoma service has been started, any migrations applied by
Akkoma have to be rolled back before the database can be used again with Pleroma. This can be
achieved through <literal>pleroma_ctl ecto.rollback</literal>. Refer to the
<link xlink:href="https://hexdocs.pm/ecto_sql/Mix.Tasks.Ecto.Rollback.html">Ecto SQL
documentation</link> for details.
</para>
</section>
</section>
<section xml:id="modules-services-akkoma-advanced-deployment">
<title>Advanced deployment options</title>
<section xml:id="modules-services-akkoma-confinement">
<title>Confinement</title>
<para>
The Akkoma systemd service may be confined to a chroot with
</para>
<programlisting language="nix">
systemd.services.akkoma.confinement.enable = true;
</programlisting>
<para>
Confinement of services is not generally supported in NixOS and therefore disabled by default.
Depending on the Akkoma configuration, the default confinement settings may be insufficient and
lead to subtle errors at run time, requiring adjustment:
</para>
<para>
Use
<link xlink:href="options.html#opt-systemd.services._name_.confinement.packages"><option>systemd.services.akkoma.confinement.packages</option></link>
to make packages available in the chroot.
</para>
<para>
<option>systemd.services.akkoma.serviceConfig.BindPaths</option> and
<option>systemd.services.akkoma.serviceConfig.BindReadOnlyPaths</option> permit access to outside
paths through bind mounts. Refer to
<link xlink:href="https://www.freedesktop.org/software/systemd/man/systemd.exec.html#BindPaths="><citerefentry><refentrytitle>systemd.exec</refentrytitle><manvolnum>5</manvolnum></citerefentry></link>
for details.
</para>
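<para>
A sketch of such an adjustment; the extra package and the bind-mounted path are purely
illustrative assumptions, to be replaced by whatever the confined service actually needs.
</para>
<programlisting language="nix">
systemd.services.akkoma = {
  # Make an additional package visible inside the chroot (illustrative).
  confinement.packages = [ pkgs.exiftool ];
  # Expose a host path read-only inside the chroot (illustrative).
  serviceConfig.BindReadOnlyPaths = [ "/etc/resolv.conf" ];
};
</programlisting>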
</section>
<section xml:id="modules-services-akkoma-distributed-deployment">
<title>Distributed deployment</title>
<para>
Being an Elixir application, Akkoma can be deployed in a distributed fashion.
</para>
<para>
This requires setting
<link xlink:href="options.html#opt-services.akkoma.dist.address"><option>services.akkoma.dist.address</option></link>
and
<link xlink:href="options.html#opt-services.akkoma.dist.cookie"><option>services.akkoma.dist.cookie</option></link>.
The specifics depend strongly on the deployment environment. For more information please check
the relevant
<link xlink:href="https://www.erlang.org/doc/reference_manual/distributed.html">Erlang
documentation</link>.
</para>
</section>
</section>
</chapter>

@@ -1080,6 +1080,6 @@ in
    ];
  };

  meta.doc = ./discourse.xml;
  meta.doc = ./discourse.md;
  meta.maintainers = [ lib.maintainers.talyz ];
}
@@ -1,331 +0,0 @@
<!-- Do not edit this file directly, edit its companion .md instead
and regenerate this file using nixos/doc/manual/md-to-db.sh -->
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="module-services-discourse">
<title>Discourse</title>
<para>
<link xlink:href="https://www.discourse.org/">Discourse</link> is a modern and open source
discussion platform.
</para>
<section xml:id="module-services-discourse-basic-usage">
<title>Basic usage</title>
<para>
A minimal configuration using Let’s Encrypt for TLS certificates looks like this:
</para>
<programlisting>
services.discourse = {
  enable = true;
  hostname = "discourse.example.com";
  admin = {
    email = "admin@example.com";
    username = "admin";
    fullName = "Administrator";
    passwordFile = "/path/to/password_file";
  };
  secretKeyBaseFile = "/path/to/secret_key_base_file";
};
security.acme.email = "me@example.com";
security.acme.acceptTerms = true;
</programlisting>
<para>
Provided a proper DNS setup, you’ll be able to connect to the instance at
<literal>discourse.example.com</literal> and log in using the credentials provided in
<literal>services.discourse.admin</literal>.
</para>
</section>
<section xml:id="module-services-discourse-tls">
<title>Using a regular TLS certificate</title>
<para>
To set up TLS using a regular certificate and key on file, use the
<xref linkend="opt-services.discourse.sslCertificate" /> and
<xref linkend="opt-services.discourse.sslCertificateKey" /> options:
</para>
<programlisting>
services.discourse = {
  enable = true;
  hostname = "discourse.example.com";
  sslCertificate = "/path/to/ssl_certificate";
  sslCertificateKey = "/path/to/ssl_certificate_key";
  admin = {
    email = "admin@example.com";
    username = "admin";
    fullName = "Administrator";
    passwordFile = "/path/to/password_file";
  };
  secretKeyBaseFile = "/path/to/secret_key_base_file";
};
</programlisting>
</section>
<section xml:id="module-services-discourse-database">
<title>Database access</title>
<para>
Discourse uses PostgreSQL to store most of its data. A local PostgreSQL server will automatically
be enabled and a database and role created, unless
<xref linkend="opt-services.discourse.database.host" /> is changed from its default of
<literal>null</literal> or <xref linkend="opt-services.discourse.database.createLocally" /> is
set to <literal>false</literal>.
</para>
<para>
External database access can also be configured by setting
<xref linkend="opt-services.discourse.database.host" />,
<xref linkend="opt-services.discourse.database.username" /> and
<xref linkend="opt-services.discourse.database.passwordFile" /> as appropriate. Note that you
need to manually create a database called <literal>discourse</literal> (or the name you chose in
<xref linkend="opt-services.discourse.database.name" />) and allow the configured database user
full access to it.
</para>
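<para>
A sketch of such an external database configuration (the host name and file path are
placeholders):
</para>
<programlisting>
services.discourse.database = {
  host = "database.example.com";       # external PostgreSQL server (placeholder)
  username = "discourse";
  passwordFile = "/path/to/database_password_file";
  name = "discourse";                  # must already exist on the external server
};
</programlisting>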
</section>
<section xml:id="module-services-discourse-mail">
<title>Email</title>
<para>
In addition to the basic setup, you’ll want to configure an SMTP server Discourse can use to send
user registration and password reset emails, among others. You can also optionally let Discourse
receive email, which enables people to reply to threads and conversations via email.
</para>
<para>
A basic setup which assumes you want to use your configured
<link linkend="opt-services.discourse.hostname">hostname</link> as email domain can be done like
this:
</para>
<programlisting>
services.discourse = {
  enable = true;
  hostname = "discourse.example.com";
  sslCertificate = "/path/to/ssl_certificate";
  sslCertificateKey = "/path/to/ssl_certificate_key";
  admin = {
    email = "admin@example.com";
    username = "admin";
    fullName = "Administrator";
    passwordFile = "/path/to/password_file";
  };
  mail.outgoing = {
    serverAddress = "smtp.emailprovider.com";
    port = 587;
    username = "user@emailprovider.com";
    passwordFile = "/path/to/smtp_password_file";
  };
  mail.incoming.enable = true;
  secretKeyBaseFile = "/path/to/secret_key_base_file";
};
</programlisting>
<para>
This assumes you have set up an MX record for the address you’ve set in
<link linkend="opt-services.discourse.hostname">hostname</link> and requires proper SPF, DKIM and
DMARC configuration to be done for the domain you’re sending from, in order for email to be
reliably delivered.
</para>
<para>
If you want to use a different domain for your outgoing email (for example
<literal>example.com</literal> instead of <literal>discourse.example.com</literal>) you should
set <xref linkend="opt-services.discourse.mail.notificationEmailAddress" /> and
<xref linkend="opt-services.discourse.mail.contactEmailAddress" /> manually.
</para>
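<para>
A sketch of those two options for a separate sending domain (the addresses are placeholders):
</para>
<programlisting>
services.discourse.mail.notificationEmailAddress = "notifications@example.com";
services.discourse.mail.contactEmailAddress = "admin@example.com";
</programlisting>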
<note>
<para>
Setup of TLS for incoming email is currently only configured automatically when a regular TLS
certificate is used, i.e. when <xref linkend="opt-services.discourse.sslCertificate" /> and
<xref linkend="opt-services.discourse.sslCertificateKey" /> are set.
</para>
</note>
</section>
<section xml:id="module-services-discourse-settings">
<title>Additional settings</title>
<para>
Additional site settings and backend settings, for which no explicit NixOS options are provided,
can be set in <xref linkend="opt-services.discourse.siteSettings" /> and
<xref linkend="opt-services.discourse.backendSettings" /> respectively.
</para>
<section xml:id="module-services-discourse-site-settings">
<title>Site settings</title>
<para>
<quote>Site settings</quote> are the settings that can be changed through the Discourse UI. Their
<emphasis>default</emphasis> values can be set using
<xref linkend="opt-services.discourse.siteSettings" />.
</para>
<para>
Settings are expressed as a Nix attribute set which matches the structure of the configuration in
<link xlink:href="https://github.com/discourse/discourse/blob/master/config/site_settings.yml">config/site_settings.yml</link>.
To find a setting’s path, you only need to care about the first two levels; i.e. its category
(e.g. <literal>login</literal>) and name (e.g. <literal>invite_only</literal>).
</para>
<para>
Settings containing secret data should be set to an attribute set containing the attribute
<literal>_secret</literal> - a string pointing to a file containing the value the option should
be set to. See the example.
</para>
</section>
<section xml:id="module-services-discourse-backend-settings">
<title>Backend settings</title>
<para>
Settings are expressed as a Nix attribute set which matches the structure of the configuration in
<link xlink:href="https://github.com/discourse/discourse/blob/stable/config/discourse_defaults.conf">config/discourse.conf</link>.
Empty parameters can be defined by setting them to <literal>null</literal>.
</para>
</section>
<section xml:id="module-services-discourse-settings-example">
<title>Example</title>
<para>
The following example sets the title and description of the Discourse instance and enables
GitHub login in the site settings, and changes a few request limits in the backend settings:
</para>
<programlisting>
services.discourse = {
  enable = true;
  hostname = "discourse.example.com";
  sslCertificate = "/path/to/ssl_certificate";
  sslCertificateKey = "/path/to/ssl_certificate_key";
  admin = {
    email = "admin@example.com";
    username = "admin";
    fullName = "Administrator";
    passwordFile = "/path/to/password_file";
  };
  mail.outgoing = {
    serverAddress = "smtp.emailprovider.com";
    port = 587;
    username = "user@emailprovider.com";
    passwordFile = "/path/to/smtp_password_file";
  };
  mail.incoming.enable = true;
  siteSettings = {
    required = {
      title = "My Cats";
      site_description = "Discuss My Cats (and be nice plz)";
    };
    login = {
      enable_github_logins = true;
      github_client_id = "a2f6dfe838cb3206ce20";
      github_client_secret._secret = /run/keys/discourse_github_client_secret;
    };
  };
  backendSettings = {
    max_reqs_per_ip_per_minute = 300;
    max_reqs_per_ip_per_10_seconds = 60;
    max_asset_reqs_per_ip_per_10_seconds = 250;
    max_reqs_per_ip_mode = "warn+block";
  };
  secretKeyBaseFile = "/path/to/secret_key_base_file";
};
</programlisting>
<para>
In the resulting site settings file, the <literal>login.github_client_secret</literal> key will
be set to the contents of the
<filename>/run/keys/discourse_github_client_secret</filename> file.
</para>
</section>
</section>
<section xml:id="module-services-discourse-plugins">
<title>Plugins</title>
<para>
You can install Discourse plugins using the
<xref linkend="opt-services.discourse.plugins" /> option. Pre-packaged plugins are provided in
<literal><your_discourse_package_here>.plugins</literal>. If you want the full suite of plugins
provided through <literal>nixpkgs</literal>, you can also set the
<xref linkend="opt-services.discourse.package" /> option to
<literal>pkgs.discourseAllPlugins</literal>.
</para>
<para>
Plugins can be built with the
<literal><your_discourse_package_here>.mkDiscoursePlugin</literal> function. Normally, it should
suffice to provide a <literal>name</literal> and <literal>src</literal> attribute. If the plugin
has Ruby dependencies, however, they need to be packaged in accordance with the
<link xlink:href="https://nixos.org/manual/nixpkgs/stable/#developing-with-ruby">Developing with
Ruby</link> section of the Nixpkgs manual and the appropriate gem options set in
<literal>bundlerEnvArgs</literal> (normally <literal>gemdir</literal> is sufficient). A plugin’s
Ruby dependencies are listed in its <filename>plugin.rb</filename> file as function calls to
<literal>gem</literal>. To construct the corresponding <filename>Gemfile</filename> manually,
run <command>bundle init</command>, then add the <literal>gem</literal> lines to it verbatim.
</para>
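<para>
A minimal sketch of packaging such a plugin this way (the plugin name, repository and hash are
hypothetical; add <literal>bundlerEnvArgs.gemdir</literal> only if the plugin has Ruby
dependencies):
</para>
<programlisting>
services.discourse.plugins = [
  (config.services.discourse.package.mkDiscoursePlugin {
    name = "discourse-example-plugin";          # hypothetical plugin
    src = pkgs.fetchFromGitHub {
      owner = "example";
      repo = "discourse-example-plugin";
      rev = "0000000000000000000000000000000000000000";
      hash = lib.fakeHash;                      # replace with the real hash
    };
  })
];
</programlisting>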
<para>
Much of the packaging can be done automatically by the
<filename>nixpkgs/pkgs/servers/web-apps/discourse/update.py</filename> script - just add the
plugin to the <literal>plugins</literal> list in the <literal>update_plugins</literal> function
and run the script:
</para>
<programlisting language="bash">
./update.py update-plugins
</programlisting>
<para>
Some plugins provide
<link linkend="module-services-discourse-site-settings">site settings</link>. Their defaults can
be configured using <xref linkend="opt-services.discourse.siteSettings" />, just like regular
site settings. To find the names of these settings, look in the
<literal>config/settings.yml</literal> file of the plugin repo.
</para>
<para>
For example, to add the
<link xlink:href="https://github.com/discourse/discourse-spoiler-alert">discourse-spoiler-alert</link>
and
<link xlink:href="https://github.com/discourse/discourse-solved">discourse-solved</link>
plugins, and disable <literal>discourse-spoiler-alert</literal> by default:
</para>
<programlisting>
services.discourse = {
  enable = true;
  hostname = "discourse.example.com";
  sslCertificate = "/path/to/ssl_certificate";
  sslCertificateKey = "/path/to/ssl_certificate_key";
  admin = {
    email = "admin@example.com";
    username = "admin";
    fullName = "Administrator";
    passwordFile = "/path/to/password_file";
  };
  mail.outgoing = {
    serverAddress = "smtp.emailprovider.com";
    port = 587;
    username = "user@emailprovider.com";
    passwordFile = "/path/to/smtp_password_file";
  };
  mail.incoming.enable = true;
  plugins = with config.services.discourse.package.plugins; [
    discourse-spoiler-alert
    discourse-solved
  ];
  siteSettings = {
    plugins = {
      spoiler_enabled = false;
    };
  };
  secretKeyBaseFile = "/path/to/secret_key_base_file";
};
</programlisting>
</section>
</chapter>

@@ -167,6 +167,6 @@ in {

  meta = {
    maintainers = with maintainers; [ ma27 ];
    doc = ./grocy.xml;
    doc = ./grocy.md;
  };
}
Some files were not shown because too many files have changed in this diff.