For reasons yet unknown, the vxlan backend doesn't work (at least inside
the qemu networking), so this is moved to the udp backend.
Note that changing the backend apparently also changes the interface
name: it's now `flannel0`, not `flannel.1`.
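For reference, a hedged sketch of what selecting the udp backend looks like
on the NixOS side (assuming the `services.flannel.backend` option; the test
may set this differently):
``` nix
services.flannel = {
  enable = true;
  # backend settings are passed through to flannel's network config;
  # "udp" instead of the default "vxlan", which hangs under QEMU here
  backend = { Type = "udp"; };
};
```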
Fixes #74941
This was whitespace-sensitive, kept fighting with my editor, and broke
the tests easily. To fix this, let Python split the output into
individual lines and strip whitespace from them before comparing.
When trying to build a VM using `nixos-build-vms` with a configuration
that doesn't evaluate, an error "at `<unknown-file>`" is usually shown.
This happens because `build-vms.nix` creates a VM network of
NixOS configurations that are attribute sets or functions and don't
carry any file information. This patch manually adds the `_file`
attribute to tell the module system which file contained the broken
configuration:
```
$ cat vm.nix
{ vm.invalid-option = 1; }
$ nixos-build-vms vm.nix
error: The option `invalid-option' defined in `/home/ma27/Projects/nixpkgs/vm.nix@node-vm' does not exist.
(use '--show-trace' to show detailed location information)
```
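For context, a hedged sketch (not the actual `build-vms.nix` code) of how
`_file` can be attached to an imported configuration so the module system
can report its origin:
``` nix
{
  imports = [
    {
      # _file tells the module system which file to blame in error messages
      _file = toString ./vm.nix;
      imports = [ (import ./vm.nix) ];
    }
  ];
}
```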
This commit:
1. Updates the path of the traefik package, so that the `out` output is
used.
2. Adapts the configuration settings and options to Traefik v2.
3. Formats the NixOS traefik service using nixfmt.
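A hedged sketch of what a v2-style configuration might look like with this
module (the option names `staticConfigOptions`/`dynamicConfigOptions` are
assumptions here; the values are made up):
``` nix
services.traefik = {
  enable = true;
  # Traefik v2 splits configuration into a static and a dynamic part
  staticConfigOptions = {
    entryPoints.web.address = ":80";
  };
  dynamicConfigOptions = {
    http.routers.dashboard = {
      rule = "Host(`traefik.example.org`)";
      service = "api@internal";
    };
  };
};
```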
According to my analysis the last critical fix went into v5.4.23; I have
confirmed this by running WebGL overnight and haven't seen a single
i915 GPU hang. Let's remove the notes from the release notes.
(cherry picked from commit da764d22ce3b698707861d58824843ded87cbb0a)
We already set the relevant env vars in the systemd services. That does
not help when executing any of the executables outside a service,
e.g. when creating a new user.
We use stdenv.hostPlatform.uname.processor, which I believe is just like
`uname -p`.
Example values:
```
(import <nixpkgs> { system = "x86_64-linux"; }).stdenv.hostPlatform.uname.processor
"x86_64"
(import <nixpkgs> { system = "aarch64-linux"; }).stdenv.hostPlatform.uname.processor
"aarch64"
(import <nixpkgs> { system = "armv7l-linux"; }).stdenv.hostPlatform.uname.processor
"armv7l"
```
The new wording does not assume the user is upgrading.
This matters because a user could be setting up a new installation of
20.03 on a server whose stateVersion is 19.09 or earlier!
The new wording reduces confusion by stating that they do not have to
care about the assumed 16→17 transition.
Then, the wording explains that they should upgrade to version 18, and
how to do so.
It also revises the confusing wording about "multiple" upgrades.
* * *
The only thing we cannot really do is stop a fresh install of 17 when
there was no previous install, as that cannot be detected. This forces a
useless upgrade on new users with old state versions.
It is also important to state that they must set their package to
Nextcloud 18, as future upgrades to Nextcloud will not allow an upgrade
from 17!
I assume future warning messages will exist specifically stating what to
do to go from 18 to 19, then 19 to 20, etc...
This makes it possible to have multiple certificates with the same common name.
Lego uses the common name to name the certificate in its internal directory.
Fixes #84409
This is a backward-incompatible change in upstream dhcpcd [0], which
could easily have locked me out of my box.
Since dhcpcd doesn't allow using only a blacklist of devices
(denyinterfaces in dhcpcd.conf) and picking up all remaining devices,
while still explicitly allowing some interfaces like bridges, I think
the best option is to not change anything about it and just educate
users about that edge case and how to solve it.
[0] https://roy.marples.name/archives/dhcpcd-discuss/0002621.html
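For reference, a hedged sketch of what explicitly allowing an interface
looks like in NixOS terms (the interface name is made up; see the linked
thread for the upstream behaviour):
``` nix
# explicitly allow the bridge so dhcpcd keeps configuring it
networking.dhcpcd.allowInterfaces = [ "br0" ];
```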
(cherry picked from commit eeeb2bf8035b309a636d596de6a3b1d52ca427b1)
I've had Netdata crash on me sometimes. Rarely but more than once. And I lost days of data before I noticed.
Let's be nice and restart it on failures by default.
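In systemd terms, the change boils down to something like the following
(a hedged sketch; the exact values used by the module may differ):
``` nix
systemd.services.netdata.serviceConfig = {
  # restart Netdata automatically if it crashes, so data keeps being collected
  Restart = "on-failure";
};
```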
This properly supports the `supportedSystems` and
`limitedSupportedSystems` arguments of `release-combined.nix`.
Previously, evaluation would fail if `x86_64-linux` was not part of
either of those, since the tested job always referenced the
`x86_64-linux` nixos tests (which won't exist in an aarch64-only eval).
Since the hydra configuration for the jobset `trunk-combined` has both
`aarch64-linux` and `x86_64-linux` as supported systems, this will make
aarch64 part of the tested job on that jobset.
Also removed `pkgs.hydra-flakes` since flake-support has been merged
into master[1]. Because of that, `pkgs.hydra-unstable` is now compiled
against `pkgs.nixFlakes` and currently requires a patch since Hydra's
master doesn't compile[2] atm.
[1] https://github.com/NixOS/hydra/pull/730
[2] https://github.com/NixOS/hydra/pull/732
Instead of making the configuration less portable by hard-coding the
number of jobs to the number of cores, we can let Nix determine the same
number at runtime.
This prevents duplication in cross-compiled NixOS machines. The
bootstrapped glibc differs from the natively compiled one, so we get
two glibcs in the closure. To reduce closure size, just use
stdenv.cc.libc where available.
Some changes might require manual migration steps:
"Due to changes to the way in which Gollum handles filenames, you may
have to change some links in your wiki when migrating from gollum 4.x.
See the release notes [0] for more details. You may find the
bin/gollum-migrate-tags script helpful to accomplish this. Also see the
--lenient-tag-lookup option for making tag lookup backwards compatible
with 4.x, though note that this will decrease performance on large wikis
with many tags." (source: [1])
[0]: https://github.com/gollum/gollum/wiki/5.0-release-notes
[1]: https://github.com/gollum/gollum/blob/v5.0.0/HISTORY.md
We have this bug https://github.com/elementary/gala/issues/636
when using notifications in gala. It's unlikely to really be fixed
because all development is happening on the new notifications server.
Their removal of cerbere and registration with the SessionManager
should make shutdown very fast. This was even done in plank [0],
which was the last factor outside cerbere causing this.
[0]: a8d2f255b2
As discussed in https://github.com/NixOS/nixpkgs/pull/73763, prevailing
consensus is to revert that commit. People use the hardened profile on
machines and run nix builds, and there's no good reason to use
unsandboxed builds at all unless you're on a platform that doesn't
support them.
This reverts commit 00ac71ab19.
So now we have only packages for human interaction in php.packages and
only extensions in php.extensions. With this, php.packages.exts has been
merged into the same attribute set as all the other extensions to make
it flat and nice.
The Nextcloud module and the documentation have been updated to reflect
this change.
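Hedged examples of the resulting split (attribute names assumed):
``` nix
# tools for humans stay under php.packages:
environment.systemPackages = [ php.packages.composer ];

# extensions are now one flat attribute set, e.g.:
#   php.extensions.imagick
#   php.extensions.redis
# (previously some of these lived under php.packages.exts)
```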
In 7f838b4dde, we dropped systemd-udev-settle.service from display-manager.service's wants.
Unfortunately, we are doing something wrong, since without it both Xorg and Wayland fail to start:
Failed to open gpu '/dev/dri/card0': GDBus.Error:org.freedesktop.DBus.Error.AccessDenied: Operation not permitted
Until we sort this out, let's add systemd-udev-settle.service to GDM to unblock the channels.
In contrast to `.service` units, it's not possible to declare an
`overrides.conf` for `.nspawn` units; however, this is exactly what
`generateUnits` currently does, which breaks the build if you have two
derivations configuring one nspawn unit.
This will happen in a case like this:
``` nix
{ pkgs, ... }: {
  systemd.packages = [
    (pkgs.writeTextDir "etc/systemd/nspawn/container0.nspawn" ''
      [Files]
      Bind=/tmp
    '')
  ];
  systemd.nspawn.container0 = {
    /* ... */
  };
}
```
This allows you to specify the system-wide flake registry. One use is
to pin 'nixpkgs' to the Nixpkgs version used to build the system:
nix.registry.nixpkgs.flake = nixpkgs;
where 'nixpkgs' is a flake input. This ensures that commands like
$ nix run nixpkgs#hello
pull in a minimum of additional store paths.
You can also use this to redirect flakes, e.g.
nix.registry.nixpkgs.to = {
  type = "github";
  owner = "my-org";
  repo = "my-nixpkgs";
};
Many options define their example to be a Nix value without using
literalExample. This sometimes gets rendered incorrectly in the manual,
causing confusion like in https://github.com/NixOS/nixpkgs/issues/25516.
This fixes it by using literalExample for such options. The list of
options to fix was determined with this expression:
``` nix
let
  nixos = import ./nixos { configuration = {}; };
  lib = import ./lib;
  valid = d: {
    # escapeNixIdentifier from https://github.com/NixOS/nixpkgs/pull/82461
    set = lib.all (n: lib.strings.escapeNixIdentifier n == n) (lib.attrNames d)
      && lib.all (v: valid v) (lib.attrValues d);
    list = lib.all (v: valid v) d;
  }.${builtins.typeOf d} or true;
  optionList = lib.optionAttrSetToDocList nixos.options;
in map (opt: {
  file = lib.elemAt opt.declarations 0;
  loc = lib.options.showOption opt.loc;
}) (lib.filter (opt: if opt ? example then ! valid opt.example else false) optionList)
```
which when evaluated will output all options that use a Nix identifier
that would need escaping as an attribute name.
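For context, a hedged sketch of the kind of change applied to each affected
option declaration (the option value here is made up):
``` nix
# before: a plain Nix value, which the manual may render incorrectly
example = { "/dev/sda" = { label = "root"; }; };

# after: wrap it in literalExample so it is rendered verbatim as Nix code
example = literalExample ''
  { "/dev/sda" = { label = "root"; }; }
'';
```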
This windowManager and desktopManager don't even have an option to use
them. The git history suggests to me that there's no way anyone finds
this useful anymore.
* creating a local backup
* creating a borgbackup server
* backing up to a borgbackup server
* hints about the Vorta graphical desktop application
* Added documentation about Vorta desktop client
Tested the examples locally and with my borgbase.com account.
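A hedged sketch of a local backup job of the kind the new documentation
describes (paths, repository, and schedule are made up):
``` nix
services.borgbackup.jobs.home = {
  paths = [ "/home" ];
  repo = "/var/backup/home";    # local repository
  encryption.mode = "none";     # fine for a local, trusted disk
  compression = "auto,zstd";
  startAt = "daily";
};
```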
Upgrades Hydra to the latest master/flake branch. To perform this
upgrade, a non-trivial db migration is needed, which provides a
massive performance improvement[1].
The basic ideas behind multi-step upgrades of services between NixOS
versions have already been gathered[2]. For further context it's
recommended to read that first.
Basically, the following steps are needed:
* Upgrade to a non-breaking version of Hydra with the db-changes
(columns are still nullable here). If `system.stateVersion` is set to
something older than 20.03, the package will be selected
automatically; otherwise `pkgs.hydra-migration` needs to be used (see
the sketch after this list).
* Run `hydra-backfill-ids` on the server.
* Deploy either `pkgs.hydra-unstable` (for Hydra master) or
`pkgs.hydra-flakes` (for flakes-support) to activate the optimization.
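In NixOS terms, the first step boils down to something like the following
(a hedged sketch; only the package selection matters here, the other values
are made up):
``` nix
services.hydra = {
  enable = true;
  hydraURL = "https://hydra.example.org";
  notificationSender = "hydra@example.org";
  # step 1: deploy the non-breaking migration release first;
  # switch to pkgs.hydra-unstable (or pkgs.hydra-flakes) only after
  # `hydra-backfill-ids` has finished on the server.
  package = pkgs.hydra-migration;
};
```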
The steps are also documented in the release-notes and in the module
using `warnings`.
`pkgs.hydra` has been removed as the latest Hydra doesn't compile with
`pkgs.nixStable`, and to ensure a graceful migration using the newly
introduced packages.
To validate the approach, a simple VM test has been added which verifies
the migration steps.
[1] https://github.com/NixOS/hydra/pull/711
[2] https://github.com/NixOS/nixpkgs/pull/82353#issuecomment-598269471
After upgrading to NixOS 20.03, I got the following warning:
nginx: [warn] could not build optimal types_hash, you should increase either types_hash_max_size: 2048 or types_hash_bucket_size: 64; ignoring types_hash_bucket_size
The documentation states that "if nginx emits the message requesting
to increase either hash max size or hash bucket size then the first
parameter should first be increased" (aka types_hash_max_size).
In 19.03, the size of mime.types was around 100 entries. In 20.03, we
are at around 900 entries. This is due to ff0148d868, which makes nginx
use mailcap's mime.types.
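A hedged sketch of the kind of fix this amounts to (whether the module sets
the directive itself or the user raises it via
`services.nginx.appendHttpConfig` is an implementation detail here; the
value is an assumption):
``` nix
services.nginx.appendHttpConfig = ''
  # large enough for the ~900 entries pulled in from mailcap's mime.types
  types_hash_max_size 4096;
'';
```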
In ff0148d868, the nginx configuration was modified to use mime.types
from the mailcap package as it is more complete. However, there are two
places where mime.types is included in the configuration. When the user
was setting `cfg.httpConfig`, the mime.types from nginx was still
used. This commit fixes that by moving the common snippet into a
variable of its own and ensuring it is used in both places.
While our ETag patch works pretty well when it comes to serving data off
store paths, it unfortunately broke something that might be a bit more
common, namely when using regexes to extract path components of
location directives, for example.
Recently, @devhell has reported a bug with a nginx location directive
like this:
location ~ "^/\~([a-z0-9_]+)(/.*)?$" {
    alias /home/$1/public_html$2;
}
While this might look harmless at first glance, it does however cause
issues with our ETag patch. The alias directive gets broken up by nginx
like this:
*2 http script copy: "/home/"
*2 http script capture: "foo"
*2 http script copy: "/public_html/"
*2 http script capture: "bar.txt"
In our patch however, we use realpath(3) to get the canonicalised path
from ngx_http_core_loc_conf_s.root, which returns the *configured* value
from the root or alias directive. So in the example above, realpath(3)
boils down to the following syscalls:
lstat("/home", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat("/home/$1", 0x7ffd08da6f60) = -1 ENOENT (No such file or directory)
During my review[1] of the initial patch, I didn't actually notice that
what we're doing here is returning NGX_ERROR if the realpath(3) call
fails, which in turn causes an HTTP 500 error.
Since our patch actually made the canonicalisation (and thus additional
syscalls) necessary, we really shouldn't introduce an additional error,
so let's - at least for now - silently skip setting the ETag if
realpath(3) has failed.
However, since we're using the unaltered root from the config, we have
another issue; consider this root:
/nix/store/...-abcde/$1
Calling realpath(3) on this path will fail (except if there's a file
called "$1" of course), so even this fix is not enough because it
results in the ETag not being set to the store path hash.
While this is very ugly and we should fix this very soon, it's not as
serious as getting HTTP 500 errors for serving static files.
I added a small NixOS VM test, which uses the example above as a
regression test.
It seems that my memory is failing these days: apparently I already
*knew* about this issue, because while digging for existing issues in
nixpkgs I found this similar pull request, which I even reviewed:
https://github.com/NixOS/nixpkgs/pull/66532
However, since the comments weren't addressed and the author hasn't
responded to the pull request, I decided to keep this very commit and do
a follow-up pull request.
[1]: https://github.com/NixOS/nixpkgs/pull/48337
Signed-off-by: aszlig <aszlig@nix.build>
Reported-by: @devhell
Acked-by: @7c6f434c
Acked-by: @yorickvP
Merges: https://github.com/NixOS/nixpkgs/pull/80671
Fixes: https://github.com/NixOS/nixpkgs/pull/66532
The volumeID will now be in the format of:
nixos-$EDITION-$RELEASE-$ARCH
An example for the minimal image would look like:
nixos-minimal-20.09-x86-64-linux
It's impossible to move two major versions forward when upgrading
Nextcloud. This is an issue when coming from 19.09 (using Nextcloud 16)
and trying to upgrade to 20.03 (using Nextcloud 18 by default).
This patch implements the measures discussed in #82056 and #82353 to
improve the update process and to circumvent similar issues in the
future:
* `pkgs.nextcloud` has been removed in favor of versioned attributes
(currently `pkgs.nextcloud17` and `pkgs.nextcloud18`). With that
approach we can safely backport major-releases in the future to
simplify those upgrade-paths and we can select one of the
major-releases as default depending on the configuration (helpful to
decide whether e.g. `pkgs.nextcloud17` or `pkgs.nextcloud18` should be
used on 20.03 and `master` atm).
* If `system.stateVersion` is older than `20.03`, `nextcloud17` will be
used (one major release ahead of the v16 shipped with 19.09). When using a
package older than the latest major release available (currently v18),
the evaluation will cause a warning which describes the issue and
suggests next steps.
To make those package selections easier, a new option to define the
package to be used for the service (namely
`services.nextcloud.package`) was introduced; see the sketch below.
* If `pkgs.nextcloud` exists (e.g. due to an overlay which was used to
provide more recent Nextcloud versions on older NixOS-releases), an
evaluation error will be thrown by default: this is to make sure that
`services.nextcloud.package` doesn't use an older version by accident
after checking the state-version. If `pkgs.nextcloud` is added
manually, it needs to be declared explicitly in
`services.nextcloud.package`.
* The `nixos/nextcloud`-documentation contains a
"Maintainer information"-chapter which describes how to roll out new
Nextcloud releases and how to deal with old (and probably unsafe)
versions.
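As mentioned above, a hedged sketch of pinning the major release explicitly
(host name and other settings are made up):
``` nix
services.nextcloud = {
  enable = true;
  hostName = "cloud.example.org";
  # pin the major release explicitly; bump it one major version at a time
  package = pkgs.nextcloud18;
};
```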
Closes #82056
Dropbear lags behind OpenSSH significantly, both in support for modern
key formats like `ssh-ed25519` (let alone the recently-introduced
U2F/FIDO2-based `sk-ssh-ed25519@openssh.com`, as I found when I switched
my `authorizedKeys` over to it and promptly locked myself out of my
server's initrd SSH, breaking reboots), and in security features like
multiprocess isolation. Using the same SSH daemon for stage-1 and
the main system ensures key formats will always remain compatible, as
well as more conveniently allowing the sharing of configuration and
host keys.
The main reason to use Dropbear over OpenSSH would be initrd space
concerns, but NixOS initrds are already large (17 MiB currently on my
server), and the size difference between the two isn't huge (the test's
initrd goes from 9.7 MiB to 12 MiB with this change). If the size is
still a problem, then it would be easy to shrink sshd down to a few
hundred kilobytes by using an initrd-specific build that uses musl and
disables things like Kerberos support.
This passes the test and works on my server, but more rigorous testing
and review from people who use initrd SSH would be appreciated!
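For reference, a hedged sketch of an initrd SSH setup with the
OpenSSH-based module (the key path and port are made up):
``` nix
boot.initrd.network = {
  enable = true;
  ssh = {
    enable = true;
    port = 2222;
    # share keys/config with the main system's sshd
    authorizedKeys = config.users.users.root.openssh.authorizedKeys.keys;
    hostKeys = [ "/etc/secrets/initrd/ssh_host_ed25519_key" ];
  };
};
```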
Running the manual on a TTY is useless in the graphical ISOs and not
particularly useful in non-graphical ISOs (since you can also run
'nixos-help').
Fixes #83157.
* Removed the use of gnome-screensaver (https://gitlab.gnome.org/GNOME/gnome-flashback/issues/18)
* Flashback's menu-related environment variables are now set in the gnome3.nix module instead of gnome-panel to resolve a dependency conflict.
While renaming `networking.defaultMailServer` directly to
`services.ssmtp` is shorter and probably clearer, it causes eval errors
due to the second rename (directDelivery -> enable) when using e.g. `lib.mkForce`.
For instance,
``` nix
{ lib, ... }: {
  networking.defaultMailServer = {
    hostName = "localhost";
    directDelivery = lib.mkForce true;
    domain = "example.org";
  };
}
```
would break with the following (rather confusing) error:
```
error: The option value `services.ssmtp.enable' in `/home/ma27/Projects/nixpkgs/nixos/modules/programs/ssmtp.nix' is not of type `boolean'.
(use '--show-trace' to show detailed location information)
```
Previously, the NixOS ACME module defaulted to using P-384 for
TLS certificates. I believe that this is a mistake, and that we
should use P-256 instead, despite it being theoretically
cryptographically weaker.
The security margin of a 256-bit elliptic curve cipher is substantial;
beyond a certain level, more bits in the key serve more to slow things
down than add meaningful protection. It's much more likely that ECDSA
will be broken entirely, or some fatal flaw will be found in the NIST
curves that makes them all insecure, than that the security margin
will be reduced enough to put P-256 at risk but not P-384. It's also
inconsistent to target a curve with a 192-bit security margin when our
recommended nginx TLS configuration allows 128-bit AES. [This Stack
Exchange answer][pornin] by cryptographer Thomas Pornin conveys the
general attitude among experts:
> Use P-256 to minimize trouble. If you feel that your manhood is
> threatened by using a 256-bit curve where a 384-bit curve is
> available, then use P-384: it will increase your computational and
> network costs (a factor of about 3 for CPU, a few extra dozen bytes
> on the network) but this is likely to be negligible in practice (in a
> SSL-powered Web server, the heavy cost is in "Web", not "SSL").
[pornin]: https://security.stackexchange.com/a/78624
While the NIST curves have many flaws (see [SafeCurves][safecurves]),
P-256 and P-384 are no different in this respect; SafeCurves gives
them the same rating. The only NIST curve Bernstein [thinks better of,
P-521][bernstein] (see "Other standard primes"), isn't usable for Web
PKI (it's [not supported by BoringSSL by default][boringssl] and hence
[doesn't work in Chromium/Chrome][chromium], and Let's Encrypt [don't
support it either][letsencrypt]).
[safecurves]: https://safecurves.cr.yp.to/
[bernstein]: https://blog.cr.yp.to/20140323-ecdsa.html
[boringssl]: https://boringssl.googlesource.com/boringssl/+/e9fc3e547e557492316932b62881c3386973ceb2
[chromium]: https://bugs.chromium.org/p/chromium/issues/detail?id=478225
[letsencrypt]: https://letsencrypt.org/docs/integration-guide/#supported-key-algorithms
So there's no real benefit to using P-384; what's the cost? In the
Stack Exchange answer I linked, Pornin estimates a factor of 3×
CPU usage, which wouldn't be so bad; unfortunately, this is wildly
optimistic in practice, as P-256 is much more common and therefore
much better optimized. [This GitHub comment][openssl] measures the
performance differential for raw Diffie-Hellman operations with OpenSSL
1.1.1 at a whopping 14× (even P-521 fares better!); [Caddy disables
P-384 by default][caddy] due to Go's [lack of accelerated assembly
implementations][crypto/elliptic] for it, and the difference there seems
even more extreme: [this golang-nuts post][golang-nuts] measures the key
generation performance differential at 275×. It's unlikely to be the
bottleneck for anyone, but I still feel kind of bad for anyone having
lego generate hundreds of certificates and sign challenges with them
with performance like that...
[openssl]: https://github.com/mozilla/server-side-tls/issues/190#issuecomment-421831599
[caddy]: 2cab475ba5/modules/caddytls/values.go (L113-L124)
[crypto/elliptic]: 2910c5b4a0/src/crypto/elliptic
[golang-nuts]: https://groups.google.com/forum/#!topic/golang-nuts/nlnJkBMMyzk
In conclusion, there's no real reason to use P-384 in general: if you
don't care about Web PKI compatibility and want to use a nicer curve,
then Ed25519 or P-521 are better options; if you're a NIST-fearing
paranoiac, you should use good old RSA; but if you're a normal person
running a web server, then you're best served by just using P-256. Right
now, NixOS makes an arbitrary decision between two equally-mediocre
curves that just so happens to slow down ECDH key agreement for every
TLS connection by over an order of magnitude; this commit fixes that.
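Users who still prefer P-384 can opt back in per certificate; a hedged
sketch (the certificate name is made up, the option is the ACME module's
`keyType`):
``` nix
security.acme.certs."example.org".keyType = "ec384";
```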
Unfortunately, it seems like existing P-384 certificates won't get
migrated automatically on renewal without manual intervention, but
that's a more general problem with the existing ACME module (see #81634;
I know @yegortimoshenko is working on this). To migrate your
certificates manually, run:
$ sudo find /var/lib/acme/.lego/certificates -type f -delete
$ sudo find /var/lib/acme -name '*.pem' -delete
$ sudo systemctl restart 'acme-*.service' nginx.service
(No warranty. If it breaks, you get to keep both pieces. But it worked
for me.)
NixOS 20.03 is built on kernel 5.4 and 19.09 is on 4.19, so we should update
this option to the highest value possible, per linked upstream instructions from
Amazon.
* nixos/nixpkgs.nix: Allow just using config in system
This assertion requires system to work properly. We might not have
this in cases where the user just sets config and wants Nixpkgs to
infer system from that. This adds a default for when this happens,
using doubleFromSystem.
* parens