This hopefully fixes intermittent initrd failures where udevd cannot
create a Unix domain socket:

    machine# running udev...
    machine# error getting socket: Address family not supported by protocol
    machine# error initializing udev control socket
    machine# error getting socket: Address family not supported by protocol

The "unix" kernel module is supposed to be loaded automatically, and
clearly that works most of the time, but maybe there is a race
somewhere. In any case, no sane person would run a kernel without Unix
domain sockets, so we may as well make it builtin.
http://hydra.nixos.org/build/30001448
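
For reference, the same effect can be forced on a custom kernel from a
NixOS configuration; a minimal sketch using boot.kernelPatches (the
patch name is made up, and the actual fix lives in the kernel config
used by the nixpkgs kernel packages):

    # Compile Unix domain socket support into the kernel instead of
    # building it as the "unix" module.
    boot.kernelPatches = [ {
      name = "unix-builtin";   # hypothetical name
      patch = null;
      extraConfig = ''
        UNIX y
      '';
    } ];
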
Two tarsnap backups cannot run concurrently with the same keys: if you
want backups to overlap, a completely separate set of keys must be
generated for each archive.
This extends the archives attrset to support a 'keyfile' option, which
defaults to /root/tarsnap.key like the top-level attribute.
With this change, if you generate two keys with tarsnap-keygen(1) and
give each archive its own key, you can back up concurrently.
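
With the new option, a configuration along these lines becomes
possible (a minimal sketch; archive names, directories and key paths
are made up):

    services.tarsnap = {
      enable = true;
      archives = {
        docs = {
          directories = [ "/home/alice/docs" ];
          # each archive gets its own key, generated with tarsnap-keygen(1)
          keyfile = "/root/tarsnap-docs.key";
        };
        photos = {
          directories = [ "/home/alice/photos" ];
          keyfile = "/root/tarsnap-photos.key";
        };
      };
    };
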
Signed-off-by: Austin Seipp <aseipp@pobox.com>
Tarsnap locks the cachedir during backup, meaning if you specify
multiple backups with a shared cache that might overlap (for example,
one backup may take an hour), secondary backups will fail. This isn't
very nice behavior for the obvious reasons.
This splits the cache dirs for each archive appropriately. Note that
this will require a rebuild of your archive caches (although if you were
only using one archive for your whole system, you can just move the
directory).
Signed-off-by: Austin Seipp <aseipp@pobox.com>
A machine may not always be active (or online!) when a backup timer
triggers, meaning backups can be missed - now we properly set the
tarsnap timer's Persistent option so systemd will run the command even
when the machine wasn't online at that exact time.
However, we also need to make sure that we can contact the tarsnap
server reliably before we start the backup. So, we attempt to ping the
access endpoint in a loop with a sleep, before continuing.
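
A sketch of the two mechanisms as plain unit overrides (unit name,
schedule, retry interval and endpoint hostname are placeholders; the
module generates its own units):

    systemd.timers."tarsnap-docs" = {
      wantedBy = [ "timers.target" ];
      timerConfig = {
        OnCalendar = "daily";
        # run a missed backup at the next opportunity if the machine
        # was off when the timer elapsed
        Persistent = true;
      };
    };
    systemd.services."tarsnap-docs" = {
      path = [ pkgs.iputils ];
      preStart = ''
        # poll until the tarsnap access endpoint is reachable
        # (server.example is a placeholder for the real endpoint)
        while ! ping -q -c 1 server.example > /dev/null 2>&1; do
          sleep 3
        done
      '';
    };
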
This fixes #8823.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
The Bitmessage protocol v3 became mandatory on 16 Nov 2014 and notbit
does not support it, nor has there been any activity in the project
repository since then.
Part of the way towards #11864. We still don't have the auditd
userland logging daemon, but journald also tracks audit logs, so we
can already make use of them.
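
For example, the audit records captured by journald can already be
inspected with a filter on the journal's transport field:

    journalctl _TRANSPORT=audit
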
This hopefully fixes intermittent test failures like
http://hydra.nixos.org/build/29962437:

    router# [ 240.128835] INFO: task mke2fs:99 blocked for more than 120 seconds.
    router# [ 240.130135] Not tainted 3.18.25 #1-NixOS
    router# [ 240.131110] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

assuming that these are caused by high load on the host.
... because we make it built-in by default.
I can't imagine anyone wanting to purge this module from their system,
so let's keep it simple, at least for now.
This commit adds the options --build-host and --target-host to nixos-rebuild.
--build-host instructs nixos-rebuild to perform all nix builds on the
specified host (via ssh). Build results are then copied back to the
local machine and used when activating the system.
--target-host instructs nixos-rebuild to activate the configuration
not on the local machine but on the specified remote host. Build
results are copied to the target machine and then activated there (via ssh).
It is possible to combine the usage of --build-host and --target-host,
in which case you can perform the build on one remote machine and deploy
the configuration to another remote machine. The only requirement is that
the build host has a working ssh connection to the target host (if the
target is not local), and that the local machine can connect to both
the target and the build host. Also, your user must be allowed to copy
nix closures between the local machine and the build and target hosts.
At no point in time are the configuration sources (the nix files) copied
anywhere. Instead, nix evaluation always happens locally
(with nix-instantiate). The drv-file is then copied and realised remotely
(with nix-store).
As a convenience, if only --target-host is specified, --build-host is
implicitly set to that host too. So if you want to build locally and deploy
remotely you have to explicitly set "--build-host localhost".
To activate (test, boot or switch) you need to have root access to the
target host. You can specify this by "--target-host root@myhost".
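
Some illustrative invocations (host and user names are made up):

    # build on "builder", activate the result on the local machine
    nixos-rebuild switch --build-host builder

    # build and activate on the remote machine "server"
    nixos-rebuild switch --target-host root@server

    # build on one remote machine, deploy to another
    nixos-rebuild switch --build-host builder --target-host root@server

    # deploy remotely, but force the build to happen locally
    nixos-rebuild switch --build-host localhost --target-host root@server
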
I have tested the obvious scenarios and they are working. Some of the
combinations of --build-host and --target-host and the various actions might
not make much sense, and should maybe be forbidden (like setting a remote
target host when building a VM), and some combinations might not work at all.
Previously this barfed with:

    updating GRUB 2 menu...
    fileparse(): need a valid pathname at /nix/store/zldbbngl0f8g5iv4rslygxwp0dbg1624-install-grub.pl line 391.
    warning: error(s) occured while switching to the new configuration
* Patched fish to load /etc/fish/config.fish if it exists (by default,
it only loads config relative to itself)
* Added fish-foreign-env package to parse the system environment
closes #5331
We seem to be in an unfortunate situation: booting without 'nomodeset'
causes hangs when booting on some NVIDIA cards (6948c3ab80), but on the
other hand adding 'nomodeset' prevents X from starting on other hardware
(e.g. issue #10381 and my Thinkpad X250 with an integrated Broadwell GPU).
Attempt to remedy this situation a bit by adding a separate 'nomodeset'
entry to the ISOLINUX menu, keeping the entry without 'nomodeset' as
the default.
The docker module used different code for the socket-activated docker
daemon than for the non-socket-activated daemon. In particular, if the
socket-activated daemon was used, modprobe wasn't set up to be usable
and in PATH for the docker daemon, which resulted in a failure to
start the daemon with overlayfs as storageDriver if the `overlay`
kernel module wasn't already loaded. This commit fixes that bug (which
only appears if socket activation is used), and also reduces the
duplication between code paths so that it's easier to keep both in
sync in future.
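
The shared wiring amounts to something like the following sketch (the
module's exact structure differs):

    systemd.services.docker = {
      # make modprobe available to the daemon so it can load the
      # `overlay` kernel module on demand
      path = [ pkgs.kmod ];
    };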