attribute name of the machine in the model. This allows
networking.hostName and deployment.targetHost to be omitted for
typical networks.
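For instance, in a small network model like the following sketch (the machine
name and service shown are illustrative), both values can be derived from the
attribute name "webserver":

  {
    webserver =
      { config, pkgs, ... }:
      { # networking.hostName and deployment.targetHost default to "webserver"
        services.openssh.enable = true;
      };
  }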
svn path=/nixos/trunk/; revision=25125
It's really an abstract configuration option that specifies that *some*
SSH daemon should be enabled (which could be OpenSSH).
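The pattern can be sketched as follows; the option names are hypothetical and
only illustrate the idea of an abstract flag that a concrete SSH module satisfies:

  { config, pkgs, ... }:

  {
    options = {
      services.sshd.enable = pkgs.lib.mkOption {
        default = false;
        description = "Whether *some* SSH daemon (such as OpenSSH) runs on this machine.";
      };
    };

    config = {
      # A concrete SSH module (here OpenSSH) declares that it satisfies the abstract option:
      services.sshd.enable = pkgs.lib.mkIf config.services.openssh.enable true;
    };
  }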
svn path=/nixos/trunk/; revision=25119
- developed services.disnix.infrastructure option, which contains properties for the Disnix infrastructure model (these properties can be either used by Disnix itself or the Avahi publisher)
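For example (the property names and values below are purely illustrative; the actual
set depends on the infrastructure model in use):

  services.disnix.infrastructure = {
    hostname = "test1.example.org";
    system = "x86_64-linux";
  };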
svn path=/nixos/trunk/; revision=25052
- implemented --no-out-link option so that invoking these tools from scripts leaves no garbage behind (see the example after this list)
- some misc. cleanups
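For example, --no-out-link can be passed like this (assuming nixos-build-vms is one
of the tools in question):

$ nixos-build-vms -n network.nix --no-out-link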
svn path=/nixos/trunk/; revision=25019
now kills its process group when it exits. Without setsid, this
ends up killing the parent (i.e., the builder).
* Use port 445 instead of 139, because the CIFS kernel module tries
port 445 first. If an actual Samba server is running on the host, the
module would end up connecting to that one instead of our own and fail.
svn path=/nixos/trunk/; revision=25016
which NixOS should be built. This is useful in NixOS network
specifications, because it allows machines in the network to have
different types, e.g.,
  {
    machine1 =
      { config, pkgs, ... }:
      { nixpkgs.system = "i686-linux";
        ... other config ...
      };

    machine2 =
      { config, pkgs, ... }:
      { nixpkgs.system = "x86_64-linux";
        ... other config ...
      };
  }
It can also be useful in distributed NixOS tests.
svn path=/nixos/trunk/; revision=24823
problem is that configuration values below a mkIf are evaluated
strictly even if the condition is false. Thus "${luksRoot}" causes
an evaluation error when its value is null. As a workaround, use the
empty string instead of `null' as the default value. However, we
should really make mkIf lazy. It's likely that NixOS evaluation would
be much faster if it didn't have to evaluate disabled configuration
values.
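A minimal sketch of the workaround; the option path, default, and commands below
are illustrative, not the actual module code:

  { config, pkgs, ... }:

  let luksRoot = config.boot.initrd.luksRoot; in
  {
    options.boot.initrd.luksRoot = pkgs.lib.mkOption {
      # Default to "" rather than null: the interpolation below is forced even when
      # the mkIf condition is false, and interpolating null is an evaluation error.
      default = "";
      description = "LUKS device to open in the initrd, or \"\" if none.";
    };

    config = pkgs.lib.mkIf (luksRoot != "") {
      boot.initrd.postDeviceCommands = "cryptsetup luksOpen ${luksRoot} crypted";
    };
  }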
svn path=/nixos/trunk/; revision=24477
- Added a backdoor option to the interactive run-vms script. This allows me to integrate the virtual network approach with Disnix
- Small documentation fixes
Some explanation:
The nixos-build-vms command-line tool can be used to build a virtual network from a network.nix specification.
For example, a network configuration (network.nix) could look like this:
  {
    test1 =
      { pkgs, config, ... }:
      {
        services.openssh.enable = true;
        ...
      };

    test2 =
      { pkgs, config, ... }:
      {
        services.openssh.enable = true;
        services.xserver.enable = true;
      };
  }
Running the following command builds the virtual network:
$ nixos-build-vms -n network.nix
The resulting network can then be started by running:
$ ./result/bin/run-vms
It is also possible to enable a backdoor. In that case, *.socket files are stored in the current directory;
the end user can use them to invoke commands remotely on a VM in the network through a Unix
domain socket.
For example, after building the network with the backdoor enabled:
$ nixos-build-vms -n network.nix --use-backdoor
and launching the virtual network:
$ ./result/bin/run-vms
you will find two socket files in the current directory, namely test1.socket and test2.socket.
These Unix domain sockets can be used to remotely administer the test1 and test2 machines
in the virtual network.
For example, by running:
$ socat ./test1.socket stdio
ls /root
you can retrieve the contents of the /root directory of the virtual machine with identifier test1.
svn path=/nixos/trunk/; revision=24410