The test starts the Glance service, creates a NixOS image, and ensures that
Glance lists it.
Note that the test also starts the Keystone service, since it is required
by Glance.
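A rough sketch of what the test enables (the option names here are
hypothetical, not copied from the actual OpenStack modules):

  {
    # Hypothetical option names; check the modules for the real ones.
    services.glance.enable = true;    # the service under test
    services.keystone.enable = true;  # identity service required by Glance
  }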
Since 8180922d23, the cjdns module imports from a derivation, which is very
bad: it causes all of stdenv to be built at evaluation time. Since we have a
hard 3600-second limit on Hydra evaluations, this was causing NixOS jobsets
to time out.
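For illustration, import-from-derivation is any expression that reads from a
derivation's output during evaluation; a made-up minimal example (not the
actual cjdns code):

  let
    pkgs = import <nixpkgs> {};
    file = pkgs.writeText "example" "hello";
  in
    # Reading the file forces the derivation to be built before
    # evaluation can continue; with a large closure (like stdenv)
    # this stalls the evaluator.
    builtins.readFile "${file}"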
@joachifm
Test that adding physical devices to containers works; this revealed that
network setup then doesn't work, because there is no udev in the container
to tell systemd that the device is present.
Fixed by not depending on the device unit inside the container.
Activate the new container test for release
Bonds, bridges, and other virtual network devices used inside a container
must not depend on the underlying device unit, because the device is
already present there.
The address configuration, however, does need the aggregated device itself.
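Sketched in terms of the generated systemd units (unit names follow NixOS'
scripted networking as I remember it; treat this as an assumption, not a
literal excerpt):

  {
    # Host side: the address unit may bind to the udev-provided device:
    #   bindsTo = [ "sys-subsystem-net-devices-bond0.device" ];
    # Container side: no udev, so that dependency would never be met.
    # Only the unit that creates the aggregated device is required:
    systemd.services."network-addresses-bond0" = {
      requires = [ "bond0-netdev.service" ];
      after = [ "bond0-netdev.service" ];
    };
  }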
It uses import-from-derivation, which is bad because it causes
hydra-evaluator to build Cassandra at evaluation time:
$ nix-instantiate nixos/release.nix -A tests.cassandra.i686-linux --dry-run
error: cannot read ‘/nix/store/c41blyjz6pfvk9fnvrn6miihq5w3j0l4-cassandra-2.0.16/conf/cassandra-env.sh’, since path ‘/nix/store/0j9ax4z8xhaz5lhrwl3bwj10waxs3hgy-cassandra-2.0.16.drv’ is not valid, at /home/eelco/Dev/nixpkgs/nixos/modules/services/databases/cassandra.nix:373:11
Also, the module is a mess: bad option descriptions, poor indentation, a
gazillion options where a generic "config" option would suffice; it opens
ports in the firewall, sets vm.swappiness, ...
The module configures a Cassandra server with the common options tweakable.
Also included is a test which spins up 3 nodes and verifies that the
cluster can be formed, broken, and repaired.
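Usage might look roughly like this (a sketch; the option names are
assumptions, not verbatim from the module):

  {
    services.cassandra = {
      enable = true;
      clusterName = "Test Cluster";
      # All three test nodes point at the same seed address, so they
      # can discover each other and form a single cluster.
      listenAddress = "192.168.1.1";
      seedAddresses = [ "192.168.1.1" ];
    };
  }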
With these changes, a container can have more than one veth pair. This allows, for example, having LAN and DMZ as bridges on the host, with dedicated containers for proxies, the IPv4 firewall, and the IPv6 firewall. Or having one bridge for normal WAN, one for administration, and one for customer-internal communication, so that web-server containers can be reached from the outside via HTTP, from management via SSH, and can talk to their database via the customer network.
The scripts that set up the containers are now rendered per container instead of once from a single template. They now contain per-container code to configure the extra veth interfaces. The default template without extra-veth support is still rendered for the imperative containers.
A test is also included to check that extra veths can be placed into host bridges or be reached via routing.
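A host-side configuration using this could look like the following sketch
(names and addresses made up):

  {
    containers.proxy = {
      privateNetwork = true;
      # Primary veth pair, attached to the LAN bridge on the host:
      hostBridge = "br-lan";
      localAddress = "192.168.1.2/24";
      # Extra veth pair, attached to the DMZ bridge on the host:
      extraVeths.dmz = {
        hostBridge = "br-dmz";
        localAddress = "10.0.0.2/24";
      };
    };
  }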
GoCD is an open source continuous delivery server specializing in advanced
workflow modeling and visualization. This adds the GoCD agent and server
modules to the module list, adds the GoCD agent and server packages to the
packages list, adds swarren83 to the maintainers list, and updates the
version, revision, and checksum for GoCD release 16.5.0.
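Enabling both should boil down to something like this sketch (option names
assumed from the module names above):

  {
    services.gocd-server.enable = true;
    services.gocd-agent.enable = true;
  }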
We already have a small regression test for #15226 within the swraid
installer test. Unfortunately, we only check there whether the md kthread
got signalled, but not whether other rampaging processes that *should* have
been killed are still alive.
So, in order to check this, we provide multiple canary processes which are
verified after the system has booted up:
* canary1: It's a simple forking daemon which just sleeps until it's
going to be killed. Of course we expect this process to not
be alive anymore after boot up.
* canary2: Similar to canary1, but tries to mimic a kthread to make
           sure that it's going to be properly killed at the end of
           stage 1.
* canary3: Like canary2, but this time using a @ in front of its
           command name to actually prevent it from being killed
           (see the sketch after this list).
* kcanary: This one is a real kthread and it runs until killed, which
           shouldn't happen.
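A minimal sketch of how an "@"-prefixed canary like canary3 can be spawned,
using bash's "exec -a" to set argv[0] (script name and sleep duration made
up):

  pkgs.writeScript "canary3" ''
    #!${pkgs.stdenv.shell}
    # A leading "@" in argv[0] tells systemd's stage-1 killing logic to
    # spare this process (see systemd's ROOT_STORAGE_DAEMONS document).
    exec -a @canary3 ${pkgs.coreutils}/bin/sleep 1000000
  ''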
Tested with and without 67223ee and everything works as expected, at
least on my machine.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This partially reverts f2d24b9840.
Instead of disabling the channels by removing the channel mapping from
the tests themselves, let's just explicitly reference the stable test in
release.nix. That way we achieve the same effect of not building the beta
and dev versions on Hydra, while it's still possible to run the beta and
dev tests via something like "nix-build nixos/tests/chromium.nix -A beta".
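Reduced to a toy example, the pattern looks like this (attribute names
hypothetical):

  let
    makeChannelTest = channel: { name = "chromium-${channel}"; };
    tests = {
      stable = makeChannelTest "stable";
      beta   = makeChannelTest "beta";
      dev    = makeChannelTest "dev";
    };
  in
    # release.nix picks only this attribute, while
    # "nix-build nixos/tests/chromium.nix -A beta" still works.
    tests.stable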
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
It's not the job of Nixpkgs to distribute beta versions of upstream
packages. More importantly, building these delays channel updates by
several hours, which is bad for our security fix turnaround time.
The Nix store squashfs is stored inside the initrd instead of separately.
(cherry picked from commit 976fd407796877b538c470d3a5253ad3e1f7bc68)
Signed-off-by: Domen Kožar <domen@dev.si>
A small test which checks whether tasks can be synced using the
Taskserver.
It doesn't test group functionality, because I suspect groups are not yet
implemented upstream. I haven't done an in-depth check on that, but I
couldn't find a method of linking groups to users yet, so I guess this
will get in with one of the next releases of Taskwarrior/Taskserver.
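The server side of the test boils down to something like this (a sketch;
option names as I recall them from the module, so treat them as
assumptions):

  {
    services.taskserver = {
      enable = true;
      listenHost = "::";  # accept connections from the client VM
      fqdn = "server";
      # One organisation with a single user whose tasks get synced:
      organisations.testOrganisation.users = [ "alice" ];
    };
  }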
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
A testcase each for:
- a declarative IPv6-only container
  (It seems odd to define the container IPs with their prefix length
  attached; there should be a better way…)
- a declarative bridged container

Also fix the ping test by waiting for the container to start: when the ping
was executed, the container might not have finished starting, or the host
side of the container wasn't finished with its configuration. Waiting
2 seconds in between fixes this.
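For reference, the two container definitions boil down to sketches like
these (addresses made up; note the prefix lengths attached to the local
addresses, as mentioned above):

  {
    # Declarative IPv6-only container:
    containers.webserver6 = {
      privateNetwork = true;
      hostAddress6 = "fc00::1";
      localAddress6 = "fc00::2/7";
    };
    # Declarative bridged container; the host-side veth joins br0:
    containers.webserver = {
      privateNetwork = true;
      hostBridge = "br0";
      localAddress = "192.168.0.2/24";
    };
  }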