To avoid parsing JSON messages twice in the case of remote builds
via `ssh-ng://`, I split the original `handleJSONLogMessage` into
three parts:
* `parseJSONMessage(const std::string&)` checks if it's a message in the
form of `@nix {...}` and tries to parse it (and prints an error if the
parsing fails).
* `handleJSONLogMessage(nlohmann::json&, ...)` reads the fields from the
message and passes them to the logger.
* `handleJSONLogMessage(const std::string&, ...)` behaves as before, but
is now implemented in terms of the two functions above.
In the case of `ssh-ng://` logs, the first two functions are invoked
manually (a rough sketch follows below).
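For illustration, a minimal sketch of how the three functions could fit
together. The `Logger` type, the field names, and the error handling
below are simplified stand-ins rather than the actual Nix interfaces;
only the three function names and their roles come from the change
described above.

```cpp
#include <nlohmann/json.hpp>
#include <iostream>
#include <optional>
#include <string>

// Simplified stand-in for the real logger; the actual signatures carry
// more context (activities, verbosity levels, ...).
struct Logger {
    void log(const std::string & line) { std::cerr << line << "\n"; }
};

// Returns the parsed payload if `msg` has the form `@nix {...}`;
// prints an error and returns nullopt if the JSON cannot be parsed.
std::optional<nlohmann::json> parseJSONMessage(const std::string & msg)
{
    if (msg.rfind("@nix ", 0) != 0) return std::nullopt;
    try {
        return nlohmann::json::parse(msg.substr(5));
    } catch (const nlohmann::json::exception & e) {
        std::cerr << "bad JSON log message: " << e.what() << "\n";
        return std::nullopt;
    }
}

// Reads the fields from an already-parsed message and forwards them to
// the logger ("msg" is an illustrative field name).
bool handleJSONLogMessage(nlohmann::json & json, Logger & logger)
{
    if (json.contains("msg"))
        logger.log(json["msg"].get<std::string>());
    return true;
}

// Behaves like the original function, but is now just parse + handle,
// so the `ssh-ng://` log path can call the two steps separately and
// avoid parsing each message twice.
bool handleJSONLogMessage(const std::string & msg, Logger & logger)
{
    auto json = parseJSONMessage(msg);
    if (!json) return false;
    return handleJSONLogMessage(*json, logger);
}
```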
Right now when building a derivation remotely via
$ nix build -j0 -f . hello -L --builders 'ssh://builder'
it's possible to read through the entire build log afterwards by
running `nix log -f . hello`. However, this isn't possible when using
`ssh-ng` rather than `ssh`.
The reason is that Nix has two different ways of transferring logs
through e.g. an SSH tunnel (used by `ssh` and `ssh-ng` respectively):
* `ssh://` receives its logs from the fd pointing to `builderOut`. This
is directly passed to the "log-sink" (and to the logger on each `\n`),
hence `nix log` works here.
* `ssh-ng://`, however, expects JSON-like messages (i.e. `@nix {log data
in here}`) and passes them directly to the logger without touching the
`logSink`. It's still possible to extract log lines from this format,
as these have their own message type in the JSON payload (i.e.
`resBuildLogLine`).
This is basically what I changed in this patch: if the code path for
`builderOut` is not reached, a `logSink` is initialized, the message
was successfully processed by the JSON logger (i.e. it's in the
expected format), and the line is of the expected type (i.e.
`resBuildLogLine`), then the line is written to the log sink as well.
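A condensed sketch of that condition, reusing the helpers from the
sketch above. The function name, the `logSink` type, the JSON field
names, and the numeric value chosen for `resBuildLogLine` are
assumptions made for illustration only.

```cpp
#include <nlohmann/json.hpp>
#include <memory>
#include <optional>
#include <ostream>
#include <string>

// Declarations from the sketch above.
struct Logger;
std::optional<nlohmann::json> parseJSONMessage(const std::string & msg);
bool handleJSONLogMessage(nlohmann::json & json, Logger & logger);

// Illustrative constant; the real value comes from the logging protocol.
constexpr int resBuildLogLine = 101;

// Hypothetical helper, called for each message received via `ssh-ng://`
// when the `builderOut` code path is not taken.
void forwardRemoteLogMessage(const std::string & msg, Logger & logger,
                             std::shared_ptr<std::ostream> logSink)
{
    auto json = parseJSONMessage(msg);
    if (!json) return;                    // not an `@nix {...}` message, or bad JSON

    handleJSONLogMessage(*json, logger);  // forward to the logger, as before

    // New: if a log sink exists and this message carries a build-log
    // line, append it to the sink as well so `nix log` can replay it.
    if (logSink && json->value("type", -1) == resBuildLogLine)
        for (auto & field : (*json)["fields"])
            *logSink << field.get<std::string>() << "\n";
}
```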
Closes #5079
- Make sure that the daemon starts even without the `nix-command` xp feature
- Fail if it doesn’t manage to start
This fixes a 30s wait for every test in `init.sh`: the daemon couldn’t
start, but the code just waited 30s and then continued as if everything
was all right.
Polling every second means that even the simplest test takes at least
2 seconds. We can reasonably poll at 1/10 of that interval to make
things much quicker (especially given that 0.1s is usually enough for
the daemon to start or stop).
The tests are scheduled in the order they appear, so running the long
ones first slightly improves the scheduling.
On my machine, this decreases the time of `make install` from 40s to 36s.
Same as 1fd127a068, but applied to a
code path (volume_pass_works -> verify_volume_pass) that the reporting
user didn't hit and wasn't able to trigger manually. I am not certain,
but I suspect it will be easier to add this prophylactically than to
debug its absence if it causes trouble some day.
While trying to figure out how `nix-env`/`nix profile` work, I had a
hard time understanding how man pages were being installed.
It took me quite some time to figure this out, so I thought it might be
useful to others too!
Fixes #6122, which reports a problem with trying to run the installer
under another user (probably: the user is not the disk "owner" and thus
can't mount the volume).
This also makes sure that we get the Docker images from the same Hydra
eval, rather than the latest build from job/nix/.../dockerImage, which
may not come from the same eval.