In 87a19e9048 I merged staging-next into master using the GitHub GUI, as intended.
In ac241fb7a5 I merged master into staging-next for the next staging cycle; however, I accidentally pushed it to master.
Thinking this might cause trouble, I reverted it in 0be87c7979. That was wrong, however, as it effectively "removed" master.
This reverts commit 0be87c7979.
I merged master into staging-next but accidentally pushed it to master.
This should get us back to 87a19e9048.
This reverts commit ac241fb7a5, reversing
changes made to 76a439239e.
The primary motivation of this change was to allow third-party modules
to be used with OpenResty, but it also results in a significant
reduction of code duplication.
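For illustration, a third-party module can then be pulled in with a plain override (a sketch, assuming openresty now goes through the shared nginx builder and therefore accepts the same `modules` argument as nginx; the module choice is only an example):
```
# Sketch; assumes `pkgs` is in scope and that openresty accepts the
# same `modules` argument as the regular nginx package.
pkgs.openresty.override {
  modules = with pkgs.nginxModules; [ rtmp dav ];
}
```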
The following error was printed every second:
Can't exec "clear": Not a directory at /nix/store/.../bin/.mytop-wrapped line 110.
Use of uninitialized value $CLEAR in string at /nix/store/.../bin/.mytop-wrapped line 781.
As a result, the mytop output was repeatedly re-printed instead of updating itself "in-place".
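A sketch of one plausible fix (not necessarily the literal change made here): put `clear` from ncurses on the wrapped script's PATH. The binary name and surrounding attributes are placeholders.
```
# Sketch only: assumes the package wraps its script with makeWrapper
# and that the missing `clear` binary should come from ncurses.
postFixup = ''
  wrapProgram $out/bin/mytop \
    --prefix PATH : ${lib.makeBinPath [ ncurses ]}
'';
```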
Minor incompatibilities due to moving to upstream defaults:
- capabilities are used instead of systemd.socket units
- the control socket moved:
/run/kresd/control -> /run/knot-resolver/control/1
- cacheDir moved and isn't configurable anymore
- different user+group names, without static IDs
Thanks Mic92 for multiple ideas.
Previously, some files were copied into the Nixpkgs tree, which meant
we wouldn't easily be able to update them, and was also just messy.
The reason it was done that way before was so that a few NixOS
options could be substituted in. Some problems with doing it this way
were that the _package_ changed depending on the values of the
settings, which is pretty strange, and also that it only allowed those
few settings to be set.
In the new model, mailman-web is a usable package without needing to
override, and I've implemented the NixOS options in a much more
flexible way. NixOS' mailman-web config file first reads the
mailman-web settings to use as defaults, but then it loads another
configuration file generated from the new services.mailman.webSettings
option, so _any_ mailman-web Django setting can be customised by the
user, rather than just the three that were supported before. I've
kept the old options, but there might not really be any good reason to
keep them.
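For illustration, any mailman-web Django setting can now be set from NixOS configuration (the particular settings and values below are just examples):
```
{
  services.mailman = {
    enable = true;
    # Arbitrary Django settings understood by mailman-web; the values
    # shown here are examples only.
    webSettings = {
      DEFAULT_FROM_EMAIL = "mailman@example.org";
      LANGUAGE_CODE = "de-de";
    };
  };
}
```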
We already had python3Packages.mailman, but that's only really usable
as a library. The only other option was to create a whole Python
environment, which was undesirable to install as a system-wide
package.
This replaces all Mailman secrets with ones that are generated the
first time the service is run. In particular, it replaces the
hyperkittyApiKey option, which would lead to a secret in the
world-readable store.
Even worse were the secrets hard-coded into mailman-web, which are not
just world-readable, but identical for all users!
services.mailman.hyperkittyApiKey has been removed, and so can no
longer be used to determine whether to enable Hyperkitty. In its
place, there is a new option, services.mailman.hyperkitty.enable. For
consistency, services.mailman.hyperkittyBaseUrl has been renamed to
services.mailman.hyperkitty.baseUrl.
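In configuration terms the migration looks roughly like this (the URL is illustrative):
```
{
  # Before (removed/renamed):
  #   services.mailman.hyperkittyApiKey = "...";
  #   services.mailman.hyperkittyBaseUrl = "http://localhost/hyperkitty/";

  # After:
  services.mailman.hyperkitty.enable = true;
  services.mailman.hyperkitty.baseUrl = "http://localhost/hyperkitty/";
}
```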
Using a custom path in the Nix store meant that users of the module
couldn't add their own config files, which is a desirable feature. I
don't think avoiding /etc buys us anything.
Hasn't been updated since 2017 and breaks when building with a more
recent GCC (gcc5 needs to be removed as it breaks with glibc 2.30). I
tried updating it to a more recent version, which didn't work either,
but as this is pretty old I'm not even sure the package is still used
at the moment.
This is a security release in order to address the following defects:
- CVE-2019-14902: Replication of ACLs set to inherit down a subtree on AD Directory not automatic.
- CVE-2019-14907: Crash after failed character conversion at log level 3 or above.
- CVE-2019-19344: Use after free during DNS zone scavenging in Samba AD DC.
According to https://repology.org/repository/nix_unstable/problems, we have a
lot of packages that have http links that redirect to https as their homepage.
This commit updates all these packages to use the https links as their
homepage.
The following script was used to make these updates:
```
# Build a sed script from repology's list of "permanent redirect"
# homepage problems, one s@old@new@ substitution per affected URL.
curl https://repology.org/api/v1/repository/nix_unstable/problems \
  | jq '.[] | .problem' -r \
  | rg 'Homepage link "(.+)" is a permanent redirect to "(.+)" and should be updated' --replace 's@$1@$2@' \
  | sort | uniq > script.sed
# Apply the substitutions to every .nix file in the tree.
find -name '*.nix' | xargs -P4 -- sed -f script.sed -i
```
The previously propagated build inputs are optional, so they are now
listed in checkInputs, keeping the tests able to run, but are no longer
propagated, so they aren't pulled in when not needed.
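In derivation terms the change looks roughly like this (a sketch; the package and dependency names are placeholders):
```
# Sketch only; names are placeholders, not a real package.
{ buildPythonPackage, optionalDep }:

buildPythonPackage {
  pname = "example";
  version = "1.0";
  src = ./.;  # placeholder source

  # Before: propagatedBuildInputs = [ optionalDep ];
  # The dependency is only needed by the test suite, so make it a
  # check input instead of propagating it to every consumer.
  checkInputs = [ optionalDep ];
}
```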
Some things were provided by default, some by systemd unit and some
were just miraculously working. This turns them into explicit
dependencies of the package itself, making everything properly
overrideable.
+ providing glibcLocales fixes Elixir compile warnings
+ providing the systemd dependency allows RabbitMQ to use systemctl for
  its unit activation check instead of falling back to sleep; this
  previously showed up as a warning during startup.
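Because they are now ordinary inputs of the derivation, they can be swapped out with a plain override (a sketch; the argument names are an assumption):
```
# Sketch; assumes the derivation takes `elixir` and `systemd` as
# ordinary function arguments.
pkgs.rabbitmq-server.override {
  elixir = pkgs.elixir_1_9;  # e.g. pin a specific Elixir
  systemd = pkgs.systemd;    # or substitute a patched systemd
}
```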
Before this change, the build would duplicate the prefix in the
installation location for cgi-bin stuff:
-- Installing: /nix/store/skg6b81hikd3fvvdf62xbkm6gsbid41a-zoneminder-1.32.3/nix/store/skg6b81hikd3fvvdf62xbkm6gsbid41a-zoneminder-1.32.3/libexec/zoneminder/cgi-bin/zms
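The usual cause of such a doubled prefix (a generic illustration, not a claim about zoneminder's exact CMake variables) is handing the build an absolute directory that it then prefixes with the installation prefix a second time; a prefix-relative directory avoids that:
```
# Generic illustration only; the real variable names in zoneminder's
# CMake build may differ.
cmakeFlags = [
  # If the build itself prepends CMAKE_INSTALL_PREFIX, an absolute
  # value here ends up as $out$out/libexec/...:
  #   "-DCMAKE_INSTALL_LIBEXECDIR=${placeholder "out"}/libexec"
  # A prefix-relative value avoids the duplication:
  "-DCMAKE_INSTALL_LIBEXECDIR=libexec"
];
```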
Naive concatenation of $LD_LIBRARY_PATH can result in an empty
colon-delimited segment; this tells glibc to load libraries from the
current directory, which is definitely wrong, and may be a security
vulnerability if the current directory is untrusted. (See #67234, for
example.) Fix this throughout the tree.
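In wrapper terms, one safe pattern is to let makeWrapper insert the separator instead of concatenating by hand (a sketch; the program name and library list are placeholders, and `lib`/`zlib` are assumed to be in scope):
```
# Sketch of one safe pattern; names are placeholders.
postFixup = ''
  # Wrong: with LD_LIBRARY_PATH unset or empty, naive concatenation
  # like "$LD_LIBRARY_PATH:..." yields a leading ":" and thus an
  # empty segment, i.e. the current directory.
  # Right: --prefix only inserts the separator when the variable is
  # already non-empty.
  wrapProgram $out/bin/someprogram \
    --prefix LD_LIBRARY_PATH : ${lib.makeLibraryPath [ zlib ]}
'';
```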
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
This fixes some two-digit year rounding bugs that started triggering
because 2020 is closer to 2070 than to 1970. Apparently two-digit years
are still a thing.
This fixes the patch for nginx to clear the Last-Modified header if a
static file is served from the Nix store.
So far we only used the ETag from the store path, but if the
Last-Modified header is always set to "Thu, 01 Jan 1970 00:00:01 GMT",
Firefox and Chrome/Chromium seem to ignore the ETag and simply use the
cached content instead of revalidating.
Alongside the fix, this also adds a dedicated NixOS VM test, which uses
WebDriver and Firefox to check whether the content is actually served
from the browser's cache and to have a more real-world test case.
* structured config for the main config file makes it possible to launch
  nagios in debug mode without having to write the whole config file by
  hand (see the sketch after this list)
* build-time syntax check
* all options have types; one more example
* I find it misleading that the main nagios config file is linked in
  /etc, yet changing the link in /etc/ and restarting nagios has no
  effect. Have nagios use /etc/nagios.cfg instead.
* fix paths in the example nagios config files, which allows reusing them:
services.nagios.objectDefs =
(map (x: "${pkgs.nagios}/etc/objects/${x}.cfg")
[ "templates" "timeperiods" "commands" ]) ++ [ ./main.cfg ]
* for the above reason, add mailutils to default plugins
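A sketch of what the structured main-config option enables (assuming it is exposed as services.nagios.extraConfig; the settings shown are examples):
```
{
  services.nagios = {
    enable = true;
    # Assumed option name; lets individual nagios.cfg settings be set
    # without writing the whole main config file by hand.
    extraConfig = {
      debug_level = 1;
      debug_verbosity = 2;
    };
  };
}
```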
Co-Authored-By: Aaron Andersen <aaron@fosslib.net>
This is what I suspected a while ago[1]:
> Heads-up everyone: After testing this in a few production instances,
> it seems that some browsers still get cache hits for new store paths
> (and changed contents) for some reason. I highly suspect that it might
> be due to the last-modified header (as mentioned in [2]).
>
> Going to test this with last-modified disabled for a little while and
> if this is the case I think we should improve that patch by disabling
> last-modified if serving from a store path.
Much earlier[2] when I reviewed the patch, I wrote this:
> Other than that, it looks good to me.
>
> However, I'm not sure what we should do with Last-Modified header.
> From RFC 2616, section 13.3.4:
>
> - If both an entity tag and a Last-Modified value have been
> provided by the origin server, SHOULD use both validators in
> cache-conditional requests. This allows both HTTP/1.0 and
> HTTP/1.1 caches to respond appropriately.
>
> I'm a bit nervous about the SHOULD here, as user agents in the wild
> could possibly just use Last-Modified and use the cached content
> instead.
Unfortunately, I didn't pursue this any further back then because
@pbogdan noted[3] the following:
> Hmm, could they (assuming they are conforming):
>
> * If an entity tag has been provided by the origin server, MUST
> use that entity tag in any cache-conditional request (using If-
> Match or If-None-Match).
After running with this patch in some deployments for a while, I found that both
Firefox and Chrome/Chromium do NOT re-validate against the ETag if the
Last-Modified header is still the same.
So I wrote a small NixOS VM test with Geckodriver to get a test case
closer to the real world, and I was indeed able to reproduce this.
Whether this is actually a bug in Chrome or Firefox is an entirely
different issue, and even IF it is the fault of the browsers and gets
fixed at some point, we'd still need to handle this for older browser
versions.
Apart from clearing the header, I also recreated the patch by using a
plain "git diff" with a small description on top. This should make it
easier for future authors to work on that patch.
[1]: https://github.com/NixOS/nixpkgs/pull/48337#issuecomment-495072764
[2]: https://github.com/NixOS/nixpkgs/pull/48337#issuecomment-451644084
[3]: https://github.com/NixOS/nixpkgs/pull/48337#issuecomment-451646135
Signed-off-by: aszlig <aszlig@nix.build>
Tests currently fail like this:
```
/nix/store/yslk7x7iw3hka6d33kmnba9sxaia4492-python3.7-mautrix-0.4.0/lib/python3.7/site-packages/mautrix/util/manhole.py:9: in <module>
from socket import SOL_SOCKET, SO_PEERCRED
E ImportError: cannot import name 'SO_PEERCRED' from 'socket' (/nix/store/81qani7sdir46gjwf3a3jr2cv1aggkz1-python3-3.7.5/lib/python3.7/socket.py)
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=============================== 1 error in 1.73s ===============================
```
Those values don't seem to be available in the macOS version of that
module. As there's no workaround implemented in the source, I assume
that upstream doesn't intend to support Darwin-like platforms at the
moment.
Idea shamelessly stolen from 4e60b0efae.
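A sketch of one plausible resolution given the above (assuming the usual pkgs scope; not necessarily the literal change): skip the test suite on Darwin, where socket.SO_PEERCRED does not exist.
```
# Sketch; skips the failing test suite on Darwin only.
python3Packages.mautrix.overridePythonAttrs (old: {
  doCheck = !stdenv.isDarwin;
})
```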
I realized that I no longer really know where I'm listed as a maintainer
and what I'm actually (co-)maintaining, which means that I can't
proactively take care of the packages I officially maintain.
As I don't have the time, energy, or motivation to take care of stuff I
was interested in one or two years ago (or packaged for someone else in
the past), I decided to make this explicit by removing myself from
several packages and adding myself to some other stuff I'm now
interested in.
I've seen it several times now that people remove themselves from a
package without removing the package itself, even if it ends up
unmaintained, so I figured it's fine in my case, as the affected
packages are rather low-priority and were pretty easy to maintain.