Merge master into haskell-updates
This commit is contained in: c1e92170ec
316 changed files with 4094 additions and 2984 deletions

.github/CODEOWNERS (vendored): 2 changes
@@ -78,6 +78,8 @@
/nixos/doc/manual/man-nixos-option.xml @nbp
/nixos/modules/installer/tools/nixos-option.sh @nbp
/nixos/modules/system @dasJ
+/nixos/modules/system/activation/bootspec.nix @grahamc @cole-h @raitobezarius
+/nixos/modules/system/activation/bootspec.cue @grahamc @cole-h @raitobezarius

# NixOS integration test driver
/nixos/lib/test-driver @tfc
@@ -32,3 +32,22 @@ mypkg = let
}});
in callPackage { inherit cudaPackages; };
```

The CUDA NVCC compiler requires flags to determine which hardware you
want to target in terms of SASS (real hardware) or PTX (JIT kernels).

Nixpkgs tries to support sensible real-architecture defaults based on the
CUDA toolkit version, with PTX support for future hardware. Experienced
users may optimize this configuration for a variety of reasons, such as
reducing binary size and compile time, supporting legacy hardware, or
optimizing for specific hardware.

You may provide capabilities to add support or reduce binary size through
`config` using `cudaCapabilities = [ "6.0" "7.0" ];` and
`cudaForwardCompat = true;` if you want PTX support for future hardware.

Please consult [GPUs supported](https://en.wikipedia.org/wiki/CUDA#GPUs_supported)
for your specific card(s).

Library maintainers should consult [NVCC Docs](https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/)
and release notes for their software package.
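For instance, the two settings above could be combined in a Nixpkgs configuration like the following sketch (the capability values are illustrative; choose them for your actual cards):

```nix
# ~/.config/nixpkgs/config.nix (a sketch, not a recommendation)
{
  # Build SASS only for Pascal (6.0) and Volta (7.0) GPUs,
  # trading broad hardware coverage for smaller binaries.
  cudaCapabilities = [ "6.0" "7.0" ];
  # Also embed PTX so newer, unlisted GPUs can JIT-compile the kernels.
  cudaForwardCompat = true;
}
```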
@@ -15923,6 +15923,12 @@
github = "kuwii";
githubId = 10705175;
};
+kkharji = {
+  name = "kkharji";
+  email = "kkharji@protonmail.com";
+  github = "kkharji";
+  githubId = 65782666;
+};
melias122 = {
name = "Martin Elias";
email = "martin+nixpkgs@elias.sx";
@@ -159,6 +159,40 @@ environment.variables.VK_ICD_FILENAMES =
"/run/opengl-driver/share/vulkan/icd.d/radeon_icd.x86_64.json";
```

## VA-API {#sec-gpu-accel-va-api}

[VA-API (Video Acceleration API)](https://www.intel.com/content/www/us/en/developer/articles/technical/linuxmedia-vaapi.html)
is an open-source library and API specification, which provides access to
graphics hardware acceleration capabilities for video processing.

VA-API drivers are loaded by `libva`. The version in nixpkgs is built to search
the opengl driver path, so drivers can be installed in
[](#opt-hardware.opengl.extraPackages).

VA-API can be tested using:

```ShellSession
$ nix-shell -p libva-utils --run vainfo
```

### Intel {#sec-gpu-accel-va-api-intel}

Modern Intel GPUs use the iHD driver, which can be installed with:

```nix
hardware.opengl.extraPackages = [
  intel-media-driver
];
```

Older Intel GPUs use the i965 driver, which can be installed with:

```nix
hardware.opengl.extraPackages = [
  vaapiIntel
];
```

## Common issues {#sec-gpu-accel-common-issues}

### User permissions {#sec-gpu-accel-common-issues-permissions}
@@ -1,6 +1,6 @@
# Contributing to this manual {#chap-contributing}

-The DocBook and CommonMark sources of NixOS' manual are in the [nixos/doc/manual](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual) subdirectory of the [Nixpkgs](https://github.com/NixOS/nixpkgs) repository.
+The [DocBook] and CommonMark sources of the NixOS manual are in the [nixos/doc/manual](https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual) subdirectory of the [Nixpkgs](https://github.com/NixOS/nixpkgs) repository.

You can quickly check your edits with the following:
@@ -11,3 +11,25 @@ $ nix-build nixos/release.nix -A manual.x86_64-linux
```

If the build succeeds, the manual will be in `./result/share/doc/nixos/index.html`.

**Contributing to the man pages**

The man pages are written in [DocBook], which is XML.

To see what your edits look like:

```ShellSession
$ cd /path/to/nixpkgs
$ nix-build nixos/release.nix -A manpages.x86_64-linux
```

You can then read the man page you edited by running

```ShellSession
$ man --manpath=result/share/man nixos-rebuild # Replace nixos-rebuild with the command whose manual you edited
```

If you're on a different architecture that's supported by NixOS (check nixos/release.nix), replace `x86_64-linux` with that architecture.
`nix-build` will complain otherwise, but should also tell you which architecture you have and the supported ones.

[DocBook]: https://en.wikipedia.org/wiki/DocBook
36 nixos/doc/manual/development/bootspec.chapter.md (new file)
@@ -0,0 +1,36 @@
# Experimental feature: Bootspec {#sec-experimental-bootspec}

Bootspec is an experimental feature, introduced in the [RFC-0125 proposal](https://github.com/NixOS/rfcs/pull/125) to standardize bootloader support
and advanced boot workflows such as SecureBoot and potentially more. The reference implementation can be found [here](https://github.com/NixOS/nixpkgs/pull/172237).

You can enable the creation of bootspec documents through [`boot.bootspec.enable = true`](options.html#opt-boot.bootspec.enable), which will prompt a warning until [RFC-0125](https://github.com/NixOS/rfcs/pull/125) is officially merged.

## Schema {#sec-experimental-bootspec-schema}

The bootspec schema is versioned and validated against [a CUE schema file](https://cuelang.org/), which should be considered the source of truth for your applications.

You will find the current version [here](../../../modules/system/activation/bootspec.cue).

## Extensions mechanism {#sec-experimental-bootspec-extensions}

Bootspec cannot account for all use cases.

For this purpose, Bootspec offers a generic extension facility, [`boot.bootspec.extensions`](options.html#opt-boot.bootspec.extensions), which can be used to inject any data needed for your use cases.

An example for SecureBoot is to get the Nix store path to `/etc/os-release` in order to bake it into a unified kernel image:

```nix
{ config, lib, ... }: {
  boot.bootspec.extensions = {
    "org.secureboot.osRelease" = config.environment.etc."os-release".source;
  };
}
```

To reduce incompatibility and prevent names from clashing between applications, it is **highly recommended** to use a unique namespace for your extensions.

## External bootloaders {#sec-experimental-bootspec-external-bootloaders}

It is possible to enable your own bootloader through [`boot.loader.external.installHook`](options.html#opt-boot.loader.external.installHook), which can wrap an existing bootloader.

Currently, there is no good story for composing existing bootloaders to enrich their features, e.g. SecureBoot; it will be necessary to reimplement or reuse existing parts.
@@ -12,6 +12,7 @@
<xi:include href="../from_md/development/sources.chapter.xml" />
<xi:include href="../from_md/development/writing-modules.chapter.xml" />
<xi:include href="../from_md/development/building-parts.chapter.xml" />
+<xi:include href="../from_md/development/bootspec.chapter.xml" />
<xi:include href="../from_md/development/what-happens-during-a-system-switch.chapter.xml" />
<xi:include href="../from_md/development/writing-documentation.chapter.xml" />
<xi:include href="../from_md/development/nixos-tests.chapter.xml" />
@@ -177,6 +177,48 @@ environment.variables.AMD_VULKAN_ICD = "RADV";
# Or
environment.variables.VK_ICD_FILENAMES =
"/run/opengl-driver/share/vulkan/icd.d/radeon_icd.x86_64.json";
</programlisting>
</section>
</section>
<section xml:id="sec-gpu-accel-va-api">
  <title>VA-API</title>
  <para>
    <link xlink:href="https://www.intel.com/content/www/us/en/developer/articles/technical/linuxmedia-vaapi.html">VA-API
    (Video Acceleration API)</link> is an open-source library and API
    specification, which provides access to graphics hardware
    acceleration capabilities for video processing.
  </para>
  <para>
    VA-API drivers are loaded by <literal>libva</literal>. The version
    in nixpkgs is built to search the opengl driver path, so drivers
    can be installed in
    <xref linkend="opt-hardware.opengl.extraPackages" />.
  </para>
  <para>
    VA-API can be tested using:
  </para>
  <programlisting>
$ nix-shell -p libva-utils --run vainfo
</programlisting>
  <section xml:id="sec-gpu-accel-va-api-intel">
    <title>Intel</title>
    <para>
      Modern Intel GPUs use the iHD driver, which can be installed
      with:
    </para>
    <programlisting language="bash">
hardware.opengl.extraPackages = [
  intel-media-driver
];
</programlisting>
    <para>
      Older Intel GPUs use the i965 driver, which can be installed
      with:
    </para>
    <programlisting language="bash">
hardware.opengl.extraPackages = [
  vaapiIntel
];
</programlisting>
  </section>
</section>
@@ -1,7 +1,9 @@
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="chap-contributing">
<title>Contributing to this manual</title>
<para>
-The DocBook and CommonMark sources of NixOS’ manual are in the
+The
+<link xlink:href="https://en.wikipedia.org/wiki/DocBook">DocBook</link>
+and CommonMark sources of the NixOS manual are in the
<link xlink:href="https://github.com/NixOS/nixpkgs/tree/master/nixos/doc/manual">nixos/doc/manual</link>
subdirectory of the
<link xlink:href="https://github.com/NixOS/nixpkgs">Nixpkgs</link>

@@ -19,4 +21,32 @@ $ nix-build nixos/release.nix -A manual.x86_64-linux
If the build succeeds, the manual will be in
<literal>./result/share/doc/nixos/index.html</literal>.
</para>
<para>
  <emphasis role="strong">Contributing to the man pages</emphasis>
</para>
<para>
  The man pages are written in
  <link xlink:href="https://en.wikipedia.org/wiki/DocBook">DocBook</link>,
  which is XML.
</para>
<para>
  To see what your edits look like:
</para>
<programlisting>
$ cd /path/to/nixpkgs
$ nix-build nixos/release.nix -A manpages.x86_64-linux
</programlisting>
<para>
  You can then read the man page you edited by running
</para>
<programlisting>
$ man --manpath=result/share/man nixos-rebuild # Replace nixos-rebuild with the command whose manual you edited
</programlisting>
<para>
  If you’re on a different architecture that’s supported by NixOS
  (check nixos/release.nix), replace
  <literal>x86_64-linux</literal> with that architecture.
  <literal>nix-build</literal> will complain otherwise, but should
  also tell you which architecture you have and the supported ones.
</para>
</chapter>
73 nixos/doc/manual/from_md/development/bootspec.chapter.xml (new file)
@@ -0,0 +1,73 @@
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="sec-experimental-bootspec">
  <title>Experimental feature: Bootspec</title>
  <para>
    Bootspec is an experimental feature, introduced in the
    <link xlink:href="https://github.com/NixOS/rfcs/pull/125">RFC-0125
    proposal</link> to standardize bootloader support and advanced boot
    workflows such as SecureBoot and potentially more. The reference
    implementation can be found
    <link xlink:href="https://github.com/NixOS/nixpkgs/pull/172237">here</link>.
  </para>
  <para>
    You can enable the creation of bootspec documents through
    <link xlink:href="options.html#opt-boot.bootspec.enable"><literal>boot.bootspec.enable = true</literal></link>,
    which will prompt a warning until
    <link xlink:href="https://github.com/NixOS/rfcs/pull/125">RFC-0125</link>
    is officially merged.
  </para>
  <section xml:id="sec-experimental-bootspec-schema">
    <title>Schema</title>
    <para>
      The bootspec schema is versioned and validated against
      <link xlink:href="https://cuelang.org/">a CUE schema file</link>,
      which should be considered the source of truth for your
      applications.
    </para>
    <para>
      You will find the current version
      <link xlink:href="../../../modules/system/activation/bootspec.cue">here</link>.
    </para>
  </section>
  <section xml:id="sec-experimental-bootspec-extensions">
    <title>Extensions mechanism</title>
    <para>
      Bootspec cannot account for all use cases.
    </para>
    <para>
      For this purpose, Bootspec offers a generic extension facility,
      <link xlink:href="options.html#opt-boot.bootspec.extensions"><literal>boot.bootspec.extensions</literal></link>,
      which can be used to inject any data needed for your use cases.
    </para>
    <para>
      An example for SecureBoot is to get the Nix store path to
      <literal>/etc/os-release</literal> in order to bake it into a
      unified kernel image:
    </para>
    <programlisting language="bash">
{ config, lib, ... }: {
  boot.bootspec.extensions = {
    "org.secureboot.osRelease" = config.environment.etc."os-release".source;
  };
}
</programlisting>
    <para>
      To reduce incompatibility and prevent names from clashing between
      applications, it is <emphasis role="strong">highly
      recommended</emphasis> to use a unique namespace for your
      extensions.
    </para>
  </section>
  <section xml:id="sec-experimental-bootspec-external-bootloaders">
    <title>External bootloaders</title>
    <para>
      It is possible to enable your own bootloader through
      <link xlink:href="options.html#opt-boot.loader.external.installHook"><literal>boot.loader.external.installHook</literal></link>,
      which can wrap an existing bootloader.
    </para>
    <para>
      Currently, there is no good story for composing existing
      bootloaders to enrich their features, e.g. SecureBoot; it will be
      necessary to reimplement or reuse existing parts.
    </para>
  </section>
</chapter>
@@ -44,6 +44,14 @@
<link linkend="opt-services.atuin.enable">services.atuin</link>.
</para>
</listitem>
<listitem>
  <para>
    <link xlink:href="https://gitlab.com/kop316/mmsd">mmsd</link>,
    a lower level daemon that transmits and receives MMSes.
    Available as
    <link linkend="opt-services.mmsd.enable">services.mmsd</link>.
  </para>
</listitem>
<listitem>
<para>
<link xlink:href="https://v2raya.org">v2rayA</link>, a Linux

@@ -282,6 +290,20 @@
to match upstream.
</para>
</listitem>
<listitem>
  <para>
    The new option
    <literal>services.tailscale.useRoutingFeatures</literal>
    controls various settings for using Tailscale features like
    exit nodes and subnet routers. If you wish to use your machine
    as an exit node, you can set this setting to
    <literal>server</literal>; otherwise, if you wish to use an
    exit node, you can set this setting to
    <literal>client</literal>. The strict RPF warning has been
    removed, as the RPF will be loosened automatically based on the
    value of this setting.
  </para>
</listitem>
</itemizedlist>
</section>
</section>
@@ -134,7 +134,7 @@
</arg>
<arg>
<option>-I</option>
-<replaceable>path</replaceable>
+<replaceable>NIX_PATH</replaceable>
</arg>
<arg>
<group choice='req'>

@@ -624,7 +624,7 @@

<para>
In addition, <command>nixos-rebuild</command> accepts various Nix-related
-flags, including <option>--max-jobs</option> / <option>-j</option>,
+flags, including <option>--max-jobs</option> / <option>-j</option>, <option>-I</option>,
<option>--show-trace</option>, <option>--keep-failed</option>,
<option>--keep-going</option>, <option>--impure</option>, and <option>--verbose</option> /
<option>-v</option>. See the Nix manual for details.

@@ -647,6 +647,20 @@
</listitem>
</varlistentry>

<varlistentry>
  <term>
    <envar>NIX_PATH</envar>
  </term>
  <listitem>
    <para>
      A colon-separated list of directories used to look up Nix expressions enclosed in angle brackets (e.g. &lt;nixpkgs&gt;). Example:
      <screen>
nixpkgs=./my-nixpkgs
</screen>
    </para>
  </listitem>
</varlistentry>

<varlistentry>
  <term>
    <envar>NIX_SSHOPTS</envar>
@@ -20,6 +20,8 @@ In addition to numerous new and upgraded packages, this release has the followin

- [atuin](https://github.com/ellie/atuin), a sync server for shell history. Available as [services.atuin](#opt-services.atuin.enable).

- [mmsd](https://gitlab.com/kop316/mmsd), a lower level daemon that transmits and receives MMSes. Available as [services.mmsd](#opt-services.mmsd.enable).

- [v2rayA](https://v2raya.org), a Linux web GUI client of Project V which supports V2Ray, Xray, SS, SSR, Trojan and Pingtunnel. Available as [services.v2raya](options.html#opt-services.v2raya.enable).

## Backward Incompatibilities {#sec-release-23.05-incompatibilities}
@@ -81,3 +83,5 @@ In addition to numerous new and upgraded packages, this release has the followin

- The `services.fwupd` module now allows arbitrary daemon settings to be configured in a structured manner ([`services.fwupd.daemonSettings`](#opt-services.fwupd.daemonSettings)).

- The `unifi-poller` package and corresponding NixOS module have been renamed to `unpoller` to match upstream.

- The new option `services.tailscale.useRoutingFeatures` controls various settings for using Tailscale features like exit nodes and subnet routers. If you wish to use your machine as an exit node, you can set this setting to `server`; otherwise, if you wish to use an exit node, you can set this setting to `client`. The strict RPF warning has been removed, as the RPF will be loosened automatically based on the value of this setting.
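As a sketch of the release note above, the new option could be set like this (values illustrative; `"none"`, `"client"`, `"server"`, and `"both"` are the documented choices):

```nix
# Configure this machine both to offer and to use Tailscale exit nodes.
{
  services.tailscale = {
    enable = true;
    useRoutingFeatures = "both";
  };
}
```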
@@ -45,6 +45,7 @@ with lib;
networkmanager-vpnc = super.networkmanager-vpnc.override { withGnome = false; };
pinentry = super.pinentry.override { enabledFlavors = [ "curses" "tty" "emacs" ]; withLibsecret = false; };
qemu = super.qemu.override { gtkSupport = false; spiceSupport = false; sdlSupport = false; };
qrencode = super.qrencode.overrideAttrs (_: { doCheck = false; });
zbar = super.zbar.override { enableVideo = false; withXorg = false; };
}));
};
(File diff suppressed because it is too large)
@@ -11,7 +11,11 @@ let

mkExcludeFile = cfg:
  # Write each exclude pattern to a new line
-  pkgs.writeText "excludefile" (concatStringsSep "\n" cfg.exclude);
+  pkgs.writeText "excludefile" (concatMapStrings (s: s + "\n") cfg.exclude);

mkPatternsFile = cfg:
  # Write each pattern to a new line
  pkgs.writeText "patternsfile" (concatMapStrings (s: s + "\n") cfg.patterns);

mkKeepArgs = cfg:
  # If cfg.prune.keep e.g. has a yearly attribute,

@@ -47,6 +51,7 @@ let
borg create $extraArgs \
  --compression ${cfg.compression} \
  --exclude-from ${mkExcludeFile cfg} \
  --patterns-from ${mkPatternsFile cfg} \
  $extraCreateArgs \
  "::$archiveName$archiveSuffix" \
  ${if cfg.paths == null then "-" else escapeShellArgs cfg.paths}

@@ -441,6 +446,21 @@
];
};

patterns = mkOption {
  type = with types; listOf str;
  description = lib.mdDoc ''
    Include/exclude paths matching the given patterns. The first
    matching pattern is used, so if an include pattern (prefix `+`)
    matches before an exclude pattern (prefix `-`), the file is
    backed up. See [{command}`borg help patterns`](https://borgbackup.readthedocs.io/en/stable/usage/help.html#borg-patterns) for pattern syntax.
  '';
  default = [ ];
  example = [
    "+ /home/susan"
    "- /home/*"
  ];
};

readWritePaths = mkOption {
  type = with types; listOf path;
  description = lib.mdDoc ''
@@ -146,6 +146,7 @@
Name of the user to ensure.
'';
};

ensurePermissions = mkOption {
  type = types.attrsOf types.str;
  default = {};

@@ -167,6 +168,154 @@
}
'';
};

ensureClauses = mkOption {
  description = lib.mdDoc ''
    An attrset of clauses to grant to the user. Under the hood this uses the
    [ALTER USER syntax](https://www.postgresql.org/docs/current/sql-alteruser.html) for each attrName where
    the attrValue is true in the attrSet:
    `ALTER USER user.name WITH attrName`
  '';
  example = literalExpression ''
    {
      superuser = true;
      createrole = true;
      createdb = true;
    }
  '';
  default = {};
  defaultText = lib.literalMD ''
    The default, `null`, means that the user created will have the default permissions assigned by PostgreSQL. Subsequent server starts will not set or unset the clause, so imperative changes are preserved.
  '';
  type = types.submodule {
    options = let
      defaultText = lib.literalMD ''
        `null`: do not set. For newly created roles, use PostgreSQL's default. For existing roles, do not touch this clause.
      '';
    in {
      superuser = mkOption {
        type = types.nullOr types.bool;
        description = lib.mdDoc ''
          Grants the user, created by the ensureUser attr, superuser permissions. From the postgres docs:

          A database superuser bypasses all permission checks,
          except the right to log in. This is a dangerous privilege
          and should not be used carelessly; it is best to do most
          of your work as a role that is not a superuser. To create
          a new database superuser, use CREATE ROLE name SUPERUSER.
          You must do this as a role that is already a superuser.

          More information on postgres roles can be found [here](https://www.postgresql.org/docs/current/role-attributes.html)
        '';
        default = null;
        inherit defaultText;
      };
      createrole = mkOption {
        type = types.nullOr types.bool;
        description = lib.mdDoc ''
          Grants the user, created by the ensureUser attr, createrole permissions. From the postgres docs:

          A role must be explicitly given permission to create more
          roles (except for superusers, since those bypass all
          permission checks). To create such a role, use CREATE
          ROLE name CREATEROLE. A role with CREATEROLE privilege
          can alter and drop other roles, too, as well as grant or
          revoke membership in them. However, to create, alter,
          drop, or change membership of a superuser role, superuser
          status is required; CREATEROLE is insufficient for that.

          More information on postgres roles can be found [here](https://www.postgresql.org/docs/current/role-attributes.html)
        '';
        default = null;
        inherit defaultText;
      };
      createdb = mkOption {
        type = types.nullOr types.bool;
        description = lib.mdDoc ''
          Grants the user, created by the ensureUser attr, createdb permissions. From the postgres docs:

          A role must be explicitly given permission to create
          databases (except for superusers, since those bypass all
          permission checks). To create such a role, use CREATE
          ROLE name CREATEDB.

          More information on postgres roles can be found [here](https://www.postgresql.org/docs/current/role-attributes.html)
        '';
        default = null;
        inherit defaultText;
      };
      "inherit" = mkOption {
        type = types.nullOr types.bool;
        description = lib.mdDoc ''
          Grants the user created inherit permissions. From the postgres docs:

          A role is given permission to inherit the privileges of
          roles it is a member of, by default. However, to create a
          role without the permission, use CREATE ROLE name
          NOINHERIT.

          More information on postgres roles can be found [here](https://www.postgresql.org/docs/current/role-attributes.html)
        '';
        default = null;
        inherit defaultText;
      };
      login = mkOption {
        type = types.nullOr types.bool;
        description = lib.mdDoc ''
          Grants the user, created by the ensureUser attr, login permissions. From the postgres docs:

          Only roles that have the LOGIN attribute can be used as
          the initial role name for a database connection. A role
          with the LOGIN attribute can be considered the same as a
          “database user”. To create a role with login privilege,
          use either:

          CREATE ROLE name LOGIN; CREATE USER name;

          (CREATE USER is equivalent to CREATE ROLE except that
          CREATE USER includes LOGIN by default, while CREATE ROLE
          does not.)

          More information on postgres roles can be found [here](https://www.postgresql.org/docs/current/role-attributes.html)
        '';
        default = null;
        inherit defaultText;
      };
      replication = mkOption {
        type = types.nullOr types.bool;
        description = lib.mdDoc ''
          Grants the user, created by the ensureUser attr, replication permissions. From the postgres docs:

          A role must explicitly be given permission to initiate
          streaming replication (except for superusers, since those
          bypass all permission checks). A role used for streaming
          replication must have LOGIN permission as well. To create
          such a role, use CREATE ROLE name REPLICATION LOGIN.

          More information on postgres roles can be found [here](https://www.postgresql.org/docs/current/role-attributes.html)
        '';
        default = null;
        inherit defaultText;
      };
      bypassrls = mkOption {
        type = types.nullOr types.bool;
        description = lib.mdDoc ''
          Grants the user, created by the ensureUser attr, bypassrls permissions. From the postgres docs:

          A role must be explicitly given permission to bypass
          every row-level security (RLS) policy (except for
          superusers, since those bypass all permission checks). To
          create such a role, use CREATE ROLE name BYPASSRLS as a
          superuser.

          More information on postgres roles can be found [here](https://www.postgresql.org/docs/current/role-attributes.html)
        '';
        default = null;
        inherit defaultText;
      };
    };
  };
};
});
default = [];
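Putting the options above together, a hypothetical `ensureUsers` entry using `ensureClauses` might look like the following sketch (the user name is illustrative):

```nix
{
  services.postgresql = {
    enable = true;
    ensureUsers = [
      {
        name = "reporting";   # illustrative user name
        ensureClauses = {
          login = true;       # true maps to the clause name: ... WITH login
          createdb = true;
          superuser = false;  # false maps to the negated form: ... WITH nosuperuser
          # clauses left at null are not touched on server start
        };
      }
    ];
  };
}
```

Per the activation script in this module, the non-null clauses are collected into a single `ALTER ROLE "reporting" login createdb nosuperuser` statement.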
@@ -380,12 +529,29 @@
$PSQL -tAc "SELECT 1 FROM pg_database WHERE datname = '${database}'" | grep -q 1 || $PSQL -tAc 'CREATE DATABASE "${database}"'
'') cfg.ensureDatabases}
'' + ''
-${concatMapStrings (user: ''
-  $PSQL -tAc "SELECT 1 FROM pg_roles WHERE rolname='${user.name}'" | grep -q 1 || $PSQL -tAc 'CREATE USER "${user.name}"'
-  ${concatStringsSep "\n" (mapAttrsToList (database: permission: ''
-    $PSQL -tAc 'GRANT ${permission} ON ${database} TO "${user.name}"'
-  '') user.ensurePermissions)}
-'') cfg.ensureUsers}
+${
+  concatMapStrings
+    (user:
+      let
+        userPermissions = concatStringsSep "\n"
+          (mapAttrsToList
+            (database: permission: ''$PSQL -tAc 'GRANT ${permission} ON ${database} TO "${user.name}"' '')
+            user.ensurePermissions
+          );
+
+        filteredClauses = filterAttrs (name: value: value != null) user.ensureClauses;
+
+        clauseSqlStatements = attrValues (mapAttrs (n: v: if v then n else "no${n}") filteredClauses);
+
+        userClauses = ''$PSQL -tAc 'ALTER ROLE "${user.name}" ${concatStringsSep " " clauseSqlStatements}' '';
+      in ''
+        $PSQL -tAc "SELECT 1 FROM pg_roles WHERE rolname='${user.name}'" | grep -q 1 || $PSQL -tAc 'CREATE USER "${user.name}"'
+        ${userPermissions}
+        ${userClauses}
+      ''
+    )
+    cfg.ensureUsers
+}
'';

serviceConfig = mkMerge [
38 nixos/modules/services/networking/mmsd.nix (new file)
@@ -0,0 +1,38 @@
{ pkgs, lib, config, ... }:
with lib;
let
  cfg = config.services.mmsd;
  dbusServiceFile = pkgs.writeTextDir "share/dbus-1/services/org.ofono.mms.service" ''
    [D-BUS Service]
    Name=org.ofono.mms
    SystemdService=dbus-org.ofono.mms.service

    # Exec= is still required despite SystemdService= being used:
    # https://github.com/freedesktop/dbus/blob/ef55a3db0d8f17848f8a579092fb05900cc076f5/test/data/systemd-activation/com.example.SystemdActivatable1.service
    Exec=${pkgs.coreutils}/bin/false mmsd
  '';
in
{
  options.services.mmsd = {
    enable = mkEnableOption (mdDoc "Multimedia Messaging Service Daemon");
    extraArgs = mkOption {
      type = with types; listOf str;
      description = mdDoc "Extra arguments passed to `mmsd-tng`";
      default = [];
      example = ["--debug"];
    };
  };
  config = mkIf cfg.enable {
    services.dbus.packages = [ dbusServiceFile ];
    systemd.user.services.mmsd = {
      after = [ "ModemManager.service" ];
      aliases = [ "dbus-org.ofono.mms.service" ];
      serviceConfig = {
        Type = "dbus";
        ExecStart = "${pkgs.mmsd-tng}/bin/mmsdtng " + escapeShellArgs cfg.extraArgs;
        BusName = "org.ofono.mms";
        Restart = "on-failure";
      };
    };
  };
}
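A minimal sketch of enabling the module above from system configuration (`--debug` is the example flag the module itself documents):

```nix
# Run the MMS daemon for the user session, with verbose logging.
{
  services.mmsd = {
    enable = true;
    extraArgs = [ "--debug" ];
  };
}
```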
@@ -4,10 +4,7 @@ with lib;

let
  cfg = config.services.tailscale;
  firewallOn = config.networking.firewall.enable;
  rpfMode = config.networking.firewall.checkReversePath;
  isNetworkd = config.networking.useNetworkd;
  rpfIsStrict = rpfMode == true || rpfMode == "strict";
in {
  meta.maintainers = with maintainers; [ danderson mbaillie twitchyliquid64 ];

@@ -38,14 +35,23 @@ in {
      defaultText = literalExpression "pkgs.tailscale";
      description = lib.mdDoc "The package to use for tailscale";
    };

    useRoutingFeatures = mkOption {
      type = types.enum [ "none" "client" "server" "both" ];
      default = "none";
      example = "server";
      description = lib.mdDoc ''
        Enables settings required for Tailscale's routing features like subnet routers and exit nodes.

        To use these features, you will still need to call `sudo tailscale up` with the relevant flags like `--advertise-exit-node` and `--exit-node`.

        When set to `client` or `both`, reverse path filtering will be set to loose instead of strict.
        When set to `server` or `both`, IP forwarding will be enabled.
      '';
    };
  };

  config = mkIf cfg.enable {
    warnings = optional (firewallOn && rpfIsStrict) ''
      Strict reverse path filtering breaks Tailscale exit node use and some subnet routing setups. Consider setting:

        networking.firewall.checkReversePath = "loose";
    '';
    environment.systemPackages = [ cfg.package ]; # for the CLI
    systemd.packages = [ cfg.package ];
    systemd.services.tailscaled = {

@@ -75,6 +81,13 @@ in {
      stopIfChanged = false;
    };

    boot.kernel.sysctl = mkIf (cfg.useRoutingFeatures == "server" || cfg.useRoutingFeatures == "both") {
      "net.ipv4.conf.all.forwarding" = mkDefault true;
      "net.ipv6.conf.all.forwarding" = mkDefault true;
    };

    networking.firewall.checkReversePath = mkIf (cfg.useRoutingFeatures == "client" || cfg.useRoutingFeatures == "both") "loose";

    networking.dhcpcd.denyInterfaces = [ cfg.interfaceName ];

    systemd.network.networks."50-tailscale" = mkIf isNetworkd {
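Taken together, the new `useRoutingFeatures` option lets an exit-node host be described declaratively. A minimal sketch of a consumer configuration (hostnames and flags are illustrative, not from this diff):

```nix
{
  services.tailscale = {
    enable = true;
    # "server" turns on IPv4/IPv6 forwarding; "client"/"both" would also
    # relax reverse path filtering to "loose".
    useRoutingFeatures = "server";
  };
}
```

As the option description notes, advertising the node is still imperative afterwards, e.g. `sudo tailscale up --advertise-exit-node`.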
nixos/modules/services/x11/window-managers/katriawm.nix (Normal file, 27 lines)
@@ -0,0 +1,27 @@
{ config, lib, pkgs, ... }:

let
  inherit (lib) mdDoc mkEnableOption mkIf mkPackageOption singleton;
  cfg = config.services.xserver.windowManager.katriawm;
in
{
  ###### interface
  options = {
    services.xserver.windowManager.katriawm = {
      enable = mkEnableOption (mdDoc "katriawm");
      package = mkPackageOption pkgs "katriawm" {};
    };
  };

  ###### implementation
  config = mkIf cfg.enable {
    services.xserver.windowManager.session = singleton {
      name = "katriawm";
      start = ''
        ${cfg.package}/bin/katriawm &
        waitPID=$!
      '';
    };
    environment.systemPackages = [ cfg.package ];
  };
}
nixos/modules/system/activation/bootspec.cue (Normal file, 17 lines)
@@ -0,0 +1,17 @@
#V1: {
	init: string
	initrd?: string
	initrdSecrets?: string
	kernel: string
	kernelParams: [...string]
	label: string
	toplevel: string
	specialisation?: {
		[=~"^"]: #V1
	}
	extensions?: {...}
}

Document: {
	v1: #V1
}
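For orientation, a document validating against this schema would look roughly like the following (all store paths and values here are made up for illustration; they are not part of this commit):

```json
{
  "v1": {
    "init": "/nix/store/xxx-nixos-system/init",
    "initrd": "/nix/store/xxx-initrd-linux/initrd",
    "kernel": "/nix/store/xxx-linux/bzImage",
    "kernelParams": ["loglevel=4"],
    "label": "NixOS 23.05 (Linux 6.1.0)",
    "toplevel": "/nix/store/xxx-nixos-system"
  }
}
```

The `[=~"^"]: #V1` constraint makes `specialisation` a map from arbitrary names to nested `#V1` documents, and `extensions?: {...}` leaves the extension namespace open.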
nixos/modules/system/activation/bootspec.nix (Normal file, 124 lines)
@@ -0,0 +1,124 @@
# Note that these schemas are defined by RFC-0125.
# This document is considered a stable API, and is depended upon by external tooling.
# Changes to the structure of the document, or the semantics of the values should go through an RFC.
#
# See: https://github.com/NixOS/rfcs/pull/125
{ config
, pkgs
, lib
, ...
}:
let
  cfg = config.boot.bootspec;
  children = lib.mapAttrs (childName: childConfig: childConfig.configuration.system.build.toplevel) config.specialisation;
  schemas = {
    v1 = rec {
      filename = "boot.json";
      json =
        pkgs.writeText filename
          (builtins.toJSON
            {
              v1 = {
                kernel = "${config.boot.kernelPackages.kernel}/${config.system.boot.loader.kernelFile}";
                kernelParams = config.boot.kernelParams;
                initrd = "${config.system.build.initialRamdisk}/${config.system.boot.loader.initrdFile}";
                initrdSecrets = "${config.system.build.initialRamdiskSecretAppender}/bin/append-initrd-secrets";
                label = "NixOS ${config.system.nixos.codeName} ${config.system.nixos.label} (Linux ${config.boot.kernelPackages.kernel.modDirVersion})";

                inherit (cfg) extensions;
              };
            });

      generator =
        let
          # NOTE: Be careful to not introduce excess newlines at the end of the
          # injectors, as that may affect the pipes and redirects.

          # Inject toplevel and init into the bootspec.
          # This can only be done here because we *cannot* depend on $out
          # referring to the toplevel, except by living in the toplevel itself.
          toplevelInjector = lib.escapeShellArgs [
            "${pkgs.jq}/bin/jq"
            ''
              .v1.toplevel = $toplevel |
              .v1.init = $init
            ''
            "--sort-keys"
            "--arg" "toplevel" "${placeholder "out"}"
            "--arg" "init" "${placeholder "out"}/init"
          ] + " < ${json}";

          # We slurp all specialisations and inject them as values, such that
          # `.specialisations.${name}` embeds the specialisation's bootspec
          # document.
          specialisationInjector =
            let
              specialisationLoader = (lib.mapAttrsToList
                (childName: childToplevel: lib.escapeShellArgs [ "--slurpfile" childName "${childToplevel}/bootspec/$(unknown)" ])
                children);
            in
            lib.escapeShellArgs [
              "${pkgs.jq}/bin/jq"
              "--sort-keys"
              ".v1.specialisation = ($ARGS.named | map_values(. | first | .v1))"
            ] + " ${lib.concatStringsSep " " specialisationLoader}";
        in
        ''
          mkdir -p $out/bootspec

          ${toplevelInjector} | ${specialisationInjector} > $out/bootspec/$(unknown)
        '';

      validator = pkgs.writeCueValidator ./bootspec.cue {
        document = "Document"; # Universal validator for any version, as long as the schema is correctly set.
      };
    };
  };
in
{
  options.boot.bootspec = {
    enable = lib.mkEnableOption (lib.mdDoc "Enable generation of RFC-0125 bootspec in $system/bootspec, e.g. /run/current-system/bootspec");

    extensions = lib.mkOption {
      type = lib.types.attrs;
      default = { };
      description = lib.mdDoc ''
        User-defined data that extends the bootspec document.

        To reduce incompatibility and prevent names from clashing
        between applications, it is **highly recommended** to use a
        unique namespace for your extensions.
      '';
    };

    # This will be run as a part of the `systemBuilder` in ./top-level.nix. This
    # means `$out` points to the output of `config.system.build.toplevel` and can
    # be used for a variety of things (though, for now, it's only used to report
    # the path of the `toplevel` itself and the `init` executable).
    writer = lib.mkOption {
      internal = true;
      default = schemas.v1.generator;
    };

    validator = lib.mkOption {
      internal = true;
      default = schemas.v1.validator;
    };

    filename = lib.mkOption {
      internal = true;
      default = schemas.v1.filename;
    };
  };

  config = lib.mkIf (cfg.enable) {
    warnings = [
      ''RFC-0125 is not merged yet, this is a feature preview of bootspec.
        The schema is not definitive and features are not guaranteed to be stable until RFC-0125 is merged.
        See:
        - https://github.com/NixOS/nixpkgs/pull/172237 to track merge status in nixpkgs.
        - https://github.com/NixOS/rfcs/pull/125 to track RFC status.
      ''
    ];
  };
}
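The generator above is two chained jq passes; their combined effect can be emulated in plain Python for illustration (store paths and the `something` specialisation are made up, and this sketch ignores shell quoting and the `--slurpfile` mechanics):

```python
import json

# Base document, as produced by `builtins.toJSON` at build time.
base = {"v1": {"kernel": "/nix/store/xxx-linux/bzImage", "kernelParams": []}}
out = "/nix/store/xxx-nixos-system"  # what `placeholder "out"` resolves to

# toplevelInjector: `.v1.toplevel = $toplevel | .v1.init = $init`
base["v1"]["toplevel"] = out
base["v1"]["init"] = out + "/init"

# specialisationInjector: slurp each child's bootspec document and embed
# only its `.v1` part under `.v1.specialisation.<name>`.
children = {"something": {"v1": {"kernel": "/nix/store/yyy-linux/bzImage"}}}
base["v1"]["specialisation"] = {name: doc["v1"] for name, doc in children.items()}

print(json.dumps(base, sort_keys=True, indent=2))
```

This mirrors what the `specialisation` test below checks: the embedded document equals the child's own `boot.json` minus the outer `v1` wrapper.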
@@ -79,6 +79,11 @@ let

      echo -n "$extraDependencies" > $out/extra-dependencies

      ${optionalString (!config.boot.isContainer && config.boot.bootspec.enable) ''
        ${config.boot.bootspec.writer}
        ${config.boot.bootspec.validator} "$out/bootspec/${config.boot.bootspec.filename}"
      ''}

      ${config.system.extraSystemBuilderCmds}
    '';
nixos/modules/system/boot/loader/external/external.md (Normal file, vendored, 26 lines)
@@ -0,0 +1,26 @@
# External Bootloader Backends {#sec-bootloader-external}

NixOS has support for several bootloader backends by default: systemd-boot, grub, uboot, etc.
The built-in bootloader backend support is generic and supports most use cases.
Some users may prefer to create advanced workflows around managing the bootloader and bootable entries.

You can replace the built-in bootloader support with your own tooling using the "external" bootloader option.

Imagine you have created a new package called FooBoot.
FooBoot provides a program at `${pkgs.fooboot}/bin/fooboot-install` which takes the system closure's path as its only argument and configures the system's bootloader.

You can enable FooBoot like this:

```nix
{ pkgs, ... }: {
  boot.loader.external = {
    enable = true;
    installHook = "${pkgs.fooboot}/bin/fooboot-install";
  };
}
```

## Developing Custom Bootloader Backends

Bootloaders should use [RFC-0125](https://github.com/NixOS/rfcs/pull/125)'s Bootspec format and synthesis tools to identify the key properties for bootable system generations.
nixos/modules/system/boot/loader/external/external.nix (Normal file, vendored, 38 lines)
@@ -0,0 +1,38 @@
{ config, lib, pkgs, ... }:

with lib;

let
  cfg = config.boot.loader.external;
in
{
  meta = {
    maintainers = with maintainers; [ cole-h grahamc raitobezarius ];
    # Don't edit the docbook xml directly, edit the md and generate it:
    # `pandoc external.md -t docbook --top-level-division=chapter --extract-media=media -f markdown+smart > external.xml`
    doc = ./external.xml;
  };

  options.boot.loader.external = {
    enable = mkEnableOption (lib.mdDoc "use an external tool to install your bootloader");

    installHook = mkOption {
      type = with types; path;
      description = lib.mdDoc ''
        The full path to a program of your choosing which performs the bootloader installation process.

        The program will be called with an argument pointing to the output of the system's toplevel.
      '';
    };
  };

  config = mkIf cfg.enable {
    boot.loader = {
      grub.enable = mkDefault false;
      systemd-boot.enable = mkDefault false;
      supportsInitrdSecrets = mkDefault false;
    };

    system.build.installBootLoader = cfg.installHook;
  };
}
nixos/modules/system/boot/loader/external/external.xml (Normal file, vendored, 41 lines)
@@ -0,0 +1,41 @@
<chapter xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xml:id="sec-bootloader-external">
  <title>External Bootloader Backends</title>
  <para>
    NixOS has support for several bootloader backends by default:
    systemd-boot, grub, uboot, etc. The built-in bootloader backend
    support is generic and supports most use cases. Some users may
    prefer to create advanced workflows around managing the bootloader
    and bootable entries.
  </para>
  <para>
    You can replace the built-in bootloader support with your own
    tooling using the <quote>external</quote> bootloader option.
  </para>
  <para>
    Imagine you have created a new package called FooBoot. FooBoot
    provides a program at
    <literal>${pkgs.fooboot}/bin/fooboot-install</literal> which takes
    the system closure’s path as its only argument and configures the
    system’s bootloader.
  </para>
  <para>
    You can enable FooBoot like this:
  </para>
  <programlisting language="nix">
{ pkgs, ... }: {
  boot.loader.external = {
    enable = true;
    installHook = "${pkgs.fooboot}/bin/fooboot-install";
  };
}
  </programlisting>
  <section xml:id="developing-custom-bootloader-backends">
    <title>Developing Custom Bootloader Backends</title>
    <para>
      Bootloaders should use
      <link xlink:href="https://github.com/NixOS/rfcs/pull/125">RFC-0125</link>’s
      Bootspec format and synthesis tools to identify the key properties
      for bootable system generations.
    </para>
  </section>
</chapter>
nixos/tests/bootspec.nix (Normal file, 144 lines)
@@ -0,0 +1,144 @@
{ system ? builtins.currentSystem,
  config ? {},
  pkgs ? import ../.. { inherit system config; }
}:

with import ../lib/testing-python.nix { inherit system pkgs; };
with pkgs.lib;

let
  baseline = {
    virtualisation.useBootLoader = true;
  };
  grub = {
    boot.loader.grub.enable = true;
  };
  systemd-boot = {
    boot.loader.systemd-boot.enable = true;
  };
  uefi = {
    virtualisation.useEFIBoot = true;
    boot.loader.efi.canTouchEfiVariables = true;
    boot.loader.grub.efiSupport = true;
    environment.systemPackages = [ pkgs.efibootmgr ];
  };
  standard = {
    boot.bootspec.enable = true;

    imports = [
      baseline
      systemd-boot
      uefi
    ];
  };
in
{
  basic = makeTest {
    name = "systemd-boot-with-bootspec";
    meta.maintainers = with pkgs.lib.maintainers; [ raitobezarius ];

    nodes.machine = standard;

    testScript = ''
      machine.start()
      machine.wait_for_unit("multi-user.target")

      machine.succeed("test -e /run/current-system/bootspec/boot.json")
    '';
  };

  grub = makeTest {
    name = "grub-with-bootspec";
    meta.maintainers = with pkgs.lib.maintainers; [ raitobezarius ];

    nodes.machine = {
      boot.bootspec.enable = true;

      imports = [
        baseline
        grub
        uefi
      ];
    };

    testScript = ''
      machine.start()
      machine.wait_for_unit("multi-user.target")

      machine.succeed("test -e /run/current-system/bootspec/boot.json")
    '';
  };

  legacy-boot = makeTest {
    name = "legacy-boot-with-bootspec";
    meta.maintainers = with pkgs.lib.maintainers; [ raitobezarius ];

    nodes.machine = {
      boot.bootspec.enable = true;

      imports = [
        baseline
        grub
      ];
    };

    testScript = ''
      machine.start()
      machine.wait_for_unit("multi-user.target")

      machine.succeed("test -e /run/current-system/bootspec/boot.json")
    '';
  };

  # Check that specialisations create corresponding entries in bootspec.
  specialisation = makeTest {
    name = "bootspec-with-specialisation";
    meta.maintainers = with pkgs.lib.maintainers; [ raitobezarius ];

    nodes.machine = {
      imports = [ standard ];
      environment.systemPackages = [ pkgs.jq ];
      specialisation.something.configuration = {};
    };

    testScript = ''
      import json

      machine.start()
      machine.wait_for_unit("multi-user.target")

      machine.succeed("test -e /run/current-system/bootspec/boot.json")
      machine.succeed("test -e /run/current-system/specialisation/something/bootspec/boot.json")

      sp_in_parent = json.loads(machine.succeed("jq -r '.v1.specialisation.something' /run/current-system/bootspec/boot.json"))
      sp_in_fs = json.loads(machine.succeed("cat /run/current-system/specialisation/something/bootspec/boot.json"))

      assert sp_in_parent == sp_in_fs['v1'], "Bootspecs of the same specialisation are different!"
    '';
  };

  # Check that extensions are propagated.
  extensions = makeTest {
    name = "bootspec-with-extensions";
    meta.maintainers = with pkgs.lib.maintainers; [ raitobezarius ];

    nodes.machine = { config, ... }: {
      imports = [ standard ];
      environment.systemPackages = [ pkgs.jq ];
      boot.bootspec.extensions = {
        osRelease = config.environment.etc."os-release".source;
      };
    };

    testScript = ''
      machine.start()
      machine.wait_for_unit("multi-user.target")

      current_os_release = machine.succeed("cat /etc/os-release")
      bootspec_os_release = machine.succeed("cat $(jq -r '.v1.extensions.osRelease' /run/current-system/bootspec/boot.json)")

      assert current_os_release == bootspec_os_release, "Filename referenced by extension has unexpected contents"
    '';
  };
}
@@ -130,8 +130,97 @@ let
    '';

  };

  mk-ensure-clauses-test = postgresql-name: postgresql-package: makeTest {
    name = postgresql-name;
    meta = with pkgs.lib.maintainers; {
      maintainers = [ zagy ];
    };

    machine = {...}:
      {
        services.postgresql = {
          enable = true;
          package = postgresql-package;
          ensureUsers = [
            {
              name = "all-clauses";
              ensureClauses = {
                superuser = true;
                createdb = true;
                createrole = true;
                "inherit" = true;
                login = true;
                replication = true;
                bypassrls = true;
              };
            }
            {
              name = "default-clauses";
            }
          ];
        };
      };

    testScript = let
      getClausesQuery = user: pkgs.lib.concatStringsSep " "
        [
          "SELECT row_to_json(row)"
          "FROM ("
          "SELECT"
          "rolsuper,"
          "rolinherit,"
          "rolcreaterole,"
          "rolcreatedb,"
          "rolcanlogin,"
          "rolreplication,"
          "rolbypassrls"
          "FROM pg_roles"
          "WHERE rolname = '${user}'"
          ") row;"
        ];
    in ''
      import json
      machine.start()
      machine.wait_for_unit("postgresql")

      with subtest("All user permissions are set according to the ensureClauses attr"):
          clauses = json.loads(
              machine.succeed(
                  "sudo -u postgres psql -tc \"${getClausesQuery "all-clauses"}\""
              )
          )
          print(clauses)
          assert clauses['rolsuper'], 'expected user with clauses to have superuser clause'
          assert clauses['rolinherit'], 'expected user with clauses to have inherit clause'
          assert clauses['rolcreaterole'], 'expected user with clauses to have create role clause'
          assert clauses['rolcreatedb'], 'expected user with clauses to have create db clause'
          assert clauses['rolcanlogin'], 'expected user with clauses to have login clause'
          assert clauses['rolreplication'], 'expected user with clauses to have replication clause'
          assert clauses['rolbypassrls'], 'expected user with clauses to have bypassrls clause'

      with subtest("All user permissions default when ensureClauses is not provided"):
          clauses = json.loads(
              machine.succeed(
                  "sudo -u postgres psql -tc \"${getClausesQuery "default-clauses"}\""
              )
          )
          assert not clauses['rolsuper'], 'expected user with no clauses set to have default superuser clause'
          assert clauses['rolinherit'], 'expected user with no clauses set to have default inherit clause'
          assert not clauses['rolcreaterole'], 'expected user with no clauses set to have default create role clause'
          assert not clauses['rolcreatedb'], 'expected user with no clauses set to have default create db clause'
          assert clauses['rolcanlogin'], 'expected user with no clauses set to have default login clause'
          assert not clauses['rolreplication'], 'expected user with no clauses set to have default replication clause'
          assert not clauses['rolbypassrls'], 'expected user with no clauses set to have default bypassrls clause'

      machine.shutdown()
    '';
  };
in
(mapAttrs' (name: package: { inherit name; value=make-postgresql-test name package false;}) postgresql-versions) // {
concatMapAttrs (name: package: {
  ${name} = make-postgresql-test name package false;
  ${name + "-clauses"} = mk-ensure-clauses-test name package;
}) postgresql-versions
// {
  postgresql_11-backup-all = make-postgresql-test "postgresql_11-backup-all" postgresql-versions.postgresql_11 true;
}
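The switch from `mapAttrs'` to `concatMapAttrs` works because the latter merges the attribute sets produced for each entry, so a single input attribute can now yield two tests (`<name>` and `<name>-clauses`). A rough sketch of the semantics, with values simplified to strings for illustration:

```nix
# lib.concatMapAttrs f set is roughly:
#   builtins.foldl' (a: b: a // b) {} (lib.mapAttrsToList f set)
lib.concatMapAttrs
  (name: pkg: {
    ${name} = "postgres test for ${name}";
    "${name}-clauses" = "ensureClauses test for ${name}";
  })
  { postgresql_14 = "pkg"; }
# => { postgresql_14 = "postgres test for postgresql_14";
#      postgresql_14-clauses = "ensureClauses test for postgresql_14"; }
```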
@@ -33,16 +33,13 @@ buildNpmPackage rec {
  makeCacheWritable = true;
  npmFlags = [ "--legacy-peer-deps" ];

  # Override installPhase so we can copy the only folders that matter (app and node_modules)
  # Override installPhase so we can copy the only directory that matters (app)
  installPhase = ''
    runHook preInstall

    # prune unused deps
    npm prune --omit dev --no-save $npmFlags

    # copy built app and node_modules directories
    mkdir -p $out/lib/node_modules/open-stage-control
    cp -r app node_modules $out/lib/node_modules/open-stage-control/
    cp -r app $out/lib/node_modules/open-stage-control/

    # copy icon
    install -Dm644 resources/images/logo.png $out/share/icons/hicolor/256x256/apps/open-stage-control.png
@@ -29,6 +29,8 @@ buildGoModule rec {
    "cmd/rlpdump"
  ];

  passthru.updateScript = ./update.sh;

  meta = with lib; {
    homepage = "https://github.com/ledgerwatch/erigon/";
    description = "Ethereum node implementation focused on scalability and modularity";
|
|||
|
||||
let
|
||||
pname = "trezor-suite";
|
||||
version = "22.10.3";
|
||||
version = "22.11.1";
|
||||
name = "${pname}-${version}";
|
||||
|
||||
suffix = {
|
||||
|
@ -19,8 +19,8 @@ let
|
|||
src = fetchurl {
|
||||
url = "https://github.com/trezor/${pname}/releases/download/v${version}/Trezor-Suite-${version}-${suffix}.AppImage";
|
||||
sha512 = { # curl -Lfs https://github.com/trezor/trezor-suite/releases/latest/download/latest-linux{-arm64,}.yml | grep ^sha512 | sed 's/: /-/'
|
||||
aarch64-linux = "sha512-fI0N1V+6SEZ9eNf+G/w5RcY8oeA5MsVzJnpnWoMzkkHZh5jVHgNbcqVgSPbzvQ/WZNv1MX37KETcxmDwRx//yw==";
|
||||
x86_64-linux = "sha512-zN89Qw6fQh27EaN9ARNwqhiBaiNoMic6Aq2UPG0OSUtOjEOdkGJ2pbR8MgWVccSgRH8ZmAAXZ0snVKfZWHbCjA==";
|
||||
aarch64-linux = "sha512-cZZFc1Ij7KrF0Kc1Xmtg/73ASv56a6SFWFy3Miwl3P5u8ieZGXVDlSQyv84CsuYMbE0Vga3X0XS/BiF7nKNcnA==";
|
||||
x86_64-linux = "sha512-X/IEZGs43riUn6vC5bPyj4DS/VK+s7C10PbBnvwieaclBSVJyQ8H8hbn4eKi0kMVNEl0A9o8W09gXBxAhdNR9g==";
|
||||
}.${stdenv.hostPlatform.system} or (throw "Unsupported system: ${stdenv.hostPlatform.system}");
|
||||
};
|
||||
|
||||
|
|
|
@@ -4,12 +4,12 @@ with lib;

stdenv.mkDerivation rec {
  pname = "kakoune-unwrapped";
  version = "2021.11.08";
  version = "2022.10.31";
  src = fetchFromGitHub {
    repo = "kakoune";
    owner = "mawww";
    rev = "v${version}";
    sha256 = "sha256-lMGMt0H1G8EN/7zSVSvU1yU4BYPnSF1vWmozLdrRTQk=";
    sha256 = "sha256-vmzGaGl0KSjseSD/s6DXxvMUTmAle+Iv/ZP9llaFnXk=";
  };
  makeFlags = [ "debug=no" "PREFIX=${placeholder "out"}" ];
@@ -27,7 +27,7 @@ updateNightly() {
  OLD_NIGHTLY_VERSION="$(getLocalVersion "citra-nightly")"
  OLD_NIGHTLY_HASH="$(getLocalHash "citra-nightly")"

  NEW_NIGHTLY_VERSION="$(curl -s ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \
  NEW_NIGHTLY_VERSION="$(curl -s ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} \
    "https://api.github.com/repos/citra-emu/citra-nightly/releases?per_page=1" | jq -r '.[0].name' | cut -d"-" -f2 | cut -d" " -f2)"

  if [[ "${OLD_NIGHTLY_VERSION}" = "${NEW_NIGHTLY_VERSION}" ]]; then

@@ -52,7 +52,7 @@ updateCanary() {
  OLD_CANARY_VERSION="$(getLocalVersion "citra-canary")"
  OLD_CANARY_HASH="$(getLocalHash "citra-canary")"

  NEW_CANARY_VERSION="$(curl -s ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \
  NEW_CANARY_VERSION="$(curl -s ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} \
    "https://api.github.com/repos/citra-emu/citra-canary/releases?per_page=1" | jq -r '.[0].name' | cut -d"-" -f2 | cut -d" " -f1)"

  if [[ "${OLD_CANARY_VERSION}" = "${NEW_CANARY_VERSION}" ]]; then
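The `${GITHUB_TOKEN:+...}` rewrite in this and the yuzu update script fixes argument splitting: in the old spelling the backslash-escaped quotes become literal characters and `-u` is fused with the token into a single word, so curl sees one malformed argument instead of a flag plus its value. A small bash sketch of the difference (the token value is made up):

```shell
#!/usr/bin/env bash
GITHUB_TOKEN=abc123

# Old form: the whole expansion word is quoted, so it stays ONE word
# containing literal double quotes: `-u ":abc123"`.
old=( ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} )

# New form: `-u` is unquoted (so it word-splits off), while the token
# part stays quoted, yielding TWO clean words: `-u` and `:abc123`.
new=( ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} )

echo "old: ${#old[@]} word(s)"
echo "new: ${#new[@]} word(s)"
```

With `GITHUB_TOKEN` unset, both forms expand to nothing, which is why `:+` is used here in the first place.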
@@ -45,12 +45,12 @@ let
in
stdenv.mkDerivation rec {
  pname = "retroarch-bare";
  version = "1.13.0";
  version = "1.14.0";

  src = fetchFromGitHub {
    owner = "libretro";
    repo = "RetroArch";
    hash = "sha256-eEe0mM9gUWgEzoRH1Iuet20US9eXNtCVSBi2kX1njVw=";
    hash = "sha256-oEENGehbzjJq1kTiz6gkXHMMe/rXjWPxxMoe4RqdqK4=";
    rev = "v${version}";
  };
@@ -5,12 +5,12 @@

stdenvNoCC.mkDerivation rec {
  pname = "libretro-core-info";
  version = "1.13.0";
  version = "1.14.0";

  src = fetchFromGitHub {
    owner = "libretro";
    repo = "libretro-core-info";
    hash = "sha256-rTq2h+IGJduBkP4qCACmm3T2PvbZ0mOmwD1jLkJ2j/Q=";
    hash = "sha256-3nw8jUxBQJxiKlWS6OjTjwUYWKx3r2E7eHmbj4naWrk=";
    rev = "v${version}";
  };
@@ -21,10 +21,10 @@ updateBranch() {
  oldHash="$(nix eval --raw -f "./default.nix" "$attribute".src.drvAttrs.outputHash)"

  if [[ "$branch" = "mainline" ]]; then
    newVersion="$(curl -s ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} "https://api.github.com/repos/yuzu-emu/yuzu-mainline/releases?per_page=1" \
    newVersion="$(curl -s ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} "https://api.github.com/repos/yuzu-emu/yuzu-mainline/releases?per_page=1" \
      | jq -r '.[0].name' | cut -d" " -f2)"
  elif [[ "$branch" = "early-access" ]]; then
    newVersion="$(curl -s ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} "https://api.github.com/repos/pineappleEA/pineapple-src/releases?per_page=2" \
    newVersion="$(curl -s ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} "https://api.github.com/repos/pineappleEA/pineapple-src/releases?per_page=2" \
      | jq -r '.[].tag_name' | grep '^EA-[0-9]*' | head -n1 | cut -d"-" -f2 | cut -d" " -f1)"
  fi

@@ -50,13 +50,13 @@ updateBranch() {

updateCompatibilityList() {
  local latestRevision oldUrl newUrl oldHash newHash oldDate newDate
  latestRevision="$(curl -s ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} "https://api.github.com/repos/flathub/org.yuzu_emu.yuzu/commits/master" | jq -r '.sha')"
  latestRevision="$(curl -s ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} "https://api.github.com/repos/flathub/org.yuzu_emu.yuzu/commits/master" | jq -r '.sha')"

  oldUrl="$(sed -n '/yuzu-compat-list/,/url/p' "$DEFAULT_NIX" | tail -n1 | cut -d'"' -f2)"
  newUrl="https://raw.githubusercontent.com/flathub/org.yuzu_emu.yuzu/${latestRevision}/compatibility_list.json"

  oldDate="$(sed -n '/last updated.*/p' "$DEFAULT_NIX" | rev | cut -d' ' -f1 | rev)"
  newDate="$(curl -s ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} "https://api.github.com/repos/flathub/org.yuzu_emu.yuzu/commits/${latestRevision}" \
  newDate="$(curl -s ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} "https://api.github.com/repos/flathub/org.yuzu_emu.yuzu/commits/${latestRevision}" \
    | jq -r '.commit.committer.date' | cut -d'T' -f1)"

  oldHash="$(sed -n '/yuzu-compat-list/,/sha256/p' "$DEFAULT_NIX" | tail -n1 | cut -d'"' -f2)"
@@ -44,6 +44,13 @@ stdenv.mkDerivation rec {
    hash = "sha256-9cpOwio69GvzVeDq79BSmJgds9WU5kA/KUlAkHcpN5c=";
  };

  outputs = [
    "out"
    "dev"
    "lib"
    "man"
  ];

  nativeBuildInputs = [
    autoreconfHook
    autoconf-archive
|
|||
{ lib, stdenv, fetchurl, Xaw3d, ghostscriptX, perl, pkg-config, libiconv }:
|
||||
{ lib, stdenv, fetchurl, libXext, Xaw3d, ghostscriptX, perl, pkg-config, libiconv }:
|
||||
|
||||
stdenv.mkDerivation rec {
|
||||
pname = "gv";
|
||||
|
@ -15,6 +15,7 @@ stdenv.mkDerivation rec {
|
|||
|
||||
nativeBuildInputs = [ pkg-config ];
|
||||
buildInputs = [
|
||||
libXext
|
||||
Xaw3d
|
||||
ghostscriptX
|
||||
perl
|
||||
|
|
|
@ -101,4 +101,4 @@ DEPENDENCIES
|
|||
jemoji
|
||||
|
||||
BUNDLED WITH
|
||||
2.3.25
|
||||
2.3.9
|
||||
|
|
|
@ -11,6 +11,7 @@ gem "jemoji"
|
|||
# Optional dependencies:
|
||||
gem "jekyll-coffeescript"
|
||||
#gem "jekyll-docs"
|
||||
gem "jekyll-favicon"
|
||||
gem "jekyll-feed", "~> 0.9"
|
||||
gem "jekyll-gist"
|
||||
gem "jekyll-paginate"
|
||||
|
|
|
@@ -58,6 +58,10 @@ GEM
    jekyll-coffeescript (2.0.0)
      coffee-script (~> 2.2)
      coffee-script-source (~> 1.12)
    jekyll-favicon (1.1.0)
      jekyll (>= 3.0, < 5.0)
      mini_magick (~> 4.11)
      rexml (~> 3.2, >= 3.2.5)
    jekyll-feed (0.17.0)
      jekyll (>= 3.7, < 5.0)
    jekyll-gist (1.5.0)

@@ -100,6 +104,7 @@ GEM
    mime-types (3.4.1)
      mime-types-data (~> 3.2015)
    mime-types-data (3.2022.0105)
    mini_magick (4.11.0)
    mini_portile2 (2.8.0)
    minitest (5.16.3)
    nokogiri (1.13.9)

@@ -146,6 +151,7 @@ DEPENDENCIES
  jekyll
  jekyll-avatar
  jekyll-coffeescript
  jekyll-favicon
  jekyll-feed (~> 0.9)
  jekyll-gist
  jekyll-mentions

@@ -163,4 +169,4 @@ DEPENDENCIES
  yajl-ruby (~> 1.4)

BUNDLED WITH
   2.3.25
   2.3.9
|
|||
};
|
||||
version = "2.0.0";
|
||||
};
|
||||
jekyll-favicon = {
|
||||
dependencies = ["jekyll" "mini_magick" "rexml"];
|
||||
groups = ["default"];
|
||||
platforms = [];
|
||||
source = {
|
||||
remotes = ["https://rubygems.org"];
|
||||
sha256 = "0dyksm4i11n0qshd7wh6dvk8d0fc70dd32ir2dxs6igxq0gd6hi1";
|
||||
type = "gem";
|
||||
};
|
||||
version = "1.1.0";
|
||||
};
|
||||
jekyll-feed = {
|
||||
dependencies = ["jekyll"];
|
||||
groups = ["default"];
|
||||
|
@ -526,6 +537,16 @@
|
|||
};
|
||||
version = "3.2022.0105";
|
||||
};
|
||||
mini_magick = {
|
||||
groups = ["default"];
|
||||
platforms = [];
|
||||
source = {
|
||||
remotes = ["https://rubygems.org"];
|
||||
sha256 = "1aj604x11d9pksbljh0l38f70b558rhdgji1s9i763hiagvvx2hs";
|
||||
type = "gem";
|
||||
};
|
||||
version = "4.11.0";
|
||||
};
|
||||
mini_portile2 = {
|
||||
groups = ["default"];
|
||||
platforms = [];
|
||||
|
|
|
@@ -22,16 +22,16 @@
, flake8

# python dependencies
, certifi
, dbus-python
, distro
, evdev
, lxml
, pillow
, pygobject3
, pypresence
, pyyaml
, requests
, keyring
, python-magic

# commands that lutris needs
, xrandr

@@ -84,13 +84,13 @@ let
in
buildPythonApplication rec {
  pname = "lutris-original";
  version = "0.5.11";
  version = "0.5.12";

  src = fetchFromGitHub {
    owner = "lutris";
    repo = "lutris";
    rev = "refs/tags/v${version}";
    sha256 = "sha256-D2qMKYmi5TC8jEAECcz2V0rUrmp5kjXJ5qyW6C4re3w=";
    sha256 = "sha256-rsiXm7L/M85ot6NrTyy//lMRFlLPJYve9y6Erg9Ugxg=";
  };

  nativeBuildInputs = [ wrapGAppsHook ];

@@ -104,20 +104,20 @@ buildPythonApplication rec {
    libnotify
    pango
    webkitgtk
    python-magic
  ] ++ gstDeps;

  # See `install_requires` in https://github.com/lutris/lutris/blob/master/setup.py
  propagatedBuildInputs = [
    evdev
    distro
    lxml
    pyyaml
    pygobject3
    requests
    pillow
    certifi
    dbus-python
    keyring
    python-magic
    distro
    evdev
    lxml
    pillow
    pygobject3
    pypresence
    pyyaml
    requests
  ];

  postPatch = ''
@@ -2,15 +2,19 @@
, stdenv
, fetchFromGitea
, alsa-lib
, bison
, fcft
, flex
, json_c
, libmpdclient
, libxcb
, libyaml
, meson
, ninja
, pipewire
, pixman
, pkg-config
, pulseaudio
, scdoc
, tllist
, udev

@@ -26,26 +30,27 @@
}:

let
# Courtesy of sternenseemann and FRidh
mesonFeatureFlag = feature: flag:
"-D${feature}=${if flag then "enabled" else "disabled"}";
inherit (lib) mesonEnable;
in
stdenv.mkDerivation rec {
assert (x11Support || waylandSupport);
stdenv.mkDerivation (finalAttrs: {
pname = "yambar";
version = "1.8.0";
version = "1.9.0";

src = fetchFromGitea {
domain = "codeberg.org";
owner = "dnkl";
repo = "yambar";
rev = version;
hash = "sha256-zXhIXT3JrVSllnYheDU2KK3NE2VYa+xuKufIXjdMFjU=";
rev = finalAttrs.version;
hash = "sha256-0bgRnZYLGWJ9PE62i04hPBcgzWyd30DK7AUuejSgta4=";
};

nativeBuildInputs = [
pkg-config
bison
flex
meson
ninja
pkg-config
scdoc
wayland-scanner
];

@@ -56,7 +61,9 @@ stdenv.mkDerivation rec {
json_c
libmpdclient
libyaml
pipewire
pixman
pulseaudio
tllist
udev
] ++ lib.optionals (waylandSupport) [

@@ -72,13 +79,13 @@ stdenv.mkDerivation rec {
mesonBuildType = "release";

mesonFlags = [
(mesonFeatureFlag "backend-x11" x11Support)
(mesonFeatureFlag "backend-wayland" waylandSupport)
(mesonEnable "backend-x11" x11Support)
(mesonEnable "backend-wayland" waylandSupport)
];

meta = with lib; {
homepage = "https://codeberg.org/dnkl/yambar";
changelog = "https://codeberg.org/dnkl/yambar/releases/tag/${version}";
changelog = "https://codeberg.org/dnkl/yambar/releases/tag/${finalAttrs.version}";
description = "Modular status panel for X11 and Wayland";
longDescription = ''
yambar is a lightweight and configurable status panel (bar, for short) for

@@ -107,6 +114,6 @@ stdenv.mkDerivation rec {
'';
license = licenses.mit;
maintainers = with maintainers; [ AndersonTorres ];
platforms = with platforms; unix;
platforms = platforms.linux;
};
}
})

@@ -1,8 +1,8 @@
{
"stable": {
"version": "108.0.5359.98",
"sha256": "07jnhd5y7k4zp2ipz052isw7llagxn8l8rbz8x3jkjz3f5wi7dk0",
"sha256bin64": "1hx49932g8abnb5f3a4ly7kjbrkh5bs040dh96zpxvfqx7dn6vrs",
"version": "108.0.5359.124",
"sha256": "0x9ac6m4xdccjdrk2bmq4y7bhfpgf2dv0q7lsbbsa50vlv1gm3fl",
"sha256bin64": "00c11svz9dzkg57484jg7c558l0jz8jbgi5zyjs1w7xp24vpnnpg",
"deps": {
"gn": {
"version": "2022-10-05",

@@ -3,10 +3,10 @@
rec {
firefox = buildMozillaMach rec {
pname = "firefox";
version = "108.0";
version = "108.0.1";
src = fetchurl {
url = "mirror://mozilla/firefox/releases/${version}/source/firefox-${version}.source.tar.xz";
sha512 = "fa800f62cca395a51b9a04373a27be48fc3860208e34ecf74d908127638d1eb8c41cf9898be6896777d408127d5c4b7104d9ee89c97da923b2dc6ea32186187e";
sha512 = "e6219ed6324422ec293ed96868738e056582bb9f7fb82e59362541f3465c6ebca806d26ecd801156b074c3675bd5a22507b1f1fa53eebf82b7dd35f2b1ff0625";
};

meta = {

@@ -2,13 +2,13 @@

buildGoModule rec {
pname = "argo-rollouts";
version = "1.3.1";
version = "1.3.2";

src = fetchFromGitHub {
owner = "argoproj";
repo = "argo-rollouts";
rev = "v${version}";
sha256 = "sha256-qgOhiJdaxauHIoPsMfcdxwrKiv8QD/tFksCbk13Zpiw=";
sha256 = "sha256-hsUpZrtgjP6FaVhw0ijDTlvfz9Ok+A4nyAwi2VNxvEg=";
};

vendorSha256 = "sha256-gm96rQdQJGsIcxVgEI7sI7BvEETU/+HsQ6PnDjFXb/0=";

@@ -7,13 +7,13 @@ NIXPKGS_PATH="$(git rev-parse --show-toplevel)"
CMCTL_PATH="$( cd -- "$(dirname "$0")" >/dev/null 2>&1 ; pwd -P )"

OLD_VERSION="$(nix-instantiate --eval -E "with import $NIXPKGS_PATH {}; cmctl.version or (builtins.parseDrvName cmctl.name).version" | tr -d '"')"
LATEST_TAG="$(curl -s ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} "https://api.github.com/repos/cert-manager/cert-manager/releases" | jq '.[].tag_name' --raw-output | sed '/-/d' | sort --version-sort -r | head -n 1)"
LATEST_TAG="$(curl -s ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} "https://api.github.com/repos/cert-manager/cert-manager/releases" | jq '.[].tag_name' --raw-output | sed '/-/d' | sort --version-sort -r | head -n 1)"
LATEST_VERSION="${LATEST_TAG:1}"

if [ ! "$OLD_VERSION" = "$LATEST_VERSION" ]; then
SHA256=$(nix-prefetch-url --quiet --unpack https://github.com/cert-manager/cert-manager/archive/refs/tags/${LATEST_TAG}.tar.gz)
TAG_SHA=$(curl -s ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} "https://api.github.com/repos/cert-manager/cert-manager/git/ref/tags/${LATEST_TAG}" | jq -r '.object.sha')
TAG_COMMIT_SHA=$(curl -s ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} "https://api.github.com/repos/cert-manager/cert-manager/git/tags/${TAG_SHA}" | jq '.object.sha' --raw-output)
TAG_SHA=$(curl -s ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} "https://api.github.com/repos/cert-manager/cert-manager/git/ref/tags/${LATEST_TAG}" | jq -r '.object.sha')
TAG_COMMIT_SHA=$(curl -s ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} "https://api.github.com/repos/cert-manager/cert-manager/git/tags/${TAG_SHA}" | jq '.object.sha' --raw-output)

setKV () {
sed -i "s|$1 = \".*\"|$1 = \"${2:-}\"|" "${CMCTL_PATH}/default.nix"

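The quoting change these update scripts receive, from `${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""}` to `${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"}`, matters because the old form expands to a single word containing literal quote characters, so curl never sees a separate `-u` flag and its value. A minimal sketch of the difference (the token value here is made up for illustration):

```shell
# Sketch of the ${VAR:+...} quoting fix; abc123 is a made-up token.
GITHUB_TOKEN=abc123

# Old form: the whole expansion is one word, and the escaped quotes are
# literal characters, so curl would receive `-u ":abc123"` as a single
# argument instead of a flag plus value.
old=$(printf '[%s]' ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""})
echo "$old"    # [-u ":abc123"]

# New form: two separate words, the flag and its value, as curl expects;
# the inner quotes still protect the token from field splitting.
new=$(printf '[%s]' ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"})
echo "$new"    # [-u][:abc123]

# With GITHUB_TOKEN unset or empty, both forms expand to nothing,
# so unauthenticated use keeps working.
```

In both forms the `:+` alternate value disappears entirely when the variable is unset, which is why the scripts can run with or without a token.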
@@ -13,7 +13,7 @@ NIXPKGS_CRC_FOLDER=$(
cd ${NIXPKGS_CRC_FOLDER}

LATEST_TAG_RAWFILE=${WORKDIR}/latest_tag.json
curl --silent ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \
curl --silent ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} \
https://api.github.com/repos/code-ready/crc/releases >${LATEST_TAG_RAWFILE}

LATEST_TAG_NAME=$(jq 'map(.tag_name)' ${LATEST_TAG_RAWFILE} |

@@ -21,7 +21,7 @@ LATEST_TAG_NAME=$(jq 'map(.tag_name)' ${LATEST_TAG_RAWFILE} |

CRC_VERSION=$(echo ${LATEST_TAG_NAME} | sed 's/^v//')

CRC_COMMIT=$(curl --silent ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \
CRC_COMMIT=$(curl --silent ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} \
https://api.github.com/repos/code-ready/crc/tags |
jq -r "map(select(.name == \"${LATEST_TAG_NAME}\")) | .[0] | .commit.sha")

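These update scripts pick the newest stable release by filtering the GitHub tag list: drop anything containing a `-` (pre-releases like `-rc.1`), version-sort descending, and take the first entry. The pipeline can be sketched in isolation; the tag values here are made up:

```shell
# Sketch of the latest-stable-tag selection used by the update scripts.
# The tag names are invented for illustration; sed '/-/d' drops
# pre-release tags, and sort --version-sort orders by version number
# (so v1.3.10 would correctly sort above v1.3.2, unlike plain sort).
printf '%s\n' v1.3.2 v1.4.0-rc.1 v1.3.1 \
  | sed '/-/d' \
  | sort --version-sort -r \
  | head -n 1
# -> v1.3.2  (v1.4.0-rc.1 is excluded as a pre-release)
```

Stripping the leading `v` afterwards (as the scripts do with `sed 's/^v//'` or `${LATEST_TAG:1}`) yields the bare version string for the Nix expression.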
@@ -11,7 +11,7 @@ NIXPKGS_K3S_PATH=$(cd $(dirname ${BASH_SOURCE[0]}); pwd -P)/
cd ${NIXPKGS_K3S_PATH}

LATEST_TAG_RAWFILE=${WORKDIR}/latest_tag.json
curl --silent ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \
curl --silent ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} \
https://api.github.com/repos/k3s-io/k3s/releases > ${LATEST_TAG_RAWFILE}

LATEST_TAG_NAME=$(jq 'map(.tag_name)' ${LATEST_TAG_RAWFILE} | \

@@ -19,7 +19,7 @@ LATEST_TAG_NAME=$(jq 'map(.tag_name)' ${LATEST_TAG_RAWFILE} | \

K3S_VERSION=$(echo ${LATEST_TAG_NAME} | sed 's/^v//')

K3S_COMMIT=$(curl --silent ${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \
K3S_COMMIT=$(curl --silent ${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} \
https://api.github.com/repos/k3s-io/k3s/tags \
| jq -r "map(select(.name == \"${LATEST_TAG_NAME}\")) | .[0] | .commit.sha")

@@ -2,13 +2,13 @@

buildGoModule rec {
pname = "kubergrunt";
version = "0.9.3";
version = "0.10.0";

src = fetchFromGitHub {
owner = "gruntwork-io";
repo = "kubergrunt";
rev = "v${version}";
sha256 = "sha256-nbpRdAkctLiG/hP6vhfEimplAzzj70d5nnaFcJ1NykY=";
sha256 = "sha256-HJZrE0fHlyOTQF9EqdrtQNmaHlrMA2RwNg4P7B2lYI0=";
};

vendorSha256 = "sha256-9hWX6INN5HWXyeFQRjkqr+BsGv56lInVYacvT6Imahw=";

@@ -3,14 +3,12 @@
, buildPythonApplication
, fetchFromGitHub
, python3
, pythonOlder
, html5lib
, invoke
, openpyxl
, poetry-core
, tidylib
, beautifulsoup4
, dataclasses
, datauri
, docutils
, jinja2

@@ -73,8 +71,6 @@ buildPythonApplication rec {
textx
xlrd
XlsxWriter
] ++ lib.optionals (pythonOlder "3.7") [
dataclasses
];

checkInputs = [

@@ -2,11 +2,10 @@
, opencv3, gtest, blas, gomp, llvmPackages, perl
, cudaSupport ? config.cudaSupport or false, cudaPackages ? {}, nvidia_x11
, cudnnSupport ? cudaSupport
, cudaCapabilities ? [ "3.7" "5.0" "6.0" "7.0" "7.5" "8.0" "8.6" ]
}:

let
inherit (cudaPackages) cudatoolkit cudnn;
inherit (cudaPackages) cudatoolkit cudaFlags cudnn;
in

assert cudnnSupport -> cudaSupport;

@@ -51,7 +50,7 @@ stdenv.mkDerivation rec {
"-DUSE_OLDCMAKECUDA=ON" # see https://github.com/apache/incubator-mxnet/issues/10743
"-DCUDA_ARCH_NAME=All"
"-DCUDA_HOST_COMPILER=${cudatoolkit.cc}/bin/cc"
"-DMXNET_CUDA_ARCH=${lib.concatStringsSep ";" cudaCapabilities}"
"-DMXNET_CUDA_ARCH=${cudaFlags.cudaCapabilitiesSemiColonString}"
] else [ "-DUSE_CUDA=OFF" ])
++ lib.optional (!cudnnSupport) "-DUSE_CUDNN=OFF";

@@ -2,16 +2,16 @@

buildGoModule rec {
pname = "lefthook";
version = "1.2.4";
version = "1.2.6";

src = fetchFromGitHub {
rev = "v${version}";
owner = "evilmartians";
repo = "lefthook";
sha256 = "sha256-Z6j/Y8b9lq2nYS5Ki8iJoDsG3l5M6RylfDqQL7WrwNg=";
sha256 = "sha256-M15ESB8JCSryD6/+6N2EA6NUzLI4cwgAJUQC9UDNJrM=";
};

vendorSha256 = "sha256-sBcgt2YsV9RQhSjPN6N54tRk7nNvcOVhPEsEP+0Dtco=";
vendorSha256 = "sha256-KNegRQhVZMNDgcJZOgEei3oviDPM/RFwZbpoh38pxBw=";

nativeBuildInputs = [ installShellFiles ];

@@ -1,7 +1,7 @@
{ lib, buildPythonApplication, fetchPypi, pythonOlder
, installShellFiles
, mock, pytest, nose
, pyyaml, backports_ssl_match_hostname, colorama, docopt
, pyyaml, colorama, docopt
, dockerpty, docker, jsonschema, requests
, six, texttable, websocket-client, cached-property
, enum34, functools32, paramiko, distro, python-dotenv

@@ -24,7 +24,7 @@ buildPythonApplication rec {
pyyaml colorama dockerpty docker
jsonschema requests six texttable websocket-client
docopt cached-property paramiko distro python-dotenv
] ++ lib.optional (pythonOlder "3.7") backports_ssl_match_hostname
]
++ lib.optional (pythonOlder "3.4") enum34
++ lib.optional (pythonOlder "3.2") functools32;

@@ -17,19 +17,19 @@

stdenv.mkDerivation rec {
pname = "pods";
version = "1.0.0-beta.9";
version = "1.0.0-rc.2";

src = fetchFromGitHub {
owner = "marhkb";
repo = pname;
rev = "v${version}";
sha256 = "sha256-cW6n00EPe7eFuqT2Vk27Ax0fxjz9kWSlYuS2oIj0mXY=";
sha256 = "sha256-fyhp0Qumku2EO+5+AaWBLp6xG9mpfNuxhr/PoLca1a4=";
};

cargoDeps = rustPlatform.fetchCargoTarball {
inherit src;
name = "${pname}-${version}";
sha256 = "sha256-y0njqlzAx1M7iC8bZrKlKACSiYnSRaHOrcAxs3bFF30=";
sha256 = "sha256-v6ZGDd1mAxb55JIijJHlthrTta2PwZMRa8MqVnJMyzQ=";
};

nativeBuildInputs = [

@@ -1,371 +0,0 @@
From 8ab70b8958a8f9cb9bd316eecd3ccbcf05c06614 Mon Sep 17 00:00:00 2001
From: Linus Heckemann <git@sphalerite.org>
Date: Tue, 4 Oct 2022 12:41:21 +0200
Subject: [PATCH] 9pfs: use GHashTable for fid table
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The previous implementation would iterate over the fid table for
lookup operations, resulting in an operation with O(n) complexity on
the number of open files and poor cache locality -- for every open,
stat, read, write, etc operation.

This change uses a hashtable for this instead, significantly improving
the performance of the 9p filesystem. The runtime of NixOS's simple
installer test, which copies ~122k files totalling ~1.8GiB from 9p,
decreased by a factor of about 10.

Signed-off-by: Linus Heckemann <git@sphalerite.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
[CS: - Retain BUG_ON(f->clunked) in get_fid().
- Add TODO comment in clunk_fid(). ]
Message-Id: <20221004104121.713689-1-git@sphalerite.org>
[CS: - Drop unnecessary goto and out: label. ]
Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
---
hw/9pfs/9p.c | 194 +++++++++++++++++++++++++++++----------------------
hw/9pfs/9p.h | 2 +-
2 files changed, 112 insertions(+), 84 deletions(-)

diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index aebadeaa03..9bf13133e5 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -256,7 +256,8 @@ static size_t v9fs_string_size(V9fsString *str)
}

/*
- * returns 0 if fid got re-opened, 1 if not, < 0 on error */
+ * returns 0 if fid got re-opened, 1 if not, < 0 on error
+ */
static int coroutine_fn v9fs_reopen_fid(V9fsPDU *pdu, V9fsFidState *f)
{
int err = 1;
@@ -282,33 +283,32 @@ static V9fsFidState *coroutine_fn get_fid(V9fsPDU *pdu, int32_t fid)
V9fsFidState *f;
V9fsState *s = pdu->s;

- QSIMPLEQ_FOREACH(f, &s->fid_list, next) {
+ f = g_hash_table_lookup(s->fids, GINT_TO_POINTER(fid));
+ if (f) {
BUG_ON(f->clunked);
- if (f->fid == fid) {
- /*
- * Update the fid ref upfront so that
- * we don't get reclaimed when we yield
- * in open later.
- */
- f->ref++;
- /*
- * check whether we need to reopen the
- * file. We might have closed the fd
- * while trying to free up some file
- * descriptors.
- */
- err = v9fs_reopen_fid(pdu, f);
- if (err < 0) {
- f->ref--;
- return NULL;
- }
- /*
- * Mark the fid as referenced so that the LRU
- * reclaim won't close the file descriptor
- */
- f->flags |= FID_REFERENCED;
- return f;
+ /*
+ * Update the fid ref upfront so that
+ * we don't get reclaimed when we yield
+ * in open later.
+ */
+ f->ref++;
+ /*
+ * check whether we need to reopen the
+ * file. We might have closed the fd
+ * while trying to free up some file
+ * descriptors.
+ */
+ err = v9fs_reopen_fid(pdu, f);
+ if (err < 0) {
+ f->ref--;
+ return NULL;
}
+ /*
+ * Mark the fid as referenced so that the LRU
+ * reclaim won't close the file descriptor
+ */
+ f->flags |= FID_REFERENCED;
+ return f;
}
return NULL;
}
@@ -317,12 +317,11 @@ static V9fsFidState *alloc_fid(V9fsState *s, int32_t fid)
{
V9fsFidState *f;

- QSIMPLEQ_FOREACH(f, &s->fid_list, next) {
+ f = g_hash_table_lookup(s->fids, GINT_TO_POINTER(fid));
+ if (f) {
/* If fid is already there return NULL */
BUG_ON(f->clunked);
- if (f->fid == fid) {
- return NULL;
- }
+ return NULL;
}
f = g_new0(V9fsFidState, 1);
f->fid = fid;
@@ -333,7 +332,7 @@ static V9fsFidState *alloc_fid(V9fsState *s, int32_t fid)
* reclaim won't close the file descriptor
*/
f->flags |= FID_REFERENCED;
- QSIMPLEQ_INSERT_TAIL(&s->fid_list, f, next);
+ g_hash_table_insert(s->fids, GINT_TO_POINTER(fid), f);

v9fs_readdir_init(s->proto_version, &f->fs.dir);
v9fs_readdir_init(s->proto_version, &f->fs_reclaim.dir);
@@ -424,12 +423,12 @@ static V9fsFidState *clunk_fid(V9fsState *s, int32_t fid)
{
V9fsFidState *fidp;

- QSIMPLEQ_FOREACH(fidp, &s->fid_list, next) {
- if (fidp->fid == fid) {
- QSIMPLEQ_REMOVE(&s->fid_list, fidp, V9fsFidState, next);
- fidp->clunked = true;
- return fidp;
- }
+ /* TODO: Use g_hash_table_steal_extended() instead? */
+ fidp = g_hash_table_lookup(s->fids, GINT_TO_POINTER(fid));
+ if (fidp) {
+ g_hash_table_remove(s->fids, GINT_TO_POINTER(fid));
+ fidp->clunked = true;
+ return fidp;
}
return NULL;
}
@@ -439,10 +438,15 @@ void coroutine_fn v9fs_reclaim_fd(V9fsPDU *pdu)
int reclaim_count = 0;
V9fsState *s = pdu->s;
V9fsFidState *f;
+ GHashTableIter iter;
+ gpointer fid;
+
+ g_hash_table_iter_init(&iter, s->fids);
+
QSLIST_HEAD(, V9fsFidState) reclaim_list =
QSLIST_HEAD_INITIALIZER(reclaim_list);

- QSIMPLEQ_FOREACH(f, &s->fid_list, next) {
+ while (g_hash_table_iter_next(&iter, &fid, (gpointer *) &f)) {
/*
* Unlink fids cannot be reclaimed. Check
* for them and skip them. Also skip fids
@@ -514,72 +518,85 @@ void coroutine_fn v9fs_reclaim_fd(V9fsPDU *pdu)
}
}

+/*
+ * This is used when a path is removed from the directory tree. Any
+ * fids that still reference it must not be closed from then on, since
+ * they cannot be reopened.
+ */
static int coroutine_fn v9fs_mark_fids_unreclaim(V9fsPDU *pdu, V9fsPath *path)
{
- int err;
+ int err = 0;
V9fsState *s = pdu->s;
- V9fsFidState *fidp, *fidp_next;
+ V9fsFidState *fidp;
+ gpointer fid;
+ GHashTableIter iter;
+ /*
+ * The most common case is probably that we have exactly one
+ * fid for the given path, so preallocate exactly one.
+ */
+ g_autoptr(GArray) to_reopen = g_array_sized_new(FALSE, FALSE,
+ sizeof(V9fsFidState *), 1);
+ gint i;

- fidp = QSIMPLEQ_FIRST(&s->fid_list);
- if (!fidp) {
- return 0;
- }
+ g_hash_table_iter_init(&iter, s->fids);

/*
- * v9fs_reopen_fid() can yield : a reference on the fid must be held
- * to ensure its pointer remains valid and we can safely pass it to
- * QSIMPLEQ_NEXT(). The corresponding put_fid() can also yield so
- * we must keep a reference on the next fid as well. So the logic here
- * is to get a reference on a fid and only put it back during the next
- * iteration after we could get a reference on the next fid. Start with
- * the first one.
+ * We iterate over the fid table looking for the entries we need
+ * to reopen, and store them in to_reopen. This is because
+ * v9fs_reopen_fid() and put_fid() yield. This allows the fid table
+ * to be modified in the meantime, invalidating our iterator.
*/
- for (fidp->ref++; fidp; fidp = fidp_next) {
+ while (g_hash_table_iter_next(&iter, &fid, (gpointer *) &fidp)) {
if (fidp->path.size == path->size &&
!memcmp(fidp->path.data, path->data, path->size)) {
- /* Mark the fid non reclaimable. */
- fidp->flags |= FID_NON_RECLAIMABLE;
-
- /* reopen the file/dir if already closed */
- err = v9fs_reopen_fid(pdu, fidp);
- if (err < 0) {
- put_fid(pdu, fidp);
- return err;
- }
- }
-
- fidp_next = QSIMPLEQ_NEXT(fidp, next);
-
- if (fidp_next) {
/*
- * Ensure the next fid survives a potential clunk request during
- * put_fid() below and v9fs_reopen_fid() in the next iteration.
+ * Ensure the fid survives a potential clunk request during
+ * v9fs_reopen_fid or put_fid.
*/
- fidp_next->ref++;
+ fidp->ref++;
+ fidp->flags |= FID_NON_RECLAIMABLE;
+ g_array_append_val(to_reopen, fidp);
}
+ }

- /* We're done with this fid */
- put_fid(pdu, fidp);
+ for (i = 0; i < to_reopen->len; i++) {
+ fidp = g_array_index(to_reopen, V9fsFidState*, i);
+ /* reopen the file/dir if already closed */
+ err = v9fs_reopen_fid(pdu, fidp);
+ if (err < 0) {
+ break;
+ }
}

- return 0;
+ for (i = 0; i < to_reopen->len; i++) {
+ put_fid(pdu, g_array_index(to_reopen, V9fsFidState*, i));
+ }
+ return err;
}

static void coroutine_fn virtfs_reset(V9fsPDU *pdu)
{
V9fsState *s = pdu->s;
V9fsFidState *fidp;
+ GList *freeing;
+ /*
+ * Get a list of all the values (fid states) in the table, which
+ * we then...
+ */
+ g_autoptr(GList) fids = g_hash_table_get_values(s->fids);

- /* Free all fids */
- while (!QSIMPLEQ_EMPTY(&s->fid_list)) {
- /* Get fid */
- fidp = QSIMPLEQ_FIRST(&s->fid_list);
- fidp->ref++;
+ /* ... remove from the table, taking over ownership. */
+ g_hash_table_steal_all(s->fids);

- /* Clunk fid */
- QSIMPLEQ_REMOVE(&s->fid_list, fidp, V9fsFidState, next);
+ /*
+ * This allows us to release our references to them asynchronously without
+ * iterating over the hash table and risking iterator invalidation
+ * through concurrent modifications.
+ */
+ for (freeing = fids; freeing; freeing = freeing->next) {
+ fidp = freeing->data;
+ fidp->ref++;
fidp->clunked = true;
-
put_fid(pdu, fidp);
}
}
@@ -3205,6 +3222,8 @@ static int coroutine_fn v9fs_complete_rename(V9fsPDU *pdu, V9fsFidState *fidp,
V9fsFidState *tfidp;
V9fsState *s = pdu->s;
V9fsFidState *dirfidp = NULL;
+ GHashTableIter iter;
+ gpointer fid;

v9fs_path_init(&new_path);
if (newdirfid != -1) {
@@ -3238,11 +3257,13 @@ static int coroutine_fn v9fs_complete_rename(V9fsPDU *pdu, V9fsFidState *fidp,
if (err < 0) {
goto out;
}
+
/*
* Fixup fid's pointing to the old name to
* start pointing to the new name
*/
- QSIMPLEQ_FOREACH(tfidp, &s->fid_list, next) {
+ g_hash_table_iter_init(&iter, s->fids);
+ while (g_hash_table_iter_next(&iter, &fid, (gpointer *) &tfidp)) {
if (v9fs_path_is_ancestor(&fidp->path, &tfidp->path)) {
/* replace the name */
v9fs_fix_path(&tfidp->path, &new_path, strlen(fidp->path.data));
@@ -3320,6 +3341,8 @@ static int coroutine_fn v9fs_fix_fid_paths(V9fsPDU *pdu, V9fsPath *olddir,
V9fsPath oldpath, newpath;
V9fsState *s = pdu->s;
int err;
+ GHashTableIter iter;
+ gpointer fid;

v9fs_path_init(&oldpath);
v9fs_path_init(&newpath);
@@ -3336,7 +3359,8 @@ static int coroutine_fn v9fs_fix_fid_paths(V9fsPDU *pdu, V9fsPath *olddir,
* Fixup fid's pointing to the old name to
* start pointing to the new name
*/
- QSIMPLEQ_FOREACH(tfidp, &s->fid_list, next) {
+ g_hash_table_iter_init(&iter, s->fids);
+ while (g_hash_table_iter_next(&iter, &fid, (gpointer *) &tfidp)) {
if (v9fs_path_is_ancestor(&oldpath, &tfidp->path)) {
/* replace the name */
v9fs_fix_path(&tfidp->path, &newpath, strlen(oldpath.data));
@@ -4226,7 +4250,7 @@ int v9fs_device_realize_common(V9fsState *s, const V9fsTransport *t,
s->ctx.fmode = fse->fmode;
s->ctx.dmode = fse->dmode;

- QSIMPLEQ_INIT(&s->fid_list);
+ s->fids = g_hash_table_new(NULL, NULL);
qemu_co_rwlock_init(&s->rename_lock);

if (s->ops->init(&s->ctx, errp) < 0) {
@@ -4286,6 +4310,10 @@ void v9fs_device_unrealize_common(V9fsState *s)
if (s->ctx.fst) {
fsdev_throttle_cleanup(s->ctx.fst);
}
+ if (s->fids) {
+ g_hash_table_destroy(s->fids);
+ s->fids = NULL;
+ }
g_free(s->tag);
qp_table_destroy(&s->qpd_table);
qp_table_destroy(&s->qpp_table);
diff --git a/hw/9pfs/9p.h b/hw/9pfs/9p.h
index 994f952600..10fd2076c2 100644
--- a/hw/9pfs/9p.h
+++ b/hw/9pfs/9p.h
@@ -339,7 +339,7 @@ typedef struct {
struct V9fsState {
QLIST_HEAD(, V9fsPDU) free_list;
QLIST_HEAD(, V9fsPDU) active_list;
- QSIMPLEQ_HEAD(, V9fsFidState) fid_list;
+ GHashTable *fids;
FileOperations *ops;
FsContext ctx;
char *tag;
--
2.36.2

@@ -2,7 +2,7 @@
, perl, pixman, vde2, alsa-lib, texinfo, flex
, bison, lzo, snappy, libaio, libtasn1, gnutls, nettle, curl, ninja, meson, sigtool
, makeWrapper, runtimeShell, removeReferencesTo
, attr, libcap, libcap_ng, socat
, attr, libcap, libcap_ng, socat, libslirp
, CoreServices, Cocoa, Hypervisor, rez, setfile, vmnet
, guestAgentSupport ? with stdenv.hostPlatform; isLinux || isSunOS || isWindows
, numaSupport ? stdenv.isLinux && !stdenv.isAarch32, numactl

@@ -42,11 +42,11 @@ stdenv.mkDerivation rec {
+ lib.optionalString xenSupport "-xen"
+ lib.optionalString hostCpuOnly "-host-cpu-only"
+ lib.optionalString nixosTestRunner "-for-vm-tests";
version = "7.1.0";
version = "7.2.0";

src = fetchurl {
url = "https://download.qemu.org/qemu-${version}.tar.xz";
sha256 = "1rmvrgqjhrvcmchnz170dxvrrf14n6nm39y8ivrprmfydd9lwqx0";
sha256 = "sha256-W0nOJod0Ta1JSukKiYxSIEo0BuhNBySCoeG+hU7rIVc=";
};

depsBuildBuild = [ buildPackages.stdenv.cc ];

@@ -57,7 +57,7 @@ stdenv.mkDerivation rec {

buildInputs = [ zlib glib perl pixman
vde2 texinfo lzo snappy libtasn1
gnutls nettle curl
gnutls nettle curl libslirp
]
++ lib.optionals ncursesSupport [ ncurses ]
++ lib.optionals stdenv.isDarwin [ CoreServices Cocoa Hypervisor rez setfile vmnet ]

@@ -111,18 +111,12 @@ stdenv.mkDerivation rec {
sha256 = "sha256-oC+bRjEHixv1QEFO9XAm4HHOwoiT+NkhknKGPydnZ5E=";
revert = true;
})
./9pfs-use-GHashTable-for-fid-table.patch
(fetchpatch {
name = "CVE-2022-3165.patch";
url = "https://gitlab.com/qemu-project/qemu/-/commit/d307040b18bfcb1393b910f1bae753d5c12a4dc7.patch";
sha256 = "sha256-YPhm580lBNuAv7G1snYccKZ2V5ycdV8Ri8mTw5jjFBc=";
})
]
++ lib.optional nixosTestRunner ./force-uid0-on-9p.patch;

postPatch = ''
# Otherwise tries to ensure /var/run exists.
sed -i "/install_subdir('run', install_dir: get_option('localstatedir'))/d" \
sed -i "/install_emptydir(get_option('localstatedir') \/ 'run')/d" \
qga/meson.build
'';

@@ -1,14 +1,5 @@
From 756021d1e433925cf9a732d7ea67b01b0beb061c Mon Sep 17 00:00:00 2001
From: Will Cohen <willcohen@users.noreply.github.com>
Date: Tue, 29 Mar 2022 14:00:56 -0400
Subject: [PATCH] Revert "ui/cocoa: Add clipboard support"

This reverts commit 7e3e20d89129614f4a7b2451fe321cc6ccca3b76.
---
include/ui/clipboard.h | 2 +-
ui/clipboard.c | 2 +-
ui/cocoa.m | 123 -----------------------------------------
3 files changed, 2 insertions(+), 125 deletions(-)
Based on a reversion of upstream 7e3e20d89129614f4a7b2451fe321cc6ccca3b76,
adapted for 7.2.0

diff --git a/include/ui/clipboard.h b/include/ui/clipboard.h
index ce76aa451f..c4e1dc4ff4 100644

@@ -24,10 +15,10 @@ index ce76aa451f..c4e1dc4ff4 100644

G_DEFINE_AUTOPTR_CLEANUP_FUNC(QemuClipboardInfo, qemu_clipboard_info_unref)
diff --git a/ui/clipboard.c b/ui/clipboard.c
index 9079ef829b..6b9ed59e1b 100644
index 3d14bffaf8..2c3f4c3ba0 100644
--- a/ui/clipboard.c
+++ b/ui/clipboard.c
@@ -140,7 +140,7 @@ void qemu_clipboard_set_data(QemuClipboardPeer *peer,
@@ -154,7 +154,7 @@ void qemu_clipboard_set_data(QemuClipboardPeer *peer,
QemuClipboardInfo *info,
QemuClipboardType type,
uint32_t size,

@@ -37,7 +28,7 @@ index 9079ef829b..6b9ed59e1b 100644
{
if (!info ||
diff --git a/ui/cocoa.m b/ui/cocoa.m
index 5a8bd5dd84..79ed6d043f 100644
index 660d3e0935..0e6760c360 100644
--- a/ui/cocoa.m
+++ b/ui/cocoa.m
@@ -29,7 +29,6 @@

@@ -48,8 +39,8 @@ index 5a8bd5dd84..79ed6d043f 100644
#include "ui/console.h"
#include "ui/input.h"
#include "ui/kbd-state.h"
@@ -109,10 +108,6 @@ static void cocoa_switch(DisplayChangeListener *dcl,
static QemuSemaphore app_started_sem;
@@ -105,10 +104,6 @@ static void cocoa_switch(DisplayChangeListener *dcl,

static bool allow_events;

-static NSInteger cbchangecount = -1;

@@ -59,7 +50,7 @@ index 5a8bd5dd84..79ed6d043f 100644
// Utility functions to run specified code block with iothread lock held
typedef void (^CodeBlock)(void);
typedef bool (^BoolCodeBlock)(void);
@@ -1815,107 +1810,6 @@ static void addRemovableDevicesMenuItems(void)
@@ -1799,107 +1794,6 @@ static void addRemovableDevicesMenuItems(void)
qapi_free_BlockInfoList(pointerToFree);
}

@@ -167,15 +158,15 @@ index 5a8bd5dd84..79ed6d043f 100644
/*
* The startup process for the OSX/Cocoa UI is complicated, because
* OSX insists that the UI runs on the initial main thread, and so we
@@ -1950,7 +1844,6 @@ static void cocoa_clipboard_request(QemuClipboardInfo *info,
COCOA_DEBUG("Second thread: calling qemu_main()\n");
status = qemu_main(gArgc, gArgv, *_NSGetEnviron());
COCOA_DEBUG("Second thread: qemu_main() returned, exiting\n");
@@ -1922,7 +1816,6 @@ static void cocoa_clipboard_request(QemuClipboardInfo *info,
status = qemu_default_main();
qemu_mutex_unlock_iothread();
COCOA_DEBUG("Second thread: qemu_default_main() returned, exiting\n");
- [cbowner release];
exit(status);
}

@@ -2066,18 +1959,6 @@ static void cocoa_refresh(DisplayChangeListener *dcl)
@@ -2003,18 +1896,6 @@ static void cocoa_refresh(DisplayChangeListener *dcl)
[cocoaView setAbsoluteEnabled:YES];
});
}

@@ -194,7 +185,7 @@ index 5a8bd5dd84..79ed6d043f 100644
[pool release];
}

@@ -2117,10 +1998,6 @@ static void cocoa_display_init(DisplayState *ds, DisplayOptions *opts)
@@ -2071,12 +1952,6 @@ static void cocoa_display_init(DisplayState *ds, DisplayOptions *opts)

// register vga output callbacks
register_displaychangelistener(&dcl);

@@ -202,9 +193,8 @@ index 5a8bd5dd84..79ed6d043f 100644
- qemu_event_init(&cbevent, false);
- cbowner = [[QemuCocoaPasteboardTypeOwner alloc] init];
- qemu_clipboard_peer_register(&cbpeer);
-
- [pool release];
}

static QemuDisplay qemu_display_cocoa = {
--
2.35.1

43
pkgs/applications/window-managers/katriawm/default.nix
Normal file
@@ -0,0 +1,43 @@
{ lib
, stdenv
, fetchzip
, libX11
, libXft
, libXrandr
, pkg-config
}:

stdenv.mkDerivation (finalAttrs: {
  pname = "katriawm";
  version = "21.09";

  src = fetchzip {
    name = finalAttrs.pname + "-" + finalAttrs.version;
    url = "https://www.uninformativ.de/git/katriawm/archives/katriawm-v${finalAttrs.version}.tar.gz";
    hash = "sha256-xt0sWEwTcCs5cwoB3wVbYcyAKL0jx7KyeCefEBVFhH8=";
  };

  nativeBuildInputs = [
    pkg-config
  ];

  buildInputs = [
    libX11
    libXft
    libXrandr
  ];

  preBuild = ''
    cd src
  '';

  installFlags = [ "prefix=$(out)" ];

  meta = with lib; {
    homepage = "https://www.uninformativ.de/git/katriawm/file/README.html";
    description = "A non-reparenting, dynamic window manager with decorations";
    license = licenses.mit;
    maintainers = with maintainers; [ AndersonTorres ];
    inherit (libX11.meta) platforms;
  };
})
@@ -16,6 +16,11 @@ stdenv.mkDerivation rec {
    sha256 = "14swd0yqci8lxn259fkd9w92bgyf4rmjwgvgyqp78wlfix6ai4mv";
  };

  # error: 'PATH_MAX' undeclared
  postPatch = ''
    sed 1i'#include <linux/limits.h>' -i mod_notionflux/notionflux/notionflux.c
  '';

  nativeBuildInputs = [ pkg-config makeWrapper groff ];
  buildInputs = [ lua gettext which readline fontconfig libX11 libXext libSM
    libXinerama libXrandr libXft ];
@@ -1,27 +0,0 @@
{ lib, stdenv, fetchFromGitHub, asciidoc, libxcb, xcbutil, xcbutilkeysyms
, xcbutilwm
}:

stdenv.mkDerivation rec {
  pname = "sxhkd";
  version = "0.6.2";

  src = fetchFromGitHub {
    owner = "baskerville";
    repo = "sxhkd";
    rev = version;
    sha256 = "1winwzdy9yxvxnrv8gqpigl9y0c2px27mnms62bdilp4x6llrs9r";
  };

  buildInputs = [ asciidoc libxcb xcbutil xcbutilkeysyms xcbutilwm ];

  makeFlags = [ "PREFIX=$(out)" ];

  meta = with lib; {
    description = "Simple X hotkey daemon";
    homepage = "https://github.com/baskerville/sxhkd";
    license = licenses.bsd2;
    maintainers = with maintainers; [ vyp ];
    platforms = platforms.linux;
  };
}
@@ -167,7 +167,8 @@ stdenv.mkDerivation {

  # Create symlinks for rest of the binaries.
  + ''
  for binary in objdump objcopy size strings as ar nm gprof dwp c++filt addr2line ranlib readelf elfedit; do
  for binary in objdump objcopy size strings as ar nm gprof dwp c++filt addr2line \
      ranlib readelf elfedit dlltool dllwrap windmc windres; do
    if [ -e $ldPath/${targetPrefix}''${binary} ]; then
      ln -s $ldPath/${targetPrefix}''${binary} $out/bin/${targetPrefix}''${binary}
    fi
@@ -209,7 +209,7 @@ let
  flags+=($buildFlags "''${buildFlagsArray[@]}")
  flags+=(''${tags:+-tags=${lib.concatStringsSep "," tags}})
  flags+=(''${ldflags:+-ldflags="$ldflags"})
  flags+=("-v" "-p" "$NIX_BUILD_CORES")
  flags+=("-p" "$NIX_BUILD_CORES")

  if [ "$cmd" = "test" ]; then
    flags+=(-vet=off)
@@ -168,7 +168,7 @@ let
  flags+=($buildFlags "''${buildFlagsArray[@]}")
  flags+=(''${tags:+-tags=${lib.concatStringsSep "," tags}})
  flags+=(''${ldflags:+-ldflags="$ldflags"})
  flags+=("-v" "-p" "$NIX_BUILD_CORES")
  flags+=("-p" "$NIX_BUILD_CORES")

  if [ "$cmd" = "test" ]; then
    flags+=(-vet=off)
@@ -104,13 +104,13 @@ badPath() {
  # directory (including the build directory).
  test \
    "$p" != "/dev/null" -a \
    "${p#${NIX_STORE}}" = "$p" -a \
    "${p#${NIX_BUILD_TOP}}" = "$p" -a \
    "${p#/tmp}" = "$p" -a \
    "${p#${TMP:-/tmp}}" = "$p" -a \
    "${p#${TMPDIR:-/tmp}}" = "$p" -a \
    "${p#${TEMP:-/tmp}}" = "$p" -a \
    "${p#${TEMPDIR:-/tmp}}" = "$p"
    "${p#"${NIX_STORE}"}" = "$p" -a \
    "${p#"${NIX_BUILD_TOP}"}" = "$p" -a \
    "${p#/tmp}" = "$p" -a \
    "${p#"${TMP:-/tmp}"}" = "$p" -a \
    "${p#"${TMPDIR:-/tmp}"}" = "$p" -a \
    "${p#"${TEMP:-/tmp}"}" = "$p" -a \
    "${p#"${TEMPDIR:-/tmp}"}" = "$p"
}

expandResponseParams() {
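The cc-wrapper hunk above quotes the pattern side of Bash's `${var#pattern}` prefix removal, so the stripped prefix is compared literally rather than as a glob. A minimal sketch of the behavior `badPath` relies on (illustrative paths, not the wrapper's real inputs):

```shell
# ${p#"${prefix}"} removes $prefix from the front of $p only when $p
# actually starts with it; quoting the pattern disables glob expansion
# of any metacharacters inside the prefix.
NIX_STORE="/nix/store"

p="/nix/store/abc-foo"
echo "${p#"${NIX_STORE}"}"   # -> /abc-foo (prefix stripped)

# A path outside the store comes back unchanged, which is exactly the
# "${p#...}" = "$p" test badPath uses to flag impure paths.
q="/home/user/file"
echo "${q#"${NIX_STORE}"}"   # -> /home/user/file (unchanged)
```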
@@ -17,10 +17,20 @@
}:

let
  blocklist = writeText "cacert-blocklist.txt" (lib.concatStringsSep "\n" blacklist);
  blocklist = writeText "cacert-blocklist.txt" (lib.concatStringsSep "\n" (blacklist ++ [
    # Mozilla does not trust new certificates issued by these CAs after 2022/11/30¹
    # in their products, but unfortunately we don't have such a fine-grained
    # solution for most system packages², so we decided to eject these.
    #
    # [1] https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/oxX69KFvsm4/m/yLohoVqtCgAJ
    # [2] https://utcc.utoronto.ca/~cks/space/blog/linux/CARootStoreTrustProblem
    "TrustCor ECA-1"
    "TrustCor RootCert CA-1"
    "TrustCor RootCert CA-2"
  ]));
  extraCertificatesBundle = writeText "cacert-extra-certificates-bundle.crt" (lib.concatStringsSep "\n\n" extraCertificateStrings);

  srcVersion = "3.83";
  srcVersion = "3.86";
  version = if nssOverride != null then nssOverride.version else srcVersion;
  meta = with lib; {
    homepage = "https://curl.haxx.se/docs/caextract.html";

@@ -35,7 +45,7 @@ let

  src = if nssOverride != null then nssOverride.src else fetchurl {
    url = "mirror://mozilla/security/nss/releases/NSS_${lib.replaceStrings ["."] ["_"] version}_RTM/src/nss-${version}.tar.gz";
    sha256 = "sha256-qyPqZ/lkCQuLc8gKZ0CCVxw25fTrqSBXrGSMnB3vASg=";
    sha256 = "sha256-PzhfxoZHa7uoEQNfpoIbVCR11VdHsYwgwiHU1mVzuXU=";
  };

  dontBuild = true;
|
|||
|
||||
stdenv.mkDerivation rec {
|
||||
pname = "mobile-broadband-provider-info";
|
||||
version = "20220725";
|
||||
version = "20221107";
|
||||
|
||||
src = fetchurl {
|
||||
url = "mirror://gnome/sources/${pname}/${version}/${pname}-${version}.tar.xz";
|
||||
sha256 = "sha256-SEWuAcKH8t+wIrxi1ZoUiHP/xKZz9RAgViZXQm1jKs0=";
|
||||
sha256 = "sha256-2TOSVmw0epbu2V2oxmpdoN2U9BFc+zowX/JoLGTP2BA=";
|
||||
};
|
||||
|
||||
nativeBuildInputs = [
|
||||
|
|
|
@@ -1,17 +1,17 @@
{ lib, stdenv, fetchurl, fetchpatch, buildPackages }:
{ lib, stdenv, fetchurl, buildPackages }:

stdenv.mkDerivation rec {
  pname = "tzdata";
  version = "2022f";
  version = "2022g";

  srcs = [
    (fetchurl {
      url = "https://data.iana.org/time-zones/releases/tzdata${version}.tar.gz";
      hash = "sha256-mZDXH2ddISVnuTH+iq4cq3An+J/vuKedgIppM6Z68AA=";
      hash = "sha256-RJHbgoGulKhNk55Ce92D3DifJnZNJ9mlxS14LBZ2RHg=";
    })
    (fetchurl {
      url = "https://data.iana.org/time-zones/releases/tzcode${version}.tar.gz";
      hash = "sha256-5FQ+kPhPkfqCgJ6piTAFL9vBOIDIpiPuOk6qQvimTBU=";
      hash = "sha256-lhC7C5ZW/0BMNhpB8yhtpTBktUadhPAMnLIxTIYU2nQ=";
    })
  ];

@@ -19,17 +19,6 @@ stdenv.mkDerivation rec {

  patches = lib.optionals stdenv.hostPlatform.isWindows [
    ./0001-Add-exe-extension-for-MS-Windows-binaries.patch
  ] ++ [
    (fetchpatch {
      name = "fix-get-random-on-osx-1.patch";
      url = "https://github.com/eggert/tz/commit/5db8b3ba4816ccb8f4ffeb84f05b99e87d3b1be6.patch";
      hash = "sha256-FevGjiSahYwEjRUTvRY0Y6/jUO4YHiTlAAPixzEy5hw=";
    })
    (fetchpatch {
      name = "fix-get-random-on-osx-2.patch";
      url = "https://github.com/eggert/tz/commit/841183210311b1d4ffb4084bfde8fa8bdf3e6757.patch";
      hash = "sha256-1tUTZBMT7V463P7eygpFS6/k5gTeeXumk5+V4gdKpEI=";
    })
  ];

  outputs = [ "out" "bin" "man" "dev" ];
@@ -13,13 +13,13 @@

stdenv.mkDerivation rec {
  pname = "bulky";
  version = "2.6";
  version = "2.7";

  src = fetchFromGitHub {
    owner = "linuxmint";
    repo = "bulky";
    rev = version;
    hash = "sha256-OI7sIPMZOTmVoWj4Y7kEH0mxay4DwO5kPjclgRDVMus=";
    hash = "sha256-Ps7ql6EAdoljQ6S8D2JxNSh0+jtEVZpnQv3fpvWkQSk=";
  };

  nativeBuildInputs = [
@@ -6,13 +6,13 @@

stdenv.mkDerivation rec {
  pname = "cinnamon-translations";
  version = "5.6.0";
  version = "5.6.1";

  src = fetchFromGitHub {
    owner = "linuxmint";
    repo = pname;
    rev = version;
    hash = "sha256-ztHHqX0OuwSFGlxCoJhZXnUsMM0WrkwiQINgDVlb0XA=";
    hash = "sha256-567xkQGLLhZtjAWXzW/MRiD14rrWeg0yvx97jtukRvc=";
  };

  nativeBuildInputs = [
@@ -29,13 +29,13 @@

stdenv.mkDerivation rec {
  pname = "pix";
  version = "2.8.8";
  version = "2.8.9";

  src = fetchFromGitHub {
    owner = "linuxmint";
    repo = pname;
    rev = version;
    sha256 = "sha256-dvxbnf6tvBAwYM0EKpd/mPfW2PXeV1H2khYl8LIJqa0=";
    sha256 = "sha256-7g0j1cWgNtWlqKWzBnngUA2WNr8Zh8YO/jJ8OdTII7Y=";
  };

  nativeBuildInputs = [
@@ -15,7 +15,7 @@

python3.pkgs.buildPythonApplication rec {
  pname = "warpinator";
  version = "1.4.2";
  version = "1.4.3";

  format = "other";

@@ -23,7 +23,7 @@ python3.pkgs.buildPythonApplication rec {
    owner = "linuxmint";
    repo = pname;
    rev = version;
    hash = "sha256-aiHlBeWGYqSaqvRtwL7smqt4iueIKzQoDawdFSCn6eg=";
    hash = "sha256-blsDOAdfu0N6I+6ZvycL+BIIsZPIjwYm+sJnbZtHJE8=";
  };

  nativeBuildInputs = [
@@ -10,6 +10,8 @@ final: prev: let
  ### Add classic cudatoolkit package
  cudatoolkit = buildCudaToolkitPackage ((attrs: attrs // { gcc = prev.pkgs.${attrs.gcc}; }) cudatoolkitVersions.${final.cudaVersion});

  cudaFlags = final.callPackage ./flags.nix {};

in {
  inherit cudatoolkit;
  inherit cudatoolkit cudaFlags;
}
78
pkgs/development/compilers/cudatoolkit/flags.nix
Normal file
@@ -0,0 +1,78 @@
{ config
, lib
, cudatoolkit
}:
let

  # Flags are determined based on your CUDA toolkit by default. You may benefit
  # from improved performance, reduced file size, or greater hardware support by
  # passing a configuration based on your specific GPU environment.
  #
  # config.cudaCapabilities: list of hardware generations to support (e.g., "8.0")
  # config.cudaForwardCompat: bool for compatibility with future GPU generations
  #
  # Please see the accompanying documentation or https://github.com/NixOS/nixpkgs/pull/205351

  defaultCudaCapabilities = rec {
    cuda9 = [
      "3.0"
      "3.5"
      "5.0"
      "5.2"
      "6.0"
      "6.1"
      "7.0"
    ];

    cuda10 = cuda9 ++ [
      "7.5"
    ];

    cuda11 = [
      "3.5"
      "5.0"
      "5.2"
      "6.0"
      "6.1"
      "7.0"
      "7.5"
      "8.0"
      "8.6"
    ];

  };

  cudaMicroarchitectureNames = {
    "3" = "Kepler";
    "5" = "Maxwell";
    "6" = "Pascal";
    "7" = "Volta";
    "8" = "Ampere";
    "9" = "Hopper";
  };

  defaultCudaArchList = defaultCudaCapabilities."cuda${lib.versions.major cudatoolkit.version}";
  cudaRealCapabilities = config.cudaCapabilities or defaultCudaArchList;
  capabilitiesForward = "${lib.last cudaRealCapabilities}+PTX";

  dropDot = ver: builtins.replaceStrings ["."] [""] ver;

  archMapper = feat: map (ver: "${feat}_${dropDot ver}");
  gencodeMapper = feat: map (ver: "-gencode=arch=compute_${dropDot ver},code=${feat}_${dropDot ver}");
  cudaRealArchs = archMapper "sm" cudaRealCapabilities;
  cudaPTXArchs = archMapper "compute" cudaRealCapabilities;
  cudaArchs = cudaRealArchs ++ [ (lib.last cudaPTXArchs) ];

  cudaArchNames = lib.unique (map (v: cudaMicroarchitectureNames.${lib.versions.major v}) cudaRealCapabilities);
  cudaCapabilities = cudaRealCapabilities ++ lib.optional (config.cudaForwardCompat or true) capabilitiesForward;
  cudaGencode = gencodeMapper "sm" cudaRealCapabilities ++ lib.optionals (config.cudaForwardCompat or true) (gencodeMapper "compute" [ (lib.last cudaPTXArchs) ]);

  cudaCapabilitiesCommaString = lib.strings.concatStringsSep "," cudaCapabilities;
  cudaCapabilitiesSemiColonString = lib.strings.concatStringsSep ";" cudaCapabilities;
  cudaRealCapabilitiesCommaString = lib.strings.concatStringsSep "," cudaRealCapabilities;

in
{
  inherit cudaArchs cudaArchNames cudaCapabilities cudaCapabilitiesCommaString cudaCapabilitiesSemiColonString
    cudaRealCapabilities cudaRealCapabilitiesCommaString cudaGencode cudaRealArchs cudaPTXArchs;
}
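The `gencodeMapper` logic above turns each real capability into an `sm_XY` SASS target and, when forward compatibility is enabled, additionally emits one `compute_XY` PTX target for the newest capability. A standalone shell sketch of that mapping (illustrative only; the real logic lives in the Nix expression above):

```shell
# Strip the dot from a capability like "6.0" -> "60".
drop_dot() { printf '%s' "${1//./}"; }

# Print one NVCC -gencode flag per capability (SASS), plus a trailing
# PTX entry for the newest capability, mirroring cudaForwardCompat = true.
gencode_flags() {
  local caps=("$@") out=() v last idx
  for v in "${caps[@]}"; do
    out+=("-gencode=arch=compute_$(drop_dot "$v"),code=sm_$(drop_dot "$v")")
  done
  idx=$(( ${#caps[@]} - 1 ))
  last="$(drop_dot "${caps[$idx]}")"
  out+=("-gencode=arch=compute_${last},code=compute_${last}")
  printf '%s\n' "${out[@]}"
}

gencode_flags 6.0 7.0
# -> -gencode=arch=compute_60,code=sm_60
#    -gencode=arch=compute_70,code=sm_70
#    -gencode=arch=compute_70,code=compute_70
```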
@@ -2,13 +2,13 @@

rustPlatform.buildRustPackage rec {
  pname = "gleam";
  version = "0.25.1";
  version = "0.25.3";

  src = fetchFromGitHub {
    owner = "gleam-lang";
    repo = pname;
    rev = "v${version}";
    sha256 = "sha256-PzvFX1ssBPXhHBNGK38y427HYJ9Q40c4w2mqGZ/2rtI=";
    sha256 = "sha256-JT9NUca+DaqxT36heaNKijIuqdnSvrYCfY2uM7wTOGo=";
  };

  nativeBuildInputs = [ pkg-config ];

@@ -16,7 +16,7 @@ rustPlatform.buildRustPackage rec {
  buildInputs = [ openssl ] ++
    lib.optionals stdenv.isDarwin [ Security libiconv ];

  cargoSha256 = "sha256-NeNpT/yOXE70ElawrOGBc4G5bN2ohzYVVUtF4yVCJOo=";
  cargoSha256 = "sha256-YPyGCd4//yta3jy5tWB4C5yRgxNbfG+hGF5/QSch/6M=";

  meta = with lib; {
    description = "A statically typed language for the Erlang VM";
@@ -45,11 +45,11 @@ let
in
stdenv.mkDerivation rec {
  pname = "go";
  version = "1.19.3";
  version = "1.19.4";

  src = fetchurl {
    url = "https://go.dev/dl/go${version}.src.tar.gz";
    sha256 = "sha256-GKwmPjkhC89o2F9DcOl/sXNBZplaH2P7OLT24H2Q0hI=";
    sha256 = "sha256-7adNtKxJSACj5m7nhOSVv7ubjlNd+SSosBsagCi382g=";
  };

  strictDeps = true;
@@ -6,7 +6,6 @@
, bison
, flex
, llvmPackages_11
, lld_11
, opencl-clang
, python3
, spirv-tools

@@ -20,42 +19,40 @@ let
  vc_intrinsics_src = fetchFromGitHub {
    owner = "intel";
    repo = "vc-intrinsics";
    rev = "v0.3.0";
    sha256 = "sha256-1Rm4TCERTOcPGWJF+yNoKeB9x3jfqnh7Vlv+0Xpmjbk=";
    rev = "v0.7.1";
    sha256 = "sha256-bpi4hLpov1CbFy4jr9Eytc5O4ismYw0J+KgXyZtQYks=";
  };

  llvmPkgs = llvmPackages_11 // {
    inherit spirv-llvm-translator;
  };
  inherit (llvmPkgs) llvm;
  inherit (if buildWithPatches then opencl-clang else llvmPkgs) clang libclang spirv-llvm-translator;
  inherit (lib) getVersion optional optionals versionOlder versions;
    spirv-llvm-translator = spirv-llvm-translator.override { llvm = llvm; };
  } // lib.optionalAttrs buildWithPatches opencl-clang;

  inherit (llvmPackages_11) lld llvm;
  inherit (llvmPkgs) clang libclang spirv-llvm-translator;
in

stdenv.mkDerivation rec {
  pname = "intel-graphics-compiler";
  version = "1.0.11061";
  version = "1.0.12504.5";

  src = fetchFromGitHub {
    owner = "intel";
    repo = "intel-graphics-compiler";
    rev = "igc-${version}";
    sha256 = "sha256-qS/+GTqHtp3T6ggPKrCDsrTb7XvVOUaNbMzGU51jTu4=";
    sha256 = "sha256-Ok+cXMTBABrHHM4Vc2yzlou48YHoQnaB3We8mGZhSwI=";
  };

  nativeBuildInputs = [ clang cmake bison flex python3 ];
  nativeBuildInputs = [ cmake bison flex python3 ];

  buildInputs = [ spirv-headers spirv-tools clang opencl-clang spirv-llvm-translator llvm lld_11 ];
  buildInputs = [ spirv-headers spirv-tools spirv-llvm-translator llvm lld ];

  strictDeps = true;

  # checkInputs = [ lit pythonPackages.nose ];

  # FIXME: How do we run the test suite?
  # https://github.com/intel/intel-graphics-compiler/issues/98
  # testing is done via intel-compute-runtime
  doCheck = false;

  postPatch = ''
    substituteInPlace ./external/SPIRV-Tools/CMakeLists.txt \
    substituteInPlace external/SPIRV-Tools/CMakeLists.txt \
      --replace '$'''{SPIRV-Tools_DIR}../../..' \
        '${spirv-tools}' \
      --replace 'SPIRV-Headers_INCLUDE_DIR "/usr/include"' \

@@ -64,7 +61,7 @@ stdenv.mkDerivation rec {
        'set_target_properties(SPIRV-Tools-shared' \
      --replace 'IGC_BUILD__PROJ__SPIRV-Tools SPIRV-Tools' \
        'IGC_BUILD__PROJ__SPIRV-Tools SPIRV-Tools-shared'
    substituteInPlace ./IGC/AdaptorOCL/igc-opencl.pc.in \
    substituteInPlace IGC/AdaptorOCL/igc-opencl.pc.in \
      --replace '/@CMAKE_INSTALL_INCLUDEDIR@' "/include" \
      --replace '/@CMAKE_INSTALL_LIBDIR@' "/lib"
  '';

@@ -74,10 +71,10 @@ stdenv.mkDerivation rec {
  prebuilds = runCommandLocal "igc-cclang-prebuilds" { } ''
    mkdir $out
    ln -s ${clang}/bin/clang $out/
    ln -s clang $out/clang-${versions.major (getVersion clang)}
    ln -s clang $out/clang-${lib.versions.major (lib.getVersion clang)}
    ln -s ${opencl-clang}/lib/* $out/
    ln -s ${lib.getLib libclang}/lib/clang/${getVersion clang}/include/opencl-c.h $out/
    ln -s ${lib.getLib libclang}/lib/clang/${getVersion clang}/include/opencl-c-base.h $out/
    ln -s ${lib.getLib libclang}/lib/clang/${lib.getVersion clang}/include/opencl-c.h $out/
    ln -s ${lib.getLib libclang}/lib/clang/${lib.getVersion clang}/include/opencl-c-base.h $out/
  '';

  cmakeFlags = [

@@ -86,15 +83,14 @@ stdenv.mkDerivation rec {
    "-DIGC_OPTION__SPIRV_TOOLS_MODE=Prebuilds"
    "-DCCLANG_BUILD_PREBUILDS=ON"
    "-DCCLANG_BUILD_PREBUILDS_DIR=${prebuilds}"
    "-DIGC_PREFERRED_LLVM_VERSION=${getVersion llvm}"
    "-DIGC_PREFERRED_LLVM_VERSION=${lib.getVersion llvm}"
  ];

  meta = with lib; {
    homepage = "https://github.com/intel/intel-graphics-compiler";
    description = "LLVM-based compiler for OpenCL targeting Intel Gen graphics hardware";
    license = licenses.mit;
    platforms = platforms.all;
    maintainers = with maintainers; [ gloaming ];
    broken = stdenv.isDarwin; # never built on Hydra https://hydra.nixos.org/job/nixpkgs/trunk/intel-graphics-compiler.x86_64-darwin
    platforms = platforms.linux;
    maintainers = with maintainers; [ SuperSandro2000 ];
  };
}
@@ -12,19 +12,13 @@
, libwhich
, libxml2
, libunwind
, libgit2
, curl
, nghttp2
, mbedtls_2
, libssh2
, gmp
, mpfr
, suitesparse
, utf8proc
, zlib
, p7zip
, ncurses
, pcre2
}:

stdenv.mkDerivation rec {

@@ -41,15 +35,6 @@ stdenv.mkDerivation rec {
    path = name: "https://raw.githubusercontent.com/archlinux/svntogit-community/6fd126d089d44fdc875c363488a7c7435a223cec/trunk/${name}";
  in
  [
    # Pull upstream fix to fix tests mpfr-4.1.1
    # https://github.com/JuliaLang/julia/pull/47659
    (fetchpatch {
      name = "mfr-4.1.1.patch";
      url = "https://github.com/JuliaLang/julia/commit/59965205ccbdffb4e25e1b60f651ca9df79230a4.patch";
      hash = "sha256-QJ5wxZMhz+or8BqcYv/5fNSTxDAvdSizTYqt7630kcw=";
      includes = [ "stdlib/MPFR_jll/test/runtests.jl" ];
    })

    (fetchurl {
      url = path "julia-hardcoded-libs.patch";
      sha256 = "sha256-kppSpVA7bRohd0wXDs4Jgct9ocHnpbeiiSz7ElFom1U=";

@@ -77,17 +62,11 @@ stdenv.mkDerivation rec {
  buildInputs = [
    libxml2
    libunwind
    libgit2
    curl
    nghttp2
    mbedtls_2
    libssh2
    gmp
    mpfr
    utf8proc
    zlib
    p7zip
    pcre2
  ];

  JULIA_RPATH = lib.makeLibraryPath (buildInputs ++ [ stdenv.cc.cc gfortran.cc ncurses ]);

@@ -106,29 +85,32 @@ stdenv.mkDerivation rec {
    "USE_SYSTEM_CSL=1"
    "USE_SYSTEM_LLVM=0" # a patched version is required
    "USE_SYSTEM_LIBUNWIND=1"
    "USE_SYSTEM_PCRE=1"
    "USE_SYSTEM_PCRE=0" # version checks
    "USE_SYSTEM_LIBM=0"
    "USE_SYSTEM_OPENLIBM=0"
    "USE_SYSTEM_DSFMT=0" # not available in nixpkgs
    "USE_SYSTEM_LIBBLASTRAMPOLINE=0" # not available in nixpkgs
    "USE_SYSTEM_BLAS=0" # test failure
    "USE_SYSTEM_LAPACK=0" # test failure
    "USE_SYSTEM_GMP=1"
    "USE_SYSTEM_MPFR=1"
    "USE_SYSTEM_GMP=1" # version checks, but bundled version fails build
    "USE_SYSTEM_MPFR=0" # version checks
    "USE_SYSTEM_LIBSUITESPARSE=0" # test failure
    "USE_SYSTEM_LIBUV=0" # a patched version is required
    "USE_SYSTEM_UTF8PROC=1"
    "USE_SYSTEM_MBEDTLS=1"
    "USE_SYSTEM_LIBSSH2=1"
    "USE_SYSTEM_NGHTTP2=1"
    "USE_SYSTEM_MBEDTLS=0" # version checks
    "USE_SYSTEM_LIBSSH2=0" # version checks
    "USE_SYSTEM_NGHTTP2=0" # version checks
    "USE_SYSTEM_CURL=1"
    "USE_SYSTEM_LIBGIT2=1"
    "USE_SYSTEM_LIBGIT2=0" # version checks
    "USE_SYSTEM_PATCHELF=1"
    "USE_SYSTEM_LIBWHICH=1"
    "USE_SYSTEM_ZLIB=1"
    "USE_SYSTEM_ZLIB=1" # version checks, but the system zlib is used anyway
    "USE_SYSTEM_P7ZIP=1"

    "PCRE_INCL_PATH=${pcre2.dev}/include/pcre2.h"
  ] ++ lib.optionals stdenv.isx86_64 [
    # https://github.com/JuliaCI/julia-buildbot/blob/master/master/inventory.py
    "JULIA_CPU_TARGET=generic;sandybridge,-xsaveopt,clone_all;haswell,-rdrnd,base(1)"
  ] ++ lib.optionals stdenv.isAarch64 [
    "JULIA_CPU_TERGET=generic;cortex-a57;thunderx2t99;armv8.2-a,crypto,fullfp16,lse,rdm"
  ];

  doInstallCheck = true;
@@ -1,4 +1,4 @@
From 1faa30525c9671ffd3a08901896b521a040d7e5c Mon Sep 17 00:00:00 2001
From b2a58160fd194858267c433ae551f24840a0b3f4 Mon Sep 17 00:00:00 2001
From: Nick Cao <nickcao@nichi.co>
Date: Tue, 20 Sep 2022 18:42:08 +0800
Subject: [PATCH 1/4] skip symlink system libraries
@@ -1,4 +1,4 @@
From 05c008dcabaf94f5623f2f7e267005eef0a8c5fc Mon Sep 17 00:00:00 2001
From ddf422a97973a1f4d2d4d32272396c7165580702 Mon Sep 17 00:00:00 2001
From: Nick Cao <nickcao@nichi.co>
Date: Tue, 20 Sep 2022 18:42:31 +0800
Subject: [PATCH 2/4] skip building doc

@@ -8,10 +8,10 @@ Subject: [PATCH 2/4] skip building doc
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index d38311dce..a775d36e1 100644
index 57b595310..563be74c9 100644
--- a/Makefile
+++ b/Makefile
@@ -227,7 +227,7 @@ define stringreplace
@@ -229,7 +229,7 @@ define stringreplace
 endef
@@ -0,0 +1,25 @@
From ed596b33005a438109f0078ed0ba30ebe464b4b5 Mon Sep 17 00:00:00 2001
From: Nick Cao <nickcao@nichi.co>
Date: Tue, 20 Sep 2022 18:42:59 +0800
Subject: [PATCH 3/4] skip failing and flaky tests

---
 test/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/test/Makefile b/test/Makefile
index 24e137a5b..553d9d095 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -23,7 +23,7 @@ default:

 $(TESTS):
 	@cd $(SRCDIR) && \
-	$(call PRINT_JULIA, $(call spawn,$(JULIA_EXECUTABLE)) --check-bounds=yes --startup-file=no --depwarn=error ./runtests.jl $@)
+	$(call PRINT_JULIA, $(call spawn,$(JULIA_EXECUTABLE)) --check-bounds=yes --startup-file=no --depwarn=error ./runtests.jl --skip MozillaCACerts_jll --skip NetworkOptions --skip Zlib_jll --skip GMP_jll --skip channels $@)

 $(addprefix revise-, $(TESTS)): revise-% :
 	@cd $(SRCDIR) && \
--
2.38.1
@@ -1,4 +1,4 @@
From 756d4e977f8f224e20effa82c612e5a9cc14d82e Mon Sep 17 00:00:00 2001
From f91c8c6364eb321dd5e66fa443472fca6bcda7d6 Mon Sep 17 00:00:00 2001
From: Nick Cao <nickcao@nichi.co>
Date: Tue, 20 Sep 2022 18:42:59 +0800
Subject: [PATCH 3/4] skip failing tests

@@ -8,7 +8,7 @@ Subject: [PATCH 3/4] skip failing tests
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/test/Makefile b/test/Makefile
index 24e137a5b..c17ccea8a 100644
index 24e137a5b..2b30ab392 100644
--- a/test/Makefile
+++ b/test/Makefile
@@ -23,7 +23,7 @@ default:

@@ -16,7 +16,7 @@ index 24e137a5b..c17ccea8a 100644
 $(TESTS):
 	@cd $(SRCDIR) && \
-	$(call PRINT_JULIA, $(call spawn,$(JULIA_EXECUTABLE)) --check-bounds=yes --startup-file=no --depwarn=error ./runtests.jl $@)
+	$(call PRINT_JULIA, $(call spawn,$(JULIA_EXECUTABLE)) --check-bounds=yes --startup-file=no --depwarn=error ./runtests.jl --skip LibGit2_jll --skip MozillaCACerts_jll --skip NetworkOptions --skip nghttp2_jll --skip Zlib_jll --skip MbedTLS_jll $@)
+	$(call PRINT_JULIA, $(call spawn,$(JULIA_EXECUTABLE)) --check-bounds=yes --startup-file=no --depwarn=error ./runtests.jl --skip MozillaCACerts_jll --skip NetworkOptions --skip Zlib_jll --skip GMP_jll $@)

 $(addprefix revise-, $(TESTS)): revise-% :
 	@cd $(SRCDIR) && \
@@ -1,4 +1,4 @@
From c0e587f4c50bd7bedfe6e5102e9b47c9704fac9b Mon Sep 17 00:00:00 2001
From 4bd87f2f3151ad07d311f7d33c2b890977aca93d Mon Sep 17 00:00:00 2001
From: Nick Cao <nickcao@nichi.co>
Date: Tue, 20 Sep 2022 18:43:15 +0800
Subject: [PATCH 4/4] ignore absolute path when loading library
@@ -1,9 +1,6 @@
import ./generic.nix {
  major_version = "5";
  minor_version = "0";
  patch_version = "0-rc1";
  src = fetchTarball {
    url = "https://caml.inria.fr/pub/distrib/ocaml-5.0/ocaml-5.0.0~rc1.tar.xz";
    sha256 = "sha256:1ql9rmh2g9fhfv99vk9sdca1biiin32vi4idgdgl668n0vb8blw8";
  };
  patch_version = "0";
  sha256 = "sha256-yxfwpTTdSz/sk9ARsL4bpcYIfaAzz3iehaNLlkHsxl8=";
}
@@ -1,4 +1,4 @@
{ stdenv, lib, fetchurl, fetchFromGitHub, bash, pkg-config, autoconf, cpio
{ stdenv, lib, fetchurl, fetchpatch, fetchFromGitHub, bash, pkg-config, autoconf, cpio
, file, which, unzip, zip, perl, cups, freetype, harfbuzz, alsa-lib, libjpeg, giflib
, libpng, zlib, lcms2, libX11, libICE, libXrender, libXext, libXt, libXtst
, libXi, libXinerama, libXcursor, libXrandr, fontconfig, openjdk18-bootstrap

@@ -49,6 +49,13 @@ let
      url = "https://src.fedoraproject.org/rpms/java-openjdk/raw/06c001c7d87f2e9fe4fedeef2d993bcd5d7afa2a/f/rh1673833-remove_removal_of_wformat_during_test_compilation.patch";
      sha256 = "082lmc30x64x583vqq00c8y0wqih3y4r0mp1c4bqq36l22qv6b6r";
    })

    # Patch borrowed from Alpine to fix build errors with musl libc and recent gcc.
    # This is applied anywhere to prevent patchrot.
    (fetchpatch {
      url = "https://git.alpinelinux.org/aports/plain/testing/openjdk18/FixNullPtrCast.patch?id=b93d1fc37fcf106144958d957bb97c7db67bd41f";
      hash = "sha256-nvO8RcmKwMcPdzq28mZ4If1XJ6FQ76CYWqRIozPCk5U=";
    })
  ] ++ lib.optionals (!headless && enableGnome2) [
    ./swing-use-gtk-jdk13.patch
  ];
@@ -1,4 +1,4 @@
{ stdenv, lib, fetchurl, fetchFromGitHub, bash, pkg-config, autoconf, cpio
{ stdenv, lib, fetchurl, fetchpatch, fetchFromGitHub, bash, pkg-config, autoconf, cpio
, file, which, unzip, zip, perl, cups, freetype, alsa-lib, libjpeg, giflib
, libpng, zlib, lcms2, libX11, libICE, libXrender, libXext, libXt, libXtst
, libXi, libXinerama, libXcursor, libXrandr, fontconfig, openjdk19-bootstrap

@@ -51,6 +51,13 @@ let
      url = "https://src.fedoraproject.org/rpms/java-openjdk/raw/06c001c7d87f2e9fe4fedeef2d993bcd5d7afa2a/f/rh1673833-remove_removal_of_wformat_during_test_compilation.patch";
      sha256 = "082lmc30x64x583vqq00c8y0wqih3y4r0mp1c4bqq36l22qv6b6r";
    })

    # Patch borrowed from Alpine to fix build errors with musl libc and recent gcc.
    # This is applied anywhere to prevent patchrot.
    (fetchpatch {
      url = "https://git.alpinelinux.org/aports/plain/testing/openjdk19/FixNullPtrCast.patch?id=b93d1fc37fcf106144958d957bb97c7db67bd41f";
      hash = "sha256-cnpeYcVoRYjuDgrl2x27frv6KUAnu1+1MVPehPZy/Cg=";
    })
  ] ++ lib.optionals (!headless && enableGnome2) [
    ./swing-use-gtk-jdk13.patch
  ];
@@ -18,7 +18,7 @@ rec {
      fetchCargoTarball importCargoLock rustc;
  };

  importCargoLock = buildPackages.callPackage ../../../build-support/rust/import-cargo-lock.nix {};
  importCargoLock = buildPackages.callPackage ../../../build-support/rust/import-cargo-lock.nix { inherit cargo; };

  rustcSrc = callPackage ./rust-src.nix {
    inherit runCommand rustc;
@@ -57,6 +57,10 @@ let
    "2.2.10" = {
      sha256 = "sha256-jMPDqHYSI63vFEqIcwsmdQg6Oyb6FV1wz5GruTXpCDM=";
    };

    "2.2.11" = {
      sha256 = "sha256-NgfWgBZzGICEXO1dXVXGBUzEnxkSGhUCfmxWB66Elt8=";
    };
  };

in with versionMap.${version};

@@ -169,7 +173,9 @@ stdenv.mkDerivation rec {
    # duplicate symbol '_static_code_space_free_pointer' in: alloc.o traceroot.o
    # Should be fixed past 2.1.10 release.
    "-fcommon"
  ];
  ]
  # Fails to find `O_LARGEFILE` otherwise.
  ++ [ "-D_GNU_SOURCE" ];

  buildPhase = ''
    runHook preBuild
@@ -22,11 +22,11 @@ let
    hash = "sha256-BhNAApgZ/w/92XjpoDY6ZEIhSTwgJ4D3/EfNvPmNM2o=";
  } else if llvmMajor == "11" then {
    version = "unstable-2022-05-04";
    rev = "99420daab98998a7e36858befac9c5ed109d4920"; # 265 commits ahead of v11.0.0
    hash = "sha256-/vUyL6Wh8hykoGz1QmT1F7lfGDEmG4U3iqmqrJxizOg=";
    rev = "4ef524240833abfeee1c5b9fff6b1bd53f4806b3"; # 267 commits ahead of v11.0.0
    hash = "sha256-NoIoa20+2sH41rEnr8lsMhtfesrtdPINiXtUnxYVm8s=";
  } else throw "Incompatible LLVM version.";
in
stdenv.mkDerivation rec {
stdenv.mkDerivation {
  pname = "SPIRV-LLVM-Translator";
  inherit (branch) version;

@@ -64,7 +64,7 @@ stdenv.mkDerivation rec {
    homepage = "https://github.com/KhronosGroup/SPIRV-LLVM-Translator";
    description = "A tool and a library for bi-directional translation between SPIR-V and LLVM IR";
    license = licenses.ncsa;
    platforms = platforms.all;
    platforms = platforms.unix;
    maintainers = with maintainers; [ gloaming ];
  };
}
@@ -30,7 +30,7 @@ buildGraalvmNativeImage rec {
    set -euo pipefail

    readonly latest_version="$(curl \
      ''${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \
      ''${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} \
      -s "https://api.github.com/repos/babashka/babashka/releases/latest" \
      | jq -r '.tag_name')"
@@ -64,7 +64,7 @@ stdenv.mkDerivation rec {

# `jq -r '.[0].name'` results in `v0.0`
readonly latest_version="$(curl \
''${GITHUB_TOKEN:+"-u \":$GITHUB_TOKEN\""} \
''${GITHUB_TOKEN:+-u ":$GITHUB_TOKEN"} \
-s "https://api.github.com/repos/clojure/brew-install/tags" \
| jq -r '.[1].name')"

@@ -5,16 +5,16 @@

rustPlatform.buildRustPackage rec {
pname = "nickel";
version = "0.3.0";
version = "0.3.1";

src = fetchFromGitHub {
owner = "tweag";
repo = pname;
rev = "refs/tags/${version}"; # because pure ${version} doesn't work
hash = "sha256-L2MQ0dS9mZ+SOFoS/rclPtEl3/iFyEKn6Bse/ysHyKo=";
hash = "sha256-bUUQP7ze0j8d+VEckexDOferAgAHdKZbdKR3q0TNOeE=";
};

cargoSha256 = "sha256-3ucWGmylRatJOl8zktSRMXr5p6L+5+LQV6ALJTtQpiA=";
cargoSha256 = "sha256-E8eIUASjCIVsZhptbU41VfK8bFmA4FTT3LVagLrgUso=";

meta = with lib; {
homepage = "https://nickel-lang.org/";
@@ -0,0 +1,598 @@
REVERT https://github.com/python/cpython/commit/300d812fd1c4d9244e71de0d228cc72439d312a7
--- b/Doc/library/asyncio-eventloop.rst
+++ a/Doc/library/asyncio-eventloop.rst
@@ -43,12 +43,10 @@

Get the current event loop.

+ If there is no current event loop set in the current OS thread,
+ the OS thread is main, and :func:`set_event_loop` has not yet
+ been called, asyncio will create a new event loop and set it as the
+ current one.
- When called from a coroutine or a callback (e.g. scheduled with
- call_soon or similar API), this function will always return the
- running event loop.
-
- If there is no running event loop set, the function will return
- the result of ``get_event_loop_policy().get_event_loop()`` call.

Because this function has rather complex behavior (especially
when custom event loop policies are in use), using the
@@ -60,14 +58,10 @@
event loop.

.. deprecated:: 3.10
+ Emits a deprecation warning if there is no running event loop.
+ In future Python releases, this function may become an alias of
+ :func:`get_running_loop` and will accordingly raise a
+ :exc:`RuntimeError` if there is no running event loop.
- Deprecation warning is emitted if there is no current event loop.
- In Python 3.12 it will be an error.
-
- .. note::
- In Python versions 3.10.0--3.10.8 this function
- (and other functions which used it implicitly) emitted a
- :exc:`DeprecationWarning` if there was no running event loop, even if
- the current loop was set.

.. function:: set_event_loop(loop)

reverted:
--- b/Doc/library/asyncio-llapi-index.rst
+++ a/Doc/library/asyncio-llapi-index.rst
@@ -19,7 +19,7 @@
- The **preferred** function to get the running event loop.

* - :func:`asyncio.get_event_loop`
+ - Get an event loop instance (current or via the policy).
- - Get an event loop instance (running or current via the current policy).

* - :func:`asyncio.set_event_loop`
- Set the event loop as current via the current policy.
reverted:
--- b/Doc/library/asyncio-policy.rst
+++ a/Doc/library/asyncio-policy.rst
@@ -112,11 +112,6 @@

On Windows, :class:`ProactorEventLoop` is now used by default.

- .. deprecated:: 3.10.9
- :meth:`get_event_loop` now emits a :exc:`DeprecationWarning` if there
- is no current event loop set and a new event loop has been implicitly
- created. In Python 3.12 it will be an error.
-

.. class:: WindowsSelectorEventLoopPolicy

reverted:
--- b/Lib/asyncio/events.py
+++ a/Lib/asyncio/events.py
@@ -650,21 +650,6 @@
if (self._local._loop is None and
not self._local._set_called and
threading.current_thread() is threading.main_thread()):
- stacklevel = 2
- try:
- f = sys._getframe(1)
- except AttributeError:
- pass
- else:
- while f:
- module = f.f_globals.get('__name__')
- if not (module == 'asyncio' or module.startswith('asyncio.')):
- break
- f = f.f_back
- stacklevel += 1
- import warnings
- warnings.warn('There is no current event loop',
- DeprecationWarning, stacklevel=stacklevel)
self.set_event_loop(self.new_event_loop())

if self._local._loop is None:
@@ -778,13 +763,12 @@


def _get_event_loop(stacklevel=3):
- # This internal method is going away in Python 3.12, left here only for
- # backwards compatibility with 3.10.0 - 3.10.8 and 3.11.0.
- # Similarly, this method's C equivalent in _asyncio is going away as well.
- # See GH-99949 for more details.
current_loop = _get_running_loop()
if current_loop is not None:
return current_loop
+ import warnings
+ warnings.warn('There is no current event loop',
+ DeprecationWarning, stacklevel=stacklevel)
return get_event_loop_policy().get_event_loop()


reverted:
--- b/Lib/test/test_asyncio/test_base_events.py
+++ a/Lib/test/test_asyncio/test_base_events.py
@@ -752,7 +752,7 @@
def test_env_var_debug(self):
code = '\n'.join((
'import asyncio',
+ 'loop = asyncio.get_event_loop()',
- 'loop = asyncio.new_event_loop()',
'print(loop.get_debug())'))

# Test with -E to not fail if the unit test was run with
reverted:
--- b/Lib/test/test_asyncio/test_events.py
+++ a/Lib/test/test_asyncio/test_events.py
@@ -2561,9 +2561,8 @@
def test_get_event_loop(self):
policy = asyncio.DefaultEventLoopPolicy()
self.assertIsNone(policy._local._loop)
+
+ loop = policy.get_event_loop()
- with self.assertWarns(DeprecationWarning) as cm:
- loop = policy.get_event_loop()
- self.assertEqual(cm.filename, __file__)
self.assertIsInstance(loop, asyncio.AbstractEventLoop)

self.assertIs(policy._local._loop, loop)
@@ -2577,10 +2576,7 @@
policy, "set_event_loop",
wraps=policy.set_event_loop) as m_set_event_loop:

+ loop = policy.get_event_loop()
- with self.assertWarns(DeprecationWarning) as cm:
- loop = policy.get_event_loop()
- self.addCleanup(loop.close)
- self.assertEqual(cm.filename, __file__)

# policy._local._loop must be set through .set_event_loop()
# (the unix DefaultEventLoopPolicy needs this call to attach
@@ -2614,8 +2610,7 @@

def test_set_event_loop(self):
policy = asyncio.DefaultEventLoopPolicy()
+ old_loop = policy.get_event_loop()
- old_loop = policy.new_event_loop()
- policy.set_event_loop(old_loop)

self.assertRaises(AssertionError, policy.set_event_loop, object())

@@ -2728,11 +2723,15 @@
asyncio.set_event_loop_policy(Policy())
loop = asyncio.new_event_loop()

+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaises(TestError):
+ asyncio.get_event_loop()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaises(TestError):
- asyncio.get_event_loop()
asyncio.set_event_loop(None)
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaises(TestError):
+ asyncio.get_event_loop()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaises(TestError):
- asyncio.get_event_loop()

with self.assertRaisesRegex(RuntimeError, 'no running'):
asyncio.get_running_loop()
@@ -2746,11 +2745,16 @@
loop.run_until_complete(func())

asyncio.set_event_loop(loop)
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaises(TestError):
+ asyncio.get_event_loop()
+ self.assertEqual(cm.warnings[0].filename, __file__)
+
- with self.assertRaises(TestError):
- asyncio.get_event_loop()
asyncio.set_event_loop(None)
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaises(TestError):
+ asyncio.get_event_loop()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaises(TestError):
- asyncio.get_event_loop()

finally:
asyncio.set_event_loop_policy(old_policy)
@@ -2774,8 +2778,10 @@
self.addCleanup(loop2.close)
self.assertEqual(cm.warnings[0].filename, __file__)
asyncio.set_event_loop(None)
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaisesRegex(RuntimeError, 'no current'):
+ asyncio.get_event_loop()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current'):
- asyncio.get_event_loop()

with self.assertRaisesRegex(RuntimeError, 'no running'):
asyncio.get_running_loop()
@@ -2789,11 +2795,15 @@
loop.run_until_complete(func())

asyncio.set_event_loop(loop)
+ with self.assertWarns(DeprecationWarning) as cm:
+ self.assertIs(asyncio.get_event_loop(), loop)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- self.assertIs(asyncio.get_event_loop(), loop)

asyncio.set_event_loop(None)
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaisesRegex(RuntimeError, 'no current'):
+ asyncio.get_event_loop()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current'):
- asyncio.get_event_loop()

finally:
asyncio.set_event_loop_policy(old_policy)
reverted:
--- b/Lib/test/test_asyncio/test_futures.py
+++ a/Lib/test/test_asyncio/test_futures.py
@@ -145,8 +145,10 @@
self.assertTrue(f.cancelled())

def test_constructor_without_loop(self):
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
+ self._new_future()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
- self._new_future()

def test_constructor_use_running_loop(self):
async def test():
@@ -156,10 +158,12 @@
self.assertIs(f.get_loop(), self.loop)

def test_constructor_use_global_loop(self):
+ # Deprecated in 3.10
- # Deprecated in 3.10, undeprecated in 3.11.1
asyncio.set_event_loop(self.loop)
self.addCleanup(asyncio.set_event_loop, None)
+ with self.assertWarns(DeprecationWarning) as cm:
+ f = self._new_future()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- f = self._new_future()
self.assertIs(f._loop, self.loop)
self.assertIs(f.get_loop(), self.loop)

@@ -495,8 +499,10 @@
return (arg, threading.get_ident())
ex = concurrent.futures.ThreadPoolExecutor(1)
f1 = ex.submit(run, 'oi')
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaises(RuntimeError):
+ asyncio.wrap_future(f1)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
- asyncio.wrap_future(f1)
ex.shutdown(wait=True)

def test_wrap_future_use_running_loop(self):
@@ -511,14 +517,16 @@
ex.shutdown(wait=True)

def test_wrap_future_use_global_loop(self):
+ # Deprecated in 3.10
- # Deprecated in 3.10, undeprecated in 3.11.1
asyncio.set_event_loop(self.loop)
self.addCleanup(asyncio.set_event_loop, None)
def run(arg):
return (arg, threading.get_ident())
ex = concurrent.futures.ThreadPoolExecutor(1)
f1 = ex.submit(run, 'oi')
+ with self.assertWarns(DeprecationWarning) as cm:
+ f2 = asyncio.wrap_future(f1)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- f2 = asyncio.wrap_future(f1)
self.assertIs(self.loop, f2._loop)
ex.shutdown(wait=True)

reverted:
--- b/Lib/test/test_asyncio/test_streams.py
+++ a/Lib/test/test_asyncio/test_streams.py
@@ -747,8 +747,10 @@
self.assertEqual(data, b'data')

def test_streamreader_constructor_without_loop(self):
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
+ asyncio.StreamReader()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
- asyncio.StreamReader()

def test_streamreader_constructor_use_running_loop(self):
# asyncio issue #184: Ensure that StreamReaderProtocol constructor
@@ -762,17 +764,21 @@
def test_streamreader_constructor_use_global_loop(self):
# asyncio issue #184: Ensure that StreamReaderProtocol constructor
# retrieves the current loop if the loop parameter is not set
+ # Deprecated in 3.10
- # Deprecated in 3.10, undeprecated in 3.11.1
self.addCleanup(asyncio.set_event_loop, None)
asyncio.set_event_loop(self.loop)
+ with self.assertWarns(DeprecationWarning) as cm:
+ reader = asyncio.StreamReader()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- reader = asyncio.StreamReader()
self.assertIs(reader._loop, self.loop)


def test_streamreaderprotocol_constructor_without_loop(self):
reader = mock.Mock()
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
+ asyncio.StreamReaderProtocol(reader)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
- asyncio.StreamReaderProtocol(reader)

def test_streamreaderprotocol_constructor_use_running_loop(self):
# asyncio issue #184: Ensure that StreamReaderProtocol constructor
@@ -786,11 +792,13 @@
def test_streamreaderprotocol_constructor_use_global_loop(self):
# asyncio issue #184: Ensure that StreamReaderProtocol constructor
# retrieves the current loop if the loop parameter is not set
+ # Deprecated in 3.10
- # Deprecated in 3.10, undeprecated in 3.11.1
self.addCleanup(asyncio.set_event_loop, None)
asyncio.set_event_loop(self.loop)
reader = mock.Mock()
+ with self.assertWarns(DeprecationWarning) as cm:
+ protocol = asyncio.StreamReaderProtocol(reader)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- protocol = asyncio.StreamReaderProtocol(reader)
self.assertIs(protocol._loop, self.loop)

def test_multiple_drain(self):
reverted:
--- b/Lib/test/test_asyncio/test_tasks.py
+++ a/Lib/test/test_asyncio/test_tasks.py
@@ -210,8 +210,10 @@

a = notmuch()
self.addCleanup(a.close)
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
+ asyncio.ensure_future(a)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
- asyncio.ensure_future(a)

async def test():
return asyncio.ensure_future(notmuch())
@@ -221,10 +223,12 @@
self.assertTrue(t.done())
self.assertEqual(t.result(), 'ok')

+ # Deprecated in 3.10
- # Deprecated in 3.10.0, undeprecated in 3.10.9
asyncio.set_event_loop(self.loop)
self.addCleanup(asyncio.set_event_loop, None)
+ with self.assertWarns(DeprecationWarning) as cm:
+ t = asyncio.ensure_future(notmuch())
+ self.assertEqual(cm.warnings[0].filename, __file__)
- t = asyncio.ensure_future(notmuch())
self.assertIs(t._loop, self.loop)
self.loop.run_until_complete(t)
self.assertTrue(t.done())
@@ -243,8 +247,10 @@

a = notmuch()
self.addCleanup(a.close)
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
+ asyncio.ensure_future(a)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
- asyncio.ensure_future(a)

async def test():
return asyncio.ensure_future(notmuch())
@@ -254,10 +260,12 @@
self.assertTrue(t.done())
self.assertEqual(t.result(), 'ok')

+ # Deprecated in 3.10
- # Deprecated in 3.10.0, undeprecated in 3.10.9
asyncio.set_event_loop(self.loop)
self.addCleanup(asyncio.set_event_loop, None)
+ with self.assertWarns(DeprecationWarning) as cm:
+ t = asyncio.ensure_future(notmuch())
+ self.assertEqual(cm.warnings[0].filename, __file__)
- t = asyncio.ensure_future(notmuch())
self.assertIs(t._loop, self.loop)
self.loop.run_until_complete(t)
self.assertTrue(t.done())
@@ -1480,8 +1488,10 @@
self.addCleanup(a.close)

futs = asyncio.as_completed([a])
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
+ list(futs)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
- list(futs)

def test_as_completed_coroutine_use_running_loop(self):
loop = self.new_test_loop()
@@ -1497,14 +1507,17 @@
loop.run_until_complete(test())

def test_as_completed_coroutine_use_global_loop(self):
+ # Deprecated in 3.10
- # Deprecated in 3.10.0, undeprecated in 3.10.9
async def coro():
return 42

loop = self.new_test_loop()
asyncio.set_event_loop(loop)
self.addCleanup(asyncio.set_event_loop, None)
+ futs = asyncio.as_completed([coro()])
+ with self.assertWarns(DeprecationWarning) as cm:
+ futs = list(futs)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- futs = list(asyncio.as_completed([coro()]))
self.assertEqual(len(futs), 1)
self.assertEqual(loop.run_until_complete(futs[0]), 42)

@@ -1974,8 +1987,10 @@

inner = coro()
self.addCleanup(inner.close)
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
+ asyncio.shield(inner)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
- asyncio.shield(inner)

def test_shield_coroutine_use_running_loop(self):
async def coro():
@@ -1989,13 +2004,15 @@
self.assertEqual(res, 42)

def test_shield_coroutine_use_global_loop(self):
+ # Deprecated in 3.10
- # Deprecated in 3.10.0, undeprecated in 3.10.9
async def coro():
return 42

asyncio.set_event_loop(self.loop)
self.addCleanup(asyncio.set_event_loop, None)
+ with self.assertWarns(DeprecationWarning) as cm:
+ outer = asyncio.shield(coro())
+ self.assertEqual(cm.warnings[0].filename, __file__)
- outer = asyncio.shield(coro())
self.assertEqual(outer._loop, self.loop)
res = self.loop.run_until_complete(outer)
self.assertEqual(res, 42)
@@ -2933,7 +2950,7 @@
self.assertIsNone(asyncio.current_task(loop=self.loop))

def test_current_task_no_running_loop_implicit(self):
+ with self.assertRaises(RuntimeError):
- with self.assertRaisesRegex(RuntimeError, 'no running event loop'):
asyncio.current_task()

def test_current_task_with_implicit_loop(self):
@@ -3097,8 +3114,10 @@
return asyncio.gather(*args, **kwargs)

def test_constructor_empty_sequence_without_loop(self):
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaises(RuntimeError):
+ asyncio.gather()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
- asyncio.gather()

def test_constructor_empty_sequence_use_running_loop(self):
async def gather():
@@ -3111,10 +3130,12 @@
self.assertEqual(fut.result(), [])

def test_constructor_empty_sequence_use_global_loop(self):
+ # Deprecated in 3.10
- # Deprecated in 3.10.0, undeprecated in 3.10.9
asyncio.set_event_loop(self.one_loop)
self.addCleanup(asyncio.set_event_loop, None)
+ with self.assertWarns(DeprecationWarning) as cm:
+ fut = asyncio.gather()
+ self.assertEqual(cm.warnings[0].filename, __file__)
- fut = asyncio.gather()
self.assertIsInstance(fut, asyncio.Future)
self.assertIs(fut._loop, self.one_loop)
self._run_loop(self.one_loop)
@@ -3202,8 +3223,10 @@
self.addCleanup(gen1.close)
gen2 = coro()
self.addCleanup(gen2.close)
+ with self.assertWarns(DeprecationWarning) as cm:
+ with self.assertRaises(RuntimeError):
+ asyncio.gather(gen1, gen2)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
- asyncio.gather(gen1, gen2)

def test_constructor_use_running_loop(self):
async def coro():
@@ -3217,14 +3240,16 @@
self.one_loop.run_until_complete(fut)

def test_constructor_use_global_loop(self):
+ # Deprecated in 3.10
- # Deprecated in 3.10.0, undeprecated in 3.10.9
async def coro():
return 'abc'
asyncio.set_event_loop(self.other_loop)
self.addCleanup(asyncio.set_event_loop, None)
gen1 = coro()
gen2 = coro()
+ with self.assertWarns(DeprecationWarning) as cm:
+ fut = asyncio.gather(gen1, gen2)
+ self.assertEqual(cm.warnings[0].filename, __file__)
- fut = asyncio.gather(gen1, gen2)
self.assertIs(fut._loop, self.other_loop)
self.other_loop.run_until_complete(fut)

reverted:
--- b/Lib/test/test_asyncio/test_unix_events.py
+++ a/Lib/test/test_asyncio/test_unix_events.py
@@ -1740,8 +1740,7 @@

def test_child_watcher_replace_mainloop_existing(self):
policy = self.create_policy()
+ loop = policy.get_event_loop()
- loop = policy.new_event_loop()
- policy.set_event_loop(loop)

# Explicitly setup SafeChildWatcher,
# default ThreadedChildWatcher has no _loop property
reverted:
--- b/Lib/test/test_coroutines.py
+++ a/Lib/test/test_coroutines.py
@@ -2319,8 +2319,7 @@
def test_unawaited_warning_during_shutdown(self):
code = ("import asyncio\n"
"async def f(): pass\n"
+ "asyncio.gather(f())\n")
- "async def t(): asyncio.gather(f())\n"
- "asyncio.run(t())\n")
assert_python_ok("-c", code)

code = ("import sys\n"
reverted:
--- b/Modules/_asynciomodule.c
+++ a/Modules/_asynciomodule.c
@@ -332,6 +332,13 @@
return loop;
}

+ if (PyErr_WarnEx(PyExc_DeprecationWarning,
+ "There is no current event loop",
+ stacklevel))
+ {
+ return NULL;
+ }
+
policy = PyObject_CallNoArgs(asyncio_get_event_loop_policy);
if (policy == NULL) {
return NULL;
@@ -3085,11 +3092,6 @@
return get_event_loop(1);
}

-// This internal method is going away in Python 3.12, left here only for
-// backwards compatibility with 3.10.0 - 3.10.8 and 3.11.0.
-// Similarly, this method's Python equivalent in asyncio.events is going
-// away as well.
-// See GH-99949 for more details.
/*[clinic input]
_asyncio._get_event_loop
stacklevel: int = 3
@@ -1,19 +1,18 @@
From 105621b99cc30615c79b5aa3d12d6732e14b0d59 Mon Sep 17 00:00:00 2001
From: Frederik Rietdijk <fridh@fridh.nl>
Date: Mon, 28 Aug 2017 09:24:06 +0200
Subject: [PATCH] Don't use ldconfig and speed up uuid load
From 5330b6af9f832af59aa5c61d9ef6971053a8e709 Mon Sep 17 00:00:00 2001
From: Jonathan Ringer <jonringer117@gmail.com>
Date: Mon, 9 Nov 2020 10:24:35 -0800
Subject: [PATCH] CPython: Don't use ldconfig

---
Lib/ctypes/util.py | 70 ++----------------------------------------------------
Lib/uuid.py | 48 -------------------------------------
2 files changed, 2 insertions(+), 116 deletions(-)
Lib/ctypes/util.py | 77 ++--------------------------------------------
1 file changed, 2 insertions(+), 75 deletions(-)

diff --git a/Lib/ctypes/util.py b/Lib/ctypes/util.py
index 339ae8aa8a..2944985c30 100644
index 0c2510e161..7fb98af308 100644
--- a/Lib/ctypes/util.py
+++ b/Lib/ctypes/util.py
@@ -85,46 +85,7 @@ elif os.name == "posix":
import re, tempfile
@@ -100,53 +100,7 @@ def _is_elf(filename):
return thefile.read(4) == elf_header

def _findLib_gcc(name):
- # Run GCC's linker with the -t (aka --trace) option and examine the

@@ -52,15 +51,22 @@ index 339ae8aa8a..2944985c30 100644
- # Raised if the file was already removed, which is the normal
- # behaviour of GCC if linking fails
- pass
- res = re.search(expr, trace)
- res = re.findall(expr, trace)
- if not res:
- return None
- return os.fsdecode(res.group(0))
-
- for file in res:
- # Check if the given file is an elf file: gcc can report
- # some files that are linker scripts and not actual
- # shared objects. See bpo-41976 for more details
- if not _is_elf(file):
- continue
- return os.fsdecode(file)
+ return None


if sys.platform == "sunos5":
@@ -246,34 +207,7 @@ elif os.name == "posix":
@@ -268,34 +222,7 @@ def find_library(name, is64 = False):
else:

def _findSoname_ldconfig(name):

@@ -96,68 +102,6 @@ index 339ae8aa8a..2944985c30 100644

def _findLib_ld(name):
# See issue #9998 for why this is needed
diff --git a/Lib/uuid.py b/Lib/uuid.py
index 200c800b34..31160ace95 100644
--- a/Lib/uuid.py
+++ b/Lib/uuid.py
@@ -455,57 +455,9 @@ def _netbios_getnode():
continue
return int.from_bytes(bytes, 'big')

-# Thanks to Thomas Heller for ctypes and for his help with its use here.

-# If ctypes is available, use it to find system routines for UUID generation.
-# XXX This makes the module non-thread-safe!
_uuid_generate_time = _UuidCreate = None
-try:
- import ctypes, ctypes.util
- import sys

- # The uuid_generate_* routines are provided by libuuid on at least
- # Linux and FreeBSD, and provided by libc on Mac OS X.
- _libnames = ['uuid']
- if not sys.platform.startswith('win'):
- _libnames.append('c')
- for libname in _libnames:
- try:
- lib = ctypes.CDLL(ctypes.util.find_library(libname))
- except Exception:
- continue
- if hasattr(lib, 'uuid_generate_time'):
- _uuid_generate_time = lib.uuid_generate_time
- break
- del _libnames
-
- # The uuid_generate_* functions are broken on MacOS X 10.5, as noted
- # in issue #8621 the function generates the same sequence of values
- # in the parent process and all children created using fork (unless
- # those children use exec as well).
- #
- # Assume that the uuid_generate functions are broken from 10.5 onward,
- # the test can be adjusted when a later version is fixed.
- if sys.platform == 'darwin':
- if int(os.uname().release.split('.')[0]) >= 9:
- _uuid_generate_time = None
-
- # On Windows prior to 2000, UuidCreate gives a UUID containing the
- # hardware address. On Windows 2000 and later, UuidCreate makes a
- # random UUID and UuidCreateSequential gives a UUID containing the
- # hardware address. These routines are provided by the RPC runtime.
- # NOTE: at least on Tim's WinXP Pro SP2 desktop box, while the last
- # 6 bytes returned by UuidCreateSequential are fixed, they don't appear
- # to bear any relationship to the MAC address of any network device
- # on the box.
- try:
- lib = ctypes.windll.rpcrt4
- except:
- lib = None
- _UuidCreate = getattr(lib, 'UuidCreateSequential',
- getattr(lib, 'UuidCreate', None))
-except:
- pass

def _unixdll_getnode():
"""Get the hardware address on Unix using ctypes."""
--
2.14.1
2.33.1
@ -1,17 +0,0 @@
|
|||
--- a/Lib/py_compile.py
|
||||
+++ b/Lib/py_compile.py
|
||||
@@ -139,3 +139,4 @@
|
||||
source_stats = loader.path_stats(file)
|
||||
+ source_mtime = 1 if 'DETERMINISTIC_BUILD' in os.environ else source_stats['mtime']
|
||||
bytecode = importlib._bootstrap_external._code_to_bytecode(
|
||||
- code, source_stats['mtime'], source_stats['size'])
|
||||
+ code, source_mtime, source_stats['size'])
|
||||
--- a/Lib/importlib/_bootstrap_external.py
|
||||
+++ b/Lib/importlib/_bootstrap_external.py
|
||||
@@ -485,5 +485,5 @@
|
||||
if source_stats is not None:
|
||||
try:
|
||||
- source_mtime = int(source_stats['mtime'])
|
||||
+ source_mtime = 1
|
||||
except KeyError:
|
||||
pass
|
|
@@ -1,51 +0,0 @@
From 918201682127ed8a270a4bd1a448b490019e4ada Mon Sep 17 00:00:00 2001
From: Frederik Rietdijk <fridh@fridh.nl>
Date: Thu, 14 Sep 2017 10:00:31 +0200
Subject: [PATCH] ctypes.util: support LD_LIBRARY_PATH

Backports support for LD_LIBRARY_PATH from 3.6

---
 Lib/ctypes/util.py | 26 +++++++++++++++++++++++++-
 1 file changed, 25 insertions(+), 1 deletion(-)

diff --git a/Lib/ctypes/util.py b/Lib/ctypes/util.py
index e9957d7951..9926f6c881 100644
--- a/Lib/ctypes/util.py
+++ b/Lib/ctypes/util.py
@@ -219,8 +219,32 @@ elif os.name == "posix":
         def _findSoname_ldconfig(name):
             return None
 
+    def _findLib_ld(name):
+        # See issue #9998 for why this is needed
+        expr = r'[^\(\)\s]*lib%s\.[^\(\)\s]*' % re.escape(name)
+        cmd = ['ld', '-t']
+        libpath = os.environ.get('LD_LIBRARY_PATH')
+        if libpath:
+            for d in libpath.split(':'):
+                cmd.extend(['-L', d])
+        cmd.extend(['-o', os.devnull, '-l%s' % name])
+        result = None
+        try:
+            p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
+                                 stderr=subprocess.PIPE,
+                                 universal_newlines=True)
+            out, _ = p.communicate()
+            res = re.search(expr, os.fsdecode(out))
+            if res:
+                result = res.group(0)
+        except Exception as e:
+            pass  # result will be None
+        return result
+
     def find_library(name):
-        return _findSoname_ldconfig(name) or _get_soname(_findLib_gcc(name))
+        # See issue #9998
+        return _findSoname_ldconfig(name) or \
+           _get_soname(_findLib_gcc(name) or _findLib_ld(name))
 
 ################################################################
 # test code
--
2.14.1
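The entry point the backport above extends is `ctypes.util.find_library`; with the `_findLib_ld` fallback it can also consult `LD_LIBRARY_PATH` via `ld -t` when ldconfig and gcc find nothing. A quick check of the public API (the resolved soname varies by platform, e.g. `libm.so.6` on glibc):

```python
import ctypes.util

# find_library maps a bare library name to a loadable soname, or None
# if the library cannot be located on this system.
libm = ctypes.util.find_library('m')
print(libm)
```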
@@ -1,164 +0,0 @@
From 590c46bb04f79ab611b2f8fd682dd7e43a01f268 Mon Sep 17 00:00:00 2001
From: Frederik Rietdijk <fridh@fridh.nl>
Date: Mon, 28 Aug 2017 09:24:06 +0200
Subject: [PATCH] Don't use ldconfig and speed up uuid load

---
 Lib/ctypes/util.py | 70 ++----------------------------------------------------
 Lib/uuid.py        | 49 --------------------------------------
 2 files changed, 2 insertions(+), 117 deletions(-)

diff --git a/Lib/ctypes/util.py b/Lib/ctypes/util.py
index 7684eab81d..e9957d7951 100644
--- a/Lib/ctypes/util.py
+++ b/Lib/ctypes/util.py
@@ -95,46 +95,7 @@ elif os.name == "posix":
     import re, tempfile
 
     def _findLib_gcc(name):
-        # Run GCC's linker with the -t (aka --trace) option and examine the
-        # library name it prints out. The GCC command will fail because we
-        # haven't supplied a proper program with main(), but that does not
-        # matter.
-        expr = os.fsencode(r'[^\(\)\s]*lib%s\.[^\(\)\s]*' % re.escape(name))
-
-        c_compiler = shutil.which('gcc')
-        if not c_compiler:
-            c_compiler = shutil.which('cc')
-        if not c_compiler:
-            # No C compiler available, give up
-            return None
-
-        temp = tempfile.NamedTemporaryFile()
-        try:
-            args = [c_compiler, '-Wl,-t', '-o', temp.name, '-l' + name]
-
-            env = dict(os.environ)
-            env['LC_ALL'] = 'C'
-            env['LANG'] = 'C'
-            try:
-                proc = subprocess.Popen(args,
-                                        stdout=subprocess.PIPE,
-                                        stderr=subprocess.STDOUT,
-                                        env=env)
-            except OSError:  # E.g. bad executable
-                return None
-            with proc:
-                trace = proc.stdout.read()
-        finally:
-            try:
-                temp.close()
-            except FileNotFoundError:
-                # Raised if the file was already removed, which is the normal
-                # behaviour of GCC if linking fails
-                pass
-        res = re.search(expr, trace)
-        if not res:
-            return None
-        return os.fsdecode(res.group(0))
+        return None
 
 
     if sys.platform == "sunos5":
@@ -256,34 +217,7 @@ elif os.name == "posix":
     else:
 
         def _findSoname_ldconfig(name):
-            import struct
-            if struct.calcsize('l') == 4:
-                machine = os.uname().machine + '-32'
-            else:
-                machine = os.uname().machine + '-64'
-            mach_map = {
-                'x86_64-64': 'libc6,x86-64',
-                'ppc64-64': 'libc6,64bit',
-                'sparc64-64': 'libc6,64bit',
-                's390x-64': 'libc6,64bit',
-                'ia64-64': 'libc6,IA-64',
-                }
-            abi_type = mach_map.get(machine, 'libc6')
-
-            # XXX assuming GLIBC's ldconfig (with option -p)
-            regex = os.fsencode(
-                '\s+(lib%s\.[^\s]+)\s+\(%s' % (re.escape(name), abi_type))
-            try:
-                with subprocess.Popen(['/sbin/ldconfig', '-p'],
-                                      stdin=subprocess.DEVNULL,
-                                      stderr=subprocess.DEVNULL,
-                                      stdout=subprocess.PIPE,
-                                      env={'LC_ALL': 'C', 'LANG': 'C'}) as p:
-                    res = re.search(regex, p.stdout.read())
-                    if res:
-                        return os.fsdecode(res.group(1))
-            except OSError:
-                pass
+            return None
 
     def find_library(name):
         return _findSoname_ldconfig(name) or _get_soname(_findLib_gcc(name))
diff --git a/Lib/uuid.py b/Lib/uuid.py
index e96e7e034c..31160ace95 100644
--- a/Lib/uuid.py
+++ b/Lib/uuid.py
@@ -455,58 +455,9 @@ def _netbios_getnode():
             continue
         return int.from_bytes(bytes, 'big')
 
-# Thanks to Thomas Heller for ctypes and for his help with its use here.
 
-# If ctypes is available, use it to find system routines for UUID generation.
-# XXX This makes the module non-thread-safe!
 _uuid_generate_time = _UuidCreate = None
-try:
-    import ctypes, ctypes.util
-    import sys
 
-    # The uuid_generate_* routines are provided by libuuid on at least
-    # Linux and FreeBSD, and provided by libc on Mac OS X.
-    _libnames = ['uuid']
-    if not sys.platform.startswith('win'):
-        _libnames.append('c')
-    for libname in _libnames:
-        try:
-            lib = ctypes.CDLL(ctypes.util.find_library(libname))
-        except Exception:
-            continue
-        if hasattr(lib, 'uuid_generate_time'):
-            _uuid_generate_time = lib.uuid_generate_time
-            break
-    del _libnames
-
-    # The uuid_generate_* functions are broken on MacOS X 10.5, as noted
-    # in issue #8621 the function generates the same sequence of values
-    # in the parent process and all children created using fork (unless
-    # those children use exec as well).
-    #
-    # Assume that the uuid_generate functions are broken from 10.5 onward,
-    # the test can be adjusted when a later version is fixed.
-    if sys.platform == 'darwin':
-        import os
-        if int(os.uname().release.split('.')[0]) >= 9:
-            _uuid_generate_time = None
-
-    # On Windows prior to 2000, UuidCreate gives a UUID containing the
-    # hardware address. On Windows 2000 and later, UuidCreate makes a
-    # random UUID and UuidCreateSequential gives a UUID containing the
-    # hardware address. These routines are provided by the RPC runtime.
-    # NOTE: at least on Tim's WinXP Pro SP2 desktop box, while the last
-    # 6 bytes returned by UuidCreateSequential are fixed, they don't appear
-    # to bear any relationship to the MAC address of any network device
-    # on the box.
-    try:
-        lib = ctypes.windll.rpcrt4
-    except:
-        lib = None
-    _UuidCreate = getattr(lib, 'UuidCreateSequential',
-                          getattr(lib, 'UuidCreate', None))
-except:
-    pass
 
 def _unixdll_getnode():
     """Get the hardware address on Unix using ctypes."""
--
2.14.1
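The patch above strips the import-time ctypes scan for libuuid that slowed down `import uuid`. UUID generation still works because the module falls back to its pure-Python implementations when the C routines are unavailable:

```python
import uuid

# Version-4 UUIDs come from os.urandom and never needed libuuid,
# so removing the ctypes probing does not affect them.
u = uuid.uuid4()
assert u.version == 4
print(u)
```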
@@ -1,21 +0,0 @@
Backport from CPython 3.8 of a good list of tests to run for PGO.

Upstream commit:
    https://github.com/python/cpython/commit/4e16a4a31

Upstream discussion:
    https://bugs.python.org/issue36044

diff --git a/Makefile.pre.in b/Makefile.pre.in
index 00fdd21ce..713dc1e53 100644
--- a/Makefile.pre.in
+++ b/Makefile.pre.in
@@ -259,7 +259,7 @@ TCLTK_LIBS=
 # The task to run while instrumented when building the profile-opt target.
 # We exclude unittests with -x that take a rediculious amount of time to
 # run in the instrumented training build or do not provide much value.
-PROFILE_TASK=-m test.regrtest --pgo -x test_asyncore test_gdb test_multiprocessing_fork test_multiprocessing_forkserver test_multiprocessing_main_handling test_multiprocessing_spawn test_subprocess
+PROFILE_TASK=-m test.regrtest --pgo test_array test_base64 test_binascii test_binop test_bisect test_bytes test_bz2 test_cmath test_codecs test_collections test_complex test_dataclasses test_datetime test_decimal test_difflib test_embed test_float test_fstring test_functools test_generators test_hashlib test_heapq test_int test_itertools test_json test_long test_lzma test_math test_memoryview test_operator test_ordered_dict test_pickle test_pprint test_re test_set test_sqlite test_statistics test_struct test_tabnanny test_time test_unicode test_xml_etree test_xml_etree_c
 
 # report files for gcov / lcov coverage report
 COVERAGE_INFO=	$(abs_builddir)/coverage.info
@@ -1,237 +0,0 @@
Source: https://bugs.python.org/file47046/python-3.x-distutils-C++.patch
--- a/Lib/distutils/cygwinccompiler.py
+++ b/Lib/distutils/cygwinccompiler.py
@@ -125,8 +125,10 @@
         # dllwrap 2.10.90 is buggy
         if self.ld_version >= "2.10.90":
             self.linker_dll = "gcc"
+            self.linker_dll_cxx = "g++"
         else:
             self.linker_dll = "dllwrap"
+            self.linker_dll_cxx = "dllwrap"
 
         # ld_version >= "2.13" support -shared so use it instead of
         # -mdll -static
@@ -140,9 +142,13 @@
         self.set_executables(compiler='gcc -mcygwin -O -Wall',
                              compiler_so='gcc -mcygwin -mdll -O -Wall',
                              compiler_cxx='g++ -mcygwin -O -Wall',
+                             compiler_so_cxx='g++ -mcygwin -mdll -O -Wall',
                              linker_exe='gcc -mcygwin',
                              linker_so=('%s -mcygwin %s' %
-                                        (self.linker_dll, shared_option)))
+                                        (self.linker_dll, shared_option)),
+                             linker_exe_cxx='g++ -mcygwin',
+                             linker_so_cxx=('%s -mcygwin %s' %
+                                            (self.linker_dll_cxx, shared_option)))
 
         # cygwin and mingw32 need different sets of libraries
         if self.gcc_version == "2.91.57":
@@ -166,8 +172,12 @@
                 raise CompileError(msg)
         else: # for other files use the C-compiler
             try:
-                self.spawn(self.compiler_so + cc_args + [src, '-o', obj] +
-                           extra_postargs)
+                if self.detect_language(src) == 'c++':
+                    self.spawn(self.compiler_so_cxx + cc_args + [src, '-o', obj] +
+                               extra_postargs)
+                else:
+                    self.spawn(self.compiler_so + cc_args + [src, '-o', obj] +
+                               extra_postargs)
             except DistutilsExecError as msg:
                 raise CompileError(msg)
 
@@ -302,9 +312,14 @@
         self.set_executables(compiler='gcc -O -Wall',
                              compiler_so='gcc -mdll -O -Wall',
                              compiler_cxx='g++ -O -Wall',
+                             compiler_so_cxx='g++ -mdll -O -Wall',
                              linker_exe='gcc',
                              linker_so='%s %s %s'
                                        % (self.linker_dll, shared_option,
+                                          entry_point),
+                             linker_exe_cxx='g++',
+                             linker_so_cxx='%s %s %s'
+                                       % (self.linker_dll_cxx, shared_option,
                                           entry_point))
         # Maybe we should also append -mthreads, but then the finished
         # dlls need another dll (mingwm10.dll see Mingw32 docs)
--- a/Lib/distutils/sysconfig.py
+++ b/Lib/distutils/sysconfig.py
@@ -184,9 +184,11 @@
             _osx_support.customize_compiler(_config_vars)
             _config_vars['CUSTOMIZED_OSX_COMPILER'] = 'True'
 
-        (cc, cxx, opt, cflags, ccshared, ldshared, shlib_suffix, ar, ar_flags) = \
-            get_config_vars('CC', 'CXX', 'OPT', 'CFLAGS',
-                            'CCSHARED', 'LDSHARED', 'SHLIB_SUFFIX', 'AR', 'ARFLAGS')
+        (cc, cxx, cflags, ccshared, ldshared, ldcxxshared, shlib_suffix, ar, ar_flags) = \
+            get_config_vars('CC', 'CXX', 'CFLAGS', 'CCSHARED', 'LDSHARED', 'LDCXXSHARED',
+                            'SHLIB_SUFFIX', 'AR', 'ARFLAGS')
+
+        cxxflags = cflags
 
         if 'CC' in os.environ:
             newcc = os.environ['CC']
@@ -201,19 +204,27 @@
             cxx = os.environ['CXX']
         if 'LDSHARED' in os.environ:
             ldshared = os.environ['LDSHARED']
+        if 'LDCXXSHARED' in os.environ:
+            ldcxxshared = os.environ['LDCXXSHARED']
         if 'CPP' in os.environ:
             cpp = os.environ['CPP']
         else:
             cpp = cc + " -E"           # not always
         if 'LDFLAGS' in os.environ:
             ldshared = ldshared + ' ' + os.environ['LDFLAGS']
+            ldcxxshared = ldcxxshared + ' ' + os.environ['LDFLAGS']
         if 'CFLAGS' in os.environ:
-            cflags = opt + ' ' + os.environ['CFLAGS']
+            cflags = os.environ['CFLAGS']
             ldshared = ldshared + ' ' + os.environ['CFLAGS']
+        if 'CXXFLAGS' in os.environ:
+            cxxflags = os.environ['CXXFLAGS']
+            ldcxxshared = ldcxxshared + ' ' + os.environ['CXXFLAGS']
         if 'CPPFLAGS' in os.environ:
             cpp = cpp + ' ' + os.environ['CPPFLAGS']
             cflags = cflags + ' ' + os.environ['CPPFLAGS']
+            cxxflags = cxxflags + ' ' + os.environ['CPPFLAGS']
             ldshared = ldshared + ' ' + os.environ['CPPFLAGS']
+            ldcxxshared = ldcxxshared + ' ' + os.environ['CPPFLAGS']
         if 'AR' in os.environ:
             ar = os.environ['AR']
         if 'ARFLAGS' in os.environ:
@@ -222,13 +233,17 @@
             archiver = ar + ' ' + ar_flags
 
         cc_cmd = cc + ' ' + cflags
+        cxx_cmd = cxx + ' ' + cxxflags
         compiler.set_executables(
             preprocessor=cpp,
             compiler=cc_cmd,
             compiler_so=cc_cmd + ' ' + ccshared,
-            compiler_cxx=cxx,
+            compiler_cxx=cxx_cmd,
+            compiler_so_cxx=cxx_cmd + ' ' + ccshared,
             linker_so=ldshared,
             linker_exe=cc,
+            linker_so_cxx=ldcxxshared,
+            linker_exe_cxx=cxx,
             archiver=archiver)
 
         compiler.shared_lib_extension = shlib_suffix
--- a/Lib/distutils/unixccompiler.py
+++ b/Lib/distutils/unixccompiler.py
@@ -52,14 +52,17 @@
     # are pretty generic; they will probably have to be set by an outsider
     # (eg. using information discovered by the sysconfig about building
     # Python extensions).
-    executables = {'preprocessor' : None,
-                   'compiler'     : ["cc"],
-                   'compiler_so'  : ["cc"],
-                   'compiler_cxx' : ["cc"],
-                   'linker_so'    : ["cc", "-shared"],
-                   'linker_exe'   : ["cc"],
-                   'archiver'     : ["ar", "-cr"],
-                   'ranlib'       : None,
+    executables = {'preprocessor' : None,
+                   'compiler'     : ["cc"],
+                   'compiler_so'  : ["cc"],
+                   'compiler_cxx' : ["c++"],
+                   'compiler_so_cxx' : ["c++"],
+                   'linker_so'    : ["cc", "-shared"],
+                   'linker_exe'   : ["cc"],
+                   'linker_so_cxx' : ["c++", "-shared"],
+                   'linker_exe_cxx' : ["c++"],
+                   'archiver'     : ["ar", "-cr"],
+                   'ranlib'       : None,
                   }
 
     if sys.platform[:6] == "darwin":
@@ -108,12 +111,19 @@
 
     def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts):
         compiler_so = self.compiler_so
+        compiler_so_cxx = self.compiler_so_cxx
         if sys.platform == 'darwin':
             compiler_so = _osx_support.compiler_fixup(compiler_so,
                                                       cc_args + extra_postargs)
+            compiler_so_cxx = _osx_support.compiler_fixup(compiler_so_cxx,
+                                                          cc_args + extra_postargs)
         try:
-            self.spawn(compiler_so + cc_args + [src, '-o', obj] +
-                       extra_postargs)
+            if self.detect_language(src) == 'c++':
+                self.spawn(compiler_so_cxx + cc_args + [src, '-o', obj] +
+                           extra_postargs)
+            else:
+                self.spawn(compiler_so + cc_args + [src, '-o', obj] +
+                           extra_postargs)
         except DistutilsExecError as msg:
             raise CompileError(msg)
 
@@ -171,22 +181,16 @@
             ld_args.extend(extra_postargs)
         self.mkpath(os.path.dirname(output_filename))
         try:
-            if target_desc == CCompiler.EXECUTABLE:
-                linker = self.linker_exe[:]
+            if target_lang == "c++":
+                if target_desc == CCompiler.EXECUTABLE:
+                    linker = self.linker_exe_cxx[:]
+                else:
+                    linker = self.linker_so_cxx[:]
             else:
-                linker = self.linker_so[:]
-            if target_lang == "c++" and self.compiler_cxx:
-                # skip over environment variable settings if /usr/bin/env
-                # is used to set up the linker's environment.
-                # This is needed on OSX. Note: this assumes that the
-                # normal and C++ compiler have the same environment
-                # settings.
-                i = 0
-                if os.path.basename(linker[0]) == "env":
-                    i = 1
-                    while '=' in linker[i]:
-                        i += 1
-                linker[i] = self.compiler_cxx[i]
+                if target_desc == CCompiler.EXECUTABLE:
+                    linker = self.linker_exe[:]
+                else:
+                    linker = self.linker_so[:]
 
             if sys.platform == 'darwin':
                 linker = _osx_support.compiler_fixup(linker, ld_args)
--- a/Lib/_osx_support.py
+++ b/Lib/_osx_support.py
@@ -14,13 +14,13 @@
 # configuration variables that may contain universal build flags,
 # like "-arch" or "-isdkroot", that may need customization for
 # the user environment
-_UNIVERSAL_CONFIG_VARS = ('CFLAGS', 'LDFLAGS', 'CPPFLAGS', 'BASECFLAGS',
-                          'BLDSHARED', 'LDSHARED', 'CC', 'CXX',
-                          'PY_CFLAGS', 'PY_LDFLAGS', 'PY_CPPFLAGS',
-                          'PY_CORE_CFLAGS')
+_UNIVERSAL_CONFIG_VARS = ('CFLAGS', 'CXXFLAGS', 'LDFLAGS', 'CPPFLAGS',
+                          'BASECFLAGS', 'BLDSHARED', 'LDSHARED', 'LDCXXSHARED',
+                          'CC', 'CXX', 'PY_CFLAGS', 'PY_LDFLAGS',
+                          'PY_CPPFLAGS', 'PY_CORE_CFLAGS')
 
 # configuration variables that may contain compiler calls
-_COMPILER_CONFIG_VARS = ('BLDSHARED', 'LDSHARED', 'CC', 'CXX')
+_COMPILER_CONFIG_VARS = ('BLDSHARED', 'LDSHARED', 'LDCXXSHARED', 'CC', 'CXX')
 
 # prefix added to original configuration variable names
 _INITPRE = '_OSX_SUPPORT_INITIAL_'
--- a/Makefile.pre.in
+++ b/Makefile.pre.in
@@ -538,7 +538,7 @@
 		*\ -s*|s*) quiet="-q";; \
 		*) quiet="";; \
 	esac; \
-	$(RUNSHARED) CC='$(CC)' LDSHARED='$(BLDSHARED)' OPT='$(OPT)' \
+	$(RUNSHARED) CC='$(CC)' LDSHARED='$(BLDSHARED)' CFLAGS='$(PY_CFLAGS)' \
		_TCLTK_INCLUDES='$(TCLTK_INCLUDES)' _TCLTK_LIBS='$(TCLTK_LIBS)' \
		$(PYTHON_FOR_BUILD) $(srcdir)/setup.py $$quiet build
Some files were not shown because too many files have changed in this diff.