docs: move usage before how it works
Some checks failed: Conventional Commits / Conventional Commits (pull_request) has been cancelled.
Signed-off-by: Christina Sørensen <christina@cafkafk.com>
Parent: cc785d16d4 · Commit: 95d53c5c9a
1 changed file with 13 additions and 14 deletions
README.md (27 lines changed)
@@ -21,20 +21,6 @@ weather`](https://guix.gnu.org/manual/en/html_node/Invoking-guix-weather.html).
 
 </div>
 
-## How It Works
-
-The basic idea is that we construct a set of all requisites to build the
-top-level of a NixOS configuration, and then query Nix cache(s) for the narinfo.
-By doing this in a high-concurrency, parallel task runner (i.e. the tokio async
-runtime), and only querying the headers for status codes, we can reach
-impressive speeds, typically around 45~ network time.
-
-One of the biggest limiting factors regarding speed is building the
-`config.system.toplevel`, and finding the necessary requisites with `nix-store`.
-Caching the requisites is a future goal, so that we only have to build the
-`toplevel`, and then match against its derivation in cache, which should cut
-down the Nix part of the runtime by ~80%.
-
 ## Usage
 
 > **Note**
@@ -49,3 +35,16 @@ nix-weather --name myhost --config ~/git/my-nixos-config
 Here, we specify the name of the host, as specified at the flake output
 `nixosConfiguration`, and a path to the NixOS configuration flake.
 
+## How It Works
+
+The basic idea is that we construct a set of all requisites to build the
+top-level of a NixOS configuration, and then query Nix cache(s) for the narinfo.
+By doing this in a high-concurrency, parallel task runner (i.e. the tokio async
+runtime), and only querying the headers for status codes, we can reach
+impressive speeds, typically around 45~ network time.
+
+One of the biggest limiting factors regarding speed is building the
+`config.system.toplevel`, and finding the necessary requisites with `nix-store`.
+Caching the requisites is a future goal, so that we only have to build the
+`toplevel`, and then match against its derivation in cache, which should cut
+down the Nix part of the runtime by ~80%.
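
The `nix-store` step that the moved section mentions can be pictured as a small standalone program. This is an illustrative sketch only, not the nix-weather source; the argument handling and program name are assumptions made for the example:

```rust
// Sketch of the "find the requisites" step (illustrative, not nix-weather's code).
// Expects an already-built toplevel store path as the only argument.
use std::process::Command;

fn main() {
    let toplevel = std::env::args()
        .nth(1)
        .expect("usage: requisites <toplevel-store-path>");

    // `nix-store --query --requisites` prints the closure of the given store
    // path, one /nix/store entry per line.
    let out = Command::new("nix-store")
        .args(["--query", "--requisites", toplevel.as_str()])
        .output()
        .expect("failed to run nix-store");
    assert!(out.status.success(), "nix-store --query --requisites failed");

    for path in String::from_utf8_lossy(&out.stdout).lines() {
        println!("{path}");
    }
}
```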
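
The cache check itself, under the same caveat (a sketch, not the project's actual implementation), could look roughly like the following. It assumes the reqwest and futures crates alongside tokio (with the `macros` and `rt-multi-thread` features), hard-codes `https://cache.nixos.org`, and reads store paths from stdin so it composes with the query above:

```rust
// Illustrative sketch of the cache check: read store paths on stdin, HEAD the
// corresponding .narinfo on cache.nixos.org, and count the hits.
use std::io::BufRead;

use futures::{stream, StreamExt};

/// /nix/store/<hash>-<name>  ->  https://cache.nixos.org/<hash>.narinfo
fn narinfo_url(store_path: &str) -> Option<String> {
    let base = store_path.strip_prefix("/nix/store/")?;
    let hash = base.split('-').next()?;
    Some(format!("https://cache.nixos.org/{hash}.narinfo"))
}

#[tokio::main]
async fn main() {
    let paths: Vec<String> = std::io::stdin()
        .lock()
        .lines()
        .map_while(Result::ok)
        .collect();
    let total = paths.len();

    let client = reqwest::Client::new();
    // Only the headers matter: a 200 means the narinfo (and hence the path) is
    // in the cache; a 404 means it would have to be built or fetched elsewhere.
    let hits = stream::iter(paths.into_iter().filter_map(|p| narinfo_url(&p)))
        .map(|url| {
            let client = client.clone();
            async move {
                client
                    .head(url.as_str())
                    .send()
                    .await
                    .map(|resp| resp.status().is_success())
                    .unwrap_or(false)
            }
        })
        .buffer_unordered(64) // bounded number of in-flight requests
        .fold(0usize, |acc, cached| async move { acc + cached as usize })
        .await;

    println!("{hits}/{total} store paths already cached");
}
```

It could be fed with something like `nix-store --query --requisites ./result | cargo run --release`, though nix-weather itself derives the toplevel from the flake, as the Usage section describes. Bounding `buffer_unordered` keeps a fixed number of header-only requests in flight at once, which is the point of the "high-concurrency, parallel task runner" sentence: many cheap HEAD checks, none of which download a NAR.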