author     Alex Auvolat <alex@adnab.me>    2022-04-20 18:14:56 +0200
committer  Alex Auvolat <alex@adnab.me>    2022-04-20 18:14:56 +0200
commit     c99c0ffd30c3a6f3ea67323437f1a9773c3e283e (patch)
tree       a461ee8d2375aedeec459f99aea0e3b96d4537f4
parent     2685970256d5b87462baa4b95056ace3af6a29a7 (diff)
download   nixcfg-c99c0ffd30c3a6f3ea67323437f1a9773c3e283e.tar.gz
           nixcfg-c99c0ffd30c3a6f3ea67323437f1a9773c3e283e.zip
update README
-rw-r--r--  README.md | 18
1 file changed, 7 insertions(+), 11 deletions(-)
diff --git a/README.md b/README.md
index 854ee41..a2b5e8f 100644
--- a/README.md
+++ b/README.md
@@ -8,15 +8,6 @@ It sets up the following:
- Consul, with TLS
- Nomad, with TLS
-The following scripts are available here:
-
-- `deploy_nixos`, the main script that updates the NixOS config
-- `genpki.sh`, a script to generate Consul and Nomad's TLS PKI (run this once only)
-- `deploy_pki`, a script that sets up all of the TLS secrets
-- `upgrade_nixos`, a script to upgrade NixOS
-- `tlsproxy.sh`, a script that allows non-TLS access to the TLS-secured Consul and Nomad, by running a simple local proxy with socat
-- `tlsenv.sh`, a script to be sourced (`source tlsenv.sh`) that configures the correct environment variables to use the Nomad and Consul CLI tools with TLS
-
## Configuring the OS
This repo contains a bunch of scripts to configure NixOS on all cluster nodes.
@@ -27,12 +18,17 @@ Most scripts are invoked with the following syntax:
- `./deploy_<something> <cluster_name>` to run the deployment script on all nodes of the cluster `<cluster_name>`
- `./deploy_<something> <cluster_name> <node1> <node2> ...` to run the deployment script only on nodes `node1, node2, ...` of cluster `<cluster_name>`.
+All deployment scripts can use the following parameters passed as environment variables:
+
+- `SUDO_PASS`: optionally, the password for `sudo` on cluster nodes. If not set, you will be prompted for it at the beginning.
+- `SSH_USER`: optionally, the user to log in as over SSH. If not set, your local username will be used.
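
As a hypothetical example (the cluster name `prod` and the node names are purely illustrative), the PKI deployment script described further below could be invoked on two specific nodes like this:

```bash
# Log in over SSH as "admin" instead of the local username.
# SUDO_PASS is left unset, so the sudo password is prompted for interactively.
SSH_USER=admin ./deploy_pki prod node1 node2
```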
### Assumptions (how to setup your environment)
- you have SSH access to all of your cluster nodes (listed in `cluster/<cluster_name>/ssh_config`)
-- your account is in group `wheel` and you know its password (you need it to become root using `sudo`)
+- your account is in group `wheel` and you know its password (you need it to become root using `sudo`);
+ the password is the same on all cluster nodes (see below for password management tools)
- you have a clone of the secrets repository in your `pass` password store, for instance at `~/.password-store/deuxfleurs`
(scripts in this repo will read and write all secrets in `pass` under `deuxfleurs/cluster/<cluster_name>/`)
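
As a side note, with the secrets repository cloned into the `pass` store as described above, individual secrets can be inspected with the standard `pass` CLI; the entry name below is a made-up placeholder, not an actual secret of this repo:

```bash
# List and show entries under the hypothetical cluster "prod"
pass ls deuxfleurs/cluster/prod
pass show deuxfleurs/cluster/prod/some_secret
```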
@@ -109,7 +105,7 @@ Then, deploy the PKI on all nodes with:
**When adding a node to the cluster:** just do `./deploy_pki <cluster_name> <name_of_new_node>`
-### Adding administrators
+### Adding administrators and password management
Administrators are defined in the `cluster.nix` file for each cluster (they could also be defined in the site-specific Nix files if necessary).
This is where their public SSH keys for remote access are put.