Introduction
There has not been any third party security audit for this project.
Use this software at your own risk!
This project is currently pre-v1.0. Even though it is not expected, breaking changes might come with new versions.
The docs are not fully complete yet; some parts are missing and are being worked on.
What it is
Rauthy is an OpenID Connect (OIDC) Provider and Single Sign-On solution written in Rust.
Secure by default
It tries to be as secure as possible by default while still providing all the options needed to be compatible with older systems. For instance, if you create a new OIDC client, it activates ed25519 as the default algorithm for token signing and the S256 PKCE flow. This will not work with old clients which do not support it, but you can of course deactivate this to your liking.
MFA and Passwordless Login
Option 1: Password + Security Key (without User Verification):
Rauthy provides FIDO 2 / WebAuthn login flows. Once you have logged in on a new client with your username + password, you will get an encrypted cookie which will allow you to log in without a password from that moment on. You only need to have a FIDO-compliant passkey registered for your account.
Option 2: Passkey-Only Accounts:
Rauthy supports passkey-only accounts: you basically just provide your E-Mail address and log in with your FIDO 2 passkey. Your account will not even have / need a password. This login flow is restricted, though, to only those passkeys that can provide User Verification (UV), to always have at least 2FA security.
Discoverable credentials are discouraged with Rauthy. This means you will need to enter your E-Mail for the login (which will be auto-filled after the first time), but Rauthy passkeys do not use any storage on your device. For instance, if you have a YubiKey which can store 25 passkeys, Rauthy will not use a single slot there, even though it fully supports them.
Fast and efficient
The main goal was to provide an SSO solution like Keycloak and others while having a way lower footprint and being more efficient with resources. For instance, Rauthy can easily run a fully-blown SSO provider on just a Raspberry Pi. It makes extensive use of caching to be as fast as possible in cases where your database is further away or just a bit slower, because it is maybe running on an SBC from an SD card or in the cloud with the lowest storage bandwidth. Most things are even cached for several hours, and special care has been taken regarding cache eviction and invalidation.
A Rauthy deployment with the embedded SQLite, filled caches and a small set of clients and users configured typically uses only between 20 and 25 MB of memory! This is pretty awesome when comparing it to other existing solutions out there. If a password from a login is hashed, the memory consumption will of course go up way higher than this, depending on your configured Argon2id parameters, which you have fully under control.
To achieve this speed and efficiency, some additional design tradeoffs were made. For instance, some things are configured statically via the config file and not dynamically via the UI, although most of them are configured once and then never touched again.
Highly Available
Even though it makes extensive use of caching, you can run it in HA mode. It uses its own embedded distributed HA cache called redhac, which takes care of cache eviction on remote hosts. You can choose between SQLite for single-instance deployments and Postgres, if you need HA. MySQL support might come in the future.
Admin UI + User Account Dashboard
Unlike many other options, Rauthy does have an Admin UI which can be used to do almost any operation you might need to administrate the whole application and its users. There is also an account dashboard for each individual user, where users get a basic overview of their account and can self-manage some values, passwords, passkeys, and so on.
Some Screenshots and further introduction will follow in the future.
Client Branding
You have a simple way to create some kind of branding or stylized look for the login page for each client. The whole color theme can be changed, and each client can have its own custom logo. Additionally, if you modify the branding for the default rauthy client, it will not only change the look of the login page, but also of the account and admin pages.
Events and Auditing
Rauthy comes with an Event and Alerting-System. Events are generated in all kinds of scenarios. They can be sent via E-Mail, Matrix or Slack, depending on the severity and the configured level. You will see them in the Admin UI in real time, or you can subscribe to the events stream and externally handle them depending on your own business logic.
Brute-Force and basic DoS protection
Rauthy has brute-force and basic DoS protection for the login endpoint. Responses will be artificially delayed after too many invalid logins. It auto-blacklists IPs that exceed too many invalid logins, with automatic expiry of the blacklisting. You can, if you like, manually blacklist certain IPs as well via the Admin UI.
IoT Ready
With the possibility to run on devices with very limited resources and compatibility with the OAuth Device Authorization Grant (device_code) flow, Rauthy is a very good choice for IoT projects. The IdP itself can easily run on a Raspberry Pi, and all headless devices can be authenticated via the device_code flow. The rauthy-client has everything built in and ready, if you want to use Rust on the IoT devices as well. It has not been checked in a no_std environment yet, but the client implementation is pretty simple.
Scales to millions of users
Benchmarks for v1.0.0 have not been done yet, but after some first basic tests and generating a lot of dummy data, I can confirm that Rauthy has no issues handling millions of users. The first very basic tests have been done with SQLite and ~11 million users. All parts and functions kept being fast and responsive, with the only exception that the user search in the Admin UI slowed down with such a high user count. It took ~2-3 seconds at that point to get a result, which should be no issue at all so far (Postgres tests have not been done yet).
The only limiting factor at that point will be your configuration and needs for password hashing security. It really depends on how many resources you want to use for hashing (more resources == more secure) and how many concurrent logins at the exact same time you need to support.
Already in production
Rauthy is already being used in production, and it works with all typical OIDC clients (so far). It was just not an open source project for quite some time. Keycloak was a rough inspiration in certain places, and if something is working with Keycloak, it does with Rauthy too (again, so far).
Features List
- Fully working OIDC provider
- SQLite or Postgres as database
- Fast and efficient with minimal footprint
- Highly configurable
- Secure default values
- True passwordless accounts with E-Mail + Magic Link + Passkey
- Dedicated Admin UI
- Account dashboard UI for each user with self-service
- OpenID Connect Dynamic Client Registration
- OAuth 2.0 Device Authorization Grant flow
- Upstream Authentication Providers (Login with ...)
- Supports DPoP tokens for decentralized login flows
- Supports ephemeral, dynamic clients for decentralized login flows
- All End-User facing sites support i18n server-side translation with the possibility to add more languages
- Simple per client branding for the login page
- Custom roles
- Custom groups
- Custom scopes
- Custom user attributes
- User attribute binding to custom scopes
- Configurable password policy
- Admin API Keys with fine-grained access rights
- Events and alerting system
- Optional event persistence
- Dedicated forward_auth endpoint, in addition to the existing userinfo, with support for configurable trusted auth headers
- Optional event notifications via E-Mail, Matrix, Slack
- Optional Force MFA for the Admin UI
- Optional Force MFA for each individual client
- Additional encryption inside the database for the most critical entries
- Automatic database backups with configurable retention and auto-cleanup (SQLite only)
- Auto-encrypted backups (SQLite)
- Ability to push SQLite backups to S3 storage
- Auto-restore SQLite backups from file and S3
- High-Availability
- HA cache layer with its own (optional) mTLS connection
- Username enumeration prevention
- Login / Password hashing rate limiting
- Session client peer IP binding
- IP blacklisting feature
- Auto-IP blacklisting for login endpoints
- Argon2ID with config helper UI utility
- Housekeeping schedulers and cron jobs
- JSON Web Key Set (JWKS) autorotation feature
- Account conversions between traditional password and Passkey only
- Optional open user registration
- Optional user registration domain restriction
- App version update checker
- SwaggerUI documentation
- Configurable E-Mail templates for NewPassword + ResetPassword events
- Prometheus /metrics endpoint on separate port
- No-setup migrations between different databases (yes, even between SQLite and Postgres)
- Can serve a basic webid document
- Experimental FedCM support
Getting Started
Choose A Database
You only need to answer a single question to decide which database you should use:
Do you want / need a HA deployment?
If the answer is Yes, choose Postgres, otherwise choose SQLite.
SQLite is no performance bottleneck at all. After some first very rough tests, it does not have problems with even millions of users. The bottleneck will always be the password hashing algorithm settings, your needs for how secure it should be and how many concurrent logins you want to be able to handle (more on that later).
If you want to migrate from SQLite to Postgres at a later point, you can do this at any time very easily. Just take a look at the Reference Config and the variable MIGRATE_DB_FROM.
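As a rough sketch of how such a migration could look in the config (the values below are hypothetical example values; the exact URL formats and the migration semantics, such as removing the variable again after a successful run, are described in the Reference Config):

```text
# Hypothetical example values - check the Reference Config for exact formats.
# Point Rauthy at the new database ...
DATABASE_URL=postgresql://rauthy:SecretPassword@localhost:5432/rauthy
# ... and tell it where to migrate the existing data from.
MIGRATE_DB_FROM=sqlite:data/rauthy.db
```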
Container Images
Rauthy comes with different container images. The difference between them is not only x86 vs arm64, but the database driver under the hood as well. The reason is that almost all SQL queries are checked at compile time. To make this possible, different images need to be created. Apart from the database driver, there is no difference between them. You also can't use the "wrong" image by accident: if you try to use a Postgres image with a SQLite database URL or vice versa, Rauthy will yell at you at startup and panic on purpose.
- The "normal" container images can be used for Postgres
- The *-lite images use an embedded SQLite
- MIGRATE_DB_FROM (explained later) can be used with any combination of image / database
At the time of writing, you can run Rauthy either with Docker or inside Kubernetes. Both Getting Started guides do not cover all the setup you might want to do for going into production; especially the Docker guide is more for testing. You should take a look at the Reference Config to see what you might want to configure additionally. A dedicated Going into Production guide will be written in the future.
Docker
Testing / Local Evaluation
For getting a first look at Rauthy, you can start it with Docker (or any other container runtime) on your localhost. The image contains a basic default config which is sufficient for local testing. Rauthy has pretty strict cookie settings, and not all browsers treat localhost as being secure, therefore you should allow insecure cookies for testing locally:
docker run --rm \
-e COOKIE_MODE=danger-insecure \
-p 8080:8080 \
--name rauthy \
ghcr.io/sebadob/rauthy:0.26.2-lite
This will start the container in interactive mode with an in-memory SQLite database. Just take a look at the logs to see the URL and the first password.
If you want to test a bit more in depth, you can change to an on-disk database easily:
docker run -d \
-e COOKIE_MODE=danger-insecure \
-e DATABASE_URL=sqlite:data/rauthy.db \
-p 8080:8080 \
--name rauthy \
ghcr.io/sebadob/rauthy:0.26.2-lite
The second command does not start in interactive mode, and it does not delete the container on exit. This means the data will be persisted as long as the container itself is not erased, and you can shut down and restart to your liking without losing test data.
To see the logs and the new admin password, take a look with
docker logs -f rauthy
To delete the container, if you do not need it anymore, execute
docker stop rauthy && docker rm rauthy
To proceed, go to First Start
Production Setup
For going to production or to test more in-depth, you need to apply a config that matches your environment.
The first thing you might want to do is to add a volume mount for the database.
The second thing is to provide a more custom config.
Rauthy can either be configured via environment variables only, or you can provide a config file.
You can add environment variables to the startup command with the -e option, like shown in the on-disk SQLite command. A better approach, when you have a bigger config, would be to have all of it in a config file.
The following commands will work on Linux and Mac OS (even though not tested on Mac OS). If you are on Windows, you might replace the pwd command and just paste in the path directly. Since I am no Windows user myself, I cannot provide tested commands in this case.
1. We want to create a new directory for rauthy's persistent data
mkdir rauthy
2. Add the new config file.
This documentation is in an early version and remote links are not available yet; they will be added at a later point. For now, create a new file and paste the reference config:
vim rauthy/rauthy.cfg
3. Create a sub-directory for the Database files
mkdir rauthy/data
The rauthy container by default runs everything with user:group 10001:10001 for security reasons. To make this work with the default values, you have two options:
- Change the access rights:
chmod 0640 rauthy/rauthy.cfg
chmod 0700 -R rauthy/data
sudo chown -R 10001:10001 rauthy
- The other solution, if you do not have sudo rights, would be to make the data directory writable for everyone:
chmod a+w rauthy/data
This will make the directory writeable for everyone, so rauthy can create the database files inside the container with 10001:10001 again.
The safest approach would be to change the owner and group for these files on the host system. This needs sudo to edit the config, which may be a bit annoying, but at the same time it makes sure that you can only read the secrets inside it with sudo too.
4. Adapt the config to your liking.
Make sure to adjust the volume mount for the SQLite directory in step 5, if it differs from sqlite:data/rauthy.db
5. Start the container with volume mounts
docker run -d \
-v $(pwd)/rauthy/rauthy.cfg:/app/rauthy.cfg \
-v $(pwd)/rauthy/data:/app/data \
-p 8080:8080 \
--name rauthy \
ghcr.io/sebadob/rauthy:0.26.2-lite
6. Restrict DB files access even more
After Rauthy has done the first start, you could harden the access rights of the SQLite files even more. This would make sure that no one without sudo could just copy and read the SQLite files in some other place. Just execute once more:
sudo chmod 0700 -R rauthy/data
7. You can now proceed with the First Start steps.
Kubernetes
At the time of writing, there is no Helm Chart or Kustomize files available yet. The whole setup is pretty simple on purpose though, so it should not be a big deal to get it running inside Kubernetes.
Single Instance
Since rauthy uses pretty aggressive caching for different reasons, you cannot just have a single deployment and scale up the replicas without enabling HA_MODE. How to deploy a HA version is described below.
The steps to deploy on Kubernetes are pretty simple.
- Create namespace
- Create and apply the config
- Create and apply secrets
- Create and apply the stateful set
Create Namespace
For the purpose of this documentation, we assume that rauthy will be deployed in the rauthy namespace. If this is not the case for you, change the following commands accordingly.
kubectl create ns rauthy
Create and apply the config
This documentation will manage the Kubernetes files in a folder called rauthy.
mkdir rauthy && cd rauthy
Create the config file, paste the reference config and adjust it to your needs. There is no "nice 1-liner" available yet.
echo 'apiVersion: v1
kind: ConfigMap
metadata:
name: rauthy-config
namespace: rauthy
data:
rauthy.cfg: |+
PASTE CONFIG HERE - WATCH THE INDENTATION' > config.yaml
Open the config with your favorite editor and paste the reference config in place. Make sure to watch the indentation.
Do not include sensitive information, like for instance the ENC_KEYS, inside the normal config. Use the secrets from the next step for this. If you use SQLite, you can include the DATABASE_URL in the config, since it does not contain a password, but never do this for Postgres!
I recommend to just always set HQL_NODE_ID_FROM=k8s when deploying a StatefulSet. This will parse the Raft NodeID automatically from the K8s Pod / hostname, and you don't have to worry about HQL_NODE_ID. For instance, a Pod named rauthy-0 will be translated to HQL_NODE_ID=1 automatically.
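For illustration only, the ordinal-to-node-ID translation described above can be sketched in shell (the actual parsing happens inside Rauthy / Hiqlite; this is just the assumed logic):

```shell
# Hypothetical sketch of the pod-name -> HQL_NODE_ID translation
hostname="rauthy-0"           # K8s StatefulSet pod / hostname
ordinal="${hostname##*-}"     # strip everything up to the last '-' -> "0"
echo "HQL_NODE_ID=$(( ordinal + 1 ))"
# -> HQL_NODE_ID=1
```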
Create and apply secrets
touch secrets.yaml
Paste the following content into the secrets.yaml file:
apiVersion: v1
kind: Secret
metadata:
name: rauthy-secrets
namespace: rauthy
type: Opaque
stringData:
# Secret token, which is used to authenticate the cache members.
# Only necessary when `HA_MODE=true`
#CACHE_AUTH_TOKEN=
# The database driver will be chosen at runtime depending on
# the given DATABASE_URL format. Examples:
# Sqlite: 'sqlite:data/rauthy.db' or 'sqlite::memory:'
# Postgres: 'postgresql://User:PasswordWithoutSpecialCharacters@localhost:5432/DatabaseName'
#
# NOTE: The password in this case should be alphanumeric.
# Special characters could cause problems in the connection
# string.
DATABASE_URL:
# Secrets for Raft internal authentication as well as for the Hiqlite API.
# These must be at least 16 characters long and you should provide
# different ones for both variables.
HQL_SECRET_RAFT:
HQL_SECRET_API:
# You need to define at least one valid encryption key.
# These keys are used in various places, like for instance
# encrypting confidential client secrets in the database, or
# encryption cookies, and so on.
#
# The format must match:
# ENC_KEYS: |-
# q6u26vXV/M0NFQzhSSldCY01rckJNa1JYZ3g2NUFtSnNOVGdoU0E=
# bVCyaggQ/UzluN29DZW41M3hTSkx6Y3NtZmRuQkR2TnJxUTYzcjQ=
ENC_KEYS: |-
# This identifies the key ID from the `ENC_KEYS` list, that
# should actively be used for new encryption's.
ENC_KEY_ACTIVE:
# Needed for sending E-Mails for password resets and so on
SMTP_PASSWORD:
# The Webhook for Slack Notifications.
# If left empty, no messages will be sent to Slack.
#EVENT_SLACK_WEBHOOK=
# Matrix variables for event notifications.
# `EVENT_MATRIX_USER_ID` and `EVENT_MATRIX_ROOM_ID` are mandatory.
# Depending on your Matrix setup, additionally one of
# `EVENT_MATRIX_ACCESS_TOKEN` or `EVENT_MATRIX_USER_PASSWORD`
# is needed. If you log in to Matrix with User + Password, you
# may use `EVENT_MATRIX_USER_PASSWORD`.
# If you log in via OIDC SSO (or just want to use a session token
# you can revoke), you should provide `EVENT_MATRIX_ACCESS_TOKEN`.
# If both are given, the `EVENT_MATRIX_ACCESS_TOKEN` will be preferred.
#
# If left empty, no messages will be sent to Matrix.
# Format: `@<user_id>:<server address>`
#EVENT_MATRIX_USER_ID=
# Format: `!<random string>:<server address>`
#EVENT_MATRIX_ROOM_ID=
#EVENT_MATRIX_ACCESS_TOKEN=
#EVENT_MATRIX_USER_PASSWORD=
All variables specified here should be commented out in the rauthy-config from above. Make sure that things like CACHE_AUTH_TOKEN and ENC_KEYS are generated in a securely random way.
The DATABASE_URL with SQLite, like used in this example, does not contain sensitive information, but we will create it as a secret anyway to have an easier optional migration to Postgres later on.
Generate a new encryption key with ID in the correct format.
echo "$(openssl rand -hex 4)/$(openssl rand -base64 32)"
Paste the string, quoted, into the secrets for ENC_KEYS. The ENC_KEY_ID consists of the characters at the beginning, up to the first /. For instance, when
❯ echo "$(openssl rand -hex 4)/$(openssl rand -base64 32)"
d4d1a581/mNIqEpxz4UudPggRpF1QJtjVdZ6JEeVAHepDLZZYI2M=
the ENC_KEY_ID would be d4d1a581.
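If you want to script this, the key-ID extraction can be done in shell as well; a small sketch:

```shell
# Generate a new key in the '<id>/<base64 key>' format
key="$(openssl rand -hex 4)/$(openssl rand -base64 32)"

# The ENC_KEY_ID is everything before the first '/'
key_id="${key%%/*}"
echo "$key_id"    # 8 hex characters, e.g. d4d1a581
```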
You can generate safe values for both HQL_SECRET_RAFT and HQL_SECRET_API in many ways. You can just provide a random alphanumeric value, for instance:
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c48
or you can use the above openssl command again, even though Hiqlite does not need or utilize base64:
openssl rand -base64 48
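Since both Hiqlite secrets must be at least 16 characters long, a quick sanity check after generating them does not hurt; a small sketch:

```shell
# Generate a 48-character alphanumeric secret and verify its length
secret="$(tr -dc 'a-zA-Z0-9' </dev/urandom | head -c 48)"

# Hiqlite requires at least 16 characters
test "${#secret}" -ge 16 && echo "ok: ${#secret} chars"
# -> ok: 48 chars
```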
Create and apply the stateful set
touch sts.yaml
Paste the following content into the sts.yaml file:
apiVersion: v1
kind: Service
metadata:
name: rauthy
namespace: rauthy
spec:
selector:
app: rauthy
ports:
# If you use the HA feature later on, the port over which the cache layer does
# communicate.
- name: cache
port: 8000
targetPort: 8000
# Assuming that this example file will run behind a Kubernetes ingress and does
# use HTTP internally.
- name: http
port: 8080
targetPort: 8080
# Uncomment, if you change to direct HTTPS without a reverse proxy
#- name: https
# port: 8443
# targetPort: 8443
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rauthy
namespace: rauthy
labels:
app: rauthy
spec:
serviceName: rauthy
# Do not just scale up replicas without a proper HA Setup
replicas: 1
selector:
matchLabels:
app: rauthy
template:
metadata:
labels:
app: rauthy
spec:
securityContext:
fsGroup: 10001
containers:
- name: rauthy
image: ghcr.io/sebadob/rauthy:0.26.2-lite
imagePullPolicy: IfNotPresent
securityContext:
# User ID 10001 is actually built into the container at the creation for
# better security
runAsUser: 10001
runAsGroup: 10001
allowPrivilegeEscalation: false
ports:
- containerPort: 8000
# You may need to adjust this, if you decide to start in https only mode
# or use another port
- containerPort: 8080
- containerPort: 8443
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: rauthy-secrets
key: DATABASE_URL
# You must set both Hiqlite secrets even for a single node deployment
- name: HQL_SECRET_RAFT
valueFrom:
secretKeyRef:
name: rauthy-secrets
key: HQL_SECRET_RAFT
- name: HQL_SECRET_API
valueFrom:
secretKeyRef:
name: rauthy-secrets
key: HQL_SECRET_API
# Encryption keys used for encryption in many places
- name: ENC_KEYS
valueFrom:
secretKeyRef:
name: rauthy-secrets
key: ENC_KEYS
- name: ENC_KEY_ACTIVE
valueFrom:
secretKeyRef:
name: rauthy-secrets
key: ENC_KEY_ACTIVE
- name: SMTP_PASSWORD
valueFrom:
secretKeyRef:
name: rauthy-secrets
key: SMTP_PASSWORD
#- name: EVENT_SLACK_WEBHOOK
# valueFrom:
# secretKeyRef:
# name: rauthy-secrets
# key: EVENT_SLACK_WEBHOOK
#- name: EVENT_MATRIX_USER_ID
# valueFrom:
# secretKeyRef:
# name: rauthy-secrets
# key: EVENT_MATRIX_USER_ID
#- name: EVENT_MATRIX_ROOM_ID
# valueFrom:
# secretKeyRef:
# name: rauthy-secrets
# key: EVENT_MATRIX_ROOM_ID
#- name: EVENT_MATRIX_ACCESS_TOKEN
# valueFrom:
# secretKeyRef:
# name: rauthy-secrets
# key: EVENT_MATRIX_ACCESS_TOKEN
#- name: EVENT_MATRIX_USER_PASSWORD
# valueFrom:
# secretKeyRef:
# name: rauthy-secrets
# key: EVENT_MATRIX_USER_PASSWORD
volumeMounts:
- mountPath: /app/data
name: rauthy-data
readOnly: false
- mountPath: /app/rauthy.cfg
subPath: rauthy.cfg
name: rauthy-config
readOnly: true
readinessProbe:
httpGet:
# You may need to adjust this, if you decide to start in https only
# mode or use another port
scheme: HTTP
port: 8080
#scheme: HTTPS
#port: 8443
path: /auth/v1/ping
initialDelaySeconds: 1
periodSeconds: 10
livenessProbe:
httpGet:
# You may need to adjust this, if you decide to start in https only
# mode or use another port
scheme: HTTP
port: 8080
#scheme: HTTPS
#port: 8443
path: /auth/v1/health
initialDelaySeconds: 1
periodSeconds: 30
resources:
requests:
# Tune the memory requests value carefully. Make sure, that the
# pods request at least:
# `ARGON2_M_COST` / 1024 * `MAX_HASH_THREADS` Mi + idle memory
memory: 164Mi
# The CPU needs to be adjusted during runtime. This heavily
# depends on your use case.
cpu: 100m
limits:
# Be careful with the memory limit. You must make sure, that the
# (very costly) password hashing has enough memory available. If not,
# the application will crash. You do not really need a memory limit,
# since Rust is not a garbage collected language. Better take a close
# look at what the container actually needs during
# prime time and set the requested resources above properly.
#memory:
# A CPU limit may make sense in case of DDoS attacks or something
# like this, if you do not have external rate limiting or other
# mechanisms. Otherwise, `MAX_HASH_THREADS` is the main mechanism
# to limit resources.
cpu: 1000m
volumes:
- name: rauthy-config
configMap:
name: rauthy-config
volumeClaimTemplates:
- metadata:
name: rauthy-data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: 128Mi
#storageClassName: provideIfNeeded
Ingress
This example assumes that the deployment will run behind a Kubernetes ingress resource of your choice. It uses Traefik with the IngressRoute CRD. Nevertheless, the ingress is really simple, and it should be very easy to adapt to anything else.
Create the ingress.yaml:
touch ingress.yaml
Paste the following content into the ingress.yaml file and adjust it to your needs:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: rauthy-https
namespace: rauthy
spec:
entryPoints:
- websecure
routes:
- match: Host(`auth.example.com`)
kind: Rule
services:
- name: rauthy
port: 8080
Deploy
We are now ready to deploy:
kubectl apply -f .
And then to observe the deployment:
kubectl -n rauthy get pod -w
You can now proceed with the First Start steps.
Production Setup
Config
Going to production does not need too many additional steps.
TLS Certificates
The thing you will need are valid TLS certificates, of course. To get these, there are a lot of existing mechanisms. If you use an internal Certificate Authority (CA), you have your own tools to work with this anyway. If, however, you want to use something like Let's Encrypt, I suggest using cert-manager, which is easy and straightforward to use. An example of how to add a certificate for the Traefik IngressRoute from above:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: rauthy-https
namespace: rauthy
spec:
entryPoints:
- websecure
tls:
# Paste the name of the TLS secret here
secretName: secret-name-of-your-tls-certificate
routes:
- match: Host(`auth.example.com`)
kind: Rule
services:
- name: rauthy
port: 8080
You may want to add an HTTPS redirect as well:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: https-only
namespace: rauthy
spec:
redirectScheme:
scheme: https
permanent: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: rauthy-https
namespace: rauthy
spec:
entryPoints:
- web
routes:
- match: Host(`auth.example.com`)
kind: Rule
middlewares:
- name: https-only
services:
- name: rauthy
port: 8080
Additional steps
There are a few more things to do when going into production, but these are the same for Kubernetes and Docker and will be explained in later chapters.
You can now proceed with the First Start steps.
First Start
Initial admin password and login
With the very first start of rauthy, or rather with an empty database, it does not only create all the necessary schemas and initial data, but also safely generates some sensitive information. This includes a set of JSON Web Keys (JWKS) for token signing and some secrets.
The most important of these newly generated secrets is the default admin user's password.
When this is securely generated with the very first start, it will be logged to the console. This will only happen once and never again.
Logs with docker
docker logs -f rauthy
Logs with Kubernetes
kubectl -n rauthy logs -f rauthy-0
If you do a Kubernetes HA deployment directly, only the Pod rauthy-0 will log the initial password.
If you missed this log entry, you will not be able to log in.
If this is the case, you can delete the database / volume and just restart rauthy.
The log message contains a link to the accounts page, where you should then log in to immediately set a new password. Follow the link, use admin@localhost.de as the default admin and the copied value from the log as the password.
- When logged into the account, click EDIT and CHANGE PASSWORD to set a new password
- Log out of the account and try to log in to the Admin UI with the new password
Custom rauthy admin user
It is a good idea to either keep admin@localhost.de as a fallback user with just a very long password, or to disable it after a custom admin has been added.
When logged in to the Admin UI, you can add a new user. When the SMTP settings are correctly configured in the config, which we can test right now, you will receive an E-Mail with the very first password reset. If you do not receive an E-Mail after the first user registration, chances are you may have a problem with the SMTP setup.
To debug this, you can set LOG_LEVEL=debug in the config and then watch the logs after a restart.
rauthy_admin user role
The role which allows a user to access the Admin UI is rauthy_admin. If a user has this role assigned, they will be seen as an admin.
Under the hood, rauthy itself uses the OIDC roles and groups in the same way as all clients would do. This means you should neither delete the rauthy default client, nor the rauthy_admin role. There are mechanisms to prevent this from happening by accident via the UI, but you could possibly do it via a direct API call. There are some anti-lockout mechanisms in place in the backend, which are executed with every start, but being careful at this point is a good idea anyway.
Production Config
If you want to go into production, the configuration from the Getting Started section is most probably not enough.
The best thing you could do is just taking a look at the Reference Config and reading through all the possible options.
However, this section will give you a head start with the minimum you should set up. Depending on whether you started with Docker or Kubernetes so far, some values might already have been set.
Password Hashing Setup
Even though the options and tools in the Admin UI should be fully documented, I wanted to mention argon2id tuning here.
Rauthy uses the Argon2id hashing algorithm for passwords. This is the most expensive and compute-heavy operation done by the application, and the variables need to be tuned for every deployment to provide the best compromise of security, resource usage and user experience. The default values are a good start at the lower end of the "being safe scale" for a production deployment. However, you should at least take a look at them and verify that they work for you.
The Admin UI provides a utility which helps you find the best values for your deployment quickly. What and how to do is described in the Admin UI itself, I just want to guide you to this utility especially, since it is an important step security wise.
When you are logged in to the Admin UI, please navigate to Config -> Argon2 Parameters to find your values. After they have been found, apply them to the rauthy config and restart the deployment.
Keep in mind that if you run the application in a way where memory is limited, for instance inside Kubernetes with resource limits set too low, it will crash if either ARGON2_M_COST is set too high or the memory limit too low.
There is one additional, really important config variable that needs to be taken into account for the tuning. Since this operation is really resource-intensive, you can limit the number of threads which can run hashing operations in parallel. This is really important when we think about constrained memory again. MAX_HASH_THREADS limits the maximum number of parallel password hashes at the exact same time, to never exceed system memory while still allowing a good amount of security. The default value is 2.
The rule is simple: Allow as many resources as possible for hashing to have the maximum amount of security, while restricting it as much as necessary.
For smaller deployments, set MAX_HASH_THREADS=1
, which will technically allows only
one user login at the exact same
time. This value makes an external rate limiting for the login obsolete (while you may add some
for the others).
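As a sketch, a tuned config for a small deployment might end up looking like the snippet below. The concrete numbers are placeholders, not recommendations, and must come from the Admin UI utility for your own hardware; the variable names other than ARGON2_M_COST and MAX_HASH_THREADS are assumptions based on the usual argon2 parameter set.

```
# Values found via Admin UI -> Config -> Argon2 Parameters
# (example numbers only, NOT recommendations)
ARGON2_M_COST=131072
ARGON2_T_COST=4
ARGON2_P_COST=8
# Limit parallel hashing operations to keep memory usage bounded
MAX_HASH_THREADS=2
```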
Passkeys / WebAuthn
Passkey Only Accounts
Rauthy provides the ability to create Passkey only accounts.
These accounts do not have any password at all. The user logs in via E-Mail plus the Passkey. Only keys with additional User Verification (UV) will be accepted for these accounts. This makes sure that they are 2FA / MFA secured (depending on the device) at all times.
You can choose the account type either when you open the initial password reset link you get via E-Mail, or you can convert a traditional password account to a Passkey-only account in your account view, if you have at least one Passkey with additional UV registered.
Passkey only accounts provide a few benefits:
- no need to remember or store any passwords
- way easier and faster logging in
- always 2FA / MFA
- strongest type of authentication
- no need to satisfy password policies
- no need to reset your password after it has been expired
Passkey-only accounts cannot use the traditional password reset E-Mails.
This is a drawback and a benefit at the same time: there is no way to take over the account if the E-Mail account has been compromised, but at the same time the user relies on an admin to reset the MFA devices, if no backup exists or all are lost.
Unfortunately, passkeys have one big drawback when you use the discoverable credentials / resident keys feature. At first glance it is really cool and the best possible UX, because the user does not even need to provide a username / E-Mail manually, but the amount of resident keys a passkey can store is always limited.
There are devices that can get locked up completely if you try to register resident keys beyond the capacity of the device itself. For this reason, Rauthy strictly discourages the use of resident keys and will always request the user to provide the E-Mail, so you can never end up in such a situation with Rauthy. Non-discoverable credentials are not limited per device.
Rauthy remembers the last E-Mail that has been used for a login on each machine. In the end, this provides the same best possible UX that you would get with discoverable credentials, but without the drawback of limited storage.
Android finally added support for biometric UV in September 2023. This made it possible to implement this feature in Rauthy without sacrificing security.
However, at the time of writing (2024/06/05), PIN / biometric UV via NFC keys like Yubikeys still does not work.
Sadly, Google decided not to implement this feature for any keys other than their own implementation.
If you want to register an Android device for a Passkey-only account, but you are using for instance Yubikeys with PIN UV, you can use the following trick (works only with the latest Play Store version):
- Create a password for your account, if it does not exist yet
- Remove all registered passkeys
- Log in to your account view on your Android device and another device that works with your Yubikey
- With both devices logged in at the same time:
- Register a new passkey with Android and choose "this device", which will create a Passkey flow with your fingerprint
- Register the Yubikey on the other device
- You should now have 2 Passkeys: Android + Yubikey
- Navigate to the Password page inside your account on any device and convert it to Passkey only
- You should now be able to log in on your Android device with Passkey only and with your Yubikey
Config
You should use Passkeys / WebAuthn in production for 2FA / MFA. To make sure it works correctly, adjust the following variables in your config.
RP_ID
This is the Relying Party (RP) ID, which should be your effective domain name.
Let's say our application is available at auth.example.com
, then this should be:
RP_ID=auth.example.com
When the RP_ID changes, already registered devices will stop working and users cannot log in anymore! Be very careful if you want / need to do this in production.
RP_ORIGIN
The second important variable is the RP_ORIGIN. This needs to be set to the scheme + URL containing the effective domain name + port.
The RP_ORIGIN must always include the port number, even if it is just the default 443 for HTTPS.
In this example, assuming Rauthy will be available at port 443, correct would be:
RP_ORIGIN=https://auth.example.com:443
RP_NAME
This variable can be set to anything "pretty".
It may be shown to the user in a prompt like "RP_NAME requests your security key ...". Whether this is shown depends on the OS and the browser the client uses. Firefox, for instance, does not show it at the time of writing.
You can change the RP_NAME later on without affecting the validation of already registered keys.
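Putting the three values together for the auth.example.com example, a minimal WebAuthn config sketch could look like this (the RP_NAME value is just an arbitrary pretty string of my choosing):

```
RP_ID=auth.example.com
RP_ORIGIN=https://auth.example.com:443
RP_NAME="Example SSO"
```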
WEBAUTHN_RENEW_EXP
For all accounts that are not Passkey-only, Rauthy will always prompt a user at least once for the password on a new machine, even with active passkeys. The keys are used either as a strong second factor, when they do not work with a PIN, or they bump the whole login up to real MFA, if the OS / browser / key supports this.
When a user has logged in successfully on a new device with active 2FA / MFA, Rauthy will set an encrypted cookie as a "remember me". The lifetime of this cookie can be configured with WEBAUTHN_RENEW_EXP. The default value is 2160 hours.
As long as this cookie is present and can be decrypted by the backend, the user can log in from this very device with the registered Passkey only, which makes for a very good user experience for the whole login flow. The E-Mail will already be filled in automatically and only a single click on the login button is necessary.
Encryption
Rauthy applies additional encryption to values in lots of places, like for instance client_secrets in the database or session cookies. In the Getting Started for Kubernetes, we have set up ENC_KEYS and ENC_KEY_ACTIVE.
ENC_KEYS defines the static keys used for additional data encryption in a few places. This value may contain multiple keys, if you want to rotate them at some point without breaking the decryption of already existing secrets.
ENC_KEY_ACTIVE defines the key inside ENC_KEYS which will be used as the default. This means that all new encryptions performed by the backend will use the key with the given ID.
Setup
If you followed the Getting Started for Kubernetes, you already completed this step. If not, this is how you can generate an encryption key.
echo "$(openssl rand -hex 4)/$(openssl rand -base64 32)"
The part before the first / is the key ID. This has to be between 2 and 20 characters and should not contain any special ones.
The part after the first / is the key itself, encoded as base64. This value must be exactly 32 bytes long. If it is not, Rauthy will yell at startup and panic early.
If you have generated a key, let's say the output was
❯ echo "$(openssl rand -hex 4)/$(openssl rand -base64 32)"
90eb6d69/U9wZG4GS/94pVh6iTH1ijf+kj+tXJHKkQNsp5eImMQI=
Your config values should look like this:
ENC_KEYS="
90eb6d69/U9wZG4GS/94pVh6iTH1ijf+kj+tXJHKkQNsp5eImMQI=
"
ENC_KEY_ACTIVE=90eb6d69
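If you want to double-check a generated value before putting it into the config, a quick shell sketch can verify that the part after the first / really decodes to 32 bytes (using the example key from above):

```shell
KEY='90eb6d69/U9wZG4GS/94pVh6iTH1ijf+kj+tXJHKkQNsp5eImMQI='
KEY_ID="${KEY%%/*}"   # everything before the first '/'
B64="${KEY#*/}"       # everything after the first '/'

echo "key id: $KEY_ID"
# must print 32, otherwise Rauthy will panic at startup
printf '%s' "$B64" | base64 -d | wc -c
```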
You can add more keys if you like, separated by new lines, which is needed for the key rotation described below. The ENC_KEY_ACTIVE will be the key used for all new encryptions.
Key Rotation
A good practice is to rotate encryption keys from time to time. Rauthy does auto-rotate the keys for signing tokens, but the admin is responsible for the encryption keys, since they have a greater impact.
1. Add a new key to the ENC_KEYS in your secrets
You must not remove a current key before the migration has been done via the UI. If the old key is gone, the migration will fail.
2. Generate a new key + id
echo "$(openssl rand -hex 4)/$(openssl rand -base64 32)"
The part before the first / is the ENC_KEY_ID and the part after it is the key, base64 encoded. You might set the new ENC_KEY_ACTIVE to the newly generated key ID.
Keep in mind that you need to ADD this to your existing keys and not just replace them! If you just replace them, almost everything will break and fall apart.
The final format of the ENC_KEYS
should look something like this, for instance:
ENC_KEYS="
Wx1zrbLF/5vTaB7LdUSg1aTecmqHJOu2+RnU6zgTwNkDQU52Y3JM=
6uf5QebA/9DsKMoq8A+Gn2WQrTcSpz5sg751yYs3IJlkw3dn0rII=
"
In this example, if the first key should be the new active default, set your ENC_KEY_ACTIVE to
ENC_KEY_ACTIVE=Wx1zrbLF
3. Set the ENC_KEY_ACTIVE
to the ID of your newly generated key
This will make sure that all new encryptions use the new key. If you do not care about removing the old keys, because you maybe just want to rotate since it is good practice, the secrets will migrate "by themselves" over time.
If Rauthy finds any secrets during normal operation that have been encrypted with an older key than the current ENC_KEY_ACTIVE, it will re-encrypt these secrets and update the values.
This means you may just stop at this point, if this is good enough for you.
4. Migrate Keys
If you however want to trigger a re-encryption of all existing secrets on purpose, there is a small tool in the Admin UI which helps you with this.
Log in to the Admin UI and navigate to Config
-> Encryption
Keys
.
You will see the currently recognized keys and the currently active ID.
Make sure that the ID you want to migrate secrets to is selected, then execute the migration. Please keep in mind that if you have a lot of data, it might take a few seconds to perform this operation. This will migrate all encrypted data for existing OIDC clients, all JWKs, and so on, to the new key.
5. Remove old keys
After a successful migration via the UI tool, you could remove old keys from the ENC_KEYS value, but this is not recommended as long as you did not have a known data breach. Just keep them around for some time because of cookies that are still encrypted with older keys.
All cookies are encrypted with the ENC_KEY_ACTIVE. This means that if you remove a key from the ENC_KEYS which has been used to encrypt cookies, the user will be prompted to log in again, even if the cookies have not expired yet.
Rauthy uses ChaCha20Poly1305 for all encryption. AES-GCM is deliberately not used, because it has some attack vectors when used without hardware acceleration.
Usually, devices these days all come with dedicated AES acceleration, even embedded ones. With VMs, however, this is often a different story, and it is not guaranteed that you will have AES acceleration when you spin up a VM in some cloud. Rauthy tries to be as secure as possible by default, and therefore ChaCha20Poly1305 has a slight advantage over AES.
Logging and Auditing
Rauthy logs most things to the console, depending on the configuration of the different log levels. In addition, more important events can be sent to Matrix, Slack or via E-Mail. All of this is highly configurable, and you should be able to achieve whatever you need. All events are logged to the console as well with their configured level. This means that if Rauthy fires an event of type NewUser with the level info and you have configured at least the same log level, you will see the event in the console as well. So you could rely on log aggregation alone with existing tools, without configuring other targets, and still catch everything.
Logging
You can configure not only different levels for logging, but also different targets / situations.
LOG_LEVEL
The LOG_LEVEL
variable configures the default logging in most situations. This
defines the logging for instance
for logging information from different function runs or things that have been triggered.
# This is the log level for stdout logs
# Accepts: error, info, debug, trace (default: info)
LOG_LEVEL=info
LOG_LEVEL_ACCESS
To change the logging behavior for access logs to the API endpoints, you need to set LOG_LEVEL_ACCESS.
If you have access logging configured at your firewall or reverse proxy, you can disable LOG_LEVEL_ACCESS completely to reduce duplicated log output.
# This is a special config which allows the configuration of
# customized access logs. These logs will be logged with each
# request in addition to the normal LOG_LEVEL logs.
# The following values are valid:
# - Debug
# CAUTION: The Debug setting logs every information available
# to the middleware which includes SENSITIVE HEADERS
# DO NOT use the Debug level in a working production environment!
# - Verbose
# Verbose logging without headers - generates huge outputs
# - Basic
# Logs access to all endpoints apart from the Frontend ones
# which serve all the js, css, ...
# - Modifying
# Logs only requests to modifying endpoints and skips all GET
# - Off
# default: Modifying
LOG_LEVEL_ACCESS=Basic
Events
Events are used for auditing, so you never miss anything important. If something important happens, you usually need to inspect logs to catch it, but why would you, if you did not notice any problems? This is where Rauthy Events help you out. You need to set up a basic configuration for the Event targets, and can then customize the different levels.
Basic Setup
Rauthy has the following Event targets built-in:
- E-Mail
- Matrix
- Slack
To be able to receive Events via E-Mail, you need to have an SMTP server set up and a working connection. With a working SMTP, you only need to set EVENT_EMAIL, that's it.
# The E-Mail address event notifications should be sent to.
EVENT_EMAIL=admin@localhost.de
Matrix
Matrix is often deployed on home servers that may not even have real TLS certificates or, if running behind closed doors, may use self-signed certificates. To make it work in all of these situations, you can configure quite a lot for the connection to Matrix. In the end, you only need some credentials and a room ID, so Rauthy knows where it should post the events.
# Matrix variables for event notifications.
# `EVENT_MATRIX_USER_ID` and `EVENT_MATRIX_ROOM_ID` are mandatory.
# Depending on your Matrix setup, additionally one of
# `EVENT_MATRIX_ACCESS_TOKEN` or `EVENT_MATRIX_USER_PASSWORD` is needed.
# If you log in to Matrix with User + Password, you may use `EVENT_MATRIX_USER_PASSWORD`.
# If you log in via OIDC SSO (or just want to use a session token you can revoke),
# you should provide `EVENT_MATRIX_ACCESS_TOKEN`.
# If both are given, the `EVENT_MATRIX_ACCESS_TOKEN` will be preferred.
#
# If left empty, no messages will be sent to Matrix.
# Format: `@<user_id>:<server address>`
#EVENT_MATRIX_USER_ID=
# Format: `!<random string>:<server address>`
#EVENT_MATRIX_ROOM_ID=
#EVENT_MATRIX_ACCESS_TOKEN=
#EVENT_MATRIX_USER_PASSWORD=
# Optional path to a PEM Root CA certificate file for the Matrix client.
#EVENT_MATRIX_ROOT_CA_PATH=path/to/my/root_ca_cert.pem
# May be set to disable the TLS validation for the Matrix client.
# default: false
#EVENT_MATRIX_DANGER_DISABLE_TLS_VALIDATION=false
# The default behavior is, that Rauthy will panic at startup if it cannot connect
# to a configured Matrix server. The reason is that event notifications cannot be
# dropped silently.
# However, if you use a self-hosted Matrix server which uses Rauthy as its OIDC
# provider and both instances went offline, you will have a chicken and egg problem:
# - Rauthy cannot connect to Matrix and will panic
# - Your Matrix server cannot connect to Rauthy and will panic
# To solve this issue, you can temporarily set this value to 'true' and revert
# back, after the system is online again.
# default: false
#EVENT_MATRIX_ERROR_NO_PANIC=false
I suggest that you create a separate room for these events. If you experience issues with Matrix encryption, you should maybe disable encryption for the Events room. I came across some weird errors from Matrix in the past when I was using a session token that had been created on another machine.
Slack
To receive messages via Slack, you need to create a legacy webhook inside your Slack account. This is then the only config variable you need to set:
# The Webhook for Slack Notifications.
# If left empty, no messages will be sent to Slack.
#EVENT_SLACK_WEBHOOK=
Custom Target
If you need your events to be sent somewhere custom, you can always create an API key with read access for Events. Then write a small app that simply listens to the events stream and processes or just forwards the events to wherever you need them.
- Log in to the Admin UI and create an API key with
read
access for events.
- Save the key, expand the new entry, navigate to Secret and Generate New
- Copy the new Secret from the hidden input that will appear.
You will also see 2 prepared curl statements to test your new API key. From the test, you should get an output similar to this:
{
"name": "events",
"created": 1720428190,
"expires": null,
"access": [
{
"group": "Events",
"access_rights": [
"read"
]
}
]
}
- Listen to the events stream.
You need to add the API key in the Authorization header with the prefix API-Key, for instance like:
Authorization: API-Key events$SUcpBxcPmfwH9z1Kb4ExOUYDSXpxOj9mFLadjuQ1049XaWzdWB328aa97k2nj21E
You can either periodically fetch via the /auth/v1/events endpoint (see the Swagger documentation in the Admin UI), or listen to /auth/v1/events/stream, which is a Server Sent Events stream.
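As a sketch, the two access patterns with curl could look like the following. The host is a placeholder and the API key is the example value from above; check the Swagger documentation for the exact method and query parameters of the fetch endpoint.

```shell
# Example key from the Admin UI; replace with your own secret
API_KEY='events$SUcpBxcPmfwH9z1Kb4ExOUYDSXpxOj9mFLadjuQ1049XaWzdWB328aa97k2nj21E'
AUTH_HEADER="Authorization: API-Key ${API_KEY}"

# one-off fetch of persisted events
# (check the Swagger docs for method / query params)
curl -s -H "$AUTH_HEADER" https://auth.example.com/auth/v1/events

# long-lived Server Sent Events stream (-N disables output buffering)
curl -sN -H "$AUTH_HEADER" https://auth.example.com/auth/v1/events/stream
```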
The events will be sent in JSON format and have the following content:
struct Event {
    pub id: String,
    pub timestamp: i64,
    pub level: EventLevel,
    pub typ: EventType,
    pub ip: Option<String>,
    pub data: Option<i64>,
    pub text: Option<String>,
}

// the `EventLevel`s will be converted to lower case
enum EventLevel {
    Info,
    Notice,
    Warning,
    Critical,
}

enum EventType {
    InvalidLogins,
    IpBlacklisted,
    IpBlacklistRemoved,
    JwksRotated,
    NewUserRegistered,
    NewRauthyAdmin,
    NewRauthyVersion,
    PossibleBruteForce,
    RauthyStarted,
    RauthyHealthy,
    RauthyUnhealthy,
    SecretsMigrated,
    UserEmailChange,
    UserPasswordReset,
    Test,
}
Keep in mind that, depending on the proxies you might have in between, your connection to the events stream might be killed after some timeout without messages. Your app should handle this and re-connect on connection loss, or you could fix it with a different proxy config or keep-alive messages.
Testing
You can test your event handler / pipeline setup when you log in to the Admin UI: simply press the Test button in the upper right corner of the Events sidebar. This will send a test event to all configured targets and listeners, independent of any configured event level restrictions. Test events will always be sent as long as everything is working properly.
EVENT_NOTIFY_LEVEL
You can set different levels for each target. By default, only events with warning or higher are sent via E-Mail, while Matrix / Slack receive events with notice or higher. If you want a different behavior, you can configure it:
# The notification level for events. Works the same way as a logging level.
# For instance: 'notice' means send out a notifications for all events with
# the notice level or higher.
# Possible values:
# - info
# - notice
# - warning
# - critical
#
# default: 'warning'
EVENT_NOTIFY_LEVEL_EMAIL=warning
# default: 'notice'
EVENT_NOTIFY_LEVEL_MATRIX=notice
# default: 'notice'
EVENT_NOTIFY_LEVEL_SLACK=notice
Event Persistence
Rauthy's Admin UI has a component for inspecting past Events for analytical purposes. By default, events with the level info or higher are persisted for 31 days. After this period, they will be removed from the database to keep it clean.
You can configure both the level from which events should be persisted, for instance "only persist events with level warning or higher", and for how many days they should be kept.
# Define the level from which on events should be persisted inside the
# database. All events with a lower level will be lost, if there is no
# active event subscriber.
# Possible values:
# - info
# - notice
# - warning
# - critical
# default: 'info'
#EVENT_PERSIST_LEVEL=info
# Define the number of days when events should be cleaned up from the database.
# default: 31
#EVENT_CLEANUP_DAYS=31
EVENT_LEVEL_ Values
There are a lot of values starting with EVENT_LEVEL_. These can be used to configure the level for the different kinds of events fired by Rauthy.
For instance, let's say you only want to receive events with a level of warning or higher, but you also want a notification when there are more than 7 failed logins from an IP. By default, 7 failed logins would trigger an event with the level notice. You can then set EVENT_LEVEL_FAILED_LOGINS_7=warning to also receive these, while otherwise still only receiving warning events.
The full list of these configurable levels is the following:
# The level for the generated Event after a new user has been registered.
# default: info
EVENT_LEVEL_NEW_USER=info
# The level for the generated Event after a user has changed his E-Mail
# default: notice
EVENT_LEVEL_USER_EMAIL_CHANGE=notice
# The level for the generated Event after a user has reset its password
# default: notice
EVENT_LEVEL_USER_PASSWORD_RESET=notice
# The level for the generated Event after a user has been given the
# 'rauthy_admin' role
# default: notice
EVENT_LEVEL_RAUTHY_ADMIN=notice
# The level for the generated Event after a new App version has been found
# default: notice
EVENT_LEVEL_RAUTHY_VERSION=notice
# The level for the generated Event after the JWKS has been rotated
# default: notice
EVENT_LEVEL_JWKS_ROTATE=notice
# The level for the generated Event after DB secrets have been migrated
# to a new key
# default: notice
EVENT_LEVEL_SECRETS_MIGRATED=notice
# The level for the generated Event after a Rauthy instance has been
# started
# default: info
EVENT_LEVEL_RAUTHY_START=info
# The level for the generated Event after a Rauthy entered a healthy
# state (again)
# default: notice
EVENT_LEVEL_RAUTHY_HEALTHY=notice
# The level for the generated Event after a Rauthy entered an unhealthy
#state
# default: critical
EVENT_LEVEL_RAUTHY_UNHEALTHY=critical
# The level for the generated Event after an IP has been blacklisted
# default: warning
EVENT_LEVEL_IP_BLACKLISTED=warning
# The level for the generated Event after certain amounts of false
# logins from an IP
# default: critical
EVENT_LEVEL_FAILED_LOGINS_25=critical
# default: critical
EVENT_LEVEL_FAILED_LOGINS_20=critical
# default: warning
EVENT_LEVEL_FAILED_LOGINS_15=warning
# default: warning
EVENT_LEVEL_FAILED_LOGINS_10=warning
# default: notice
EVENT_LEVEL_FAILED_LOGINS_7=notice
# default: info
EVENT_LEVEL_FAILED_LOGIN=info
Backups
Postgres
If you are using Postgres as the main database, Rauthy does not do any backups.
There are far better tools out there to handle this task.
SQLite
If Rauthy is using a SQLite database, it does automatic backups, which can be configured with:
# Cron job for automatic data store backups (default: "0 0 4 * * * *")
# sec min hour day_of_month month day_of_week year
BACKUP_TASK="0 0 4 * * * *"
# The name for the data store backups. The current timestamp will always be appended automatically.
# default: rauthy-backup-
BACKUP_NAME="rauthy-backup-"
# All backups older than the specified hours will be cleaned up automatically (default: 720)
BACKUP_RETENTION_LOCAL=24
All these backups are written inside the pod / container to /app/data/backup. The database itself is saved in /app/data by default.
This difference makes it possible to add a second volume mount to the container. You then have the database itself on a different disk than the backups, which is the most simple and straightforward approach to a basic backup strategy.
The SQLite backups are done with VACUUM, which means you can just use the backups as a normal database again. This makes it possible to apply backups very easily via the Database Migration feature.
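In a Kubernetes setup, the second mount for the backups could be sketched like this inside the sts.yaml; the volume names are made up for the example and must match the volumes you define:

```
volumeMounts:
  - mountPath: /app/data
    name: rauthy-data
  - mountPath: /app/data/backup
    name: rauthy-backup
```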
Remote Backups to S3 Storage
SQLite backups can be pushed to an S3 bucket after creation. This way you can keep only a small number of local backups and move older ones to cheaper object storage.
Rauthy has been tested against MinIO and Garage S3 storage and works fine with both, so I expect any standard S3 API to just work out of the box. You need to provide an access key + secret with write access to an existing bucket, and Rauthy will take care of the rest. All backups pushed to S3 will automatically be encrypted with the currently active ENC_KEY_ACTIVE from the Rauthy config.
The configuration is done with the following values:
# The following section will only be taken into account, when
# SQLite is used as the main database. If you use Postgres, you
# should use Postgres native tooling like for instance `pgbackrest`
# to manage your backups.
# If S3 access is configured, your SQLite backups will be encrypted
# and pushed into the configured bucket.
#S3_URL=
#S3_REGION=
#S3_PATH_STYLE=false
#S3_BUCKET=my_s3_bucket_name
#S3_ACCESS_KEY=
#S3_ACCESS_SECRET=
#S3_DANGER_ACCEPT_INVALID_CERTS=false
Disaster Recovery
If you really lost all your data, you can easily restore automatically from the latest backup. This works with either a local file backup or an encrypted remote backup on S3 storage (as long as you still have the ENC_KEY_ACTIVE that has been used for the remote backup).
This, again, works only for SQLite. When you are using Postgres, you really should use native tooling, which is way better at this.
The process is really simple:
- set an environment variable before the start
- start up Rauthy
- check the logs and wait for the restore to be finished
- after a successful restore, Rauthy will start its normal operation
After a successful restore, you MUST remove the env var again!
If you don't do it, Rauthy will re-apply the same backup with the next restart.
You only need to set this single value:
# Restores the given backup
#
# CAUTION: Be very careful with this option - it will overwrite
# any existing database! The best way to use this option is to
# provide it as an environment variable for a single start up
# and then remove it directly after success.
#
# This only works when you are using a SQLite database!
# If you are running on Postgres, you must use Postgres-native
# tooling to handle your backups.
#
# You can either restore a local backup, or an encrypted one
# from S3 storage.
#
# For restoring from a local backup, provide the folder name
# of the backup you want to restore. Local SQLite backups are
# always in `./data/backup/rauthy-backup-TIMESTAMP/` folders.
# You only provide the backup folder name itself, in this case
# it would be `rauthy-backup-TIMESTAMP` like this:
# RESTORE_BACKUP=file:rauthy-backup-TIMESTAMP
#
# If you want to restore an encrypted backup from S3 storage,
# you must provide the object name in the configured bucket.
# For instance, let's say we have an object named
# `rauthy-0.20.0-1703243039.cryptr` in our bucket, then the
# format would be:
# RESTORE_BACKUP=s3:rauthy-0.20.0-1703243039.cryptr
#
#RESTORE_BACKUP=
TLS
If you do have TLS certificates from another source already, skip directly to Config.
Generating Certificates
We are using another project of mine called Nioca for easy creation of a fully functioning, production-ready private Root Certificate Authority (CA).
I suggest using docker for this task. Otherwise, you can use the nioca binary directly on any Linux machine.
If you want a permanent way of generating certificates for yourself, take a look at Rauthy's justfile and copy and adjust the recipes create-root-ca and create-end-entity-tls to your liking.
If you just want to get everything started quickly, follow these steps:
Create an alias for the docker command:
alias nioca='docker run --rm -it -v ./:/ca -u $(id -u ${USER}):$(id -g ${USER}) ghcr.io/sebadob/nioca'
To see the full feature set for more customization than mentioned below:
nioca x509 -h
Generate full certificate chain
To make your browser happy, you need to have at least one --alt-name-dns for the URL of your application. You can define as many of them as you like.
nioca x509 \
--cn 'Rauthy Default' \
--o 'Rauthy OIDC' \
--alt-name-dns localhost \
--alt-name-dns rauthy.rauthy \
--alt-name-dns rauthy.rauthy.svc.cluster.local \
--usages-ext server-auth \
--usages-ext client-auth \
--stage full \
--clean
You will be asked 6 times (yes, 6) for an at least 16 character password:
- The first 3 times, you need to provide the encryption password for your Root CA
- The last 3 times, you should provide a different password for your Intermediate CA
When everything was successful, you will have a new folder named x509 with the sub-folders root, intermediate and end_entity inside your current one.
From these, you will need the following files:
cp x509/intermediate/ca-chain.pem . && \
cp x509/end_entity/$(cat x509/end_entity/serial)/cert-chain.pem . && \
cp x509/end_entity/$(cat x509/end_entity/serial)/key.pem .
- Running ls -l should now show 3 files:
ca-chain.pem
cert-chain.pem
key.pem
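To confirm that the DNS alt-names actually ended up in the certificate your server will present, you can inspect the SAN extension with openssl. The snippet below generates a throwaway self-signed certificate just to demonstrate the check; point -in at your real cert-chain.pem instead.

```shell
# throwaway cert with SANs, standing in for the Nioca-generated one
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 \
    -subj '/CN=Rauthy Default/O=Rauthy OIDC' \
    -addext 'subjectAltName=DNS:localhost,DNS:rauthy.rauthy'

# print the Subject Alternative Name extension
openssl x509 -in /tmp/demo-cert.pem -noout -ext subjectAltName
```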
Config
The reference config contains a TLS
section with all the values you can set.
For this example, we will be using the same certificates for both the internal cache mTLS connections and the public facing HTTPS server.
Cache
The cache layer optionally uses TLS, if you provide certificates. Simply set the following values from the TLS section in the reference config:
# If given, these keys / certificates will be used to establish
# TLS connections between nodes.
HQL_TLS_RAFT_KEY=tls/hiqlite/tls.key
HQL_TLS_RAFT_CERT=tls/hiqlite/tls.crt
#HQL_TLS_RAFT_DANGER_TLS_NO_VERIFY=false
HQL_TLS_API_KEY=tls/hiqlite/tls.key
HQL_TLS_API_CERT=tls/hiqlite/tls.crt
#HQL_TLS_API_DANGER_TLS_NO_VERIFY=false
There is no problem using the same certificates for both networks, but you can optionally even separate them if you need to. You could even re-use the Server TLS, if your DNS setup allows for this.
At the time of writing, it does not accept a custom Root CA yet. In this case, you have to set *_DANGER_TLS_NO_VERIFY to true.
Rauthy Server / API
By default, Rauthy expects a key and certificate file in /app/tls/tls.key and /app/tls/tls.crt, which is the default naming for a Kubernetes TLS secret. The expected format is PEM, but you can provide the key in DER format too, if you rename the file ending to *.der.
You can change the default path for the files with the config variables TLS_CERT and TLS_KEY.
Kubernetes
If you did not follow the above procedure to generate the CA and certificates, you may need to rename the files in the following commands to create the Kubernetes secrets.
Secrets - Rauthy Server / API
kubectl -n rauthy create secret tls rauthy-tls --key="key.pem" --cert="cert-chain.pem"
Secrets - hiqlite
cache
kubectl -n rauthy create secret tls hiqlite-tls --key="key.pem" --cert="cert-chain.pem"
We create the hiqlite-tls secret here with the exact same values. You could of course either use different certificates, or not create a separate secret at all and just re-use the Rauthy TLS certificates, if your DNS setup allows for proper validation in this case.
Config Adjustments - REST API
We need to configure the newly created Kubernetes secrets in the sts.yaml
from the
Kubernetes
setup.
- In the
spec.template.spec.volumes
section, we need to mount the volumes from secrets:
REST API:
- name: rauthy-tls
secret:
secretName: rauthy-tls
hiqlite
cache:
- name: hiqlite-tls
secret:
secretName: hiqlite-tls
- In the
spec.template.spec.containers.[rauthy].volumeMounts
section, add:
REST API:
- mountPath: /app/tls/
name: rauthy-tls
readOnly: true
hiqlite
cache:
- mountPath: /app/tls/hiqlite/
name: hiqlite-tls
readOnly: true
After having modified the config from above and the sts.yaml
now, just apply both:
kubectl apply -f config.yaml
kubectl apply -f sts.yaml
The rauthy pods should restart now, and TLS is configured.
Sessions
When you log in to anything, be it your account dashboard, the admin UI, or a downstream application, you will get a Rauthy session. This is independent of any client / application login. These sessions are used to authorize against Rauthy only for things like account management. Depending on your configuration, users may be logged in to a downstream client immediately, if they already have a still valid session.
You can configure quite a lot, like session timeouts and so on, but the defaults are safe. However, there is one really important thing:
You need to make sure that Rauthy can extract the connecting client's IP properly. This is very important for failed login counters, login delays, blacklisting, and so on.
If your instance is exposed directly, the IP extraction works just fine in most situations. This may change though when running behind a reverse proxy or a CDN.
To check which IP Rauthy will extract for your requests, you don't need to search through logs. You can use the whoami endpoint. It is unauthenticated and will just return your current IP from the request headers. If the returned IP is correct, your setup is fine.
The endpoint is reachable via: /auth/v1/whoami
Running behind a reverse proxy
If you are running behind a reverse proxy, you need to set at least 2 config variables properly.
First, you need to set
PROXY_MODE=true
Secondly, you need to tell Rauthy which proxy source IPs it can trust. This is important, because when behind a reverse proxy, Rauthy will only see the IP of the proxy itself by default, which would be the same for each client connecting through it. However, a reverse proxy adds headers which contain the client's real IP, like e.g. the X-FORWARDED-FOR header and maybe others (depending on the proxy).
These headers can be spoofed by an attacker, if the source IP is not validated. This is what Rauthy needs you to set the trusted proxies config for:
# A `\n` separated list of trusted proxy CIDRs.
# When `PROXY_MODE=true` or `PEER_IP_HEADER_NAME` is set,
# these are mandatory to be able to extract the real client
# IP properly and safely to prevent IP header spoofing.
# All requests with a different source will be blocked.
TRUSTED_PROXIES="
192.168.0.1/32
192.168.100.0/24
"
The more you can narrow down the CIDR for your reverse proxy, the better. For instance, if you know your proxy is your firewall at the same time, which will always have the IP 192.168.0.1, then add it with the /32 subnet. If you are running in a more dynamic environment like Docker or Kubernetes, where your proxy could get an IP dynamically from an internal pool, you need to add all the possible IPs as trustworthy.
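As a mental model, this source IP check can be reproduced with Python's ipaddress module. This is a simplified illustration of CIDR matching against TRUSTED_PROXIES, not Rauthy's actual code; the CIDRs are taken from the example config above.

```python
import ipaddress

# Trusted proxy CIDRs, as in the TRUSTED_PROXIES example above
TRUSTED_PROXIES = ["192.168.0.1/32", "192.168.100.0/24"]

def is_trusted_proxy(source_ip: str) -> bool:
    """Return True if the connecting source IP falls into any trusted CIDR."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in TRUSTED_PROXIES)

print(is_trusted_proxy("192.168.0.1"))     # True  - the firewall / proxy itself
print(is_trusted_proxy("192.168.100.17"))  # True  - from the dynamic proxy pool
print(is_trusted_proxy("203.0.113.5"))     # False - unknown source, request blocked
```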
When Rauthy is running in proxy mode, it will block every incoming request that does not match the TRUSTED_PROXIES IP pool. This means if you have internal tooling set up like health checks, monitoring or metrics, which do not connect via the proxy, you need to add these source IPs to the TRUSTED_PROXIES list.
Running behind a CDN
If you are running behind a CDN which proxies your requests, like for instance Cloudflare, you again have a reverse proxy setup, just with Cloudflare being (another) reverse proxy for you. This means you need to apply the above configuration at the very least.
In addition, you might end up seeing the CDN proxy IP when you do a GET /auth/v1/whoami. If this is the case, there is an additional variable you can set:
# Can be set to extract the remote client peer IP from a custom header name
# instead of the default mechanisms. This is needed when you are running
# behind a proxy which does not set the `X-REAL-IP` or `X-FORWARDED-FOR` headers
# correctly, or for instance when you proxy your requests through a CDN like
# Cloudflare, which adds custom headers in this case.
# For instance, if your requests are proxied through cloudflare, you would
# set `CF-Connecting-IP`.
PEER_IP_HEADER_NAME="CF-Connecting-IP"
The CDN usually adds some other headers than the default X-FORWARDED-FOR to the request, like in this example CF-Connecting-IP. If this is the case, you can tell Rauthy to always check for this header first and only use the other methods as fallback, if it does not exist.
Session peer IP binding
You most probably do not need to care about this configuration, but depending on your application you may want to disable it.
Whenever you get a session from Rauthy and you authenticate successfully, your current IP will be extracted and persisted. By default, Rauthy will check your origin IP with each single request and compare it to the one you had when creating the session. If your IP does not match the original one, the session will be ignored and the request will be treated as unauthenticated.
This prevents scenarios where an attacker would be able to steal session data from your machine, copy the information and use it on their own. This means even if you would send your session cookie and CSRF token to someone, they would not be able to use it, as long as the requests are not coming from the exact same source IP.
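A toy sketch of this per-request comparison (a simplified illustration, not Rauthy's implementation):

```python
def session_request_authenticated(session_ip: str, request_ip: str,
                                  validate_ip: bool = True) -> bool:
    # With SESSION_VALIDATE_IP=true (the default), the peer IP of each
    # request must match the IP persisted at session creation / login.
    if validate_ip and request_ip != session_ip:
        return False  # session is ignored, request treated as unauthenticated
    return True

print(session_request_authenticated("203.0.113.7", "203.0.113.7"))   # True
print(session_request_authenticated("203.0.113.7", "198.51.100.2"))  # False
```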
Rauthy has lots of mechanisms in place to prevent things like cookie stealing, session takeover, and so on, but it can't do anything when the client's OS itself is infected. All these mechanisms add up to the defense in depth, but at the end of the day, when the client's machine itself is infected, there is not much any application can do about it. There just is no silver bullet.
This defense is a really nice thing, but it may annoy your users, depending on where you deployed Rauthy, because it also means that each time a client's IP changes, like for instance in a mobile network or a Wi-Fi with frequent reconnects, the session will not be accepted. With a passkey added to your account, the login will take only seconds and another touch on the device, but you may still want to disable it. In this case, here is the configuration:
# If set to 'true', this will validate the remote peer IP address with
# each request and compare it with the IP which was used during the initial
# session creation / login. If the IP is different, the session will be
# rejected. This is a security hardening and prevents stolen access credentials,
# for instance if an attacker might have copied the encrypted session cookie
# and the XSRF token from the local storage from a user. However, this event
# is really unlikely, since it may only happen if an attacker has direct access
# to the machine itself.
#
# If your users are using mobile networks and get new IP addresses all the time,
# this means they have to do a new login each time. This is no big deal at all with
# Webauthn / FIDO keys anyway and should not be a reason to deactivate this feature.
#
# Caution: If you are running behind a reverse proxy which does not provide the
# X-FORWARDED-FOR header correctly, or you have the PROXY_MODE in this config
# disabled, this feature will not work. You can validate the IPs for each session
# in the Admin UI. If these are correct, your setup is okay.
#
# (default: true)
#SESSION_VALIDATE_IP=true
Lifetimes and Timeouts
The default session lifetimes and timeouts are pretty secure, but you may find them to be too strict. You can adjust them with the following config variables:
# Session lifetime in seconds - the session can not be
# extended beyond this time and a new login will be forced.
# This is the session for the authorization code flow.
# default: 14400
SESSION_LIFETIME=14400
# If 'true', a 2FA / MFA check will be done with each automatic
# token generation, even with an active session, which kind of
# makes the session useless with Webauthn enabled, but provides
# maximum amount of security.
# If 'false', the user will not get an MFA prompt with an active
# session at the authorization endpoint.
# default: false
SESSION_RENEW_MFA=false
# Session timeout in seconds
# When a new token / login is requested before this timeout hits
# the limit, the user will be authenticated without prompting for
# the credentials again.
# This is the value which can extend the session, until it hits
# its maximum lifetime set with SESSION_LIFETIME.
# default: 5400
SESSION_TIMEOUT=5400
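The interplay of the two values can be sketched as follows; this is a simplified model (timestamps in seconds), not Rauthy's actual implementation:

```python
def session_still_valid(now: int, created_at: int, last_seen: int,
                        lifetime: int = 14400, timeout: int = 5400) -> bool:
    # A session stays valid while it has neither exceeded its absolute
    # SESSION_LIFETIME since creation nor been idle longer than SESSION_TIMEOUT.
    return (now - created_at) < lifetime and (now - last_seen) < timeout

print(session_still_valid(now=5000, created_at=0, last_seen=4000))    # True: active
print(session_still_valid(now=10000, created_at=0, last_seen=4000))   # False: idle timeout
print(session_still_valid(now=15000, created_at=0, last_seen=14999))  # False: lifetime hit
```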
Security
You usually don't need to configure anything about session security or CSRF protection, all of it happens automatically. This section is more informational about what Rauthy does in this case.
Session Cookies
Rauthy stores sessions as encrypted cookies. Depending on the situation, configuration and the users account, it will set multiple cookies inside your browser for different purposes like if you are allowed to do a direct session refresh with an MFA account for instance.
Apart from the locale preference for the UI, each cookie is stored encrypted. This makes sure that you can't tamper with the data. By default, the cookies are stored as host only cookies with the most secure settings in today's browsers. This means they use the __Host- prefix to tell the browser to do additional checks, and the SameSite attribute is set to Lax for all of them.
Apart from local testing, you should never get in the situation where you want to disable the default secure cookie settings. But if you really need to (and you know what you are doing), you have the following option:
# You can set different security levels for Rauthy's cookies.
# The safest option would be 'host', but may not be desirable when
# you host an application on the same origin behind a reverse proxy.
# In this case you might want to restrict to 'secure', which will then
# take the COOKIE_PATH from below into account.
# The last option is 'danger-insecure' which really should never be used
# unless you are just testing on localhost and you are using Safari.
#COOKIE_MODE=host
# If set to 'true', Rauthy will bind the cookie to the `/auth` path.
# You may want to change this only for very specific reasons and if
# you are in such a situation, where you need this, you will know it.
# Otherwise don't change this value.
# default: true
#COOKIE_SET_PATH=true
CSRF Protection
CSRF protection happens in multiple ways:
- CORS / Origin headers
- classic synchronizer token pattern
- Sec- headers checks
In today's browsers, you could use the Sec- headers only and be safe, or actually even only stick to the secure cookie settings we have, and call it a day. The additional checks Rauthy does in this case are there to catch unusual situations, where someone maybe uses an older browser or one which has a security issue. All of these techniques are defenses in depth.
The synchronizer token pattern stores the additional CSRF token in local storage. Yes, this is not "secure" in a way that a malicious browser extension can read it, but it could read the DOM as well, which means it could also just read a meta tag or extract it from a hidden form field. The backend expects the CSRF token to be added as a header with each non-GET request.
A new token will be created when you get a fresh session. Generating a new token after each request would improve the security but badly hurt the UX, because the browsers back button would simply not work anymore in most cases. In a perfect world where all users only use modern browsers that fully respect today's cookie settings, we would not even need this token, so a new token with each new session is fine.
The Sec- headers middleware has been added to the mix recently. Desktop browsers have been adding these headers for about 3 years now, but they have only been added to mobile browsers pretty recently. This middleware on its own would be a full CSRF protection even without additional cookie settings or a synchronizer token, but these headers are just way too fresh on mobile browsers to rely on them alone right now.
The Sec- middleware is pretty new to Rauthy, so it might be too restrictive in some situations where I forgot to add an exception. By default, it blocks any non-user-initiated or navigating cross-origin request, and I added exceptions for routes which should be available cross-origin. If you experience issues with it, you might want to disable it and set it to warn-only mode. Please open an issue about this though, so it can be fixed if it makes sense, because this option will probably be removed in a future version:
# If set to true, a violation inside the CSRF protection middleware based
# on Sec-* headers will block invalid requests. Usually you always want this
# enabled. You may only set it to false during the first testing phase if you
# experience any issues with an already existing Rauthy deployment.
# In future releases, it will not be possible to disable these blocks.
# default: true
SEC_HEADER_BLOCK=true
Only change any of the above mentioned session security settings if you really know what you are doing and if you have a good reason to do so.
User Registration
By default, new users can only be added by an admin. This is safe and secure, and your database cannot ever be spammed. However, you might want users to register themselves for whatever reason. In that case, follow the next step.
Open Registration
To open the registration to anyone, just set
# If the User Registration endpoint should be accessible by anyone.
# If not, an admin must create each new user.
# (default: false)
OPEN_USER_REG=true
This will open the registration endpoint and make it accessible without upfront authentication. You will now see a new button at the page root which directs you to the very simple registration form.
After successful registration, the user will receive an E-Mail with an activation link. Clicking on this link will direct the user to a page where a new password or passkey can be set. At the same time, the account will be activated and the E-Mail will be validated as well, of course.
You must have configured your E-Mail sending beforehand. Otherwise the registration process will not work. You need to configure the following values to be able to send out E-Mails:
- SMTP_URL
- SMTP_USERNAME
- SMTP_PASSWORD
Captcha / Proof of Work
The registration form uses a Proof of Work (PoW) behind the scenes. This is basically an invisible captcha without the user solving weird image puzzles that sometimes even humans cannot solve. It is done with the help of a tiny crate I wrote myself as well.
This will of course not prevent real humans from registering fake accounts, but until now, I never had issues with any bots, so it does what it should while providing a way better UX than any traditional captcha challenge.
If you are interested in how it works, take a look at spow.
This tiny crate has very few external dependencies and is really easy to use. There is a special version for Leptos as well: leptos-captcha
Restricted Registration
You may want your users to register themselves, but at the same time restrict the E-Mail domains they are using. For instance, when you deploy Rauthy at your company for all internal applications, you may only want users to register with their work E-Mail:
# Can be used when 'OPEN_USER_REG=true' to restrict the domains for a registration.
# For instance, set it to
# 'USER_REG_DOMAIN_RESTRICTION=github.com'
# to allow only registrations with 'whatever-user@github.com'.
# default: (not set)
USER_REG_DOMAIN_RESTRICTION=@some-mail-domain.com
Domain Blacklisting
If you have opened your registration to anyone, you will get into the situation at some point, where evil people will create accounts only for checking out your security and trying to break in, execute XSS, and so on. These are often real people, which means any Captcha or PoW will not prevent them from registering of course.
The best thing you can do in that case, if your deployment allows this, is Geoblocking certain regions at your firewall / reverse proxy level. Attackers would then switch to origins in a country you allow, but this is usually more costly for them and more hassle, so it is a good first line of defense. At the same time, it reduces the visibility to those regions, and you might not get targeted in the first place because bots cannot scan you from there.
When such attacks happen, there is no magic rule or setting to defend against them. This is really individual each time and depends on a lot of things out of the scope of this documentation.
Nevertheless, Rauthy provides a small feature that might help you here: E-Mail Domain Blacklisting
Let's say you can't Geoblock, or you found out that specific E-Mail providers are being used during these attacks. If you have such a list of evil providers, you can blacklist them and exclude them from the open registration. Existing users will keep working and an Admin can still create users with these domains. They just cannot be used for self-registration anymore.
You have the following config option:
# If `OPEN_USER_REG=true`, you can blacklist certain domains
# on the open registration endpoint.
# Provide the domains as a `\n` separated list.
USER_REG_DOMAIN_BLACKLIST="
example.com
evil.net
"
If you get targeted by something like fake accounts for testing your security and so on, don't panic. These attacks usually stop after 1-2 weeks. When attackers do not find a way in, they lose interest.
Downstream Application Integration
You can integrate the registration into a downstream application in 2 different ways. Either use the existing one with redirect hints for improved UX, or fully build your own registration frontend.
Redirect Hints
Let's say you have an application that uses Rauthy for user management. You want to improve the registration flow and have your users redirected back to your downstream app after registration / password set, instead of them "being stuck" on Rauthy's UI, which would be the default flow.
Your app can show a link to the existing registration form with an appended query param to control redirects. When you append ?redirect_uri=https%3A%2F%2Fgithub.com to the link, you end up with for instance:
https://iam.example.com/auth/v1/users/register?redirect_uri=https%3A%2F%2Fgithub.com
The following things will happen:
- After a successful registration, the user will be redirected back to the given redirect_uri.
- After the password / passkey has been set using the E-Mail link, instead of being redirected to the Rauthy account dashboard, the user will be redirected to the same URI again.
This makes it possible to use Rauthy as your upstream provider without the user really needing to interact with or know about it in detail, which again leads to less confusion.
If you want to complete this improved UX setup, you should set a Client URI for the client in the admin dashboard. When there is a valid value, a small home icon will be shown inside the login form, so a user can get back to the client's URI without possibly screwing up by incorrectly using the browser's back button.
Custom Frontend
Depending on your application, you may want to create your own frontend for the registration. For speed and efficiency reasons, Rauthy does not allow you to overwrite the existing templates, but you can host your own UI of course.
The registration page is super simple and you can take a look at what it does here: page.svelte
The registration endpoint allows CORS requests. The only thing you need to care about is the PoW calculation.
- Accept the input from the user via any form inputs
- When the user clicks submit, fetch a new PoW from Rauthy via POST /auth/v1/pow
- Solve the PoW using spow
- As soon as the PoW is solved, do a POST /auth/v1/users/register with the payload

struct NewUserRegistrationRequest {
    #[validate(email)]
    email: String,
    /// Validation: `[a-zA-Z0-9À-ÿ-\\s]{1,32}`
    #[validate(regex(path = "*RE_USER_NAME", code = "[a-zA-Z0-9À-ÿ-\\s]{1,32}"))]
    family_name: String,
    /// Validation: `[a-zA-Z0-9À-ÿ-\\s]{1,32}`
    #[validate(regex(path = "*RE_USER_NAME", code = "[a-zA-Z0-9À-ÿ-\\s]{1,32}"))]
    given_name: String,
    /// Validation: `[a-zA-Z0-9,.:/_\-&?=~#!$'()*+%]+`
    #[validate(regex(path = "*RE_URI", code = "[a-zA-Z0-9,.:/_\\-&?=~#!$'()*+%]+"))]
    pow: String,
    /// Validation: `[a-zA-Z0-9,.:/_\-&?=~#!$'()*+%]+`
    #[validate(regex(path = "*RE_URI", code = "[a-zA-Z0-9,.:/_\\-&?=~#!$'()*+%]+"))]
    redirect_uri: Option<String>,
}
Do not fetch and solve a PoW when the user has not submitted the form yet!
The PoWs have a very short lifetime by default to prevent them from being used multiple times. Rauthy has additional re-use prevention, but a POST to get a PoW will modify the backend state. This is unnecessary if the user decides to not submit the form at all.
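Sticking to the field names from the NewUserRegistrationRequest struct above, a registration payload could be built like this. The concrete values are made up for illustration, and the pow value is a placeholder, since a real one must be solved with the spow algorithm first:

```python
import json

payload = {
    "email": "new.user@example.com",   # hypothetical address
    "given_name": "New",
    "family_name": "User",
    "pow": "<solved-pow-from-spow>",   # placeholder, must be a solved PoW
    # optional: send the user back to your app after registration
    "redirect_uri": "https://app.example.com/welcome",
}

# this JSON body would be sent via POST /auth/v1/users/register
body = json.dumps(payload)
```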
You can configure PoWs with POW_DIFFICULTY and POW_EXP. Keep in mind that POW_EXP should be as high as necessary, but always as low as possible.
High Availability
Rauthy is capable of running in a High Availability Mode (HA).
Some values, like authentication codes for instance, do live in the cache only. Additionally, there might come an option with a future version which offers a special in-memory only mode in some situations.
Because of this, all instances create and share a single HA cache layer, which also means that you cannot just scale up the replicas infinitely without adjusting the config. The optimal amount of replicas for HA mode would be 3, or 5 if you need even higher resilience. More replicas should work just fine, but this has never really been tested, and the performance will degrade at some point.
The Cache layer uses another project of mine called Hiqlite. It uses the Raft algorithm under the hood to achieve consistency.
Even though everything is authenticated, you should not expose the Hiqlite ports to the public, if not really necessary for some reason. You configure these ports with the HQL_NODES config value in the CACHE section.
Configuration
Earlier versions of Rauthy have been using redhac for the HA cache layer. While redhac was working fine, it had a few design issues I wanted to get rid of. Since v0.26.0, Rauthy uses the above mentioned Hiqlite instead. You only need to configure a few variables:
HQL_NODE_ID
The HQL_NODE_ID is mandatory, even for a single replica deployment with only a single node in HQL_NODES.
If you deploy Rauthy as a StatefulSet inside Kubernetes, you can ignore this value and just set HQL_NODE_ID_FROM below. If you deploy anywhere else or you are not using a StatefulSet, you need to set the HQL_NODE_ID to tell Rauthy which node of the Raft cluster it should be.
# The node id must exist in the nodes and there must always be
# at least a node with ID 1
# Will be ignored if `HQL_NODE_ID_FROM=k8s`
HQL_NODE_ID=1
HQL_NODE_ID_FROM
If you deploy to Kubernetes as a StatefulSet, you should ignore the HQL_NODE_ID and just set HQL_NODE_ID_FROM=k8s. This will parse the correct NodeID from the Pod hostname, so you don't have to worry about it.
# Can be set to 'k8s' to try to split off the node id from the hostname
# when Hiqlite is running as a StatefulSet inside Kubernetes.
#HQL_NODE_ID_FROM=k8s
HQL_NODES
Using this value, you define the Cache / Raft members. This must be given even if you just deploy a single instance. The description from the reference config should be clear enough:
# All cluster member nodes.
# To make setting the env var easy, the values are separated by `\s`
# while nodes are separated by `\n`
# in the following format:
#
# id addr_raft addr_api
# id addr_raft addr_api
# id addr_raft addr_api
#
# 2 nodes must be separated by 2 `\n`
HQL_NODES="
1 localhost:8100 localhost:8200
"
HQL_SECRET_RAFT + HQL_SECRET_API
Since you need both HQL_SECRET_RAFT and HQL_SECRET_API in any case, there is nothing to change here. These define the secrets being used internally to authenticate against the Raft or the API server for Hiqlite.
You can generate safe values with for instance
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c48
TLS
If you are using a service mesh like for instance linkerd, which creates mTLS connections between all pods by default, you can use the HA cache with just plain HTTP, since linkerd will encapsulate the traffic anyway. In this case, there is nothing to do.
However, if you do not have encryption between pods by default, I would highly recommend that you use TLS.
Forward Authentication
If you want to secure applications running behind a reverse proxy that do not have authn / authz on their own in terms of being able to use OIDC flows or similar with them, you can use something like the Traefik middleware ForwardAuth. Other proxies support this as well.
Incoming requests can be intercepted and forwarded to Rauthy first. It will check for a valid Bearer token in the Authorization header. If it is valid, Rauthy will return an HTTP 200 Ok and will append additional headers with information about the user to the request. These headers could easily be read by a downstream application.
The forward auth headers are disabled by default, because they can leak information, if you do not set up your internal environment carefully. You can enable the feature with
# You can enable authn/authz headers which would be added to the response
# of the `/auth/v1/oidc/forward_auth` endpoint. With `AUTH_HEADERS_ENABLE=true`,
# the headers below will be added to authenticated requests. These could
# be used on legacy downstream applications, that don't support OIDC on
# their own.
# However, be careful when using this, since this kind of authn/authz has
# a lot of pitfalls out of the scope of Rauthy.
# default: false
AUTH_HEADERS_ENABLE=false
You can also change the header names containing the information, if you need to support some older application that needs special naming for them:
# Configure the header names being used for the different values.
# You can change them to your needs, if you cannot easily change your
# downstream apps.
# default: x-forwarded-user
AUTH_HEADER_USER=x-forwarded-user
# default: x-forwarded-user-roles
AUTH_HEADER_ROLES=x-forwarded-user-roles
# default: x-forwarded-user-groups
AUTH_HEADER_GROUPS=x-forwarded-user-groups
# default: x-forwarded-user-email
AUTH_HEADER_EMAIL=x-forwarded-user-email
# default: x-forwarded-user-email-verified
AUTH_HEADER_EMAIL_VERIFIED=x-forwarded-user-email-verified
# default: x-forwarded-user-family-name
AUTH_HEADER_FAMILY_NAME=x-forwarded-user-family-name
# default: x-forwarded-user-given-name
AUTH_HEADER_GIVEN_NAME=x-forwarded-user-given-name
# default: x-forwarded-user-mfa
AUTH_HEADER_MFA=x-forwarded-user-mfa
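On the downstream side, a handler could read these headers roughly like this. The exact value formats (comma-separated roles, the string "true" for the MFA flag) are assumptions for illustration, not a documented contract:

```python
def current_user(headers: dict):
    # headers as appended by Rauthy's /auth/v1/oidc/forward_auth endpoint
    # when AUTH_HEADERS_ENABLE=true, using the default header names
    user = headers.get("x-forwarded-user")
    if user is None:
        return None  # request was not authenticated upstream
    return {
        "user": user,
        "email": headers.get("x-forwarded-user-email"),
        "roles": [r for r in (headers.get("x-forwarded-user-roles") or "").split(",") if r],
        "mfa": headers.get("x-forwarded-user-mfa") == "true",
    }

print(current_user({}))  # None - no auth headers present
print(current_user({
    "x-forwarded-user": "user123",
    "x-forwarded-user-email": "user@example.com",
    "x-forwarded-user-roles": "admin,user",
    "x-forwarded-user-mfa": "true",
}))
```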
This feature makes it really easy for any application behind your reverse proxy to serve protected resources, but you should really only use it if you cannot use a proper OIDC client or something like that.
Forward auth has many pitfalls, and you need to be careful with your whole setup when using it. Direct token validation inside your downstream application should always be preferred.
Bootstrapping
Rauthy supports some basic bootstrapping functionality to make it easier to use in some kind of automated environment.
Admin User
By default, the admin user will always be admin@localhost.de and the password will be auto-generated and shown only once in the logs. This is the easiest way if you want to set up a new instance manually.
However, you can change the default admin username and also provide a bootstrap password for this user, if Rauthy starts with an empty database for the very first time. This makes it possible to either generate a secure password with external tooling and have more control over it, or just use the same one all the time, because you may spin up an instance in test pipelines.
You can change the default admin user E-Mail with:
# If set, the email of the default admin will be changed
# during the initialization of an empty production database.
BOOTSTRAP_ADMIN_EMAIL="alfred@batcave.io"
The password can be bootstrapped in 2 ways:
- Provide a plain text password
- Provide the password hashed with argon2id
# If set, this plain text password will be used for the
# initial admin password instead of generating a random
# password.
BOOTSTRAP_ADMIN_PASSWORD_PLAIN="123SuperSafe"
# If set, this will take the argon2id hashed password
# during the initialization of an empty production database.
# If both BOOTSTRAP_ADMIN_PASSWORD_PLAIN and
# BOOTSTRAP_ADMIN_PASSWORD_ARGON2ID are set, the hashed version
# will always be prioritized.
BOOTSTRAP_ADMIN_PASSWORD_ARGON2ID='$argon2id$v=19$m=32768,t=3,p=2$mK+3taI5mnA+Gx8OjjKn5Q$XsOmyvt9fr0V7Dghhv3D0aTe/FjF36BfNS5QlxOPep0'
When you provide the password hashed with argon2id, you need to properly quote the string like in this example, with single quotes ('...'). This will probably change in a future version, so you would need to encode it as base64, but this is not the case right now (v0.24.0).
API Key
In automated environments, you would usually not use a normal user to access the Rauthy API, since it is a bit more cumbersome to use from outside the browser because of additional CSRF tokens and security features. If you want to automatically set up Rauthy with external tooling after the first startup, you would want to do this with an API key most probably.
If Rauthy starts up with an empty database, you can bootstrap a single API key by providing a base64 encoded JSON.
An example JSON, which would create a key named bootstrap with access to clients, roles, groups, each with all of read, create, update, delete, could look like this:
{
  "name": "bootstrap",
  "exp": 1735599600,
  "access": [
    {
      "group": "Clients",
      "access_rights": [
        "read",
        "create",
        "update",
        "delete"
      ]
    },
    {
      "group": "Roles",
      "access_rights": [
        "read",
        "create",
        "update",
        "delete"
      ]
    },
    {
      "group": "Groups",
      "access_rights": [
        "read",
        "create",
        "update",
        "delete"
      ]
    }
  ]
}
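The base64 encoding step can also be sketched in Python, using the JSON from above (the CLI way with `base64` works just as well):

```python
import base64
import json

api_key = {
    "name": "bootstrap",
    "exp": 1735599600,
    "access": [
        {"group": "Clients", "access_rights": ["read", "create", "update", "delete"]},
        {"group": "Roles", "access_rights": ["read", "create", "update", "delete"]},
        {"group": "Groups", "access_rights": ["read", "create", "update", "delete"]},
    ],
}

# base64 encode the JSON; the result goes into BOOTSTRAP_API_KEY
encoded = base64.b64encode(json.dumps(api_key).encode()).decode()

# sanity check: decoding yields the original structure
assert json.loads(base64.b64decode(encoded)) == api_key
```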
The config documentation for the bootstrap value should explain all further questions:
# You can provide an API Key during the initial prod database
# bootstrap. This key must match the format and pass validation.
# You need to provide it as a base64 encoded JSON in the format:
#
# ```
# struct ApiKeyRequest {
# /// Validation: `^[a-zA-Z0-9_-/]{2,24}$`
# name: String,
# /// Unix timestamp in seconds in the future (max year 2099)
# exp: Option<i64>,
# access: Vec<ApiKeyAccess>,
# }
#
# struct ApiKeyAccess {
# group: AccessGroup,
# access_rights: Vec<AccessRights>,
# }
#
# enum AccessGroup {
# Blacklist,
# Clients,
# Events,
# Generic,
# Groups,
# Roles,
# Secrets,
# Sessions,
# Scopes,
# UserAttributes,
# Users,
# }
#
# #[serde(rename_all="lowercase")]
# enum AccessRights {
# Read,
# Create,
# Update,
# Delete,
# }
# ```
#
# You can use the `api_key_example.json` from `/` as
# an example. Afterwards, just `base64 api_key_example.json | tr -d '\n'`
BOOTSTRAP_API_KEY="ewogICJuYW1lIjogImJvb3RzdHJhcCIsCiAgImV4cCI6IDE3MzU1OTk2MDAsCiAgImFjY2VzcyI6IFsKICAgIHsKICAgICAgImdyb3VwIjogIkNsaWVudHMiLAogICAgICAiYWNjZXNzX3JpZ2h0cyI6IFsKICAgICAgICAicmVhZCIsCiAgICAgICAgImNyZWF0ZSIsCiAgICAgICAgInVwZGF0ZSIsCiAgICAgICAgImRlbGV0ZSIKICAgICAgXQogICAgfSwKICAgIHsKICAgICAgImdyb3VwIjogIlJvbGVzIiwKICAgICAgImFjY2Vzc19yaWdodHMiOiBbCiAgICAgICAgInJlYWQiLAogICAgICAgICJjcmVhdGUiLAogICAgICAgICJ1cGRhdGUiLAogICAgICAgICJkZWxldGUiCiAgICAgIF0KICAgIH0sCiAgICB7CiAgICAgICJncm91cCI6ICJHcm91cHMiLAogICAgICAiYWNjZXNzX3JpZ2h0cyI6IFsKICAgICAgICAicmVhZCIsCiAgICAgICAgImNyZWF0ZSIsCiAgICAgICAgInVwZGF0ZSIsCiAgICAgICAgImRlbGV0ZSIKICAgICAgXQogICAgfQogIF0KfQ=="
The secret needs to be set with a second variable. Just make sure it contains at least 64 alphanumeric characters.
# The secret for the above defined bootstrap API Key.
# This must be at least 64 alphanumeric characters long.
# You will be able to use that key afterwards with setting
# the `Authorization` header:
#
# `Authorization: API-Key <your_key_name_from_above>$<this_secret>`
BOOTSTRAP_API_KEY_SECRET=twUA2M7RZ8H3FyJHbti2AcMADPDCxDqUKbvi8FDnm3nYidwQx57Wfv6iaVTQynMh
With the values from this example, you will then be able to use the API key by providing the Authorization header with each request in the following format:
Authorization: API-Key <API Key name>$<API Key secret>
Authorization: API-Key bootstrap$twUA2M7RZ8H3FyJHbti2AcMADPDCxDqUKbvi8FDnm3nYidwQx57Wfv6iaVTQynMh
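One shell-specific pitfall (this snippet is only an illustration, the key name and secret are the example values from above): inside double quotes, the `$` separator would make the shell expand the secret as a variable, so it must be escaped or single-quoted:

```shell
KEY_NAME=bootstrap
KEY_SECRET=twUA2M7RZ8H3FyJHbti2AcMADPDCxDqUKbvi8FDnm3nYidwQx57Wfv6iaVTQynMh

# Escape the '$' so the shell does not try to expand "$twUA..." as a variable
AUTH_HEADER="Authorization: API-Key ${KEY_NAME}\$${KEY_SECRET}"
echo "$AUTH_HEADER"

# The header could then be used with e.g. curl (URL is a placeholder):
# curl -H "$AUTH_HEADER" https://iam.example.com/auth/v1/whoami
```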
API Keys are not nicely documented in this book yet, but they should be self-explanatory when you take a look at the corresponding section in the Admin UI.
Database Migrations
You can migrate easily between SQLite and Postgres, or just between different instances of them.
Let's say you started out by evaluating Rauthy with SQLite and a single instance deployment. Later on, you want to migrate to a HA setup, which requires Postgres.
Solution: `MIGRATE_DB_FROM`
If you set `MIGRATE_DB_FROM` in Rauthy's config, it will perform a migration at the next restart.
The way it works is the following:
- At startup, check if `MIGRATE_DB_FROM` is configured
- If yes, connect to the given database
- At the same time, connect to the database specified in `DATABASE_URL`
- Overwrite all existing data in `DATABASE_URL` with the data from the `MIGRATE_DB_FROM` database
- Close the connection to `MIGRATE_DB_FROM`
- Use `DATABASE_URL` as the new database and start normal operation
`MIGRATE_DB_FROM` overwrites any data in the target database! Be very careful with this option.
If you do not remove `MIGRATE_DB_FROM` after the migration has been done, it will overwrite the target again with the next restart of the application. Remove the config variable immediately after the migration has finished.
v0.14 and beyond: if you want to migrate to a different database, for instance from SQLite to Postgres, you need to switch to the correct Rauthy image as well. Rauthy v0.14 and beyond has different container images for the different databases.
The easiest way is to set `MIGRATE_DB_FROM` as an environment variable, which is quick to remove again afterwards.
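For illustration, a one-time SQLite to Postgres migration could be configured like this. Both connection strings are placeholders, not Rauthy defaults; use the values matching your actual deployment:

```
# Old database to migrate away from (remove again after one restart!)
MIGRATE_DB_FROM=sqlite:data/rauthy.db

# New target database
DATABASE_URL=postgresql://rauthy:<password>@localhost:5432/rauthy
```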
UNIX Domain Sockets
Rauthy supports listening on a UNIX domain socket. To enable this feature, you will need to specify the following configurations.
LISTEN_SCHEME=unix_http/unix_https
LISTEN_ADDRESS=path_to_server.sock
PROXY_MODE=true
Note that `unix_https` does not create a socket with TLS. When `unix_https` is used, the socket will still speak plain HTTP! `unix_https` only specifies the scheme to be prepended to `PUB_URL`, so whether to use `unix_http` or `unix_https` depends on the scheme you are using on the reverse proxy.
UNIX domain sockets should be used with a reverse proxy. Without one, Rauthy cannot know the IP address of visitors and will see all of them as 192.0.0.8, the IPv4 dummy address. Rate-limits and IP blacklists would then apply to all visitors at the same time.
After setting up a reverse proxy, also remember to configure it to send the peer IP in a header (for example `X-Forwarded-For`), and set `PROXY_MODE` so Rauthy reads the IP from there. Once everything is set, check the `/auth/v1/whoami` endpoint to see if your IP address is shown correctly.
It is also recommended to use POSIX ACLs to limit access to Rauthy's socket to your reverse proxy, so other UNIX users can't connect to the socket directly and bypass the proxy.
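A reverse proxy in front of the socket could be sketched like this for nginx. The server name, socket path and TLS setup below are placeholders and assumptions, not Rauthy defaults:

```nginx
server {
    listen 443 ssl;
    server_name iam.example.com;

    location / {
        # Forward to Rauthy's UNIX domain socket
        proxy_pass http://unix:/run/rauthy/server.sock;
        # Send the real client IP so rate-limits apply per visitor
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```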
Upstream Authentication Providers
This section is about upstream authentication providers, for instance things like Sign in with Github.
This feature can be considered "in beta state", and the only setup guide is for Github for now. Additional ones may come when people use other providers and give feedback, or actually contribute to these docs.
Github Provider
To set up Github as an upstream provider for Rauthy, there is very little you need to do. Rauthy already includes a template for Github, which makes the whole process pretty simple.
Create an OAuth App
- Log in to your Github account
- Navigate to Settings -> Developer Settings -> OAuth Apps -> Register a new application
- Provide any name for your application. This is not important for the flow, only for your users to recognize
- The Homepage URL should be the URL of your Rauthy deployment
- The `Authorization callback URL` is the only really important value here. For instance, if your Rauthy instance lives at `https://iam.example.com`, your callback URL would be `https://iam.example.com/auth/v1/providers/callback`
- Don't enable the Device Flow and `Register application`
On the following page, you can upload a logo and so on, but the important information is:
- `Client ID`
- `Generate a new client secret`
Both of these values need to be inserted into Rauthy's Admin UI in the next step, so let's open it in a new tab.
Rauthy Provider Config
- Log in to your Rauthy Admin UI, navigate to `Providers` and add a new provider.
- Choose `Github` as the `Type` and leave the Custom Root CA and insecure TLS unchecked.
- The template pre-populates most of the inputs already for you. The only information you need to add here is:
  - `Client ID` -> copy the `Client ID` from the Github OAuth App page
  - `Client Secret` -> on Github, `Generate a new client secret` and copy & paste the value
  - you may edit the `Client Name` if you want something else than `Github`
- Leave the rest untouched and hit `Save`.
- You should expand the saved Github provider in the Rauthy UI again after saving and upload a logo for the client.
Currently, Rauthy does not provide default logos, though these might be added at some point. Just `UPLOAD LOGO` from your local disk.
Finish
That's it. Now log out and try to log in to your Account view. You should see your new upstream provider on the Login page.
Working with Rauthy
This section describes a few things you should know when working with Rauthy.
API Keys
API Keys allow you to interact with Rauthy's REST API in a programmatic way. They allow you to do almost all the same things as a logged-in admin via the UI. There are only very few exceptions; for instance, an API Key is not allowed to manage other API Keys, to reduce the impact of leaked credentials somewhere in your applications.
Each API Key can be configured with very fine-grained access rights, so you only allow actions it actually needs to perform for improved security.
Setting these access rights should be very intuitive. For instance, if you want to use your API Key to listen to the events stream, you need to allow `Events: read`, which makes sense. If you want to read users and maybe update them in some external application, you would need `Users: read, update`.
Most access rights mimic their HTTP methods:
- `GET` -> `read`
- `POST` -> `create`
- `PUT` -> `update`
- `DELETE` -> `delete`
To use an API Key, you need to provide it with the HTTP `Authorization` header, but instead of `Bearer` as the prefix, you set it to `API-Key`, so Rauthy can efficiently distinguish whether you are trying to access the API with a JWT token or an API Key:
Authorization: API-Key <API Key name>$<API Key secret>
Authorization: API-Key my_key$twUA2M7RZ8H3FyJHbti2AcMADPDCxDqUKbvi8FDnm3nYidwQx57Wfv6iaVTQynMh
The correct format will be generated automatically when you create a new secret for an API Key via the Admin UI, so you only need to copy & paste it.
The only actions API Keys are not allowed to perform are interacting with other API Keys and with external SSO providers for logins, to limit the likelihood of privilege escalation in case of leaked credentials.
Creating a new API Key
Apart from the initial bootstrap, the creation and modification of API Keys is only allowed via the Admin UI.
Navigate to the `API Keys` section, click `New Key`, fill out the inputs and `Save`. You can set an optional key expiry. By default, API Keys never expire.
The secret is created in a second step. Whenever you generate a new secret, the old one will stop working immediately.
After generating a new secret, you will never be able to see it again. If you lose it, you must generate a new one. Rauthy will format the API key in the correct format and show a `curl` command for making sure your key works fine. You can copy & paste the pre-configured `curl` for the test endpoint. The result should look something like this:
If you try to access an endpoint with an API Key that has insufficient access rights, Rauthy will return a proper error message describing which access rights you actually need.
When you set up a fresh Rauthy instance, you have the option to bootstrap an API Key, which is the only situation where you are allowed to do it without an active Rauthy admin session.
Custom Scopes and Attributes
OpenID Connect defines quite a few very useful claims to be included in the `id_token` as additional information about a user. However, there are times when you need other data as well, and maybe it is data that multiple downstream applications need, like for instance an internal employee ID.
Rauthy can handle this as well. There are custom scopes you can create. These do not get mixed up with the default scopes, so they can't interfere, which makes the whole setup pretty easy.
Values are mapped as claims into tokens only if they are allowed and requested for an individual client. This means even if you set a value for a user, it will not just end up in the token automatically. Let's walk through how you can achieve that.
Custom User Attribute
The first thing we will do is create a custom user attribute. Rauthy does not allow you to just enter anything you like. All values are strictly typed and must be pre-configured to prevent user error during normal operation: you can't make typos, don't need to copy & paste from other entries, and so on.
In the Admin UI, navigate to `Attributes` -> `Add New User Attribute`.
The description is optional and can be changed at any time without any impact. The `name` of the attribute will be exactly the same in the JWT token later on.
Set User Values
Now that we have created our custom attribute in the step above, we can set it for users. Navigate to a user of your choice, expand the entry and choose the `Attributes` tab.
The custom values for users are always interpreted, saved and returned as `String`s without any further validation.
Create a Custom Scope
The next thing we need is a custom scope we can map attributes to. You are not allowed to modify the OIDC default scopes, but you can add your own ones, as many as you like. Navigate to the Admin UI -> `Scopes` -> `Add New Scope`.
After saving, expand your just created scope. You want to map your attribute from the step before into JWT tokens when a client requests (and is allowed to request) this scope. For instance, let's add our new attribute to the `id_token` with this custom scope.
We are almost done.
Allow Scope for Client
The very last step is one you probably know. A client needs to be allowed to request certain scopes in the first place. From this point on, the new custom scope behaves like any default one. We need to allow it for a client to be requested.
Open the configuration of any `Client` and scroll down until you see the `Allowed Scopes` and `Default Scopes` sections.
The `Allowed Scopes` are the ones a client is allowed to request when redirecting a user to the authorization endpoint to initiate the `authorization_code` flow. The `Default Scopes` are the ones that Rauthy will simply always add. This is useful for instance when you are using `client_credentials` or another flow, because only the `authorization_code` flow can request specific scopes while all others can't.
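For illustration, a client requesting a custom scope would include it in the `scope` parameter when redirecting the user to the authorization endpoint. The URL below is an assumed example (host, client and scope names are made up), not copied from Rauthy's reference:

```
https://iam.example.com/auth/v1/oidc/authorize
    ?client_id=my_client
    &redirect_uri=https%3A%2F%2Fmy.app%2Fcallback
    &response_type=code
    &scope=openid+profile+my_custom_scope
    &code_challenge=...
    &code_challenge_method=S256
```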
Finish
That is it. It seems like many steps to follow, but it is to your advantage. With this setup you have type-safety and the most flexibility about which client can see which values in which context, and so on.
When you have done everything right, you will see your new custom value in the `custom` section of the JWT token claims. I used the `rauthy` client in this example (which you should not mess with when deployed).
{
"iat": 1721722389,
"exp": 1721722399,
"nbf": 1721722389,
"iss": "http://localhost:8080/auth/v1",
"sub": "za9UxpH7XVxqrtpEbThoqvn2",
"aud": "rauthy",
"nonce": "Fars0wPqrm9f6XimDKD08CPl",
"azp": "rauthy",
"typ": "Id",
"amr": [
"pwd"
],
"auth_time": 1721722389,
"at_hash": "Hd5ugcSzxwl1epF7Il3pEpq0gznnqs2SnSVZCdNw0EI",
"preferred_username": "admin@localhost.de",
"roles": [
"rauthy_admin",
"admin"
],
"custom": {
"my_attr": "This is Batman!"
}
}
Ephemeral Clients
In addition to static, upfront registered clients, which should always be preferred if they work out for you, and dynamically registered clients via OIDC DCR, Rauthy supports a third option: Ephemeral Clients. Originally implemented when support for Solid OIDC came, this feature fully works on its own. Ephemeral clients are very easy to use and provide completely new functionality, but keep in mind that they are the least efficient of the three, as they require an additional network round trip to wherever their document is hosted (apart from caching).
Just to make clear what I am talking about, a high level comparison of these different clients:
Static clients are the ones that you register and configure via the Admin UI. These are the most efficient and secure ones. They require less work in the backend and exist inside the Rauthy database.
Then there is OIDC Dynamic Client Registration (DCR), an OIDC extension that Rauthy supports as well. If a downstream application has support for this feature, it can self-register a client, which can then be used for the login afterwards. This sounds very nice in the beginning, but brings quite a few problems:
- If the endpoint is open, anyone can register a client, also bots and spammers
- If the endpoint is secured with an additional token for the registration, there is not much benefit from it, because the token must be communicated upfront anyway. In that case, you could simply register a faster and more efficient static client.
- The downstream application must manage its own client. If they don't do it properly, they can spam Rauthy's database again.
- As mentioned already, dynamic clients are a bit less efficient than static ones and require at least one additional database round trip during each login.
In case of an open, unauthenticated endpoint for DCR, Rauthy has a simple internal cleanup logic. For instance, if a dynamic client is registered by bots or spammers and is not used within the first hour after the registration, it will be deleted automatically. Usually, when an app registers via DCR, the client will be used immediately for at least one login.
As the third option, Rauthy gives you the ability to use Ephemeral Clients. These are disabled by default, because they are not a default OIDC feature yet (there is an open draft to include them though). Ephemeral clients provide way fewer configuration options, but make it possible for users to self-manage their own clients without any additional upfront tasks on Rauthy's side. This means as long as you have an account on a Rauthy instance with this feature enabled, you would not even need an admin to connect a new application to this Rauthy instance.
The client definition exists in a JSON document hosted on a domain the user controls, which must be reachable by Rauthy. The format must of course be parsable by Rauthy, which I will mention later. Whatever information you provide in this JSON document, Rauthy will use it to authenticate your login request with that very client. The most important part is that the `client_id` must match the exact URI the document is hosted on.
For instance, let's say you have a JSON on a Github repo at `https://github.com/sebadob/rauthy/blob/main/ephemeral_client_example.json`, then the `client_id` must be
"client_id": "https://github.com/sebadob/rauthy/blob/main/ephemeral_client_example.json"
It is very important that the domain for an ephemeral client JSON is under your control and that no one you don't fully trust has access. Otherwise, someone else could modify this JSON, which affects your login flow; for instance, add an additional allowed `redirect_uri` and trick you into a scam website.
To keep the whole setup as simple as possible, apart from the `client_id`, there is only one other mandatory field in this document, which is `redirect_uris`. This must be an array with at least one redirect URI that is allowed during the login flow. Rauthy allows a wildcard `*` at the end of a given redirect URI, but be careful with these. Wildcards should only be used during initial setup and testing, when you don't know the final URI yet.
Technically, it is possible to use just a single `*` as a `redirect_uri`. This would allow any redirect URI without further validation. You must never use this apart from maybe some internal DEV or testing setups!
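As a sketch, a development-time document using such a wildcard might look like this (host and port are placeholders):

```json
{
  "client_id": "https://example.com/my_ephemeral_client.json",
  "redirect_uris": [
    "http://localhost:5173/*"
  ]
}
```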
The JSON document for ephemeral clients follows the same rules and works in the same way as any other OIDC client.
Configuration
The support for ephemeral clients is opt-in. You need to set at least
# Can be set to 'true' to allow the dynamic client lookup via URLs as
# 'client_id's during authorization_code flow initiation.
# default: false
ENABLE_EPHEMERAL_CLIENTS=true
Apart from this, there are more options you can modify. Ephemeral clients are the least flexible, and they share some common restrictions. These are valid for all ephemeral clients used with this instance, no matter if they define something else in their JSON document. This is important so a Rauthy admin can configure a security standard that can't be broken or ignored.
# If set to 'true', MFA / Passkeys will be forced for ephemeral clients.
# default: false
#EPHEMERAL_CLIENTS_FORCE_MFA=false
# The allowed flows separated by ' ' for ephemeral clients.
# default: "authorization_code"
#EPHEMERAL_CLIENTS_ALLOWED_FLOWS="authorization_code refresh_token"
# The allowed scopes separated by ' ' for ephemeral clients.
# default: "openid profile email webid"
#EPHEMERAL_CLIENTS_ALLOWED_SCOPES="openid profile email webid"
If you need support for Solid OIDC, you need to at least enable WebIDs and the solid `aud`:
# Can be set to 'true' to enable WebID functionality like needed
# for things like Solid OIDC.
# default: false
#ENABLE_WEB_ID=false
# If set to 'true', 'solid' will be added to the 'aud' claim from the ID token
# for ephemeral clients.
# default: false
#ENABLE_SOLID_AUD=true
The last option is the caching timeout. The cache for these documents is very important for performance. Without any cache, Rauthy would need to do an additional network round trip to the client JSON with each login. This is very inefficient, because usually these documents rarely change after the initial setup.
# The lifetime in seconds ephemeral clients will be kept inside the cache.
# default: 3600
#EPHEMERAL_CLIENTS_CACHE_LIFETIME=3600
If you are developing or testing, set the `EPHEMERAL_CLIENTS_CACHE_LIFETIME` to a very low value. Otherwise, Rauthy would not see any changes you make to this document as long as the internal cache has not expired.
Minimal Client JSON Document
The minimal requirements for the client JSON were described above already: `client_id` and `redirect_uris`. This means a minimal example JSON could look like this:
{
"client_id": "https://example.com/my_ephemeral_client.json",
"redirect_uris": [
"https://this.is.where/my/website/is"
]
}
If this is all you care about, you're done.
If you want to configure some more things, keep reading.
Advanced Client JSON Document
All values in the client document are validated, of course. Below you see a simplified version of the current Rust struct Rauthy tries to deserialize the document into:
struct EphemeralClientRequest {
    /// Validation: `^[a-zA-Z0-9,.:/_\\-&?=~#!$'()*+%]{2,256}$`
    client_id: String,
    /// Validation: `[a-zA-Z0-9À-ÿ-\\s]{2,128}`
    client_name: Option<String>,
    /// Validation: `[a-zA-Z0-9,.:/_-&?=~#!$'()*+%]+$`
    client_uri: Option<String>,
    /// Validation: `Vec<^[a-zA-Z0-9\+.@/]{0,48}$>`
    contacts: Option<Vec<String>>,
    /// Validation: `Vec<^[a-zA-Z0-9,.:/_\\-&?=~#!$'()*+%]+$>`
    redirect_uris: Vec<String>,
    /// Validation: `Vec<^[a-zA-Z0-9,.:/_\\-&?=~#!$'()*+%]+$>`
    post_logout_redirect_uris: Option<Vec<String>>,
    /// Validation: `Vec<^(authorization_code|client_credentials|password|refresh_token)$>`
    grant_types: Option<Vec<String>>,
    /// Validation: `60 <= access_token_lifetime <= 86400`
    default_max_age: Option<i32>,
    /// Validation: `[a-z0-9-_/:\s*]{0,512}`
    scope: Option<String>,
    require_auth_time: Option<bool>,
    /// Validation: `^(RS256|RS384|RS512|EdDSA)$`
    access_token_signed_response_alg: Option<JwkKeyPairAlg>,
    /// Validation: `^(RS256|RS384|RS512|EdDSA)$`
    id_token_signed_response_alg: Option<JwkKeyPairAlg>,
}
The `grant_types` exist for completeness only. They are actually ignored and overridden by the `EPHEMERAL_CLIENTS_ALLOWED_FLOWS` config variable.
With this information, we can now build a complete example JSON with all the possible values you can set:
{
"client_id": "https://example.com/my_ephemeral_client.json",
"client_name": "My Ephemeral Client",
"client_uri": "https://this.is.where",
"contacts": [
"mail@ephemeral.info"
],
"redirect_uris": [
"https://this.is.where/my/website/is"
],
"post_logout_redirect_uris": [
"https://this.is.where"
],
"default_max_age": 300,
"scope": "openid email custom",
"require_auth_time": true,
"access_token_signed_response_alg": "RS256",
"id_token_signed_response_alg": "RS256"
}
Rauthy fetches the document with an `Accept: application/json` header. This makes it possible to host your file on Github for instance and still receive valid JSON.
E-Mail Templates
The templates for E-Mails being sent by Rauthy are baked into the binary. This is the most efficient and fastest way of handling them. It simplifies the container images and speeds up generating new messages from templates compared to dynamic ones. At the same time, Rauthy checks these templates for correctness at compile time. It makes sure that each templated value exists in the internal application logic and vice versa.
If you need to modify the content of these E-Mails, you have 2 options:
- Use the pre-defined config variables
- Clone the repo, modify the templates and build it from source
You can currently only modify the New Password and Password Reset E-Mails. There was no need yet for any of the others.
Existing Config Values
Modifying the E-Mail contents via config variables is very straightforward. You don't need to compile from source, therefore can't make any mistakes, and for all variables that are not set, Rauthy will simply fall back to the default.
At the time of writing, translations exist for English (default) and German only. Values starting with `TPL_EN` will set the English translation while `TPL_DE` will set the German one.
You can take a look at the raw templates directly in the repo, so you can see which value will be mapped to which location in the final message.
The values to modify the sections are the following:
# New Password E-Mail
#TPL_EN_PASSWORD_NEW_SUBJECT="New Password"
#TPL_EN_PASSWORD_NEW_HEADER="New password for"
#TPL_EN_PASSWORD_NEW_TEXT=""
#TPL_EN_PASSWORD_NEW_CLICK_LINK="Click the link below to get forwarded to the password form."
#TPL_EN_PASSWORD_NEW_VALIDITY="This link is only valid for a short period of time for security reasons."
#TPL_EN_PASSWORD_NEW_EXPIRES="Link expires:"
#TPL_EN_PASSWORD_NEW_BUTTON="Set Password"
#TPL_EN_PASSWORD_NEW_FOOTER=""
#TPL_DE_PASSWORD_NEW_SUBJECT="Passwort Reset angefordert"
#TPL_DE_PASSWORD_NEW_HEADER="Passwort Reset angefordert für"
#TPL_DE_PASSWORD_NEW_TEXT=""
#TPL_DE_PASSWORD_NEW_CLICK_LINK="Klicken Sie auf den unten stehenden Link für den Passwort Reset."
#TPL_DE_PASSWORD_NEW_VALIDITY="Dieser Link ist aus Sicherheitsgründen nur für kurze Zeit gültig."
#TPL_DE_PASSWORD_NEW_EXPIRES="Link gültig bis:"
#TPL_DE_PASSWORD_NEW_BUTTON="Passwort Setzen"
#TPL_DE_PASSWORD_NEW_FOOTER=""
# Password Reset E-Mail
#TPL_EN_RESET_SUBJECT="Password Reset Request"
#TPL_EN_RESET_HEADER="Password reset request for"
#TPL_EN_RESET_TEXT=""
#TPL_EN_RESET_CLICK_LINK="Click the link below to get forwarded to the password request form."
#TPL_EN_RESET_VALIDITY="This link is only valid for a short period of time for security reasons."
#TPL_EN_RESET_EXPIRES="Link expires:"
#TPL_EN_RESET_BUTTON="Reset Password"
#TPL_EN_RESET_FOOTER=""
#TPL_DE_RESET_SUBJECT="Passwort Reset angefordert"
#TPL_DE_RESET_HEADER="Passwort Reset angefordert für"
#TPL_DE_RESET_TEXT=""
#TPL_DE_RESET_CLICK_LINK="Klicken Sie auf den unten stehenden Link für den Passwort Reset."
#TPL_DE_RESET_VALIDITY="Dieser Link ist aus Sicherheitsgründen nur für kurze Zeit gültig."
#TPL_DE_RESET_EXPIRES="Link gültig bis:"
#TPL_DE_RESET_BUTTON="Passwort Zurücksetzen"
#TPL_DE_RESET_FOOTER=""
Modify Templates Directly
Modifying templates directly gives you the most amount of control of course. You can change everything you like about them. However, you need to clone the whole repo, modify the templates in templates/email/, and then rebuild the whole project from source on your own. The existing setup should make it pretty easy to do. To get started, take a look at the CONTRIBUTING.md.
IP Blacklisting
Rauthy can blacklist certain IPs that should be prevented from even trying to access it. Each blacklisting will always have an expiry. This is important because most client IPs will be ephemeral.
Automatic Blacklisting
Blacklisting with different timeouts will happen automatically, for instance when thresholds for invalid logins have been reached or it is obvious that someone is scanning Rauthy's API. The scan detection is very basic, but will catch suspicious requests and bots automatically.
With failed logins, not only will the answer to the client be artificially delayed to prevent brute-force attacks; Rauthy also has the following default thresholds for blacklisting IPs:
Failed logins -> Blacklist duration
- 7 fails -> 1 minute
- 10 fails -> 10 minutes
- 15 fails -> 15 minutes
- 20 fails -> 1 hour
- 25 fails -> 1 day
- after 25 failed logins, each additional fail will result in an additional day of blacklisting.
In addition to blacklisting, the timeouts for failed logins in between these steps will be longer the higher the failed attempts counter is.
Suspicious Request Blacklisting
As mentioned already, Rauthy has basic capabilities to detect API scanners and bots. These are called suspicious requests internally.
By default, Rauthy will immediately block such IPs if it detects them. However, you may not want this for whatever reason. You can modify this behavior with the following config setting:
# The "catch all" route handler on `/` will compare the request path
# against a hardcoded list of common scan targets from bots and attackers.
# If the path matches any of these targets, the IP will be blacklisted
# preemptively for the set time in minutes.
# You can disable it with setting it to `0`.
# default: 1440
SUSPICIOUS_REQUESTS_BLACKLIST=1440
# This will emit a log with level of warning if a request to `/` has
# been made that has not been caught by any of the usual routes and
# and handlers. Apart from a request to just `/` which will end in
# a redirect to `/auth/v1`, all additional path's will be logged.
# This can help to improve the internal suspicious blocklist in the
# future.
# default: false
SUSPICIOUS_REQUESTS_LOG=true
At the time of writing, events for suspicious requests do not exist yet. This might change in the future.
Manual Blacklisting
You can also manually blacklist an IP, either via the Admin UI or with an API Key with the correct access rights. Just navigate to `Blacklist` in the Admin UI and click `Blacklist IP`.
Persistence
The blacklist currently is in-memory only. This means you lose all blacklisted IPs when Rauthy restarts.
The reason behind this is that blacklisting usually happens in scenarios under attack, when you want to do as little work as possible, for instance to not end up with a DoS. The blacklisting middleware is also the very first one in the API stack, even before access logging, to make sure Rauthy does the least amount of work for blocking blacklisted IPs.
Expiry
After a blacklisting expires, the entry will be fully removed from Rauthy and you will not see it anymore.
Blacklist Events
You may or may not be notified about different blacklisting events. All auto-blacklistings will trigger an event. The levels change depending on the failed login counter. You can adapt the levels to your liking by setting:
# The level for the generated Event after certain amounts of false logins from an IP
# default: critical
EVENT_LEVEL_FAILED_LOGINS_25=critical
# default: critical
EVENT_LEVEL_FAILED_LOGINS_20=critical
# default: warning
EVENT_LEVEL_FAILED_LOGINS_15=warning
# default: warning
EVENT_LEVEL_FAILED_LOGINS_10=warning
# default: notice
EVENT_LEVEL_FAILED_LOGINS_7=notice
# default: info
EVENT_LEVEL_FAILED_LOGIN=info
The event level for all other blacklistings can be set with this value:
# The level for the generated Event after an IP has been blacklisted
# default: warning
EVENT_LEVEL_IP_BLACKLISTED=warning
JSON Web Keys
JSON Web Keys (JWKs) are used to sign JWT tokens. Multiple JWKs form a JSON Web Key Set (JWKS). These are private / public key pairs Rauthy generates internally in a cryptographically secure way. All of them are generated at once, one for each algorithm Rauthy supports signing tokens with, which are
- RS256
- RS384
- RS512
- EdDSA
The RSA algorithms exist for compatibility. `RS256` is the only algorithm mandated by the OIDC RFC, and `RS384` and `RS512` basically come for free when you implement `RS256`. However, these algorithms produce pretty big signatures and are slow to generate.
Rauthy does not support ECDSA keys, because they are the worst option in this scenario in my opinion. ECDSA keys have the advantage that they produce way smaller signatures than RSA keys and can be generated pretty fast, but they are slow at token validation. Tokens need to be validated with each single request, so you want this to be as fast as possible (without sacrificing security of course).
The best option is EdDSA, which uses `ed25519` keys. It is the fastest option at signing and validation, fast to generate, and produces the smallest signatures and therefore the smallest total token size. These are the default when you create a new client, but some applications do not support them. If you have trouble logging in because of a problem with the signature, try changing it to `RS256`.
Key Rotation
As long as your private keys do not leak, you technically do not need to rotate keys and generate new ones. However, it is good practice to do this regularly. Usually, you don't need to care about this: Rauthy rotates its JWKS automatically with default settings. This will happen on the 1st of every month at 03:30 in the morning. You can change this behavior with the following setting:
# JWKS auto rotate cronjob. This will (by default) rotate all JWKs every
# 1. day of the month. If you need smaller intervals, you may adjust this
# value. For security reasons, you cannot fully disable it.
# In a HA deployment, this job will only be executed on the current cache
# leader at that time.
# Format: "sec min hour day_of_month month day_of_week year"
# default: "0 30 3 1 * * *"
JWK_AUTOROTATE_CRON="0 30 3 1 * * *"
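For instance, if you preferred a weekly rotation, the cron value could be adjusted like this. Note that whether named weekdays like `Mon` are accepted depends on the cron parser in use; this is an assumed example, check against your running instance:

```
# Rotate every Monday at 03:30 instead of monthly
# (same format: "sec min hour day_of_month month day_of_week year")
JWK_AUTOROTATE_CRON="0 30 3 * * Mon *"
```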
If you however had a secret leak or something like that, you can of course rotate manually. You just need to log in to the Admin UI: `Config` -> `JWKS` -> `Rotate Keys`.
When Rauthy does a key rotation, the old keys will not be deleted immediately, because they will be needed for some additional time to verify existing tokens.
A rotation generates new keys which will then be used for any upcoming token signatures. The old keys will be auto-deleted after 30 days; you don't need to care about cleanup.
Rotation Event
Rauthy creates a new event when keys are rotated. The default level is `notice`. If you want to change this, you can do so with:
# The level for the generated Event after the JWKS has been rotated
# default: notice
EVENT_LEVEL_JWKS_ROTATE=notice
I18n
Rauthy supports translated views for all user-facing UI and E-Mails.
At the time of writing, there are English (default) and German. If you need another language, it can be implemented pretty easily, as the project is prepared for that. You can either open a PR and provide all the necessary translations, or, if you are a company, you can reach out to me: `mail@sebadob.dev`.
What will never happen though is the implementation of an automated approach, because all these generated translations are usually very bad. Also, when the migration to Leptos has been done at some point, adding new translations will get easier, as I will reorganize the way it's done. Offline preparations have been made already.
Reference Config
This shows a full example config with (hopefully) every value nicely described.
When you go into production, make sure that you provide the included secrets / sensitive information in this file in an appropriate way. With Docker, you can leave them inside this file, but when deploying with Kubernetes, extract these values, create Kubernetes Secrets, and provide them as environment variables.
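For Kubernetes, a minimal sketch of such a Secret could look like this (the Secret name `rauthy-config-secrets` and the selection of keys are only placeholders, nothing Rauthy mandates):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rauthy-config-secrets
type: Opaque
stringData:
  # `stringData` accepts plain text values for readability;
  # Kubernetes stores them base64 encoded
  HQL_SECRET_RAFT: SuperSecureSecret1337
  HQL_SECRET_API: SuperSecureSecret1337
  ENC_KEYS: |
    bVCyTsGaggVy5yqQ/UzluN29DZW41M3hTSkx6Y3NtZmRuQkR2TnJxUTYzcjQ=
```

You would then reference the Secret from the Rauthy container, for instance via `envFrom` with a `secretRef`, instead of keeping these values in this file.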
#####################################
############## ACCESS ###############
#####################################
# If the User Registration endpoint should be accessible by anyone.
# If not, an admin must create each new user. (default: false)
#OPEN_USER_REG=true
# If set to true, the `/userinfo` endpoint will do additional validations.
# The non-strict mode will fetch the user by id from the `sub` claim and make
# sure it still exists and is enabled. The strict validation will do additional
# database fetches and validates every possible value.
# Additionally, it will look up a possibly linked user device from the `did` claim
# and make sure it still exists. It will also extract the `client_id` the token
# has been originally issued for from the `azp` claim, fetch it and make sure it
# still exists and is enabled.
# If you don't need the extra validations, you can set this to `false` to
# save some resources, if your clients do a lot of `/userinfo` lookups.
# default: true
#USERINFO_STRICT=true
# Can be set to `true` to disable authorization on `/oidc/introspect`.
# This should usually never be done, but since the auth on that endpoint is not
# really standardized, you may run into issues with your client app.
# If so, please open an issue about it.
# default: false
#DANGER_DISABLE_INTROSPECT_AUTH=false
# Can be used when 'OPEN_USER_REG=true' to restrict the domains
# for a registration. For instance, set it to
# 'USER_REG_DOMAIN_RESTRICTION=gmail.com' to allow only
# registrations with 'user@gmail.com' (default: '')
#USER_REG_DOMAIN_RESTRICTION=some-domain.com
# If `OPEN_USER_REG=true`, you can blacklist certain domains
# on the open registration endpoint.
# Provide the domains as a `\n` separated list.
#USER_REG_DOMAIN_BLACKLIST="
#example.com
#evil.net
#"
# If set to true, a violation inside the CSRF protection middleware based
# on Sec-* headers will block invalid requests. Usually you always want this
# enabled. You may only set it to false during the first testing phase if you
# experience any issues with an already existing Rauthy deployment.
# In future releases, it will not be possible to disable these blocks.
# default: true
#SEC_HEADER_BLOCK=true
# If set to 'true', this will validate the remote peer IP address with
# each request and compare it with the IP which was used during the initial
# session creation / login. If the IP is different, the session will be
# rejected. This is a security hardening and prevents stolen access credentials,
# for instance if an attacker might have copied the encrypted session cookie
# and the XSRF token from the local storage from a user. However, this event
# is really unlikely, since it may only happen if an attacker has direct access
# to the machine itself.
#
# If your users are using mobile networks and get new IP addresses all the time,
# this means they have to do a new login each time. This is no big deal at all with
# Webauthn / FIDO keys anyway and should not be a reason to deactivate this feature.
#
# Caution: If you are running behind a reverse proxy which does not provide the
# X-FORWARDED-FOR header correctly, or you have the PROXY_MODE in this config
# disabled, this feature will not work. You can validate the IPs for each session
# in the Admin UI. If these are correct, your setup is okay.
#
# (default: true)
#SESSION_VALIDATE_IP=true
# By default, Rauthy will log a warning, if an active password
# reset form is being accessed multiple times from different hosts. You can set
# this to `true` to actually block any following request after the initial one.
# This hardens the security of the password reset form a bit more, but will
# create problems with E-Mail providers like Microsoft, which scan the customers'
# E-Mails and even open the links inside them, which makes them unusable with
# this set to true.
# This feature works by setting an encrypted cookie for the host which opens
# the password reset form for the very first time. All subsequent requests either
# need to provide that cookie or will be rejected.
#PASSWORD_RESET_COOKIE_BINDING=true
# Can be set to extract the remote client peer IP from a custom header name
# instead of the default mechanisms. This is needed when you are running
# behind a proxy which does not set the `X-REAL-IP` or `X-FORWARDED-FOR` headers
# correctly, or for instance when you proxy your requests through a CDN like
# Cloudflare, which adds custom headers in this case.
# For instance, if your requests are proxied through Cloudflare, you would
# set `CF-Connecting-IP`.
PEER_IP_HEADER_NAME="CF-Connecting-IP"
# You can enable authn/authz headers which would be added to the response
# of the `/auth/v1/oidc/forward_auth` endpoint. With `AUTH_HEADERS_ENABLE=true`,
# the headers below will be added to authenticated requests. These could
# be used by legacy downstream applications that don't support OIDC on
# their own.
# However, be careful when using this, since this kind of authn/authz has
# a lot of pitfalls out of the scope of Rauthy.
# default: false
AUTH_HEADERS_ENABLE=true
# Configure the header names being used for the different values.
# You can change them to your needs, if you cannot easily change your
# downstream apps.
# default: x-forwarded-user
AUTH_HEADER_USER=x-forwarded-user
# default: x-forwarded-user-roles
AUTH_HEADER_ROLES=x-forwarded-user-roles
# default: x-forwarded-user-groups
AUTH_HEADER_GROUPS=x-forwarded-user-groups
# default: x-forwarded-user-email
AUTH_HEADER_EMAIL=x-forwarded-user-email
# default: x-forwarded-user-email-verified
AUTH_HEADER_EMAIL_VERIFIED=x-forwarded-user-email-verified
# default: x-forwarded-user-family-name
AUTH_HEADER_FAMILY_NAME=x-forwarded-user-family-name
# default: x-forwarded-user-given-name
AUTH_HEADER_GIVEN_NAME=x-forwarded-user-given-name
# default: x-forwarded-user-mfa
AUTH_HEADER_MFA=x-forwarded-user-mfa
# You can set different security levels for Rauthy's cookies.
# The safest option would be 'host', but may not be desirable when
# you host an application on the same origin behind a reverse proxy.
# In this case you might want to restrict to 'secure', which will then
# take the COOKIE_PATH from below into account.
# The last option is 'danger-insecure' which really should never be used
# unless you are just testing on localhost and you are using Safari.
#COOKIE_MODE=host
# If set to 'true', Rauthy will bind the cookie to the `/auth` path.
# You may want to change this only for very specific reasons and if
# you are in such a situation, where you need this, you will know it.
# Otherwise don't change this value.
# default: true
#COOKIE_SET_PATH=true
# The "catch all" route handler on `/` will compare the request path
# against a hardcoded list of common scan targets from bots and attackers.
# If the path matches any of these targets, the IP will be blacklisted
# preemptively for the set time in minutes.
# You can disable it by setting it to `0`.
# default: 1440
#SUSPICIOUS_REQUESTS_BLACKLIST=1440
# This will emit a log with level warning if a request to `/` has
# been made that has not been caught by any of the usual routes
# and handlers. Apart from a request to just `/`, which will end in
# a redirect to `/auth/v1`, all additional paths will be logged.
# This can help to improve the internal suspicious blocklist in the
# future.
# default: false
#SUSPICIOUS_REQUESTS_LOG=false
#####################################
############# BACKUPS ###############
#####################################
# Cron job for automatic data store backups (default: "0 0 4 * * * *")
# sec min hour day_of_month month day_of_week year
#BACKUP_TASK="0 0 4 * * * *"
# The name for the data store backups. The current timestamp
# will always be appended automatically. (default: rauthy-backup-)
#BACKUP_NAME="rauthy-backup-"
# All backups older than the specified hours will be cleaned up
# automatically (default: 720)
#BACKUP_RETENTION_LOCAL=720
# The following section will only be taken into account, when
# SQLite is used as the main database. If you use Postgres, you
# should use Postgres native tooling like for instance `pgbackrest`
# to manage your backups.
# If S3 access is configured, your SQLite backups will be encrypted
# and pushed into the configured bucket.
#S3_URL=
#S3_REGION=
#S3_PATH_STYLE=false
#S3_BUCKET=my_s3_bucket_name
#S3_ACCESS_KEY=
#S3_ACCESS_SECRET=
#S3_DANGER_ALLOW_INSECURE=false
# Restores the given backup
#
# CAUTION: Be very careful with this option - it will overwrite
# any existing database! The best way to use this option is to
# provide it as an environment variable for a single start up
# and then remove it directly after success.
#
# This only works when you are using a SQLite database!
# If you are running on Postgres, you must use Postgres-native
# tooling to handle your backups.
#
# You can either restore a local backup, or an encrypted one
# from S3 storage.
#
# For restoring from a local backup, provide the folder name
# of the backup you want to restore. Local SQLite backups are
# always in `./data/backup/rauthy-backup-TIMESTAMP/` folders.
# You only provide the backup folder name itself, in this case
# it would be `rauthy-backup-TIMESTAMP` like this:
# RESTORE_BACKUP=file:rauthy-backup-TIMESTAMP
#
# If you want to restore an encrypted backup from S3 storage,
# you must provide the object name in the configured bucket.
# For instance, let's say we have an object named
# `rauthy-0.20.0-1703243039.cryptr` in our bucket, then the
# format would be:
# RESTORE_BACKUP=s3:rauthy-0.20.0-1703243039.cryptr
#
#RESTORE_BACKUP=
#####################################
############ BOOTSTRAP ##############
#####################################
# If set, the email of the default admin will be changed
# during the initialization of an empty production database.
#BOOTSTRAP_ADMIN_EMAIL=admin@localhost.de
# If set, this plain text password will be used for the
# initial admin password instead of generating a random
# password.
#BOOTSTRAP_ADMIN_PASSWORD_PLAIN="123SuperSafe"
# If set, this will take the argon2id hashed password
# during the initialization of an empty production database.
# If both BOOTSTRAP_ADMIN_PASSWORD_PLAIN and
# BOOTSTRAP_ADMIN_PASSWORD_ARGON2ID are set, the hashed version
# will always be prioritized.
#BOOTSTRAP_ADMIN_PASSWORD_ARGON2ID='$argon2id$v=19$m=32768,t=3,p=2$xr23OhOHw+pNyy3dYKZUcA$CBO4NpGvyi6gvrb5uNrnsS/z/Ew+IuS0/gVqFmLKncA'
# You can provide an API Key during the initial prod database
# bootstrap. This key must match the format and pass validation.
# You need to provide it as a base64 encoded JSON in the format:
#
# ```
# struct ApiKeyRequest {
# /// Validation: `^[a-zA-Z0-9_-/]{2,24}$`
# name: String,
# /// Unix timestamp in seconds in the future (max year 2099)
# exp: Option<i64>,
# access: Vec<ApiKeyAccess>,
# }
#
# struct ApiKeyAccess {
# group: AccessGroup,
# access_rights: Vec<AccessRights>,
# }
#
# enum AccessGroup {
# Blacklist,
# Clients,
# Events,
# Generic,
# Groups,
# Roles,
# Secrets,
# Sessions,
# Scopes,
# UserAttributes,
# Users,
# }
#
# #[serde(rename_all = "lowercase")]
# enum AccessRights {
# Read,
# Create,
# Update,
# Delete,
# }
# ```
#
# You can use the `api_key_example.json` from `/` as
# an example. Afterwards, just `base64 api_key_example.json | tr -d '\n'`
#BOOTSTRAP_API_KEY="ewogICJuYW1lIjogImJvb3RzdHJhcCIsCiAgImV4cCI6IDE3MzU1OTk2MDAsCiAgImFjY2VzcyI6IFsKICAgIHsKICAgICAgImdyb3VwIjogIkNsaWVudHMiLAogICAgICAiYWNjZXNzX3JpZ2h0cyI6IFsKICAgICAgICAicmVhZCIsCiAgICAgICAgImNyZWF0ZSIsCiAgICAgICAgInVwZGF0ZSIsCiAgICAgICAgImRlbGV0ZSIKICAgICAgXQogICAgfSwKICAgIHsKICAgICAgImdyb3VwIjogIlJvbGVzIiwKICAgICAgImFjY2Vzc19yaWdodHMiOiBbCiAgICAgICAgInJlYWQiLAogICAgICAgICJjcmVhdGUiLAogICAgICAgICJ1cGRhdGUiLAogICAgICAgICJkZWxldGUiCiAgICAgIF0KICAgIH0sCiAgICB7CiAgICAgICJncm91cCI6ICJHcm91cHMiLAogICAgICAiYWNjZXNzX3JpZ2h0cyI6IFsKICAgICAgICAicmVhZCIsCiAgICAgICAgImNyZWF0ZSIsCiAgICAgICAgInVwZGF0ZSIsCiAgICAgICAgImRlbGV0ZSIKICAgICAgXQogICAgfQogIF0KfQ=="
# The secret for the above defined bootstrap API Key.
# This must be at least 64 alphanumeric characters long.
# You will be able to use that key afterwards with setting
# the `Authorization` header:
#
# `Authorization: API-Key <your_key_name_from_above>$<this_secret>`
#BOOTSTRAP_API_KEY_SECRET=twUA2M7RZ8H3FyJHbti2AcMADPDCxDqUKbvi8FDnm3nYidwQx57Wfv6iaVTQynMh
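To illustrate how the key name and secret fit together, the following sketch only builds the header value; the key name `bootstrap` and the example URL are assumptions, not fixed by Rauthy:

```shell
# Hypothetical values: the name must match the `name` field from the
# bootstrap API Key JSON, the secret must match BOOTSTRAP_API_KEY_SECRET.
KEY_NAME="bootstrap"
KEY_SECRET="twUA2M7RZ8H3FyJHbti2AcMADPDCxDqUKbvi8FDnm3nYidwQx57Wfv6iaVTQynMh"
# Name and secret are joined with a literal '$':
AUTH_HEADER="API-Key ${KEY_NAME}\$${KEY_SECRET}"
echo "$AUTH_HEADER"
# e.g.: curl -H "Authorization: $AUTH_HEADER" https://rauthy.example.com/auth/v1/users
```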
#####################################
############## CACHE ################
#####################################
# Can be set to 'k8s' to try to split off the node id from the hostname
# when Hiqlite is running as a StatefulSet inside Kubernetes.
#HQL_NODE_ID_FROM=k8s
# The node id must exist in the nodes and there must always be
# at least a node with ID 1
# Will be ignored if `HQL_NODE_ID_FROM=k8s`
HQL_NODE_ID=1
# All cluster member nodes.
# To make setting the env var easy, the values are separated by `\s`
# while nodes are separated by `\n`
# in the following format:
#
# id addr_raft addr_api
# id addr_raft addr_api
# id addr_raft addr_api
#
# 2 nodes must be separated by 2 `\n`
HQL_NODES="
1 localhost:8100 localhost:8200
"
# If set to `true`, all SQL statements will be logged for debugging
# purposes.
# default: false
HQL_LOG_STATEMENTS=true
# Secrets for Raft internal authentication as well as for the Hiqlite API.
# These must be at least 16 characters long and you should provide
# different ones for both variables.
HQL_SECRET_RAFT=SuperSecureSecret1337
HQL_SECRET_API=SuperSecureSecret1337
# You can either parse `ENC_KEYS` and `ENC_KEY_ACTIVE` from the
# environment with setting this value to `env`, or parse them from
# a file on disk with `file:path/to/enc/keys/file`
# default: env
#HQL_ENC_KEYS_FROM=env
#####################################
############ DATABASE ###############
#####################################
# The database driver will be chosen at runtime depending on
# the given DATABASE_URL format. Examples:
# Sqlite: 'sqlite:data/rauthy.db' or 'sqlite::memory:'
# Postgres: 'postgresql://User:PasswordWithoutSpecialCharacters@localhost:5432/DatabaseName'
#
# NOTE: The password in this case should be alphanumeric. Special
# characters could cause problems in the connection string.
#
# CAUTION:
# To make the automatic migrations work with Postgres15, when
# you do not want to just use the `postgres` user, you need
# to have a user with the same name as the DB / schema. For
# instance, the following would work without granting extra
# access to the `public` schema which is disabled by default
# since PG15:
#
# database: rauthy
# user: rauthy
# schema: rauthy with owner rauthy
#
#DATABASE_URL=sqlite::memory:
#DATABASE_URL=sqlite:data/rauthy.db
#DATABASE_URL=postgresql://rauthy:123SuperSafe@localhost:5432/rauthy
# Max DB connections - irrelevant for SQLite (default: 5)
#DATABASE_MAX_CONN=5
# If specified, the current Database, set with DATABASE_URL,
# will be DELETED and OVERWRITTEN with a migration from the
# given database with this variable. Can be used to migrate
# between different databases.
#
# !!! USE WITH CARE !!!
#MIGRATE_DB_FROM=sqlite:data/rauthy.db
# Defines the time in seconds after which the `/health` endpoint
# includes HA quorum checks. The initial delay solves problems
# like Kubernetes StatefulSet starts that include the health
# endpoint in the scheduling routine. In these cases, the scheduler
# will not start other Pods if the first does not become healthy.
#
# This is a chicken-and-egg problem which the delay solves.
# There is usually no need to adjust this value.
#
# default: 30
#HEALTH_CHECK_DELAY_SECS=30
# Disables the housekeeping schedulers (default: false)
#SCHED_DISABLE=true
# The interval in minutes in which the scheduler for expired
# users should run. If this finds expired users, it invalidates
# all existing sessions and refresh tokens for this user.
# default: 60
#SCHED_USER_EXP_MINS=60
# The threshold in minutes after which the user expiry
# scheduler should automatically clean up expired users.
# If not set at all, expired users will never be cleaned up
# automatically.
# default: disabled / not set
#SCHED_USER_EXP_DELETE_MINS=7200
#####################################
######### DEVICE GRANT ##############
#####################################
# The lifetime in seconds of auth codes for the Device Authorization
# Grant flow. You may increase the default of 300 seconds, if you have
# "slow users" and they are simply not fast enough with the verification.
# default: 300
DEVICE_GRANT_CODE_LIFETIME=300
# The length of the `user_code` the user has to enter manually for
# auth request validation. This must be < 64 characters.
# default: 8
DEVICE_GRANT_USER_CODE_LENGTH=8
# Specifies the rate-limit in seconds per IP for starting new Device
# Authorization Grant flows. This is especially important for public
# clients, because a code request for this flow will actually create
# cached data. If this would happen on an unrestricted, open endpoint,
# the application could easily be DoS'ed.
# If you use the `device_code` grant with confidential clients only,
# you can leave this unset, which will not rate-limit the endpoint.
# default: not set
DEVICE_GRANT_RATE_LIMIT=1
# The interval in seconds which devices are told to use when they
# poll the token endpoint during Device Authorization Grant flow.
# default: 5
DEVICE_GRANT_POLL_INTERVAL=5
# You can define a global lifetime in hours for refresh tokens issued
# from a Device Authorization Grant flow. You might want to have a
# higher lifetime than normal refresh tokens, because they might be
# used in IoT devices which may be offline for longer periods of time.
# default: 72
DEVICE_GRANT_REFRESH_TOKEN_LIFETIME=72
#####################################
############## DPOP #################
#####################################
# May be set to 'false' to disable forcing the usage of
# DPoP nonces.
# default: true
DPOP_FORCE_NONCE=true
# Lifetime in seconds for DPoP nonces. These are used to
# limit the lifetime of a client's DPoP proof. Do not set
# lower than 30 seconds to avoid too many failed client
# token requests.
# default: 900
DPOP_NONCE_EXP=900
#####################################
########## DYNAMIC CLIENTS ##########
#####################################
# If set to `true`, dynamic client registration will be enabled.
# Only activate this if you really need it and you know what
# you are doing. Dynamic client registration without further
# restriction will allow anyone to register new clients, even
# bots and spammers, and this may create security issues if it is
# not handled properly and your users just log in blindly to any
# client they get redirected to.
# default: false
#ENABLE_DYN_CLIENT_REG=false
# If specified, this secret token must be provided as a
# `Bearer <DYN_CLIENT_REG_TOKEN>` token during dynamic client
# registrations. It needs to be communicated in advance.
# default: <empty>
#DYN_CLIENT_REG_TOKEN=
# The default token lifetime in seconds for a dynamic client,
# that will be set during the registration.
# This value can be modified manually after registration via
# the Admin UI like for any other client.
# default: 1800
#DYN_CLIENT_DEFAULT_TOKEN_LIFETIME=1800
# If set to 'true', client secret and registration token will be
# automatically rotated each time a dynamic client updates itself
# via the PUT endpoint. This is the only way that secret rotation
# could be automated safely.
# However, this is not mandatory by RFC and it may lead to errors,
# if the dynamic clients are not implemented properly to check for
# and update their secrets after they have done a request.
# If you run into secret problems with dynamic clients, you should
# update the client to check for new secrets, if this is under your
# control. If you cannot do anything about it, you might set this
# value to 'false' to disable secret rotation.
# default: true
#DYN_CLIENT_SECRET_AUTO_ROTATE=true
# This scheduler will be running in the background, if
# `ENABLE_DYN_CLIENT_REG=true`. It will auto-delete dynamic clients
# that have been registered but not used within the following
# `DYN_CLIENT_CLEANUP_THRES` hours.
# Since a dynamic client should be used right away, this should never
# be a problem with "real" clients that are not bots or spammers.
#
# The interval is specified in minutes.
# default: 60
#DYN_CLIENT_CLEANUP_INTERVAL=60
# The threshold for newly registered dynamic clients cleanup, if
# not being used within this timeframe. This is a helper to keep
# the database clean, if you are not using any `DYN_CLIENT_REG_TOKEN`.
# The threshold should be specified in minutes. Any client that has
# not been used within this time after the registration will be
# automatically deleted.
#
# Note: This scheduler will only run, if you have not set any
# `DYN_CLIENT_REG_TOKEN`.
#
# default: 60
#DYN_CLIENT_CLEANUP_MINUTES=60
# The rate-limiter timeout for dynamic client registration.
# This is the timeout in seconds which will prevent an IP from
# registering another dynamic client, if no `DYN_CLIENT_REG_TOKEN`
# is set. With a `DYN_CLIENT_REG_TOKEN`, the rate-limiter will not
# be applied.
# default: 60
#DYN_CLIENT_RATE_LIMIT_SEC=60
#####################################
############# E-MAIL ################
#####################################
# This contact information will be added to the `rauthy` client
# within the anti lockout rule with each new restart.
RAUTHY_ADMIN_EMAIL="admin@localhost.de"
# Will be used as the prefix for the E-Mail subject for each E-Mail
# that will be sent out to a client.
# This can be used to further customize your deployment.
# default: "Rauthy IAM"
EMAIL_SUB_PREFIX="Rauthy IAM"
# Rauthy will force TLS and try a downgrade to STARTTLS, if
# TLS fails. It will never allow an unencrypted connection.
# You might want to set `SMTP_DANGER_INSECURE=true` if you
# need this for local dev.
#SMTP_URL=
#SMTP_USERNAME=
#SMTP_PASSWORD=
# Format: "Rauthy <rauthy@localhost.de>"
# default: "Rauthy <rauthy@localhost.de>"
#SMTP_FROM=
# The number of retries that should be done for connecting to
# the given SMTP_URL.
# When these retries are exceeded without success, Rauthy will
# panic and exit, so no E-Mail can get lost silently because of
# a missing SMTP connection.
# default: 3
#SMTP_CONNECT_RETRIES=3
# You can set this to `true` to allow an unencrypted and
# unauthenticated SMTP connection to an SMTP relay on your localhost
# or for development purposes.
# When set to `true`, `SMTP_USERNAME` and `SMTP_PASSWORD` will be
# ignored and you can modify the target port with
# `SMTP_DANGER_INSECURE_PORT`.
# default: false
#SMTP_DANGER_INSECURE=false
# The port for an insecure SMTP relay.
# This will most likely be used for testing only.
# It will only be taken into account if `SMTP_DANGER_INSECURE=true` is set.
# default: 1025
#SMTP_DANGER_INSECURE_PORT=1025
#####################################
###### ENCRYPTION / HASHING #########
#####################################
# You need to define at least one valid encryption key.
# These keys are used in various places, like for instance
# encrypting confidential client secrets in the database, or
# encrypting cookies, and so on.
#
# The format must match:
# ENC_KEYS="
# q6u26onRvXVG4427/M0NFQzhSSldCY01rckJNa1JYZ3g2NUFtSnNOVGdoU0E=
# bVCyTsGaggVy5yqQ/UzluN29DZW41M3hTSkx6Y3NtZmRuQkR2TnJxUTYzcjQ=
# "
#
# The first part until the first `/` is the key ID.
# The ID must match '[a-zA-Z0-9]{2,20}'
#
# The key itself begins after the first `/` has been found.
# The key must be exactly 32 bytes long, encoded as base64.
#
# You can find a more detailed explanation on how to generate
# keys in the documentation:
# 1. https://sebadob.github.io/rauthy/getting_started/k8s.html#create-and-apply-secrets
# 2. https://sebadob.github.io/rauthy/config/encryption.html
#
# You can provide multiple keys to make things like key
# rotation work. Be careful with removing old keys. Make sure
# that all secrets have been migrated beforehand.
# You can find a utility in the Admin UI to do this for you.
#
#ENC_KEYS="
#q6u26onRvXVG4427/M0NFQzhSSldCY01rckJNa1JYZ3g2NUFtSnNOVGdoU0E=
#bVCyTsGaggVy5yqQ/UzluN29DZW41M3hTSkx6Y3NtZmRuQkR2TnJxUTYzcjQ=
#"
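One possible way to generate a new entry in exactly this format is sketched below; `openssl` is only one option, any source of 32 random bytes will do:

```shell
# Key ID: must match [a-zA-Z0-9]{2,20} - 8 random hex chars here
KEY_ID="$(openssl rand -hex 4)"
# Key: exactly 32 random bytes, base64 encoded
KEY_B64="$(openssl rand -base64 32)"
# Print the finished `id/key` entry for the ENC_KEYS list
echo "${KEY_ID}/${KEY_B64}"
```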
# This identifies the key ID from the `ENC_KEYS` list, that
# should actively be used for new encryptions.
ENC_KEY_ACTIVE=bVCyTsGaggVy5yqQ
# Argon2ID hashing parameters. Take a look at the documentation
# for more information:
# https://sebadob.github.io/rauthy/config/argon2.html
# M_COST should never be below 32768 in production
ARGON2_M_COST=131072
# T_COST should never be below 1 in production
ARGON2_T_COST=4
# P_COST should never be below 2 in production
ARGON2_P_COST=8
# Limits the maximum amount of parallel password hashes at
# the exact same time to never exceed system memory while
# still allowing a good amount of memory for the argon2id
# algorithm (default: 2)
#
# CAUTION: You must make sure that your deployment has at least
# (MAX_HASH_THREADS * ARGON2_M_COST / 1024) MB plus some idle memory
# available
MAX_HASH_THREADS=1
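With the values from above, the worst-case hashing memory can be estimated like this (a sketch of the formula from the comment; ARGON2_M_COST is given in kB):

```shell
ARGON2_M_COST=131072    # kB, as configured above
MAX_HASH_THREADS=1
# Upper bound in MB for memory used by concurrent password hashes
MAX_HASH_MEM_MB=$(( MAX_HASH_THREADS * ARGON2_M_COST / 1024 ))
echo "${MAX_HASH_MEM_MB} MB"
```

So with these settings, roughly 128 MB plus idle memory should be available.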
# The time in ms when to log a warning, if a request waited
# longer than this time. This is an indicator that you have
# more concurrent logins than allowed and may need config
# adjustments, if this happens more often. (default: 500)
#HASH_AWAIT_WARN_TIME=500
# JWKS auto rotate cronjob. This will (by default) rotate
# all JWKs every 1. day of the month. If you need smaller
# intervals, you may adjust this value. For security reasons,
# you cannot fully disable it.
# In a HA deployment, this job will only be executed on the
# current cache leader at that time.
# Format: "sec min hour day_of_month month day_of_week year"
# default: "0 30 3 1 * * *"
#JWK_AUTOROTATE_CRON="0 30 3 1 * * *"
#####################################
####### EPHEMERAL CLIENTS ###########
#####################################
# Can be set to 'true' to allow the dynamic client lookup via URLs as
# 'client_id's during authorization_code flow initiation.
# default: false
#ENABLE_EPHEMERAL_CLIENTS=false
# Can be set to 'true' to enable WebID functionality like needed
# for things like Solid OIDC.
# default: false
#ENABLE_WEB_ID=false
# If set to 'true', 'solid' will be added to the 'aud' claim from the ID token
# for ephemeral clients.
# default: false
#ENABLE_SOLID_AUD=false
# If set to 'true', MFA / Passkeys will be forced for ephemeral clients.
# default: false
#EPHEMERAL_CLIENTS_FORCE_MFA=false
# The allowed flows separated by ' ' for ephemeral clients.
# default: "authorization_code"
#EPHEMERAL_CLIENTS_ALLOWED_FLOWS="authorization_code refresh_token"
# The allowed scopes separated by ' ' for ephemeral clients.
# default: "openid profile email webid"
#EPHEMERAL_CLIENTS_ALLOWED_SCOPES="openid profile email webid"
# The lifetime in seconds ephemeral clients will be kept inside the cache.
# default: 3600
#EPHEMERAL_CLIENTS_CACHE_LIFETIME=3600
#####################################
######### EVENTS / AUDIT ############
#####################################
# The E-Mail address event notifications should be sent to.
#EVENT_EMAIL=admin@localhost.de
# Matrix variables for event notifications.
# `EVENT_MATRIX_USER_ID` and `EVENT_MATRIX_ROOM_ID` are mandatory.
# Depending on your Matrix setup, additionally one of
# `EVENT_MATRIX_ACCESS_TOKEN` or `EVENT_MATRIX_USER_PASSWORD` is needed.
# If you log in to Matrix with User + Password, you may use `EVENT_MATRIX_USER_PASSWORD`.
# If you log in via OIDC SSO (or just want to use a session token you can revoke),
# you should provide `EVENT_MATRIX_ACCESS_TOKEN`.
# If both are given, the `EVENT_MATRIX_ACCESS_TOKEN` will be preferred.
#
# If left empty, no messages will be sent to Matrix.
# Format: `@<user_id>:<server address>`
#EVENT_MATRIX_USER_ID=
# Format: `!<random string>:<server address>`
#EVENT_MATRIX_ROOM_ID=
#EVENT_MATRIX_ACCESS_TOKEN=
#EVENT_MATRIX_USER_PASSWORD=
# URL of your Matrix server.
# default: https://matrix.org
#EVENT_MATRIX_SERVER_URL=https://matrix.org
# Optional path to a PEM Root CA certificate file for the Matrix client.
#EVENT_MATRIX_ROOT_CA_PATH=path/to/my/root_ca_cert.pem
# May be set to disable the TLS validation for the Matrix client.
# default: false
#EVENT_MATRIX_DANGER_DISABLE_TLS_VALIDATION=false
# The default behavior is, that Rauthy will panic at startup if it cannot connect
# to a configured Matrix server. The reason is that event notifications cannot be
# dropped silently.
# However, if you use a self-hosted Matrix server which uses Rauthy as its OIDC
# provider and both instances went offline, you will have a chicken and egg problem:
# - Rauthy cannot connect to Matrix and will panic
# - Your Matrix server cannot connect to Rauthy and will panic
# To solve this issue, you can temporarily set this value to 'true' and revert
# back, after the system is online again.
# default: false
#EVENT_MATRIX_ERROR_NO_PANIC=false
# The Webhook for Slack Notifications.
# If left empty, no messages will be sent to Slack.
#EVENT_SLACK_WEBHOOK=
# The notification level for events. Works the same way as a logging level.
# For instance: 'notice' means send out notifications for all events with
# the notice level or higher.
# Possible values:
# - info
# - notice
# - warning
# - critical
#
# default: 'warning'
EVENT_NOTIFY_LEVEL_EMAIL=warning
# default: 'notice'
EVENT_NOTIFY_LEVEL_MATRIX=notice
# default: 'notice'
EVENT_NOTIFY_LEVEL_SLACK=notice
# Define the level from which on events should be persisted inside the
# database. All events with a lower level will be lost, if there is no
# active event subscriber.
# Possible values:
# - info
# - notice
# - warning
# - critical
# default: 'info'
#EVENT_PERSIST_LEVEL=info
# Define the number of days when events should be cleaned up from the database.
# default: 31
#EVENT_CLEANUP_DAYS=31
# The level for the generated Event after a new user has been registered.
# default: info
EVENT_LEVEL_NEW_USER=info
# The level for the generated Event after a user has changed their E-Mail
# default: notice
EVENT_LEVEL_USER_EMAIL_CHANGE=notice
# The level for the generated Event after a user has reset their password
# default: notice
EVENT_LEVEL_USER_PASSWORD_RESET=notice
# The level for the generated Event after a user has been given the
# 'rauthy_admin' role
# default: notice
EVENT_LEVEL_RAUTHY_ADMIN=notice
# The level for the generated Event after a new App version has been found
# default: notice
EVENT_LEVEL_RAUTHY_VERSION=notice
# The level for the generated Event after the JWKS has been rotated
# default: notice
EVENT_LEVEL_JWKS_ROTATE=notice
# The level for the generated Event after DB secrets have been migrated
# to a new key
# default: notice
EVENT_LEVEL_SECRETS_MIGRATED=notice
# The level for the generated Event after a Rauthy instance has been
# started
# default: info
EVENT_LEVEL_RAUTHY_START=info
# The level for the generated Event after a Rauthy entered a healthy
# state (again)
# default: notice
EVENT_LEVEL_RAUTHY_HEALTHY=notice
# The level for the generated Event after a Rauthy entered an unhealthy
# state
# default: critical
EVENT_LEVEL_RAUTHY_UNHEALTHY=critical
# The level for the generated Event after an IP has been blacklisted
# default: warning
EVENT_LEVEL_IP_BLACKLISTED=warning
# The level for the generated Event after certain amounts of false
# logins from an IP
# default: critical
EVENT_LEVEL_FAILED_LOGINS_25=critical
# default: critical
EVENT_LEVEL_FAILED_LOGINS_20=critical
# default: warning
EVENT_LEVEL_FAILED_LOGINS_15=warning
# default: warning
EVENT_LEVEL_FAILED_LOGINS_10=warning
# default: notice
EVENT_LEVEL_FAILED_LOGINS_7=notice
# default: info
EVENT_LEVEL_FAILED_LOGIN=info
# If set to 'true', it will disable the app version checker.
# This is a scheduled task that periodically looks up the latest version
# by making a request to the GitHub API for the latest release.
# This ignores any type of prerelease and will only notify for a new stable release.
# default: false
#DISABLE_APP_VERSION_CHECK=false
#####################################
############## FED CM ###############
#####################################
## CAUTION: The FedCM implementation is highly experimental at this point!
## Do not attempt to use it in production because it is subject to change
## in the future! The spec is currently a draft and under active development.
# Set to `true` to enable the experimental FedCM.
# default: false
#EXPERIMENTAL_FED_CM_ENABLE=false
# Session lifetime for FedCM in seconds - the session can not be extended
# beyond this time and a new login will be forced.
# default: 2592000
#SESSION_LIFETIME_FED_CM=2592000
# Session timeout for FedCM in seconds
# When a new token / login is requested before this timeout hits the limit,
# the user will be authenticated without prompting for the credentials again.
# This is the value which can extend the session, until it hits its maximum
# lifetime set with SESSION_LIFETIME_FED_CM.
# default: 259200
#SESSION_TIMEOUT_FED_CM=259200
#####################################
####### LIFETIMES / TIMEOUTS ########
#####################################
# Set the grace time in seconds for how long the refresh
# token should still be valid after usage. Keep this
# value small, but do not set it to 0 with an HA deployment
# to not get issues with small HA cache latencies.
#
# If you have an external client, which does concurrent
# requests, from which the request interceptor wants to refresh
# the token, you may have multiple hits on the endpoint and all
# of them should be valid.
#
# Caching is done on the endpoint itself, but a grace time of 0
# will only be good for a single instance of rauthy.
# default: 5
#REFRESH_TOKEN_GRACE_TIME=5
# Session lifetime in seconds - the session can not be
# extended beyond this time and a new login will be forced.
# This is the session for the authorization code flow.
# default: 14400
#SESSION_LIFETIME=14400
# If 'true', a 2FA / MFA check will be done with each automatic
# token generation, even with an active session, which more or
# less defeats the purpose of the session with Webauthn enabled,
# but provides the maximum amount of security.
# If 'false', the user will not get an MFA prompt with an active
# session at the authorization endpoint.
# default: false
#SESSION_RENEW_MFA=false
# Session timeout in seconds
# When a new token / login is requested before this timeout hits
# the limit, the user will be authenticated without prompting for
# the credentials again.
# This is the value which can extend the session, until it hits
# its maximum lifetime set with SESSION_LIFETIME.
# default: 5400
#SESSION_TIMEOUT=5400
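# A worked example with the defaults above: a session created at 08:00
# can be silently extended as long as a new token / login is requested
# at least every SESSION_TIMEOUT=5400 seconds (90 min). No matter how
# often it is extended, SESSION_LIFETIME=14400 (4 h) forces a fresh
# login at 12:00 at the latest.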
# ML: magic link
# LT: lifetime
# Lifetime in minutes for reset password magic links (default: 30)
#ML_LT_PWD_RESET=30
# Lifetime in minutes for the first password magic link, for
# setting the initial password. (default: 4320)
#ML_LT_PWD_FIRST=4320
#####################################
############# LOGGING ###############
#####################################
# This is the log level for stdout logs
# Accepts: error, info, debug, trace (default: info)
#LOG_LEVEL=info
# The log level for the `Hiqlite` persistence layer.
# At the time of writing, only the cache will use `hiqlite`
# default: info
LOG_LEVEL_DATABASE=info
# This is a special config which allows the configuration of
# customized access logs. These logs will be logged with each
# request in addition to the normal LOG_LEVEL logs.
# The following values are valid:
# - Debug
# CAUTION: The Debug setting logs every information available
# to the middleware which includes SENSITIVE HEADERS
# DO NOT use the Debug level in a working production environment!
# - Verbose
# Verbose logging without headers - generates huge outputs
# - Basic
# Logs access to all endpoints apart from the Frontend ones
# which serve all the JS, CSS, ... assets
# - Modifying
# Logs only requests to modifying endpoints and skips all GET
# - Off
# default: Modifying
#LOG_LEVEL_ACCESS=Basic
# You can change the log output format to JSON, if you set:
# `LOG_FMT=json`.
# Keep in mind that some logs will include escaped values,
# for instance when `Text` already logs a JSON in debug level.
# Some other logs, like Events for instance, will be formatted
# as text anyway. If you need to auto-parse events, please consider
# using an API token and listening to them actively.
# default: text
#LOG_FMT=text
#####################################
################ MFA ################
#####################################
# If 'true', MFA for an account must be enabled to access the
# rauthy admin UI (default: true)
#ADMIN_FORCE_MFA=true
#####################################
############## POW #################
#####################################
# The difficulty for a Proof-of-Work (PoW).
# The default is 20, which is reasonable for modern processors.
# This is currently only used for the user registration via UI.
# The value must be between 10 and 99.
# default: 20
POW_DIFFICULTY=20
# The expiration duration in seconds for a PoW
# default: 30
POW_EXP=30
#####################################
############# SERVER ################
#####################################
# The server address to listen on. Can bind to a specific IP.
# default: 0.0.0.0
#LISTEN_ADDRESS=0.0.0.0
# The listen ports for HTTP / HTTPS, depending on the
# activated 'LISTEN_SCHEME'
# default: 8080
#LISTEN_PORT_HTTP=8080
# default: 8443
#LISTEN_PORT_HTTPS=8443
# The scheme to use locally, valid values:
# http | https | http_https | unix_http | unix_https (default: http_https)
# For more details about the UNIX domain socket, check out its documentation page.
LISTEN_SCHEME=http
# The Public URL of the whole deployment
# The LISTEN_SCHEME + PUB_URL must match the HTTP `Origin`
# header later on, which is especially important when running
# rauthy behind a reverse proxy. In case of a non-standard
# port (80/443), you need to add the port to the PUB_URL.
PUB_URL=localhost:8080
# default value: number of available physical cores
#HTTP_WORKERS=1
# When rauthy is running behind a reverse proxy, set to true
# default: false
PROXY_MODE=false
# A `\n` separated list of trusted proxy CIDRs.
# When `PROXY_MODE=true` or `PEER_IP_HEADER_NAME` is set,
# these are mandatory to be able to extract the real client
# IP properly and safely to prevent IP header spoofing.
# All requests with a different source will be blocked.
#TRUSTED_PROXIES="
#192.168.14.0/24
#"
# To bring support for applications using deep-linking, you can set custom URL
# schemes to be accepted when present in the `Origin` header. For instance, a
# Tauri app would set `tauri://` instead of `https://`.
#
# Provide the value as a space separated list of Strings, like for instance:
# "tauri myapp"
#ADDITIONAL_ALLOWED_ORIGIN_SCHEMES="tauri myapp"
# To enable or disable the additional HTTP server to expose the /metrics endpoint
# default: true
#METRICS_ENABLE=true
# The IP address to listen on for the /metrics endpoint.
# You do not want to expose your metrics on a publicly reachable endpoint!
# default: 0.0.0.0
#METRICS_ADDR=0.0.0.0
# The port to listen on for the /metrics endpoint.
# You do not want to expose your metrics on a publicly reachable endpoint!
# default: 9090
#METRICS_PORT=9090
# If the Swagger UI should be served together with the /metrics route on the internal
# server. It it then reachable via:
# http://METRICS_ADDR:METRICS_PORT/docs/v1/swagger-ui/
# default: true
#SWAGGER_UI_INTERNAL=true
# If the Swagger UI should be served externally as well. This makes the link in the
# Admin UI work.
#
# CAUTION: The Swagger UI is open and does not require any login to be seen!
# Rauthy is open source, which means anyone could just download it and inspect it on
# their own, but exposing less information is still a good idea from a security perspective.
# default: false
#SWAGGER_UI_EXTERNAL=false
# The interval in seconds in which keep-alives should be sent to SSE clients.
# Depending on your network setup, proxy timeouts, ...
# you may adjust this value to fit your needs.
# default: 30
#SSE_KEEP_ALIVE=30
#####################################
############ TEMPLATES ##############
#####################################
# You can overwrite some default email templating values here.
# If you want to modify the basic templates themselves, this is
# currently only possible with a custom build from source.
# The content however can mostly be set here.
# If the below values are not set, the default will be taken.
# New Password E-Mail
#TPL_EN_PASSWORD_NEW_SUBJECT="New Password"
#TPL_EN_PASSWORD_NEW_HEADER="New password for"
#TPL_EN_PASSWORD_NEW_TEXT=""
#TPL_EN_PASSWORD_NEW_CLICK_LINK="Click the link below to get forwarded to the password form."
#TPL_EN_PASSWORD_NEW_VALIDITY="This link is only valid for a short period of time for security reasons."
#TPL_EN_PASSWORD_NEW_EXPIRES="Link expires:"
#TPL_EN_PASSWORD_NEW_BUTTON="Set Password"
#TPL_EN_PASSWORD_NEW_FOOTER=""
#TPL_DE_PASSWORD_NEW_SUBJECT="Passwort Reset angefordert"
#TPL_DE_PASSWORD_NEW_HEADER="Passwort Reset angefordert für"
#TPL_DE_PASSWORD_NEW_TEXT=""
#TPL_DE_PASSWORD_NEW_CLICK_LINK="Klicken Sie auf den unten stehenden Link für den Passwort Reset."
#TPL_DE_PASSWORD_NEW_VALIDITY="Dieser Link ist aus Sicherheitsgründen nur für kurze Zeit gültig."
#TPL_DE_PASSWORD_NEW_EXPIRES="Link gültig bis:"
#TPL_DE_PASSWORD_NEW_BUTTON="Passwort Setzen"
#TPL_DE_PASSWORD_NEW_FOOTER=""
# Password Reset E-Mail
#TPL_EN_RESET_SUBJECT="Password Reset Request"
#TPL_EN_RESET_HEADER="Password reset request for"
#TPL_EN_RESET_TEXT=""
#TPL_EN_RESET_CLICK_LINK="Click the link below to get forwarded to the password request form."
#TPL_EN_RESET_VALIDITY="This link is only valid for a short period of time for security reasons."
#TPL_EN_RESET_EXPIRES="Link expires:"
#TPL_EN_RESET_BUTTON="Reset Password"
#TPL_EN_RESET_FOOTER=""
#TPL_DE_RESET_SUBJECT="Passwort Reset angefordert"
#TPL_DE_RESET_HEADER="Passwort Reset angefordert für"
#TPL_DE_RESET_TEXT=""
#TPL_DE_RESET_CLICK_LINK="Klicken Sie auf den unten stehenden Link für den Passwort Reset."
#TPL_DE_RESET_VALIDITY="Dieser Link ist aus Sicherheitsgründen nur für kurze Zeit gültig."
#TPL_DE_RESET_EXPIRES="Link gültig bis:"
#TPL_DE_RESET_BUTTON="Passwort Zurücksetzen"
#TPL_DE_RESET_FOOTER=""
#####################################
############### TLS #################
#####################################
## Rauthy TLS
# Overwrite the path to the TLS certificate file in PEM
# format for rauthy (default: tls/tls.crt)
#TLS_CERT=tls/tls.crt
# Overwrite the path to the TLS private key file in PEM
# format for rauthy. If the path / filename ends with
# '.der', rauthy will parse it as DER, otherwise as PEM.
# (default: tls/tls.key)
#TLS_KEY=tls/tls.key
## CACHE TLS
# If given, these keys / certificates will be used to establish
# TLS connections between nodes.
HQL_TLS_RAFT_KEY=tls/key.pem
HQL_TLS_RAFT_CERT=tls/cert-chain.pem
HQL_TLS_RAFT_DANGER_TLS_NO_VERIFY=true
HQL_TLS_API_KEY=tls/key.pem
HQL_TLS_API_CERT=tls/cert-chain.pem
HQL_TLS_API_DANGER_TLS_NO_VERIFY=true
#####################################
############# WEBAUTHN ##############
#####################################
# The 'Relying Party (RP) ID' - effective domain name
# (default: localhost)
# CAUTION: When this changes, already registered devices will stop
# working and users cannot log in anymore!
RP_ID=localhost
# Url containing the effective domain name
# (default: http://localhost:8080)
# CAUTION: Must include the port number!
RP_ORIGIN=http://localhost:8080
# Non critical RP Name
# Has no security properties and may be changed without issues
# (default: Rauthy Webauthn)
RP_NAME='Rauthy Webauthn'
# The Cache lifetime in seconds for Webauthn requests. Within
# this time, a webauthn request must have been validated.
# (default: 60)
#WEBAUTHN_REQ_EXP=60
# The Cache lifetime for additional Webauthn Data like auth
# codes and so on. Should not be lower than WEBAUTHN_REQ_EXP.
# The value is in seconds (default: 90)
#WEBAUTHN_DATA_EXP=90
# With webauthn enabled for a user, they need to enter
# username / password on a new system. If these credentials are
# verified, rauthy will set an additional cookie, which will
# determine how long the user can then use only (safe) MFA
# passwordless webauthn login with Yubikeys, Apple Touch ID,
# Windows Hello, ... until they need to verify their credentials
# again.
# Passwordless login is generally much safer than logging in
# with a password. But sometimes it is possible that the
# Webauthn devices do not force the user to include a second
# factor, which in that case would be a single factor login
# again. That is why the original password should be asked for
# in addition once in a while before setting the cookie.
# The value is in hours (default: 2160)
#WEBAUTHN_RENEW_EXP=2160
# This feature can be set to 'true' to force User Verification
# during the Webauthn ceremony. UV will be true if the user
# does not only verify their presence by touching the key,
# but also provides proof that they know (or are) some secret
# via a PIN or a biometric key, for instance. With UV, we have a
# true MFA scenario, whereas UV == false (user presence only)
# would be a 2FA scenario (with password). Be careful with this
# option, since Android and some special combinations of OS +
# browser do not support UV yet.
#WEBAUTHN_FORCE_UV=false
# Can be set to 'true' to disable password expiry for users
# that have at least one active passkey. When set to 'false',
# the same password expiry from the set policy will apply to
# these users as well.
# With this option active, rauthy will ignore any password
# expiry set by the password policy for Webauthn users.
# default: true
#WEBAUTHN_NO_PASSWORD_EXPIRY=true
Swagger UI
Rauthy comes with its own OpenAPI documentation and Swagger UI. There are some things you should know about it.
It is publicly exposed when you start Rauthy locally for testing, but not by default in production. The reason is that it is not easily possible with the built-in options to show it only to authenticated Rauthy admins. Rauthy is open source and anyone can simply start it and take a look at the API docs, but exposing the API docs to anyone in production is not good practice, especially because it might increase the attack surface.
For this reason, you have multiple ways of accessing the Swagger UI.
The UI is not meant to be used for testing out endpoints. Rauthy has way too many options and details regarding authentication, and it is not so simple that the default Swagger UI could handle it, at least to my knowledge.
If you want to test certain endpoints, I suggest you create an API Key for that case, which you can use with any tool you like.
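As a minimal sketch of that suggestion with `curl` (the host name, key name, and secret are placeholders; the API Keys section in the Admin UI shows the exact `Authorization` header value to copy for your own key):

```shell
# List users via the Rauthy API using an API key.
# 'auth.example.com' and the key value are hypothetical - replace them
# with your instance and the header shown in the Admin UI.
curl -s \
  -H 'Authorization: API-Key my_key$supersecretvalue' \
  https://auth.example.com/auth/v1/users
```

The same header works with any HTTP tool of your choice, so you are not bound to the Swagger UI for testing.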
If you just want to take a look at the Swagger UI while you have no running Rauthy instance yet, I have a test / dev instance running which exposes the API docs: Swagger UI
Please note that this is a testing instance; it might not be online all the time, or it might be running the latest nightly build, but everything should be fine most of the time.
External Access
If you open the Rauthy Admin UI and navigate to Docs, you will find a link to the Swagger UI that each Rauthy instance hosts. By default, this will not be exposed in production and the link in the UI will not work, as mentioned in the description in the same place.
If you don't care about exposing it publicly, you can do so with the following and make the link work:
# If the Swagger UI should be served externally as well. This makes the link in the Admin UI work.
#
# CAUTION: The Swagger UI is open and does not require any login to be seen!
# Rauthy is open source, which means anyone could just download it and inspect it on their own,
# but exposing less information is still a good idea from a security perspective.
# (default: false)
SWAGGER_UI_EXTERNAL=true
Internal Access
In addition to the default HTTP server, Rauthy will start a second one on a different port for metrics (if not disabled on purpose). The Swagger UI will be exposed on the second port by default. You can expose the default port while making the second one available only in an internal network for instance.
If you are in a network where you have access to the second, internal port, you can reach the Swagger UI via /docs/v1/swagger-ui/ on port 9090.
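Assuming the defaults (`METRICS_ADDR=0.0.0.0`, `METRICS_PORT=9090`), a quick check from a host inside the internal network could look like this (`rauthy.internal` is a placeholder for your internal host name):

```shell
# Fetch the internal Swagger UI - a 200 response confirms it is served
# on the metrics server.
curl -i http://rauthy.internal:9090/docs/v1/swagger-ui/
```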
You can change the internal exposure or the port as well. The metrics and the internal Swagger UI share the same config:
# To enable or disable the additional HTTP server to expose the /metrics endpoint
# default: true
#METRICS_ENABLE=true
# The IP address to listen on for the /metrics endpoint.
# You do not want to expose your metrics on a publicly reachable endpoint!
# default: 0.0.0.0
#METRICS_ADDR=0.0.0.0
# The port to listen on for the /metrics endpoint.
# You do not want to expose your metrics on a publicly reachable endpoint!
# default: 9090
#METRICS_PORT=9090
# If the Swagger UI should be served together with the /metrics route on the internal server.
# It is then reachable via:
# http://METRICS_ADDR:METRICS_PORT/docs/v1/swagger-ui/
# (default: true)
#SWAGGER_UI_INTERNAL=true