Installing an SSO solution in my homelab with Authelia to simplify user management and reduce the number of logins required.
First, what is single sign-on?
Single sign-on (SSO) is an authentication scheme that allows a user to log in with a single ID to any of several related, yet independent, software systems.
True single sign-on allows the user to log in once and access services without re-entering authentication factors. It should not be confused with same-sign-on (Directory Server Authentication), which refers to systems that require authentication for each application but use the same credentials from a directory server.
In my homelab, I don't really care if I need to re-enter my login/password to sign in to an app. So what I'm actually using is same-sign-on; however, I'll only show examples of single sign-on in this post.
Which one?
There are multiple options, each with its advantages and disadvantages. Here is a list of the ones I investigated, with my ups and downs for each:
CAS (Central Authentication Service)
Overall really amazing 🔥, it's used by numerous universities in France, mainly because the CAS protocol is available on many educational tools like Moodle. However, it's Java-based, which already tells a lot: it's extremely resource intensive, especially on RAM. It's also a bit annoying to configure, since you need to recompile the thing each time. On the upside, it supports just about every authentication source and authentication protocol imaginable.
Keycloak, a pretty suitable option, it's a bit more lightweight than CAS (but it's still Java 😒). It supports an LDAP backend. Overall pretty fantastic, and used by numerous pros.
ADFS, I just want the damn thing to work, lol
Kerberos, well, it might work, but not many tools/apps support it, so it's a no-go
Authelia, it's perfect: Go-based, supports most of the authentication schemes I want to use, as well as WebAuthn, TOTP and mobile push notifications. It's easy to configure, has a pretty good community, and offers multiple integrations with proxies and apps
I tested a lot of them, so I probably forgot to mention one, don't hesitate to comment!
Installation
I chose to use the full docker-compose method with some modifications to suit my setup. The original example uses Traefik because it assumes everything runs on the same network, which is fine, but I'm using multiple hosts for my apps, so it had to go. Here is what my docker-compose.yml looks like:
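A minimal sketch of such a compose file, with the Traefik labels removed (the image tag, volume path and timezone below are assumptions, not my exact values):

```yaml
version: "3.8"
services:
  authelia:
    image: authelia/authelia:latest   # pin a specific version in practice
    container_name: authelia
    volumes:
      - ./config:/config              # holds configuration.yml and friends
    ports:
      - "9091:9091"                   # Authelia's default port
    environment:
      - TZ=Europe/Paris               # assumed timezone
    restart: unless-stopped
```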
Next is the server. The docker-compose.yml exposes port 9091, so that's the one to use. I'm using the subdomain auth.internal.tugler.fr, so I don't need to change the root path. I also didn't set up SSL directly in there, because I handle it in the reverse proxy with ACME. The rest is default.
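For reference, a sketch of the relevant server section of configuration.yml (key names may vary slightly between Authelia versions):

```yaml
server:
  host: 0.0.0.0
  port: 9091
  # no TLS block here: certificates are terminated at the reverse proxy
```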
Next is two-factor authentication. I've enabled TOTP and WebAuthn just in case, but I don't actually intend to use them on most apps, maybe on core services like Proxmox, but I haven't decided yet.
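The corresponding second-factor sections might look something like this (the issuer name is an assumption):

```yaml
totp:
  issuer: auth.internal.tugler.fr
  period: 30        # seconds per code, the default

webauthn:
  disable: false
  display_name: Authelia
```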
Next, I need to set up the actual backend. To check users, I have a Windows server running just for that (it's about the only thing that works well). The config was pretty simple; there is a pretty good example in the docs to get started. Even though my ldap_bind user has permission to modify passwords, I didn't actually enable that for now.
The password policy also doesn't need to be enabled.
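A sketch of the LDAP backend config against Active Directory (hostnames, DNs and the password below are placeholders, not my real values):

```yaml
authentication_backend:
  password_reset:
    disable: true   # the bind user could change passwords, but it's off for now
  ldap:
    implementation: activedirectory
    url: ldaps://dc.internal.example.com
    base_dn: DC=internal,DC=example,DC=com
    user: CN=ldap_bind,CN=Users,DC=internal,DC=example,DC=com
    password: "a-strong-password"
```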
Next is the access control part. The default is to deny everything; I allow my internal network at 10.10.0.0/16 and my internal VPN's network at 10.5.0.0/16.
I also set my app dashboard (internal.tugler.fr) to bypass any auth, and only allow users in the Apps_Paperless_Admin group to access paperless.internal.tugler.fr (explained later).
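Put together, the access control section might look like this (the wildcard rule and the exact policies are assumptions about my real rules):

```yaml
access_control:
  default_policy: deny
  rules:
    - domain: internal.tugler.fr
      policy: bypass                       # the dashboard needs no auth
    - domain: paperless.internal.tugler.fr
      policy: one_factor
      subject:
        - "group:Apps_Paperless_Admin"     # only this group may enter
    - domain: "*.internal.tugler.fr"
      policy: one_factor
      networks:
        - 10.10.0.0/16                     # internal network
        - 10.5.0.0/16                      # internal VPN
```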
I then configured the Redis cache for the session store and the anti-brute-force settings.
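A sketch of the session and regulation sections (the Redis hostname and the retry/ban timings are assumptions):

```yaml
session:
  name: authelia_session
  domain: internal.tugler.fr
  redis:
    host: redis       # the Redis container from docker-compose
    port: 6379

regulation:
  max_retries: 3      # failed attempts before a ban
  find_time: 2m       # window in which attempts are counted
  ban_time: 5m        # how long the ban lasts
```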
Notifications are something I didn't configure. I'll probably add email notifications in the future, but I don't have a mail server set up right now, so file system notification it is, just to keep the thing happy.
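The filesystem notifier is basically a one-liner (the path is an assumption):

```yaml
notifier:
  filesystem:
    filename: /config/notification.txt
```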
The final thing to configure is the OpenID Connect provider, for which I basically used the default config. The clients array will be configured in the next part.
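The provider section then boils down to something like this (the secrets and key are placeholders):

```yaml
identity_providers:
  oidc:
    hmac_secret: "a-long-random-secret"
    issuer_private_key: |
      -----BEGIN RSA PRIVATE KEY-----
      placeholder, generate your own key
      -----END RSA PRIVATE KEY-----
    clients: []   # filled in per application later
```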
My case is a bit special because I use the Paperless API to automatically ingest PDFs from my emails, and I can't automate the login flow.
Hence why I have two hosts: direct.paperless.internal.tugler.fr proxies Paperless directly (and uses the standard Paperless auth, which has one user specifically for this), while paperless.internal.tugler.fr goes through Authelia first.
Caddy is configured to send these headers to paperless:
Remote-User → Username
Remote-Groups → Groups of said user
Remote-Name → Full name of the user
Remote-Email → Email of the user
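A sketch of what that Caddy site block can look like, using the forward_auth directive (the upstream names and verify URL are assumptions):

```caddyfile
paperless.internal.tugler.fr {
    # ask Authelia before letting the request through
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://auth.internal.tugler.fr
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy paperless:8000
}
```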
I also needed to configure environment variables for the Docker container. Here is a very reduced example:
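Something along these lines, assuming Paperless-ngx (a reduced sketch, the exact list is trimmed):

```yaml
# excerpt from the paperless service in docker-compose.yml
environment:
  PAPERLESS_ENABLE_HTTP_REMOTE_USER: "true"     # trust the Remote-User header from the proxy
  PAPERLESS_HTTP_REMOTE_USER_HEADER_NAME: HTTP_REMOTE_USER
```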
Overall, I'm very pleased with this setup. It will simplify the login process for all the apps and eliminate the need for multiple user databases (for those that don't support LDAP) 😀.
Moreover, it's f-ing cool to have a proper setup in a homelab 😎.
Self-Host all the things!