March 10, 2026·10 min read
Deploying my own services (Part 2)

This is a continuation of part 1 of my self-hosting series. You should definitely check that post out first to make sure you have enough context coming into this one.

Now that I had the Coder server running, there were two main things I wanted to address to get the full potential out of the deployment: SELinux and wildcard domain support.

Dealing with SELinux

What SELinux is all about

SELinux (Security-Enhanced Linux) is a Linux Security Module that enforces MAC (Mandatory Access Control). To my knowledge, it operates semi-independently of the root user (root can still decide whether SELinux policies are being enforced), but outside of that it works without regard to Linux users, and as such can enforce policies that prevent extreme damage to the system in case malware finds its way in.

By default, SELinux is bundled with RHEL and RHEL-adjacent distros (Fedora, CentOS, Oracle Linux), and ships in enforcing mode. It is possible to set SELinux to permissive (i.e. it will log an illegal operation as denied but will still allow it anyway) with

sudo setenforce 0

but this setting will not persist after a reboot.
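For completeness, a quick sketch of checking the mode and making the change survive a reboot (this assumes the stock config location on Fedora/RHEL-family systems):

```shell
# Check the current SELinux mode (Enforcing, Permissive, or Disabled)
getenforce

# Switch to permissive until the next reboot
sudo setenforce 0

# To persist the change across reboots, set SELINUX=permissive
# in /etc/selinux/config
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```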

The relationship between SELinux and Podman

SELinux and Podman work hand in hand to enforce strict container isolation. Podman already leads the charge by being rootless by default, meaning it does not need the root user to operate; Podman itself, along with its containers, can run as a standard user. This has many security benefits, because the root user is no longer involved: should an attacker get into a container and attempt to escalate their privileges, at best they will land in a non-root user with no sudo capabilities, limiting the damage they can do to the system.

SELinux takes that one step further by controlling what resources a Podman container is allowed to access. Filesystem access, port access, and so on are all vetted by SELinux to ensure they are allowed under its policies. SELinux comes with a set of default policies for containers that give them most of the freedoms of the user they run as, but with some very notable restrictions.

Podman containers are strictly forbidden from accessing host devices or any Podman socket, even if those have been mounted properly in the run command or the compose file. A reasonable restriction, given that most containers have no business managing the host's containers or using privileged devices. However, my case is different.

My container setup and SELinux

Since Coder is a workspace management platform, it needs to be able to control the host's resources in some way. Podman-in-Podman is possible, but it was something I did not want to delve into, as it would complicate enforcing SELinux policies. Instead, I decided to mount the host's rootless Podman socket into the container, allowing new containers to be provisioned on the host from within the Coder container.
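A minimal sketch of what that mount looks like as a plain podman run command (the container name, image tag, and in-container socket path here are illustrative assumptions, not my exact compose file):

```shell
# Mount the host's rootless Podman socket into the Coder container.
# $XDG_RUNTIME_DIR is typically /run/user/<uid> for a rootless user.
podman run -d --name coder \
  -v "$XDG_RUNTIME_DIR/podman/podman.sock:/var/run/docker.sock" \
  ghcr.io/coder/coder:latest
```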

This worked while SELinux was set to permissive, but once it was set to enforcing, I could no longer create workspaces; I got a Permission denied error. There was also an SELinux log entry stating that the container had attempted to access the Podman socket and was denied, as that is not allowed under the default policies.
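If you hit the same wall, these denials surface as AVC records in the audit log; something along these lines will show them:

```shell
# List recent SELinux denials (AVC records)
sudo ausearch -m AVC -ts recent

# Or watch them arrive live
sudo tail -f /var/log/audit/audit.log | grep denied
```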

I was at a loss for a while, as I did not know how to write my own SELinux policies. That is, until I stumbled upon Udica, a tool created by Red Hat that generates SELinux policies for containers by analysing their metadata.

I used this tool to create the SELinux policy and loaded it with a semodule command. With that, I could leave SELinux enforcing while still being able to create workspaces! There is also an inherent benefit: I keep the security guarantees of SELinux and follow the principle of least privilege, where the container is given just enough privileges to do its job.
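The Udica workflow, roughly (the container name and policy name here are placeholders, and Udica itself prints the exact semodule invocation to run for your generated module):

```shell
# 1. Dump the running container's metadata
podman inspect coder > coder.json

# 2. Generate a custom SELinux policy module from that metadata
udica -j coder.json coder_policy

# 3. Load the policy together with Udica's base template
sudo semodule -i coder_policy.cil /usr/share/udica/templates/base_container.cil

# 4. Re-run the container under the generated type
podman run --security-opt label=type:coder_policy.process ...
```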

Now I was on to the second problem: wildcard domain support.

Wildcard Domain Support

First of all, it is good to know that wildcard domains are technically optional for Coder, but they come with some nice benefits:

  1. Web-based applications can run on their own subdomain, giving them their own security context isolated from the main coder web server.
  2. Workspace ports can be accessed via subdomains instead of paths (3000.workspace.coder.home.net vs coder.home.net/workspace/3000), and
  3. Especially in web-based VS Code, most extensions expect to be running at the root of a domain rather than at a path. Running them from a path breaks that assumption and causes some extensions, like the Flutter extension, to stop working.
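On the Coder side, wildcard access is controlled by one setting alongside the main access URL; a hedged sketch of the relevant environment variables (the domain is, of course, my internal one):

```shell
# Main URL for the Coder dashboard and API
export CODER_ACCESS_URL="https://coder.home.net"

# Wildcard domain used for workspace apps and port forwarding
export CODER_WILDCARD_ACCESS_URL="*.coder.home.net"
```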

Alongside the benefits, however, there were also some challenges I had to handle:

  1. Tailscale's MagicDNS did not support wildcard subdomains, meaning that I could not just use something.coder.tail4ef781.ts.net as a wildcard domain,
  2. I do not run my own DNS server
  3. Even if I did run my own DNS server, I would need to find a way to manage certificates for HTTPS support.

And so began my journey to enable wildcard support for my deployment.

Step 1: Prepare the custom DNS solution.

At this point I once again had a Raspberry Pi lying around waiting to be used for some homelab project, and so I thought I would use it as a quick DNS server.

I decided to go with AdGuard Home, since it supports DNS rewrites, with a cherry on top: it also blocks analytics and tracking at the DNS level. I pulled the container image, attached it to a Tailscale sidecar, and now I had a DNS server running!
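Leaving the Tailscale sidecar aside, the AdGuard Home container itself boils down to something like this (ports and volume paths are assumptions; binding to port 53 rootless may additionally require lowering net.ipv4.ip_unprivileged_port_start):

```shell
podman run -d --name adguard \
  -p 53:53/udp -p 53:53/tcp \
  -p 3000:3000/tcp \
  -v ~/adguard/work:/opt/adguardhome/work \
  -v ~/adguard/conf:/opt/adguardhome/conf \
  docker.io/adguard/adguardhome:latest
```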

I added rewrites for coder.home.net and *.coder.home.net to point to my own internal web server, which I will be detailing in the next step.
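A quick way to confirm the rewrites behave from any machine on the tailnet (100.64.0.2 is a stand-in for the Pi's Tailscale address, not my real one):

```shell
# Query both the apex and an arbitrary subdomain against the Pi
dig +short coder.home.net @100.64.0.2
dig +short anything.coder.home.net @100.64.0.2
# Both should print the reverse proxy's address
```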

Step 2: Serving the coder website at the wildcard domain

Now that my DNS was configured, I needed a reverse proxy to receive requests and forward them to the correct destinations. I decided to use Caddy, as its syntax is very simple, and in no time I had added reverse proxy rules for coder.home.net and *.coder.home.net pointing to my Coder deployment.
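The Caddyfile ends up surprisingly short; a sketch, assuming Coder is listening on its default 127.0.0.1:3000:

```Caddyfile
coder.home.net, *.coder.home.net {
    reverse_proxy localhost:3000
}
```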

While it may seem like that's all there is to it, a crucial part of the puzzle remains: HTTPS.

Step 3: Handling HTTPS

HTTPS is quite important, especially for a development platform, as most browsers restrict what websites can do when they are served over plain HTTP. In this regard, though, my options were pretty limited. I did not actually own the home.net domain, so I could not configure Caddy to go to Let's Encrypt for certificates. And while I could use Caddy's internal self-signed certificates, those only last for 7 days, and it gets very annoying to keep renewing and re-trusting certificates every week. Therefore, I decided to host my own Certificate Authority.

Step CA was the perfect candidate, as it supports ACME (the same protocol Caddy uses to obtain certificates from a certificate provider) and could easily be started as a Podman container in my podman-compose file. After reconfiguring Caddy to serve HTTPS using certificates issued by my CA, and trusting the CA root on my Mac and my iPhone (via Keychain Access and a Configuration Profile respectively), I could now securely access Coder on both devices.
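Pointing Caddy at the internal CA is just two global options; a sketch assuming a step-ca instance at ca.home.net:9000 with an ACME provisioner named "acme" (both the hostname and provisioner name are placeholders):

```Caddyfile
{
    acme_ca https://ca.home.net:9000/acme/acme/directory
    acme_ca_root /etc/caddy/step_root.crt
}
```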

I also added a quick script to my Coder workspaces to download my CA certificate and add it to their trust stores, so that they can access the Coder backplane without erroring out.

Ending notes

All in all, this was a very fun and fruitful experience. Not only can I enjoy using Coder at its maximum potential, I have also created a little "world" within my Tailscale VPN, which lets me augment or delete parts of the web at will for the personal devices connected to my tailnet. I will be sure to cover how I have used that to my advantage soon, but in part 3 of this series I will talk about how I moved my workspaces from containers to full-blown VMs, which are much more secure in practice.

Until next time!

— Sairam