Sairam's Blog

technology · podman · selfhosting · dns · project · linux
March 7, 2026 · 10 min read
Deploying my own services (Part 1)

In the very beginning, all I wanted was to be able to connect to my Raspberry Pi from anywhere. I found it really cool that I could control it over my local area network, but I also wanted to be able to control and use it over the internet. That led me to Tailscale, which creates encrypted WireGuard tunnels between my devices, falling back to its DERP relay servers when a direct connection is not possible. Little did I know that this would lead me into a rabbit hole of deploying and maintaining my own services.

How it started

First, I wanted to try developing things on my Raspberry Pi, an ARM64 Linux platform with unique capabilities provided by its GPIO pins and HAT expansion connector. I had a Sense HAT which I attached on top (in hindsight I should have gotten a proper cooler for the Pi since I tend to use it more like a server, but that's a story for another day).

As such, I installed code-server on the Raspberry Pi, which gave me a neat VS Code environment that I could access from my laptop or even my phone, since it does not need the VS Code application. Paired with Tailscale, I could access it from anywhere.
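If you want to follow along, code-server ships an official install script. This is a sketch of the documented Debian/Ubuntu flow (adjust for your distro; the default port and config path are code-server's documented defaults):

```shell
# Download and run the official code-server install script
curl -fsSL https://code-server.dev/install.sh | sh

# Start code-server as a per-user systemd service and enable it on boot
sudo systemctl enable --now code-server@$USER

# By default it listens on 127.0.0.1:8080; the generated password
# lives in ~/.config/code-server/config.yaml
```

From there, a Tailscale node on the same machine makes that local port reachable from your other devices.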

Then I had a desktop lying around with a decent Intel processor and 32GB of DDR5 RAM (at the time of writing, the RAM alone is probably worth at least three times what the entire setup cost 😭).

I tried installing code-server on that desktop, and while it worked as intended, it suffered from a lack of separation of concerns.

Separation of Concerns?

On a Raspberry Pi the issue is not as pronounced, since it is a much weaker (but much more efficient) platform compared to x86-64, so you would probably not be splitting resources across many projects anyway. However, I faced this issue on the bigger desktop. While code-server works, there really was no separation between projects, and more importantly I did not want the main environment to be polluted with tools that I may use once or twice and then forget about. Things like Dev Containers were also not supported, since they seemed to rely on a proprietary extension developed by Microsoft which does not work in a web-based environment.

While looking for solutions, I stumbled upon the Coder platform, made by the same company behind code-server.

What is Coder and how I hosted it

Coder, from my understanding, lets you provision workspaces with Terraform. This means that anything Terraform supports, e.g. Google Cloud, OCI, plain Docker containers, or Proxmox Virtual Environment VMs, can be used as a Coder workspace. From there, you can attach to those workspaces with web apps (code-server being one of them) or IDEs such as VS Code, Antigravity, and JetBrains Gateway.

At first I considered installing it directly on the host system, but then I decided to challenge myself. I set out to learn Podman (a daemonless, drop-in replacement for Docker that integrates well with SELinux) and Podman Compose, so that I could bring up services like Tailscale together.

Creating the Podman Compose file

There are 2 very important containers that need to be created:

  1. The tailscale container: responsible for joining the tailnet and exposing the Coder port
  2. The coder container: responsible for actually running the Coder server

And on top of that, a third container is needed for the Postgres database that Coder relies on.

Once these three containers were defined in the compose file, the coder container was configured to share the same network namespace as the tailscale container, and the tailscale serve configuration was used to serve the Coder port on port 443 with TLS.
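A minimal sketch of what such a compose file might look like. Service names, image tags, environment variables, and passwords here are illustrative placeholders, not my exact configuration:

```yaml
version: "3"
services:
  tailscale:
    image: docker.io/tailscale/tailscale:latest
    hostname: coder                    # becomes the MagicDNS name
    environment:
      - TS_AUTHKEY=tskey-auth-XXXX     # placeholder auth key
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - tailscale-state:/var/lib/tailscale
    devices:
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN

  database:
    image: docker.io/library/postgres:16
    environment:
      - POSTGRES_USER=coder
      - POSTGRES_PASSWORD=change-me
      - POSTGRES_DB=coder
    volumes:
      - coder-db:/var/lib/postgresql/data

  coder:
    image: ghcr.io/coder/coder:latest
    # Share the tailscale container's network namespace so Coder is
    # reachable at the tailnet address (and can see localhost:443)
    network_mode: service:tailscale
    environment:
      - CODER_PG_CONNECTION_URL=postgres://coder:change-me@database:5432/coder?sslmode=disable
    depends_on:
      - database

volumes:
  tailscale-state:
  coder-db:
```

The serve configuration can then be applied from inside the sidecar, e.g. `podman exec <tailscale-container> tailscale serve --bg 3000` (3000 being Coder's default HTTP port), which terminates TLS on 443 with a certificate for the tailnet hostname.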

After all that and a podman-compose up later, the Coder server was running!

Setting up Coder Workspaces

Now that the Coder server was running, I needed to create a Terraform template that would allow me to provision a workspace.

Admittedly, I did not have much experience with Terraform at first, so I went with Coder's default Docker containers template, modifying some parameters to use Podman's socket instead of Docker's. With that ready, I expected everything to just work and created a workspace.
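The change itself is small: the template's Docker provider just needs to point at Podman's Docker-compatible API socket. The path below assumes a rootful Podman socket; rootless setups use a per-user path instead:

```hcl
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

# Point the Docker provider at Podman's Docker-compatible socket.
# Rootful:  /run/podman/podman.sock  (systemctl enable --now podman.socket)
# Rootless: /run/user/<uid>/podman/podman.sock
provider "docker" {
  host = "unix:///run/podman/podman.sock"
}
```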

Unfortunately, I was quickly disappointed. While the Podman container was created, it never seemed to connect back to the Coder backplane, and I got an "Agent Unhealthy" error. I was in for a long debugging session.

Debugging why the container could not connect to the Coder Backplane

Before this project, I had no idea what DNS was or what it was used for. I would soon find out, because as many a programmer has said: it is always DNS that causes the issues. It is always DNS.

It turned out that the workspace container could not connect to the server because it was not sharing the same Tailscale sidecar, which meant it could not resolve the Tailscale URL to an IP address. (It took me multiple weeks to figure this out.) After adding a clause to the Terraform template that attaches the workspace container to the same network namespace as the main Coder server, it finally worked: I could get a connection to the agent and work on my stuff!
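In the template's `docker_container` resource, that clause amounts to something like the following sketch (the sidecar container name is an assumption, and this is exactly the shortcut the disclaimer below warns about):

```hcl
resource "docker_container" "workspace" {
  count = data.coder_workspace.me.start_count
  image = "codercom/enterprise-base:ubuntu"
  name  = "coder-${data.coder_workspace.me.name}"

  # Join the workspace container to the Tailscale sidecar's network
  # namespace so it can resolve and reach the Coder server.
  # "tailscale" here is the (assumed) name of the sidecar container.
  network_mode = "container:tailscale"
}
```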

DISCLAIMER: THIS IS NOT SAFE TO DO

Since I was not sharing this with anyone else it was fine, but a workspace container may execute unknown, potentially malicious code, and sharing a network namespace with the Coder server could spell disaster: that container can now masquerade as the Coder server and attempt to access other parts of my network.

In the next parts....

In part 2 (or even part 3) of this journey, I will share how I enhanced the security of the main server and isolated it further with Security-Enhanced Linux in case a workspace attempts a breakout, and how I decided to host workspaces in VMs instead of regular containers for stronger isolation.

Stay Tuned!

— Sairam