2022-01-15 / Bartłomiej Kurek
Server setup: #1 - plan

In this series we are going to set up a Debian-based personal server with security in mind. The series will cover the basic setup of a Debian system, networking and the firewall, isolating services in containers (nodes), and running docker/kubernetes/containers within those isolated nodes.

The OS used here is Debian 11. In order to make all the steps reproducible, we're going to use a clean Debian install in a virtual machine.
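
If you want to follow along in a VM, one possible way (a sketch only: the hypervisor choice, file names and sizes are arbitrary examples) is a plain QEMU/KVM guest installed from the Debian netinst image:

    # Sketch: file names and sizes are examples; any hypervisor will do.
    qemu-img create -f qcow2 debian11.qcow2 20G
    qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
        -cdrom debian-11.x.y-amd64-netinst.iso \
        -drive file=debian11.qcow2,format=qcow2,if=virtio \
        -boot d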

Series link

Plan

Main host:

  • Debian
  • firewall (default policy: DROP, blacklisting scanners and attackers)
  • acts as a gateway
  • is a reverse proxy for isolated services
  • local name resolver (/etc/hosts, unbound; see the sketch after this list)
  • monitoring
  • backups and storage
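
Regarding the local name resolver mentioned in the list: a minimal sketch of what it could look like with unbound (the bridge address 10.0.0.1, the subnet and the zone names are assumptions made only for this illustration; on Debian the files in /etc/unbound/unbound.conf.d/ are included by default):

    # /etc/unbound/unbound.conf.d/local.conf (sketch; addresses and names are assumptions)
    server:
        interface: 127.0.0.1
        interface: 10.0.0.1                    # the bridge the nodes attach to
        access-control: 10.0.0.0/24 allow
        local-zone: "node.internal." static
        local-data: "mail.node.internal. IN A 10.0.0.10"
        local-data: "git.node.internal.  IN A 10.0.0.11"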

Nodes

Nodes are subsystems isolated from the main host. They might be virtual machines or containers. Nodes have static IPs. They are "regular" systems: they run a proper init, may apply their own firewall rules, run cron, and so on. Other services, as well as Kubernetes/Docker/containers, run within these nodes.
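
Just to make the idea concrete (the actual mechanism used later in this series may differ), such a node can be bootstrapped, for example, as a systemd-nspawn container running its own init:

    # Sketch: the tool choice, node name and paths are illustrative only.
    apt install systemd-container debootstrap
    debootstrap bullseye /var/lib/machines/node1 http://deb.debian.org/debian
    systemd-nspawn -D /var/lib/machines/node1       # one-off shell, e.g. to set a root password
    systemd-nspawn -D /var/lib/machines/node1 -b    # boot the node with its own init (PID 1)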

Why?

I'm not setting up a "big computing" infrastructure but a personal server. I will be running a mail system, a web server, a couple of applications, video/streaming software, databases, containers, etc.

Pid 1:
I want the nodes to be regular systems with init scripts and proper service supervision.

Cron:
Operating systems include basic reporting/monitoring scripts by default (cron jobs). Many other things depend on cron (renewal of certificates, rotating logs, etc.). I also schedule a few of my own scripts that help me monitor the state of the server/apps/access.
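
As an example of the kind of job meant here (the script and the recipient address are hypothetical placeholders, and a mail(1) command is assumed to be installed), a daily report scheduled via /etc/cron.d could look like this:

    # /etc/cron.d/auth-report (sketch; script path and address are placeholders)
    0 6 * * *  root  /usr/local/sbin/auth-report.sh | mail -s "auth report" admin@example.org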

Separation:
Some processes are much more important than others.

  • The mailing system is crucial. Not only does it handle my personal email, but other systems/subsystems will use it as their mail transfer agent. Applications that send notifications or newsletters also need an email server. Thus, mailing needs to support multiple hosts, domains, and accounts (for humans and for mailing apps). Server monitoring and security depend on email. Mail submission ports need to be open all the time, and the email server program is amongst the most frequent targets of attackers/bots/spammers. Additionally, if you run your own mailbox, you have to take care of DMARC and DKIM. These in turn require a database. I want this part of the system to be separate and easy to control, administer, back up and migrate/replicate.

  • Crucial apps
    I run my own git server, a custom app for organizing workshops, a private instance of video conferencing software, etc. The workshop app needs a strict, secure environment as it handles users' personal data. The data (billing data, email addresses) is always encrypted in the database and in the mailing queues (notifications and newsletter background tasks: redis, celery, etc.). In this system everything is always transmitted over secure channels (even on local connections). This part of the system requires strict rules (networking, access).

  • Other apps
    I do a lot of my own programming projects and also engage in other people's research and projects. These require various tools, server programs, databases, compilers, libraries or programming languages... It's quite a different environment: various containers are run, things are often installed, removed, configured, reconfigured, updated, replaced... An environment where things are allowed to break. A development sandbox, and thus an environment that requires isolation.

  • Docker
    I use docker but I dislike some parts of it. There's a lot of feature creep: some things work with docker-compose, some require docker swarm, others are simply troublesome. I want to have different docker instances in different nodes. Running my own images is different from running "random" images. Docker itself is more than merely a tool for running containers. In its default operating mode it modifies the firewall. Networking in docker is complicated, to the point where docker routes packets over different protocols, provides name resolution, etc. So: Docker, yes, but only once it is confined to a node. I don't build my infrastructure on docker; I use it as one of the tools within my infrastructure. The same applies to Kubernetes. I don't plan to depend on kubernetes for anything I run, but some of my educational/research activities include a container orchestrator.
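
    For reference, the firewall manipulation mentioned above can be switched off with the standard "iptables" option of the Docker daemon, set inside a node in /etc/docker/daemon.json (shown here only to illustrate the behaviour; whether you want it depends on how the node's networking is arranged):

        {
            "iptables": false
        }

    After restarting the daemon (systemctl restart docker) Docker stops editing the firewall, but it then also stops providing NAT for its containers, so outbound container connectivity has to come from the node's own rules.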

Networking

The main host will have a bridge interface. All the nodes' interfaces will be attached to this bridge, and all traffic will be handled by the main host (NAT, firewall, routing, filtering, logging, ...). The firewall will be built around a default DROP policy (from anywhere to anywhere); only essential ports will be opened (in and out). Services running within the nodes will be exposed by port-forwarding. The main host controls the network, while the nodes may set up their own firewall rules in their stacks. No, docker doesn't touch the firewall on the main host.
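
To make this concrete, here is a minimal sketch of such a ruleset for nftables (the default firewall frontend on Debian 11). The interface names (ens3 for the uplink, br0 for the bridge), the node subnet 10.0.0.0/24 and the forwarded port are assumptions made for this example only; the actual ruleset will be built up later in the series. Remember that routing between the bridge and the uplink also requires net.ipv4.ip_forward=1.

    #!/usr/sbin/nft -f
    # /etc/nftables.conf (sketch: interfaces, addresses and ports are assumptions)
    flush ruleset

    table inet filter {
        chain input {
            type filter hook input priority 0; policy drop;
            ct state established,related accept
            iif "lo" accept
            tcp dport 22 accept                      # essential ports only
        }
        chain forward {
            type filter hook forward priority 0; policy drop;
            ct state established,related accept
            iifname "br0" oifname "ens3" accept      # nodes -> Internet
            iifname "ens3" oifname "br0" ip daddr 10.0.0.20 tcp dport 443 accept
        }
        chain output {
            type filter hook output priority 0; policy drop;
            ct state established,related accept
            oif "lo" accept
            udp dport 53 accept
            tcp dport { 53, 80, 443, 587 } accept    # DNS, APT, mail submission, ...
        }
    }

    table ip nat {
        chain prerouting {
            type nat hook prerouting priority -100;
            iifname "ens3" tcp dport 443 dnat to 10.0.0.20    # expose a node's service
        }
        chain postrouting {
            type nat hook postrouting priority 100;
            oifname "ens3" ip saddr 10.0.0.0/24 masquerade    # NAT for the nodes
        }
    }

The ruleset can be loaded with nft -f /etc/nftables.conf and made persistent with systemctl enable nftables.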

Why?

There are bots, scanners, malicious actors and state-sponsored attacks. Dystopia.
Skipping the philosophical musings on the state of nature, just wars and aggression:

  • the Internet is not a secure space
  • software can never be fully trusted
  • people, well... they have various causes, dreams and desires.
“It is dreadful when something weighs on your mind,
not to have a soul to unburden yourself to. You know what I mean.
I tell my piano the things I used to tell you.”

- Frédéric Chopin

About this guide

Skipping all the hardware/hosting choices - I am doing everything from scratch on a clean, minimal Debian 11 system.
You may run it on a real machine, a VPS or your own VM. My initial installation includes only basic utilities (not even an SSH server).

This guide assumes you are able to access the server's emergency console. I'll be executing all the commands on the server as the root user. Any "tricky" moments will be noted.
The guide will be divided into separate articles in order to analyze and discuss the various aspects.

Next: Debian post-install steps