It's the Easter weekend and I've booked the whole of next week off as well, so I need a project to work on 🤓
I've been meaning to restructure the web server for a while now and this seems as good a time as any. Let's see how badly I can mess things up.
As this post is to serve as my reminder for when I have to redo this in the future I'll assume we're starting with a clean system. So follow the Post install setup then Install Docker.
The only extra command that I need to run is to create a custom user-defined network. Once connected to a user-defined network, containers can communicate with each other using container names.
docker network create myDockerNetwork
My setup for the server assumes that it will be compromised, so the only things I bring down from it are logs. With that, the files on my local system are considered the source of truth, which means that, if I need to, I can overwrite the server versions to return to a known good state.
On my local system I have a ProductionServer folder with the following structure:-
ProductionServer
├── docker/
└── sites/
    ├── default/
    │   ├── config/
    │   ├── logs/
    │   └── public/
    └── mort8088.com/
        ├── config/
        ├── logs/
        └── public/
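If you're recreating this from scratch, a couple of mkdir calls will stand the whole tree up. A quick sketch, assuming the ProductionServer folder lives in your home directory:

```bash
# Create the local folder structure in one go
mkdir -p ~/ProductionServer/docker
mkdir -p ~/ProductionServer/sites/{default,mort8088.com}/{config,logs,public}
```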
The one and only file needed to stand up all the services, the docker-compose.yml, lives in the docker folder. We'll start out with the network:-
networks:
  myDockerNetwork:
    external: true
The first service we need is the proxy. This will deal with all the incoming traffic and any SSL connections that need negotiating.
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:1.7
    container_name: nginx-proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    environment:
      - ENABLE_IPV6=true
      - TRUST_DOWNSTREAM_PROXY=true
      - DEFAULT_HOST=default.local
    volumes:
      - /etc/nginx/certs:/etc/nginx/certs:ro
      - /etc/nginx/vhost.d:/etc/nginx/vhost.d
      - /usr/share/nginx/html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - myDockerNetwork
Key | Desc. |
---|---|
image | The Nginx proxy image |
container_name | The name this container will have in docker |
restart | We always want this image to restart |
ports | We'll want this container to link up its internal ports to the host for HTTP and HTTPS |
environment | Config items to enable IPv6 support, trust the headers forwarded by the downstream proxy, and set the default site to use if there isn't a matching domain configured |
volumes | Map the internal paths out to the host so that we can share things like the certs, plus the docker.sock for the server discovery |
networks | The Docker Network to use |
Next we want SSL certificates for the domains we host, so we'll add the automated ACME SSL certificate generation companion for nginx-proxy.
  letsencrypt:
    image: nginxproxy/acme-companion:2.5
    container_name: nginx-letsencrypt
    restart: always
    environment:
      - DEFAULT_EMAIL=%adminEmailAdress%
      - NGINX_PROXY_CONTAINER=nginx-proxy
    volumes:
      - /etc/nginx/certs:/etc/nginx/certs
      - /etc/nginx/vhost.d:/etc/nginx/vhost.d
      - /usr/share/nginx/html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - nginx-proxy
    networks:
      - myDockerNetwork
Key | Desc. |
---|---|
image | ACME SSL certificate generation for nginx-proxy |
container_name | The name this container will have in docker |
restart | We always want this image to restart |
environment | Only two things need to be set here: the admin email address and the container name of the proxy. |
volumes | Again the shared folders for cert storage and the docker.sock for the server discovery. |
depends_on | The container name of the proxy, to make sure it's running before this one. |
networks | The Docker Network to use |
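Once both containers are up you can watch the companion negotiate certificates and see the results land in the shared certs folder. Just a sanity check, not a required step:

```bash
# Follow the companion's logs while it requests/renews certificates
docker logs -f nginx-letsencrypt

# Issued certificates end up in the shared volume on the host
ls -l /etc/nginx/certs
```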
The default site is the first container that needs some extra configuration outside of the compose file.
  default_site:
    image: nginx:alpine
    container_name: default_site
    restart: unless-stopped
    environment:
      - VIRTUAL_HOST=default.local
    volumes:
      - ~/sites/default/public:/usr/share/nginx/html:ro
      - ~/sites/default/logs:/var/log/nginx
      - ~/sites/default/config:/etc/nginx/conf.d
    networks:
      - myDockerNetwork
Key | Desc. |
---|---|
image | Official build of Nginx |
container_name | The name this container will have in docker |
restart | Restart unless explicitly stopped |
environment | The virtual host is the domain that the Nginx server will respond to |
volumes | Three folders that will contain the site data, logs and configuration files |
networks | The Docker Network to use |
This is effectively a holding site for any domain that reaches the server without a site set up for it yet, as well as handling any web requests made directly to the IP address, so we'll add a basic set of files for it to serve.
In the public folder create an index.html and a 404.html file. Put whatever content you want in them and add any supporting files they may reference.
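Something as small as this will do to start with. A sketch, assuming the ProductionServer folder is in your home directory; the content really is up to you:

```bash
# Minimal placeholder pages for the default site
cat > ~/ProductionServer/sites/default/public/index.html <<'EOF'
<!DOCTYPE html>
<html><head><title>Nothing to see here</title></head>
<body><h1>Nothing to see here</h1></body></html>
EOF

cat > ~/ProductionServer/sites/default/public/404.html <<'EOF'
<!DOCTYPE html>
<html><head><title>404 - Not Found</title></head>
<body><h1>404 - Not Found</h1></body></html>
EOF
```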
Next up is the Nginx configuration file. Create a default.conf in the config folder with the following content:-
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    set_real_ip_from %NetworkSubnetMask%;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    root /usr/share/nginx/html;
    index index.html index.htm;
    access_log /var/log/nginx/access.log realip;
    error_log /var/log/nginx/error.log;
    error_page 404 /404.html;
    location / {
        try_files $uri $uri/ =404;
    }
    location = /404.html {
        internal;
    }
    autoindex off;
}
Key | Desc. |
---|---|
listen | This has two entries to let the server know it should be listening on port 80 for both IPv4 & IPv6. The default_server parameter, if present, will cause the server to become the default server for the specified _address_:_port_ pair. |
server_name | Sets the names of a virtual server; in this instance _ matches any otherwise unmatched domain. |
set_real_ip_from | %NetworkSubnetMask% defines the trusted address that is known to send correct replacement addresses. You can get this by running:- docker network inspect myDockerNetwork \| grep -m 1 Subnet \| awk -F'"' '{print $4}' |
real_ip_header | Because we're using proxy forwarding we would get the proxy IP address as the client IP. This defines the request header field whose value will be used to replace that client address. |
real_ip_recursive | Can't remember why, just know it should be on. |
root | Where in the container to find the root of the site |
index | Sets the default file to serve for a folder URL; it'll look for the files in order. |
access_log | Where in the container to store the access log |
error_log | Where in the container to store the error log |
error_page | This line defines the file to return for 404-File not found errors. |
location / | This is the default request processor. |
location = /404.html | This protects the error page from direct access, keeping it available only in the context of an error. |
autoindex | Autoindex is off to prevent directory listings |
There is one other file to add to the config folder, but it's only there to format the access log. We specified realip in the access_log line, so now we need the format file. Create a file called 00-logformat.conf in the config folder with the following in it:-
log_format realip '$http_x_forwarded_for - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"';
This is the last part I needed so that the logs actually had the requesting client IP and not the Proxy IP.
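Once everything is up on the server, a quick way to confirm it's working is to make a request from your local machine and check the access log; the first field should be your IP, not the proxy container's. The server IP below is a placeholder:

```bash
# From your local machine, hit the default site by IP
curl -s -o /dev/null http://%serverIP%/

# On the server (or after pulling the logs down), check the latest entries
tail -n 5 ~/sites/default/logs/access.log
```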
For the domains that you want to serve, the setup is largely the same as the default server, with a few exceptions.
Key | Desc. |
---|---|
server_name | Sets the names of a virtual server, so you'd expect me to have mort8088.com and maybe www.mort8088.com. I don't like the www. in a web address, so I'll redirect if it's found. |
I have a second server block to redirect requests to the shorter https://mort8088.com
server {
    listen 80;
    server_name www.mort8088.com;
    return 301 https://mort8088.com$request_uri;
}
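Once DNS points at the server you can sanity-check the redirect with curl:

```bash
# Expect a 301 pointing at https://mort8088.com/
curl -sI http://www.mort8088.com/ | grep -iE '^(HTTP|location)'
```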
Because the files on my local system are the source of truth for the server I need a way to pull down any logs from the server and then upload new files and remove any that aren't supposed to be there.
It's scripting time! Create a file in your local ~/bin folder and open it for editing. Start the bash script in the normal way, then we need to set up some variables:-
Variable | Desc. |
---|---|
LOCAL_BASE | This is the path to your local ProductionServer folder |
LOCAL_SITES | "$LOCAL_BASE/sites" |
LOCAL_DOCKER | "$LOCAL_BASE/docker" |
REMOTE_BASE | The absolute path to the service account home folder |
REMOTE_SITES | "$REMOTE_BASE/sites" |
REMOTE_DOCKER | "$REMOTE_BASE/docker" |
USER | The service account username |
HOST | IP address of the remote server |
RSYNC_DOWN_OPTIONS | Options for the rsync download see below |
RSYNC_UP_OPTIONS | Options for the rsync upload see below |
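As a sketch, the top of the script ends up looking something like this; the paths, username and IP are placeholders to swap for your own:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Local copies - the source of truth
LOCAL_BASE="$HOME/ProductionServer"
LOCAL_SITES="$LOCAL_BASE/sites"
LOCAL_DOCKER="$LOCAL_BASE/docker"

# Remote layout under the service account's home folder
REMOTE_BASE="/home/%serviceUser%"
REMOTE_SITES="$REMOTE_BASE/sites"
REMOTE_DOCKER="$REMOTE_BASE/docker"

# Connection details
USER="%serviceUser%"
HOST="%serverIP%"

# rsync options, explained below
RSYNC_DOWN_OPTIONS="-avz --update"
RSYNC_UP_OPTIONS="-avz --delete"
```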
-avz --update

- -a: Archive mode – preserves permissions, timestamps, symbolic links, etc.
- -v: Verbose – shows progress/details during transfer.
- -z: Compress – reduces data size during transfer.
- --update: Skip files that are newer on the destination (probably not needed).

Copies files efficiently while preserving attributes, compressing data, and avoiding overwriting newer files.
For each site that I want the logs for I add a line like this changing out the %domainName% for the appropriate folder name. For a new site just add a new line.
rsync $RSYNC_DOWN_OPTIONS "$USER@$HOST:$REMOTE_SITES/%domainName%/logs/" "$LOCAL_SITES/%domainName%/logs/"
-avz --delete

- -a: Archive mode – preserves file permissions, timestamps, symbolic links, etc.
- -v: Verbose – displays detailed output during transfer.
- -z: Compress – compresses data during transfer for efficiency.
- --delete: Deletes files from the destination that no longer exist in the source.

Synchronises source to destination exactly, preserving attributes, compressing data, and removing extraneous files from the destination.
Because we know there are only two folders we need to maintain, the upload block should never need updating.
# Sync from local to server.
echo "Syncing from local to server..."
echo "Sites Up..."
rsync $RSYNC_UP_OPTIONS "$LOCAL_SITES/" "$USER@$HOST:$REMOTE_SITES/"
echo "Docker Up..."
rsync $RSYNC_UP_OPTIONS "$LOCAL_DOCKER/" "$USER@$HOST:$REMOTE_DOCKER/"
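With that saved (I've called it sync-server.sh here, the name doesn't matter), make it executable and run it; after the first upload you can bring the stack up on the server. This sketch assumes the Docker Compose v2 plugin and the same placeholder username and IP as above:

```bash
chmod +x ~/bin/sync-server.sh
~/bin/sync-server.sh

# Then, on the server, from the folder holding docker-compose.yml
ssh %serviceUser%@%serverIP% "cd ~/docker && docker compose up -d"
```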
If anyone manages to upload anything to your public folder it'll get deleted on your next update; if they somehow modify your docker-compose file it'll be replaced with your known good file.
You could add a cron job that runs an update script on the server once a week to stop all your containers, pull updates and restart them.
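A minimal sketch of what that could look like: an update.sh in the service account's home folder plus a weekly crontab entry (the name, paths and schedule are just examples):

```bash
#!/usr/bin/env bash
# update.sh - stop the stack, pull fresh images, bring it back up
cd "$HOME/docker" || exit 1
docker compose down
docker compose pull
docker compose up -d
```

```bash
# Added with crontab -e: run every Sunday at 04:00 and keep a log of what happened
0 4 * * 0 /home/%serviceUser%/update.sh >> /home/%serviceUser%/update.log 2>&1
```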
When you're sorting things out with your set-up, don't keep restarting the full Docker stack. I kept messing things up and ended up hitting the rate limit with Let's Encrypt on more than one occasion. Just restart the container you've changed.
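For example, after tweaking the default site's config, bounce just that one container rather than the whole stack:

```bash
# Restart only the container you changed
docker compose restart default_site

# Or, if you changed its definition in docker-compose.yml
docker compose up -d default_site
```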
While I was working out the sync script I didn't think about what I was doing and wiped out my local changes by pulling the whole server down before uploading; --update didn't help because the server time was different from my local time.
The reason I had to add set_real_ip_from, real_ip_header & real_ip_recursive to the server config and put in a custom format for the access log was because the log only had the proxy IP address, which is useless for knowing where a request came from. I do plan on rolling my own bad actor blocker at some point.
Using Docker to run a reverse proxy in front of multiple containerised Nginx websites offers several key benefits:
- New sites can be added just by updating the docker-compose.yml and restarting. The proxy config is dynamic so there's no need to edit anything.

My next job is to investigate more options for securing the servers from bad actors. But until then I am confident that an "if they breach me I'll burn it to the ground" approach is one I'm happy with.
davehenry.blog by Dave Henry is licensed under CC BY-NC-SA 4.0