Finally got time to migrate the RTFM.CO.UA blog to a new server with Debian 10. This time manually, without any automation, I will set up a LEMP stack.
I wrote a similar post back in 2016 – Debian: установка LEMP — NGINX + PHP-FPM + MariaDB (Rus), but this time the post is a more complete description of the process and tools used to spin up a ready-to-use Linux server for hosting a website, actually – a WordPress blog.
And again, it was planned as a quick note on installing NGINX + PHP + MySQL, but as a result, I've described the whole setup and configuration process: LEMP, Linux monitoring, logs, emailing, etc.
So, what we will do in this post:
- create a droplet in DigitalOcean
- SSL from Let’s Encrypt
- NGINX
- PHP-FPM
- MySQL (MariaDB) as a database server
- NGINX Amplify agent – monitoring and alerting (I've tried Grafana and Loki, but they used too many resources; still, it was a really interesting setup with automation, check the Prometheus: RTFM blog monitoring set up with Ansible – Grafana, Loki, and promtail)
- WordPress backup script – a self-written Python script, check the Python: скрипт бекапа файлов и баз MySQL в AWS S3 (Rus)
- Logz.io – collect logs to the ELK-stack
- unattended-upgrades – Debian and installed packages auto-upgrades
- logrotate – logs rotation
DigitalOcean: create a droplet
The RTFM blog was hosted on AWS, but I moved it to DigitalOcean last year because of the lower price.
Create a new droplet:
I will use Debian 10 on a 2 CPU / 2 GB RAM virtual server.
For example, on the currently used droplet with the same configuration, CPU and memory usage are the following (the graph is from NGINX Amplify):
Choose an OS and the instance type:
I'm using the Frankfurt region, and will enable Monitoring on the droplet – it will be created with the DigitalOcean agent to have more graphs in DO's control panel:
Create an RSA key for SSH
On the workstation create a key pair:
Copy its public part:
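A minimal sketch of these two steps – the key file name and comment here are just examples:

# generate a 4096-bit RSA key pair for the new droplet
ssh-keygen -t rsa -b 4096 -f ~/.ssh/rtfm-do-production -C "rtfm-do-production"
# print the public part to paste it into the DigitalOcean console
cat ~/.ssh/rtfm-do-production.pub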
Create a new SSH key in the DO:
Choose the number of droplets – one, and set its hostname to rtfm-do-production-d10:
Optionally, enable backups and create the droplet:
Firewall
While the droplet is being created, let's configure a firewall for it:
Add rules: SSH and ICMP limited to my current IP, and HTTP/S from anywhere, although it might be a good idea to limit them too, so Google will not index the blog during migration as a copy of the original site:
Connect the firewall to the droplet:
Floating IP
An analog of the Elastic IP in AWS – create a new one for the new server:
Actually, that’s all here.
Let’s go to the server configuration.
LEMP – Linux, NGINX, PHP, MySQL
Okay, once again – what do we need here?
- nginx
- php-fpm
- lets encrypt
- mysql
- amplify agent – monitoring
- backup script
- logz.io – has a free tier, but will store the logs for one day only; still, it's enough for me, as I only need a nice web UI to check them
- unattended-upgrades – OS and packages auto upgrades
- logrotate – already installed on Debian by default, will just check its configs
- msmtp – to send emails
Connect to the host:
Update the system and reboot:
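The two steps above look roughly like this – the IP here is the new droplet's address:

# connect to the new droplet
ssh root@139.59.205.180
# refresh the package lists, upgrade everything, and reboot
apt update && apt -y upgrade
reboot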
Install packages for LEMP:
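The exact package set may differ, but for a LEMP stack on Debian 10 it is roughly the following (PHP 7.3 is the default here, plus certbot for Let's Encrypt):

apt -y install nginx php-fpm php-mysql mariadb-server certbot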
Check if NGINX is working:
Install additional necessary packages:
The mailutils package has an issue when using mailx with msmtp, so I had to replace it with bsd-mailx – see the mailx and msmtp – sending emails from the server part below.
Let’s Encrypt SSL
We will use Let's Encrypt to get an SSL certificate for the blog.
Let’s Encrypt DNS validation
Here is a question about the validation process: the rtfm.co.ua domain is still pointed to the old server, so we can not use the common approach with the .well-known directory.
What we can do here is use DNS validation when obtaining the new certificate, and then, when NGINX and PHP are already configured – reconfigure certbot to use the webroot validation, as the DNS validation seems not to support certificate renewal (but I'm not sure about this).
Get the certificate:
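A sketch of the request with the DNS challenge – certbot will print a TXT record value to be added to the domain's DNS:

certbot certonly --manual --preferred-challenges dns -d rtfm.co.ua -d www.rtfm.co.ua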
Add the new TXT record to the domain's DNS:
Check it:
Go back to the server, press Enter – and it’s done:
NGINX
Generate the Diffie-Hellman key (check the ClientKeyExchange) for NGINX:
Remove the default config – here, RTFM will be the default host:
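The two steps above, approximately – the dhparams path must match the ssl_dhparam directive in the NGINX config below, and the key size is up to you:

# generate the Diffie-Hellman parameters file
openssl dhparam -out /etc/nginx/dhparams.pem 2048
# drop the default virtualhost shipped with the nginx package
rm /etc/nginx/sites-enabled/default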
Create a config file for the RTFM virtual host – /etc/nginx/conf.d/rtfm.co.ua.conf. I'm just copying it from the old server.
It's long enough, and I haven't changed it for the last few years, apart from some SSL settings.
The last check on https://www.ssllabs.com still gives me the A+ level, so it can be used.
Also, take a look at NGINX config generators, for example, https://www.serverion.com/nginx-config or the SSL Configuration Generator from Mozilla.
In my config, I'm limiting access to the /wp-admin and wp-login.php locations, as I'm the only person who uses them:
server {
    listen 80 default_server;
    server_name rtfm.co.ua www.rtfm.co.ua;
    server_tokens off;
    return 301 https://rtfm.co.ua$request_uri;
}

server {
    listen 443 ssl default_server;
    server_name rtfm.co.ua;
    root /data/www/rtfm/rtfm.co.ua;

    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains" always;
    server_tokens off;

    # access_log /var/log/nginx/rtfm.co.ua-access.log main_ext;
    error_log /var/log/nginx/rtfm.co.ua-error.log warn;

    ssl_certificate /etc/letsencrypt/live/rtfm.co.ua/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/rtfm.co.ua/privkey.pem;
    ssl_protocols TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparams.pem;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    client_max_body_size 1024m;

    location ~ /\.ht {
        deny all;
    }

    location ~* \.(jpg|swf|jpeg|gif|png|css|js|ico)$ {
        root /data/www/rtfm/rtfm.co.ua;
        expires 24h;
    }

    location /wp-admin/admin-ajax.php {
        location ~ \.php$ {
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/rtfm.co.ua-php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }

    location /wp-admin/ {
        index index.php index.html;
        auth_basic_user_file /data/www/rtfm/.htpasswd_rtfm;
        auth_basic "Password-protected Area";
        # office
        allow 194.***.***.24/29;
        # home 397 LocalNet
        allow 31.***.***.117/32;
        # home 397 Lanet
        allow 176.***.***.237;
        deny all;
        location ~ \.php$ {
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/rtfm.co.ua-php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }

    location /wp-config.php {
        deny all;
    }

    location /.user.ini {
        deny all;
    }

    location /wp-login.php {
        auth_basic_user_file /data/www/rtfm/.htpasswd_rtfm;
        auth_basic "Password-protected Area";
        # office
        allow 194.***.***.24/29;
        # home 397 LocalNet
        allow 31.***.***.117/32;
        # home 397 Lanet
        allow 176.***.***.237;
        deny all;
        location ~ \.php$ {
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/rtfm.co.ua-php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }

    location /uploads/noindex {
        auth_basic_user_file /data/www/rtfm/.htpasswd_rtfm;
        auth_basic "Password-protected Area";
    }

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location / {
        try_files $uri =404;
        index index.php;
        proxy_read_timeout 3000;
        rewrite ^/sitemap(-+([a-zA-Z0-9_-]+))?\.xml$ "/index.php?xml_sitemap=params=$2" last;
        rewrite ^/sitemap(-+([a-zA-Z0-9_-]+))?\.xml\.gz$ "/index.php?xml_sitemap=params=$2;zip=true" last;
        rewrite ^/sitemap(-+([a-zA-Z0-9_-]+))?\.html$ "/index.php?xml_sitemap=params=$2;html=true" last;
        rewrite ^/sitemap(-+([a-zA-Z0-9_-]+))?\.html.gz$ "/index.php?xml_sitemap=params=$2;html=true;zip=true" last;
        if (!-f $request_filename){
            set $rule_1 1$rule_1;
        }
        if (!-d $request_filename){
            set $rule_1 2$rule_1;
        }
        if ($rule_1 = "21"){
            rewrite /. /index.php last;
        }
    }

    location ~ \.php$ {
        try_files $uri =404;
        proxy_read_timeout 3000;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/rtfm.co.ua-php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
Check it and reload:
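The standard check and reload:

nginx -t
systemctl reload nginx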
Let’s check.
On the work laptop, update the /etc/hosts to set the new droplet's IP for the rtfm.co.ua domain:
139.59.205.180 rtfm.co.ua
Try to open it:
Good – SSL is working, NGINX is running.
PHP-FPM
Similarly to the NGINX config, I’ll copy the PHP-FPM config from my old server.
FPM pools are used here, each running under its own system user.
See also PHP-FPM: Process Manager — dynamic vs ondemand vs static (Rus).
Linux: non-login user
Add a non-login user:
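For example – a system user with a disabled login shell (the exact options are up to you):

useradd -s /usr/sbin/nologin rtfm
# verify
id rtfm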
Create a /etc/php/7.3/fpm/pool.d/rtfm.co.ua.conf file:
[rtfm.co.ua]
user = rtfm
group = rtfm

listen = /var/run/rtfm.co.ua-php-fpm.sock
listen.owner = www-data
listen.group = www-data

pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
;pm.process_idle_timeout = 10s;
;pm.max_requests = 500

catch_workers_output = yes
chdir = /
pm.status_path = /status
slowlog = /var/log/nginx/rtfm.co.ua-slow.log

php_flag[display_errors] = off
;php_admin_value[display_errors] = 'stderr'
php_admin_value[display_errors] = off
php_admin_value[error_log] = /var/log/nginx/rtfm.co.ua-php-error.log
php_admin_flag[log_errors] = on
php_admin_value[session.save_path] = /var/lib/php/session/rtfm
php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session
php_admin_value[upload_max_filesize] = 128M
php_admin_value[post_max_size] = 128M
Check the PHP-FPM config:
Reload configs:
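A quick sketch – on Debian 10 the FPM binary and service are versioned as 7.3:

# test the pools configuration
php-fpm7.3 -t
# apply it
systemctl reload php7.3-fpm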
Find the NGINX root directory for the blog:
... root /data/www/rtfm/rtfm.co.ua; ...
Create the directory:
Add a test file with the phpinfo() function to check NGINX + PHP:
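Both steps, roughly – the info.php file name is just an example, and the ownership is an assumption for this setup:

mkdir -p /data/www/rtfm/rtfm.co.ua
# owned by the rtfm user created above (adjust to your setup)
chown -R rtfm:rtfm /data/www/rtfm
echo "<?php phpinfo(); ?>" > /data/www/rtfm/rtfm.co.ua/info.php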
Check it (again, by updating the /etc/hosts):
Nice – everything is working.
MySQL
Debian has MariaDB by default instead of MySQL. There is not a big difference in the configuration, and actually, MariaDB works faster.
Run the initial configuration script:
Create a database for the RTFM:
Create a rtfm user with access to the rtfm_db1_production database only from localhost, with the password password:
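The three steps above, roughly – run the initial script, then create the database and the user (the password is obviously a placeholder):

mysql_secure_installation

mysql
MariaDB [(none)]> CREATE DATABASE rtfm_db1_production;
MariaDB [(none)]> CREATE USER 'rtfm'@'localhost' IDENTIFIED BY 'password';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON rtfm_db1_production.* TO 'rtfm'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;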
Check it:
Now everything is ready for the migration.
WordPress blog migration
Here I have to pause writing this post to create the database dump and move the blog's files.
After the migration, I will proceed from the new server.
What needs to be done:
- create a files archive
- database dump
- move them to the new host
- change the DNS entry to point the domain to the new IP
Save the post as a Draft – WordPress will keep it in its database, which I'll dump and move to the new server to proceed with writing right from this place.
Archiving files
We will create an archive with the blog's files – first, check them:
Create a TAR-archive with compression:
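On the old server, roughly like this – assuming the same /data/www/rtfm layout there, and an arbitrary archive name:

tar -czf /root/rtfm.co.ua-files.tar.gz -C /data/www/rtfm rtfm.co.ua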
Check the file:
MySQL database dump
Create the dump (but first, read the WordPress: Error establishing a database connection part below about the -d option):
Check it:
On the old server's firewall, open the port for an SSH connection from the new server, and copy the files:
Unpack the files:
Check them:
Move the rtfm.co.ua directory to the /data/www/rtfm directory:
Check files:
Upload the dump to the new database:
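Roughly – the dump file name here is just an example:

mysql rtfm_db1_production < rtfm_db1_production.sql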
Update the local /etc/hosts – and:
Er… WTF?
WordPress: Error establishing a database connection
Check the data in the database – seems like everything is in its place:
PHP: check MySQL connection
Let's use a simple script to check whether the PHP<->MySQL connection is working and all the necessary libs are installed:
<?php
$link = @mysqli_connect('localhost', 'rtfm', 'Ta6paidie7Ie');

if(!$link) {
    die("Failed to connect to the server: " . mysqli_connect_error());
} else {
    echo "Connected\n";
}

if(!@mysqli_select_db($link, 'rtfm_db1_production')) {
    die("Failed to connect to the database: " . mysqli_error($link));
} else {
    echo "DB found\n";
}
?>
Run it:
All good too.
WordPress: WP_ALLOW_REPAIR
Try to use the WordPress database repair – in the wp-config.php, before the 'That's all, stop editing! Happy blogging' line, add the following:
define('WP_ALLOW_REPAIR', true);
And open the https://rtfm.co.ua/wp-admin/maint/repair.php URL:
Seems to be OK, but still no:
The last thing I tried was a clean WordPress installation – and it worked fine.
So, it’s really something wrong with the dump itself – but what?
The “Error establishing a database connection” cause
So, I went to check the mysqldump options and finally got the issue:
-d, --no-data – Do not write any table row information (that is, do not dump table contents). This is useful if you want to dump only the CREATE TABLE statement for the table (for example, to create an empty copy of the table by loading the dump file). See also --ignore-table-data.
😀
Not sure why I added the -d when creating the dump – maybe it's a habit left after struggling with the AWS Database Migration Service, where I had to create a clean database schema without data.
So, create the dump again, without the -d this time:
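Simply (the file name is an example):

mysqldump rtfm_db1_production > rtfm_db1_production.sql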
Repeat all the operations, and now everything is working – I'm already writing this post from the new server:
What’s next?
We need to configure certbot for the webroot validation for future renewals and add it to the cron for auto-updates.
Then, finish with the rest of the services:
- amplify agent
- backup script
- logz.io
- unattended-upgrades
- msmtp
SSL: webroot validation
So, we already have a certificate but it was validated via a DNS record.
As far as I know, this will not work during the renewal, so we need to change it to the webroot.
Let’s Encrypt: webroot validation
Call certbot for rtfm.co.ua, this time with the --webroot-path instead of the dns validation – it must find an already existing certificate and ask whether to use it or create a new one.
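A sketch – the webroot path here is this site's document root; the exact path must match the location NGINX serves for /.well-known/acme-challenge:

certbot certonly -d rtfm.co.ua --webroot-path /data/www/rtfm/rtfm.co.ua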
On the first question “How would you like to authenticate with the ACME CA?” answer “Place files in webroot directory (webroot)“, on the second – “You have an existing certificate […]” – “Renew & replace the cert (limit ~5 per 7 days)“, to generate a new Let’s Encrypt config file for the domain:
Okay, all good, now check the config file which will be used during the renewal:
Nice – here we can add a cronjob.
certbot renew – auto-update certificates
Add a crontask to run certbot renew once per week.
Edit the crontab:
Add:
@weekly certbot renew &> /var/log/letsencrypt/letsencrypt.log
Let’s Encrypt hook – NGINX reload
The last thing here is to reload NGINX after the certificate was updated.
It can be added directly to the crontask like this:
@weekly certbot renew &> /var/log/letsencrypt/letsencrypt.log && service nginx reload
But in this case, if any of the certificates fails to renew, NGINX will not be reloaded at all.
So the better way is to use a hook for the domain – add it to the /etc/letsencrypt/renewal/rtfm.co.ua.conf.
In the renewalparams block add the renew_hook, so it will look like the following:
[renewalparams]
account = 868c8164304408984fefbbff845d4f48
authenticator = webroot
server = https://acme-v02.api.letsencrypt.org/directory
webroot_path = /data/www/rtfm/rtfm.co.ua/.well-known,
renew_hook = systemctl reload nginx
Basically, we are done with SSL.
Amplify – NGINX, PHP, and server monitoring
Base-level monitoring, but with a nice web UI, and it can be added in a couple of minutes, see the NGINX: Amplify — SaaS мониторинг от NGINX (Rus).
The official documentation is here>>>.
Download the installation script:
Set your API key in a variable and run the script:
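Per the Amplify documentation at the time of writing, roughly (the API key is taken from the Amplify UI):

curl -L -O https://github.com/nginxinc/nginx-amplify-agent/raw/master/packages/install.sh
API_KEY='YOUR_AMPLIFY_API_KEY' sh ./install.sh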
A few minutes – and the new host is on the Amplify dashboard:
For the sake of interest – load on the old host after the rtfm.co.ua domain was switched to the new host:
Backup script for websites
I'm using my own Python script written three years ago – https://github.com/setevoy2/simple-backup. It archives files, creates a database dump, and can upload them to an AWS S3 bucket.
Actually, there are a lot of WordPress backup plugins, but I still haven't had time to check them, so I'll do it in my old-fashioned way.
Clone the tool:
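Using the repository mentioned above:

git clone https://github.com/setevoy2/simple-backup.git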
Still, not sure if the copy on GitHub is still working…
I remember that the AWS S3 upload was broken at some moment, and I didn't fix it.
Let’s try as-is:
Well, maybe it will work.
For the backup data, it uses a /backups directory, which is mounted as a dedicated disk, and a config file.
First, add a new volume.
Disks and partitions on the host now:
DigitalOcean Volume
Go to the DigitalOcean, create a Volume:
Check it on the host:
Linux: mount a volume
The DigitalOcean Volume is mounted to /mnt/rtfm_do_production_d10_backups by default, and no record is created in the fstab:
Unmount it:
Create the /backups directory:
Get the UUID of the new disk:
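The three steps above, roughly:

# unmount from the default mountpoint
umount /mnt/rtfm_do_production_d10_backups
# create the new mountpoint
mkdir /backups
# find the volume's UUID for the fstab
blkid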
Update the /etc/fstab – add this volume mounted to the /backups, and set the nofail option, so the disk is not required to be present and the system can boot without it:
# /etc/fstab: static file system information.
UUID=4e8b8101-6a06-429a-aaca-0ccd7ff14aa1 / ext4 errors=remount-ro 0 1
UUID=a6e27193-4079-4d9d-812e-6ba29c702b75 /backups ext4 nofail 0 0
Try to mount all the volumes specified in the /etc/fstab:
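Simply:

mount -a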
Check:
Seems good and the data is here:
It's also good to reboot the server to make sure everything is working, but I will do it later, after finishing this post.
The config file for the simple-backup can be taken from the old host, so let's try to run it:
Ha!
And even the AWS S3 upload is working again!
Great, so we are done here.
What’s next?
- logz.io
- unattended-upgrades
- logrotate
- msmtp
Logz.io, Filebeat, and NGINX logs
Let's add NGINX logs collection to Logz.io.
Register an account, and go to the documentation – https://app.logz.io/#/dashboard/data-sources/nginx.
We need to install Filebeat – add it:
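A sketch using the Elastic APT repository – the 7.x branch was current at the time of writing, check the Filebeat documentation for the exact repository:

apt -y install apt-transport-https
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" > /etc/apt/sources.list.d/elastic-7.x.list
apt update && apt -y install filebeat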
Get a public certificate for Logz.io:
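A rough sketch – the certificate name has to match the certificate_authorities path in the Filebeat config below, and the actual download URL should be taken from the Logz.io documentation:

mkdir -p /etc/pki/tls/certs
# URL per the Logz.io docs at the time of writing – verify before use
wget https://raw.githubusercontent.com/logzio/public-certificates/master/COMODORSADomainValidationSecureServerCA.crt -P /etc/pki/tls/certs/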
Configure the Filebeat.
Backup the config:
Update it as per the documentation – just copy-paste:
...
- type: log
  paths:
    - /var/log/nginx/access.log
    - /var/log/nginx/rtfm.co.ua-access.log
  fields:
    logzio_codec: plain
    token: JzR***ZmW
    type: nginx_access
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h

- type: log
  paths:
    - /var/log/nginx/error.log
    - /var/log/nginx/rtfm.co.ua-error.log
  fields:
    logzio_codec: plain
    token: JzR***ZmW
    type: nginx_error
  fields_under_root: true
  encoding: utf-8
  ignore_older: 3h
...
In the outputs, comment out the output.elasticsearch block, and add output.logstash:
...
# ------------------------------ Logstash Output -------------------------------
# ...
output.logstash:
  hosts: ["listener.logz.io:5015"]
  ssl:
    certificate_authorities: ['/etc/pki/tls/certs/COMODORSADomainValidationSecureServerCA.crt']
...
Check its syntax:
Check the connection to the Logz.io:
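Filebeat has built-in checks for both the syntax and the output connection:

filebeat test config
filebeat test output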
Restart the service:
Check logs:
Data is here.
So, only the unattended-upgrades, logrotate, and msmtp are left.
Install unattended-upgrades
This is already described in the Debian: автоматические обновления с помощью unattended-upgrades и отправка почты через AWS SES (Rus) post, so let's do it here, just without the AWS SES.
Documentation is here>>>.
The unattended-upgrades and apt-listchanges packages are already installed, we just need to configure them.
Run dpkg-reconfigure unattended-upgrades:
Answer Yes.
Check the /etc/apt/apt.conf.d/20auto-upgrades:
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
Now there is no need for the APT::Periodic::Enable option to enable the updates, those two lines are enough.
Next, check the /etc/apt/apt.conf.d/50unattended-upgrades.
In general, you can leave everything with the default values here, but it's worth setting:
- Unattended-Upgrade::Mail – get emails about the installed updates
- Unattended-Upgrade::Automatic-Reboot – up to you; for now, it can be left as false and enabled later
- Unattended-Upgrade::Automatic-Reboot-Time – if the previous option is enabled, it's worth setting the reboot time
Run test upgrade:
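A dry run can be done with:

unattended-upgrade --dry-run --debug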
Okay.
Now, let's go check the logrotate configs.
logrotate
Actually, everything is ready to use here as well.
All the logrotate configuration files:
NGINX logs rotation config:
Maybe I will add the size parameter later.
Check its work:
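For example, a debug (dry) run for the NGINX config:

logrotate -d /etc/logrotate.d/nginx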
Some logs can already be rotated.
mailx and msmtp – sending emails from the server
The root user will get emails about the server's status, and it would be good to receive them in an external mailbox.
First, check the /etc/aliases to see which email is used for the root user:
If making any updates here – run:
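Depending on the MTA, the aliases database may need to be rebuilt – with a sendmail-compatible MTA that is:

newaliases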
550 001.RDNS/PTR error. Rejected
So, the emails for root will be sent to [email protected], but if we try to send an email now – it will not be delivered:
This is because it is sent via the Exim MTA, check its log:
“550 001.RDNS/PTR error. Rejected” – this is because we don't have a PTR record configured for the server's Floating IP, and on DigitalOcean we can't easily update it.
To mitigate this issue, install msmtp, so we will send emails via an external SMTP server instead of the local one:
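The packages:

apt -y install msmtp msmtp-mta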
The msmtp-mta package will create a symlink from /usr/sbin/sendmail, and when mailx tries to send an email via sendmail, it will actually use msmtp:
Configure the /etc/msmtprc:
defaults
port 25
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt

account freehost
host freemail.freehost.com.ua
from [email protected]
auth on
user [email protected]
password password

# Set a default account
account default : freehost
Check it:
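For example, by sending a message directly via msmtp (the recipient address is a placeholder):

printf "Subject: msmtp test\n\nTest body" | msmtp user@example.com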
mailx: cannot send message: process exited with a non-zero status
To send an email with mailx via msmtp, install bsd-mailx instead of mailutils:
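I.e.:

apt -y remove mailutils
apt -y install bsd-mailx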
Otherwise, you get the “mailx: cannot send message: process exited with a non-zero status” and “msmtp: no recipients found” errors.
Try sending with mailx:
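Something like this (the recipient is a placeholder):

echo "Test body" | mailx -s "mailx test" user@example.com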
Now, emails from the unattended-upgrades must be delivered to the mailbox specified in the Unattended-Upgrade::Mail option.
Well, that’s all.
Also published on Medium.