Author Archives: The Wizard

GPG key adding script for Debian*

*Debian-based systems

There might be an easier way. Or maybe there is even an automatic way to do it. I couldn’t be bothered to search long enough.

Recently I’ve had to reinstall my system on a clean SSD. Some things were simply copied over, but as Debian had just released its new stable (and testing, which is the one I’m using) version, I could install from scratch. My pain point was adding multiple repository keys, so I’ve semi-automated it by creating a script.

The steps to reproduce are:

apt install dirmngr gnupg

The script:


for var in "$@"
do
    echo "Your key is: $var"
    gpg --recv-keys "$var"
    gpg --export "$var" | apt-key add -
done

"$@" passes all of the script’s arguments. The loop walks through them one at a time, making the current key ID available as the "var" variable. To use the script you need to:

chmod +x

and you use it like

./ key1 … keyN
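The quoting matters here, so a minimal sketch of the same "$@" loop on its own, wrapped in a function (the key IDs below are made up and nothing is fetched):

```shell
#!/bin/sh
# Demonstrates how "$@" hands each argument to the loop intact,
# one at a time, even arguments that contain spaces.
print_keys() {
    for var in "$@"; do
        echo "Your key is: $var"
    done
}

print_keys 0x1234ABCD 0xDEADBEEF
```

The real script does the same iteration, just with gpg calls inside the loop body.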


Proxy for yum

Short post, short story. If your server sits in a separate network without “internet” connectivity, but you have a proxy server set up in that network, you can still download packages from the official repos!

In /etc/yum.conf add
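A typical entry looks like this; the proxy host and port are placeholders, substitute your own:

```
# hypothetical proxy host and port
proxy=http://proxy.example.com:3128
# only needed if the proxy requires authentication:
# proxy_username=youruser
# proxy_password=yourpassword
```
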

and voilà!

Serve error code via Haproxy.

Quest: serve an error code via Haproxy, for testing reasons etc.

1) Create a backend:

backend error
    mode http
    log global
    option httplog
    errorfile 500 /etc/haproxy/errorpages/503.http
    errorfile 502 /etc/haproxy/errorpages/503.http
    errorfile 503 /etc/haproxy/errorpages/503.http
    errorfile 504 /etc/haproxy/errorpages/503.http

2) Create the errorpage:

HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<title>Title of your site</title>
<body style="font-family:Arial,Helvetica,sans-serif;">




3) Add the use_backend rule:
use_backend error if { hdr(Host) -i }

4) What’s worth considering is adding a “testing” mode for some IPs. Add a new acl:
acl office_ips src

5) Change the use_backend to

use_backend error if { hdr(Host) -i } office_ips

This will send every request that matches the Host ACL and comes from the IPs declared in office_ips to the backend “error”, and so serve the error page.
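Put together, the relevant frontend pieces could look like this; the hostname and the office subnet are placeholders, not the originals:

```
# inside the frontend section; hostname and subnet are hypothetical
acl office_ips src 192.168.10.0/24
use_backend error if { hdr(Host) -i test.example.com } office_ips
```
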

Redirect sites to the rootfolder without using a new vhost on Nginx.

Map. It’s a construct that maps a source value onto a variable.
map $http_host $name {

    * bob;
    default bob2;
}
So if your hostname ($http_host) matches the given mask, your variable $name will have its value set to bob. Otherwise it’s set to bob2.
The “hostnames” parameter, per the nginx docs, indicates that source values can be hostnames with a prefix or suffix mask.

Real-life example:

You can map the matching domain name to the rootfolder name.

map $http_host $root {
    hostnames;
    funny_cats;
    funny_dogs;
    default funny_animals;
}

In the “server” section you can use:

root /var/www/$root/;

Then reload nginx and you can check if it’s working with curl:

curl -H "Host:
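As a complete sketch, with made-up hostnames standing in for the real ones, and the directory names from above:

```
# funnycats.example / funnydogs.example are hypothetical hostnames
map $http_host $root {
    hostnames;
    *.funnycats.example funny_cats;
    *.funnydogs.example funny_dogs;
    default             funny_animals;
}

server {
    listen 80;
    server_name _;
    # resolves to funny_cats, funny_dogs or funny_animals
    root /var/www/$root/;
}
```

With this in place, curl -H "Host: www.funnycats.example" against the server should be served from /var/www/funny_cats/.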

Mind the gap! (haproxy)

Haproxy is awesome software. If you’re not sure whether your config is correct, you can check it with:
haproxy -c -f /path/to/your/config.conf

-c is for check
-f is for file


The check won’t notice if spaces are in the wrong place. So…

hdr(Host) and hdr (Host) both pass the Haproxy config check, but the variant with the space won’t actually work (well, it didn’t work for me). hdr(Host), with no space between hdr and (Host), is the correct form.

Mad load caused by acpi_pad

If your machine hits a crazy load and your processor is an Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz, check with top what’s wrong. In my case I had a load of 44 without any special reason (non-production server). The problem was: acpi_pad.
Kernel: ~3.10, but Google says it also shows up on higher Linux kernels (4.x).

Dirty and nasty: run "rmmod acpi_pad" and blacklist the module with:

echo "blacklist acpi_pad" >> /etc/modprobe.d/blacklist-acpi_pad.conf

A nicer, suggested solution that I still have to check: updating the BIOS of the server machine.

Oh, and the issue is not brand-related. Some people have it on Supermicro boards and Dell boards; I’ve had it on a Lenovo server.

Zabbix proxy on a raspberry Pi

#Backup databases on raspberry pi in case of power outage (no ups / power bank supplied)

#Create the database:
CREATE DATABASE zabbix_proxy character set utf8 collate utf8_bin;

#Create the zabbix_proxy user:
CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'your_zabbix_user_password';

#Grant privileges
GRANT ALL PRIVILEGES ON zabbix_proxy.* TO 'zabbix'@'localhost' IDENTIFIED BY 'your_zabbix_user_password';

#Create the database schema:
zcat /usr/share/doc/zabbix-proxy-mysql/schema.sql.gz | mysql -uzabbix -p zabbix_proxy

#Create the data mountpoint

#In /etc/fstab
tmpfs /var/log tmpfs defaults,noatime,size=50m 0 0
tmpfs /var/db_storage tmpfs defaults,noatime,size=200m 0 0

#I also have a pendrive for backing up my data once in a while and for some swap:

UUID=f8671d68-403c-449b-94a7-9b80e2f7dd88 none swap sw,pri=5 0 0
UUID=d3f1712b-d53e-487a-9b2c-09d74bdc517c /mnt/data xfs defaults 0 0

#Short disclaimer:
#As I’ve killed two of my SD cards with read/write operations, I’ve decided to go for tmpfs in memory.
#That’s why I suggest a tmpfs for /var/log and a custom location for the databases (/var/db_storage).

#Of course you need to create the directory:
mkdir -p /var/db_storage

You also need to change this line in your /etc/mysql/my.cnf:

#datadir = /var/lib/mysql
datadir = /var/db_storage

#Now we need to secure our files.

crontab -e #edit the user’s crontab file

# Every day at 2AM I’ll get a dump of all my databases
#The best and safest choice would be creating a dedicated user for backing up your
#data. Such a user should have read-only privileges on all databases.
#Instead of zabbix_proxy (the database name) you can use --all-databases and back
#up all your databases. This is fine as long as your databases are quite small; if
#it’s a large database, this will take ages on a machine like a Raspberry Pi.

0 2 * * * mysqldump -u zabbix -p'yoursuperhardpasswordfordbaccess' zabbix_proxy | gzip > /mnt/data/db_storage/zabbix_proxy_`date +"%Y-%m-%d"`.sql.gz

#Every 15 minutes I’ll zip all the database files into one archive and keep it on the thumb drive. Just because I can. (The archive name below is arbitrary.)
*/15 * * * * zip -r /mnt/data/db_storage/db_backup.zip /var/db_storage/

#Another crontab rule – every day at 1:30AM files older than two weeks will be deleted. This is done to save some space.

30 1 * * * find /mnt/data/db_storage/ -mindepth 1 -mtime +14 -delete
#this solution is better than -exec rm {} \; as it’s less risky in case of a wrong path 😉
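You can also rehearse the retention rule safely in a throwaway directory before pointing it at real backups; the paths and file names in this sketch are made up:

```shell
#!/bin/sh
# Rehearse the cleanup in a temporary directory so a wrong path
# cannot delete real data. Uses GNU touch/find, as found on the Pi.
dir=$(mktemp -d)
touch "$dir/fresh_dump.sql.gz"                    # modified just now
touch -d '20 days ago' "$dir/stale_dump.sql.gz"   # older than two weeks
# Same rule as the crontab entry, aimed at the scratch directory:
find "$dir" -mindepth 1 -mtime +14 -delete
ls "$dir"   # only fresh_dump.sql.gz should remain
```

If only the fresh file survives, the expression does what you expect and can go into the crontab.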

#Oh well and that’s about it.