Tag Archives: debian

Sentry via ansible

Hi all,

Today’s story is about Sentry. I’ve prepared an Ansible playbook for Debian and CentOS, so I can learn a bit of both – Sentry and Ansible.

The playbook is available on my github account – https://github.com/thewizardlog/sentry-ansible

The whole idea is to install sentry with as little effort as possible. I guess I’ve managed to do it.

All the instructions are mentioned in the README section on github.
Hope it’s going to help someone.



Nginx throttling

Recently I’ve noticed that someone is trying to brute-force the login to one of my services. At that time I had no captcha on the login form and no rate limits on my API, so I had to figure out a quick way to limit the scale of this. The solution below is not a perfect one, as it does not fix the underlying problem (unlike limiting unsuccessful logins or adding a captcha); it only slows the brute-force process down.

The idea is fairly simple: you can send at most 5 requests per minute to an endpoint. The documentation for this can be found here

limit_req_zone $binary_remote_addr zone=login_zone:10m rate=5r/m;

So we create a 10 megabyte zone for the request states, keyed by the client address (per the nginx docs, one megabyte holds roughly 16,000 of the 64-byte states, so 10m is plenty). We limit the number of requests to 5 per minute. It’s a login endpoint, so how many times would you like to log in during 1 minute?

limit_req_status 429;

If you exceed the 5 requests in 1 minute you get a 429 “Too Many Requests” status code.

You insert your limit inside the location block it’s meant to work with:

    location /login {
        proxy_cache_use_stale off;
        proxy_cache_lock off;
        client_max_body_size 20m;
        limit_req zone=login_zone burst=5 nodelay;

        proxy_pass http://api/login;
    }


GPG key adding script for Debian*

*Debian based systems

There might be an easier way. Or maybe there is even an automatic way to do it. I couldn’t be bothered to search long enough.

Recently I’ve had to reinstall my system on a clean SSD drive. Some of the things were copied over, but as Debian released its new stable (and testing, which is the one I use) version, I could install it from scratch. My pain was adding multiple keys, so I’ve semi-automated it by creating a script.

The steps to reproduce are:

 apt install dirmngr gnupg 


The script:


#!/bin/bash
# Fetch each key given as an argument and add it to apt's keyring
for var in "$@"
do
    echo "Your key is: $var"
    gpg --recv-keys "$var"
    gpg --export "$var" | apt-key add -
done

"$@" expands to all the parameters passed to the script; the loop assigns each one in turn to the “var” variable. To use the script you need to:

chmod +x script.sh

and you use it like

./script.sh key1 ... keyN
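Since the loop itself doesn’t depend on gpg, you can sanity-check the "$@" iteration with plain echo first. A minimal sketch (print_keys is just a made-up name for this demonstration):

```shell
# Stand-in for the key script: iterate over every argument the way
# "$@" does, without touching gpg or apt-key.
print_keys() {
    for var in "$@"; do
        echo "Your key is: $var"
    done
}
print_keys AAAA1111 BBBB2222
```

Each argument gets its own "Your key is:" line, which is exactly what the real script does before fetching the key.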


Zabbix proxy on a raspberry Pi

#Back up databases on a Raspberry Pi in case of a power outage (no UPS / power bank attached)

#Create the database:
CREATE DATABASE zabbix_proxy character set utf8 collate utf8_bin;

#Create the zabbix_proxy user:
CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'your_zabbix_user_password';

#Grant privileges
GRANT ALL PRIVILEGES ON zabbix_proxy.* TO 'zabbix'@'localhost' IDENTIFIED BY 'your_zabbix_user_password';

#Create the database schema:
zcat /usr/share/doc/zabbix-proxy-mysql/schema.sql.gz | mysql -uzabbix -p zabbix_proxy

#Create the data mountpoint

#In /etc/fstab
tmpfs /var/log tmpfs defaults,noatime,size=50m 0 0
tmpfs /var/db_storage tmpfs defaults,noatime,size=200m 0 0

#I also have a pendrive for backing up my data once in a while and for some swap:

UUID=f8671d68-403c-449b-94a7-9b80e2f7dd88 none swap sw,pri=5 0 0
UUID=d3f1712b-d53e-487a-9b2c-09d74bdc517c /mnt/data xfs defaults 0 0

#Short disclaimer:
#As I’ve killed two of my SD cards with the read/write operations, I’ve decided to keep this data in memory via tmpfs.
#That’s why I suggest a tmpfs for /var/log and a custom location for the databases (/var/db_storage).

#Of course you need to create the directory:
mkdir -p /var/db_storage

You also need to change this line in your /etc/mysql/my.cnf:

#datadir = /var/lib/mysql
datadir = /var/db_storage

#Now we need to secure our files.

crontab -e #edit the user's crontab file

# Every day at 2AM I’ll get a dump of all my databases.
#The best and safest choice would be creating a dedicated user for backing up your data. Such a user should have read-only privileges on all databases.
#Instead of zabbix_proxy (the database name) you can use --all-databases and back up everything. This is fine as long as your databases are quite small. If it’s a large database, this will take ages on a machine like a Raspberry Pi.

0 2 * * * mysqldump -u zabbix -p'yoursuperhardpasswordfordbaccess' zabbix_proxy | gzip > /mnt/data/db_storage/zabbix_proxy_$(date +"%Y-%m-%d").sql.gz
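You can dry-run the dump-and-timestamp pattern without mysqldump by piping a fake dump through gzip (the /tmp path and the fake content are just for this demo):

```shell
# Stand-in for the cron job: fake dump, gzip it, stamp the file with today's date.
dest="/tmp/zabbix_proxy_$(date +"%Y-%m-%d").sql.gz"
echo "-- fake dump --" | gzip > "$dest"
zcat "$dest"
```

If the quoting around the date format is wrong, this is where you’ll notice it, long before 2AM.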

#Once every 15 minutes I’ll zip all the files into one archive and keep it on the thumb drive. Just because I can.
*/15 * * * * zip -r /mnt/data/db_storage/db_storage.zip /var/db_storage/

#Another crontab rule – every day at 1:30AM files older than two weeks will be deleted. This is done to save some space.

30 1 * * * find /mnt/data/db_storage/ -mindepth 1 -mtime +14 -delete
#this solution is better than -exec rm {} \; as it’s less risky in case of a wrong path 😉
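If you want to convince yourself that the find rule only touches old files, you can rehearse it in a scratch directory first (GNU touch with -d is assumed here):

```shell
# Rehearse the cleanup: one fresh file, one aged past the 14-day cutoff.
tmp=$(mktemp -d)
touch "$tmp/new.sql.gz"
touch -d "20 days ago" "$tmp/old.sql.gz"
find "$tmp" -mindepth 1 -mtime +14 -delete
ls "$tmp"   # only new.sql.gz should remain
```

This is also a cheap way to test a new retention period before pointing the rule at real backups.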

#Oh well and that’s about it.


Backing up your server with Mega.nz

This post was influenced by Matteo Mattei and most of the credit belongs to him. Be sure to check his tutorial over HERE

So we’ve got some default monitoring, some services and a few databases. But if the server crashes, we are left with nothing. (In my case KS-1 is a cheap server with a single drive.)
A good idea is to prepare a backup and roll it back, just for the sake of checking that everything is all right.
Since mega.nz is back, we can use its space (50 GB) to store our backups.

1. First, we need to get megatools. It’s a pack of multiple tools for working with Mega.nz: you can upload and download files, register accounts, and more.
Unfortunately, Debian stable does not have a ready package yet. But no worries.
Just take a look on: https://packages.debian.org/search?keywords=megatools
I have chosen the more ‘stable’ version – the one from the testing branch of Debian.

wget http://ftp.bg.debian.org/debian/pool/main/m/megatools/megatools_1.9.97-1_amd64.deb

as root: dpkg -i megatools_1.9.97-1_amd64.deb

Oops, still need the requirements:

apt-get install glib-networking
apt-get -f install

and it should be running. 😉

2. Register yourself an account on mega.nz, if you haven’t done that yet.
In the home folder of the backup-doing user create a credentials file called .megarc

Username = mega_username
Password = mega_password

Don’t forget to change the permissions on that file.

chmod 640 /root/.megarc
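A quick way to double-check the resulting mode on any file, shown here on a scratch file rather than the real .megarc (GNU stat assumed):

```shell
# Create a scratch file and verify the permissions land as 640 (rw-r-----).
touch /tmp/megarc_demo
chmod 640 /tmp/megarc_demo
stat -c %a /tmp/megarc_demo   # prints 640
```

If this prints anything other than 640, your umask or filesystem is doing something unexpected.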

3. Check your configuration with the “megals” command. If everything is OK and you have a clean account, then you should get
something like

user@debian:~# megals

4. The backup script.

The script was created by Matteo Mattei and all the credit goes to him. I only edited it a bit for my needs. My edits were:
backing up most recent logs and changing the way of uploading the files to the server.





#!/bin/bash

# Configuration - adjust these placeholder values to your own setup
SERVER="myserver"                  # used in the remote folder name
WORKING_DIR="/tmp/backup_mega"     # local scratch directory
BACKUP_MYSQL="true"
MYSQL_USER="backup_user"
MYSQL_PASSWORD="backup_password"
DOMAINS_FOLDER="/var/www"
LOG_FOLDER="/var/log"
DAYS_TO_BACKUP=7                   # how many daily backups to keep remotely

# Create local working directory and collect all data
rm -rf ${WORKING_DIR}
mkdir ${WORKING_DIR}

# Backup /etc folder
cd /
tar cJf ${WORKING_DIR}/etc.tar.xz etc
cd - > /dev/null

# Backup MySQL
if [ "${BACKUP_MYSQL}" = "true" ]; then
    mkdir ${WORKING_DIR}/mysql
    for db in $(mysql -u${MYSQL_USER} -p${MYSQL_PASSWORD} -e 'show databases;' | grep -Ev "^(Database|mysql|information_schema)$"); do
        #echo "processing ${db}"
        mysqldump --opt -u${MYSQL_USER} -p${MYSQL_PASSWORD} "${db}" | gzip > ${WORKING_DIR}/mysql/${db}_$(date +%F_%T).sql.gz
    done
    #echo "all db now"
    mysqldump --opt -u${MYSQL_USER} -p${MYSQL_PASSWORD} --events --ignore-table=mysql.event --all-databases | gzip > ${WORKING_DIR}/mysql/all-databases_$(date +%F_%T).sql.gz
fi

# Backup domains
mkdir ${WORKING_DIR}/domains
for folder in $(find ${DOMAINS_FOLDER} -mindepth 1 -maxdepth 1 -type d); do
    cd $(dirname ${folder})
    tar cJf ${WORKING_DIR}/domains/$(basename ${folder}).tar.xz $(basename ${folder})
    cd - > /dev/null
done

# Backup latest logs. I know it's not the most 'elegant' way.
mkdir ${WORKING_DIR}/logs

for fol in $(find ${LOG_FOLDER} -mindepth 1 -maxdepth 1 -type d && ls ${LOG_FOLDER} | grep ".log" | grep -v "gz"); do
    cd $(dirname ${fol})
    tar cJf ${WORKING_DIR}/logs/$(basename ${fol}).tar.xz $(basename ${fol})
    cd - > /dev/null
done

# Workaround to prevent dbus error messages
export $(dbus-launch)

# Create base backup folder
[ -z "$(megals --reload /Root/backup_${SERVER})" ] && megamkdir /Root/backup_${SERVER}

# Remove old backups, oldest first, until we are within the limit
while [ $(megals --reload /Root/backup_${SERVER} | grep -E "/Root/backup_${SERVER}/[0-9]{4}-[0-9]{2}-[0-9]{2}$" | wc -l) -gt ${DAYS_TO_BACKUP} ]; do
    TO_REMOVE=$(megals --reload /Root/backup_${SERVER} | grep -E "/Root/backup_${SERVER}/[0-9]{4}-[0-9]{2}-[0-9]{2}$" | sort | head -n 1)
    megarm ${TO_REMOVE}
done

# Create remote folder
curday=$(date +%F)
megamkdir /Root/backup_${SERVER}/${curday} 2> /dev/null

# Backup now!!!
#megasync --reload --no-progress -l ${WORKING_DIR} -r /Root/backup_${SERVER}/${curday} > /dev/null
megacopy --reload --no-progress --local ${WORKING_DIR} --remote /Root/backup_${SERVER}/${curday} > /dev/null

# Kill DBUS session daemon (workaround)
kill ${DBUS_SESSION_BUS_PID}

# Clean local environment
rm -rf ${WORKING_DIR}
exit 0

5. Install dependencies:

For “dbus-launch” you’ll need a pack from the repo “dbus-x11”

apt install dbus-x11

6. Chmod the script

chmod 750 backupscript.sh

(750 already sets the execute bit for the owner and group, so a separate chmod +x is redundant.)

At this point you can start your script with ./backupscript.sh and check if it’s working. If it’s fine, then you can edit your crontab and add the right entry.

nano /etc/crontab
04 04 * * * root /root/backupscript.sh


Zabbix + MariaDB + Nginx

Hello there my Friend,

This blog is a personal documentation of my work. So if you like it or you have any suggestions or questions, leave them in the comment section.

The idea: Monitoring my server + my raspberry Pi machine at home (temp + availability because of power shortages).
The hardware: At first I used a cheap VPS (1core, 2G ram, 15G hdd, 100mbps ) which was totally fine. Now I’m documenting this and redoing it on a low end box (2core, 2G ram, 500G hdd, 100mbps) just because of the migration.
OS of choice: Debian GNU/Linux 8 (jessie), just because I like it.
Http server of choice: Nginx, cuz I wanted to learn it.
DB of choice: MariaDB, I wanted to learn it and like the ‘idea’ of it.

1) Get Zabbix packages

It’s explained well enough here – just do the “installing repository configuration package” and continue with:

apt install zabbix-frontend-php nginx-full php5-fpm zabbix-server-mysql php5-mysql php5 mariadb-client-10.0 mariadb-client-core-10.0 mariadb-server mariadb-server-10.0 mariadb-server-core-10.0 

2) Prepare the database

as root:

mysql_secure_installation

and follow the questions that appear.

After that we have to create a database and user that will operate on that DB.

login to the mysql shell as root:

 mysql -uroot -p 

then we shall create the database; I’ll call mine zabbix (it’s easier to use the scripts afterwards)

CREATE DATABASE zabbix character set utf8 collate utf8_bin;

Then we create the user that will operate on the database. Mine is called zabbix and will be able to log in to the database from the host ‘localhost’. The user is identified by a password set in the ‘password’ value.

CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'password';

and now we’ll allow the user ‘zabbix’ to have all possible privileges on the database ‘zabbix’

GRANT ALL PRIVILEGES ON zabbix.* TO 'zabbix'@'localhost' IDENTIFIED BY 'password'; 

now we can ‘quit;’ the database shell and create the database structure using the script provided by the Zabbix crew!

As root we do such a command:

zcat create.sql.gz | mysql -uzabbix -p<password> zabbix

This decompresses the SQL file and pipes it straight into mysql. The mysql command is quite simple

-u is for user

-p is for password (you can leave it blank and the prompt will ask you for it, or fill it in if you’re sure no one is peeking 😉 )

zabbix is the database name that will be populated
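The zcat-into-mysql pattern can be rehearsed with a throwaway gzip file if you want to see what actually flows through the pipe, no database needed (the /tmp path and the one-line SQL are made up for this demo):

```shell
# Compress a one-line SQL snippet, then stream it back out the same way
# zcat streams create.sql.gz into mysql.
echo "CREATE TABLE demo (id INT);" | gzip > /tmp/demo.sql.gz
zcat /tmp/demo.sql.gz   # prints the original SQL line
```

Whatever zcat prints here is exactly what mysql receives on stdin in the real command.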

3) The Http server

My conf.d for zabbix:

server {
    listen 80;
    root /var/www;
    index index.html index.php;
    server_name name_of_your_server;

    access_log /var/log/nginx/access.log;
    gzip on;
    gzip_min_length 1000;
    gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
    gzip_comp_level 4;
    gzip_proxied any;

    tcp_nopush on;
    tcp_nodelay on;

    #keepalive_timeout 0;
    keepalive_timeout 10;
    fastcgi_read_timeout 10m;
    proxy_read_timeout 10m;
    client_header_timeout 10m;
    client_body_timeout 10m;
    send_timeout 10m;
    large_client_header_buffers 16 32k;

    location / {
        root /var/www/html;
    }

    location /zabbix {
        # try_files $uri $uri/ /index.html = 404;
        index index.php index.html index.htm;
        #try_files $uri $uri/ index.php;
        expires 30d;

        #error_page 500 502 503 504 /50x.html;
        #location = /50x.html {
        #    root /usr/share/nginx/html;
        #}

        location ~* \.php$ {
            expires off;
            if (!-f $request_filename) { return 404; }
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_index index.php;
            fastcgi_read_timeout 300;
            include fastcgi_params;
        }
    }
}

4) Edit the php.ini file

In Debian that file is located in: /etc/php5/fpm/php.ini

Set the following values:

post_max_size = 16M
max_execution_time = 300
max_input_time = 300
date.timezone = Continent/City
always_populate_raw_post_data = -1 

and restart php5-fpm

 systemctl restart php5-fpm 

5) I’ve copied the zabbix directory to /var/www/zabbix:

cp -R /usr/share/zabbix/ /var/www/zabbix 

6) After that, finish the installation in the web frontend, providing the data (database name, database username, database password, database host)

7) The default login credentials are: Admin / zabbix