
Redirect sites to the root folder without using a new vhost on Nginx.

Map. It's a construction that maps a value to a variable.
map $http_host $name {
    hostnames;

    *.bob.com bob;
    default bob2;
}

So if your hostname ($http_host) matches *.bob.com, then the variable $name will have its value set to bob. Otherwise it's set to bob2.
The "hostnames" directive, according to the nginx documentation, indicates that source values can be hostnames with a prefix or suffix mask.

Real-life example:

You can map the matching domain name to the rootfolder name.

map $http_host $root {
    hostnames;

    .funnycats.com funny_cats;
    .funnydogs.com funny_dogs;

    default funny_animals;
}

In the "server" section you can then use:

root /var/www/$root/;

Then reload nginx and you can check whether it's working with curl:

curl 127.0.0.1 -H "Host: bork.funnycats.com"
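
Putting it all together, here's a minimal sketch, assuming both domains point at this server (note that the map block must sit in the http context, outside any server block):

map $http_host $root {
    hostnames;

    .funnycats.com funny_cats;
    .funnydogs.com funny_dogs;

    default funny_animals;
}

server {
    listen 80;
    server_name .funnycats.com .funnydogs.com;
    # $root resolves to funny_cats, funny_dogs or funny_animals
    root /var/www/$root/;
}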

Mind the gap! (HAProxy)

HAProxy is an awesome piece of software. If you're not sure your config is correct, you can check it with:
haproxy -c -f /path/to/your/config.conf

-c is for check
-f is for file

BUT REMEMBER:

The check won’t notice if spaces are in the wrong place. So…

hdr(Host) and hdr (Host) both pass the HAProxy config check. But only hdr(Host), with no space between hdr and (Host), actually works; the version with the space didn't work for me.
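
For reference, this is how hdr(Host) typically shows up in an ACL; the ACL and backend names here are made up for illustration:

# in a frontend section
acl host_cats hdr(Host) -i funnycats.com
use_backend cats_backend if host_cats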

Mad load caused by acpi_pad

If your machine hits a crazy load and your processor is an Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz,
check with top what's wrong. In my case I had a load of 44 without any particular reason (on a non-production server). The culprit was: acpi_pad.
Kernel: ~3.10, but Google says it also shows up on newer Linux kernels (4.x).

Solution?
Dirty and nasty: remove the module and blacklist it so it stays gone after a reboot:

rmmod acpi_pad
echo "blacklist acpi_pad" >> /etc/modprobe.d/blacklist-acpi_pad.conf

A nicer, suggested solution that I still have to verify: updating the BIOS of the server machine.
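
If you want to see which BIOS version you're currently on before hunting for an update, dmidecode (shipped with most distros) can tell you:

dmidecode -s bios-version
dmidecode -s bios-release-date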

Oh, and the issue is not brand-related. Some people have it on Supermicro boards, some on Dell boards. I've had it on a Lenovo server.

Zabbix proxy on a Raspberry Pi

#Back up databases on a Raspberry Pi in case of a power outage (no UPS / power bank attached)

#Create the database:
CREATE DATABASE zabbix_proxy character set utf8 collate utf8_bin;

#Create the zabbix_proxy user:
CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'your_zabbix_user_password';

#Grant privileges
GRANT ALL PRIVILEGES ON zabbix_proxy.* TO 'zabbix'@'localhost' IDENTIFIED BY 'your_zabbix_user_password';

#Create the database schema:
zcat /usr/share/doc/zabbix-proxy-mysql/schema.sql.gz | mysql -uzabbix -p zabbix_proxy

#Create the data mountpoint

#In /etc/fstab
tmpfs /var/log tmpfs defaults,noatime,size=50m 0 0
tmpfs /var/db_storage tmpfs defaults,noatime,size=200m 0 0

#I also have a pendrive for backing up my data once in a while and for some swap:

UUID=f8671d68-403c-449b-94a7-9b80e2f7dd88 none swap sw,pri=5 0 0
UUID=d3f1712b-d53e-487a-9b2c-09d74bdc517c /mnt/data xfs defaults 0 0
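
#To activate the new fstab entries without a reboot (assuming nothing else is mounted on those paths yet):
mount -a
swapon -a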

#Short disclaimer:
#As I've killed two of my SD cards with read/write operations, I've decided to go for tmpfs in memory.
#That's why I suggest a tmpfs for /var/log and a custom location for the databases (/var/db_storage).

#Of course you need to create the directory:
mkdir -p /var/db_storage

You also need to change the datadir line in your /etc/mysql/my.cnf:

#datadir = /var/lib/mysql
datadir = /var/db_storage
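
#If MySQL already has data in the old datadir, stop the service, copy the files over
#with ownership preserved and start it again (a sketch; adjust the service name to your distro):
systemctl stop mysql
cp -a /var/lib/mysql/. /var/db_storage/
chown -R mysql:mysql /var/db_storage
systemctl start mysql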

#Now we need to secure our files.

crontab -e #edit the user's crontab file

# Every day at 2AM I'll get a dump of all my databases.
#The best and safest choice would be creating a dedicated user for backing up your data. Such a user should have read-only privileges on all databases.
#Instead of zabbix_proxy (the database name) you can use --all-databases and back up all your databases. That's fine as long as your databases are quite small. If it's a large database, then this will take ages on a machine like a Raspberry Pi.

0 2 * * * mysqldump -u zabbix -p'yoursuperhardpasswordfordbaccess' zabbix_proxy | gzip > /mnt/data/db_storage/zabbix_proxy_`date +\%Y-\%m-\%d`.sql.gz

#Once every 15 minutes I'll zip all the files into one archive and keep it on the thumb drive. Just because I can.
*/15 * * * * zip -r /mnt/data/db_storage/db_storage.zip /var/db_storage/

#Another crontab rule: every day at 1:30AM, files older than two weeks are deleted to save some space.

30 1 * * * find /mnt/data/db_storage/ -mindepth 1 -mtime +14 -delete
#this solution is better than -exec rm {} \; as it's less risky in case of a wrong path 😉
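
#Since the databases live on a tmpfs, they come back empty after a power loss.
#To restore the newest dump (the file name below is just an example):
zcat /mnt/data/db_storage/zabbix_proxy_2017-01-01.sql.gz | mysql -uzabbix -p zabbix_proxy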

#Oh well and that’s about it.

Cheers,
TheWizard

Backing up your server with Mega.nz

This post was influenced by Matteo Mattei and most of the credit belongs to him. Be sure to check his tutorial here: http://www.matteomattei.com/backup-your-server-on-mega-co-nz-using-megatools/

So we've got some default monitoring, some services and a few databases. But if the machine crashes, we're left with nothing. (In my case KS-1 is a cheap server with a single drive.)
A good idea is to prepare a backup and roll it back once, just for the sake of checking that everything is alright.
Since mega.nz is back, we can use its space (50G) to store our backups.

1. First, we need to get megatools. It's a pack of multiple tools for working with Mega.nz: you can upload and download files, register accounts and more.
Unfortunately, Debian stable does not have a ready package yet. But no worries.
Just take a look at: https://packages.debian.org/search?keywords=megatools
I have chosen the more 'stable' version: the one from the testing branch of Debian.


wget http://ftp.bg.debian.org/debian/pool/main/m/megatools/megatools_1.9.97-1_amd64.deb

as root: dpkg -i megatools_1.9.97-1_amd64.deb

Oops, we still need the dependencies:

apt-get install glib-networking
apt-get -f install



and it should be running. 😉

2. Register yourself an account on mega.nz, if you haven't done that yet.
In the home folder of the user that runs the backups, create a credentials file called .megarc


.megarc
[Login]
Username = mega_username
Password = mega_password

Don't forget to change the permissions on that file.


chmod 640 /root/.megarc

3. Check your configuration with the "megals" command. If everything is ok and you have a clean account, then you should get
something like:


user@debian:~# megals
/Contacts
/Inbox
/Root
/Trash

4. The backup script.

The script was created by Matteo Mattei and all the credit goes to him. I only edited it a bit for my needs. My edits:
backing up the most recent logs and changing the way the files are uploaded to the server.


#!/bin/bash

SERVER="myservername"
DAYS_TO_BACKUP=7
WORKING_DIR="your_backup_tmp_dir"

BACKUP_MYSQL="true"
MYSQL_USER="username"
MYSQL_PASSWORD="password"

DOMAINS_FOLDER="/var/www/html"
LOG_FOLDER="/var/log"

# Source: http://www.matteomattei.com/backup-your-server-on-mega-co-nz-using-megatools/
# Create local working directory and collect all data
rm -rf ${WORKING_DIR}
mkdir ${WORKING_DIR}
cd ${WORKING_DIR}

# Backup /etc folder
cd /
tar cJf ${WORKING_DIR}/etc.tar.xz etc
cd - > /dev/null

# Backup MySQL
if [ "${BACKUP_MYSQL}" = "true" ]
then
    mkdir ${WORKING_DIR}/mysql
    for db in $(mysql -u${MYSQL_USER} -p${MYSQL_PASSWORD} -e 'show databases;' | grep -Ev "^(Database|mysql|information_schema|performance_schema)$")
    do
        #echo "processing ${db}"
        mysqldump --opt -u${MYSQL_USER} -p${MYSQL_PASSWORD} "${db}" | gzip > ${WORKING_DIR}/mysql/${db}_$(date +%F_%T).sql.gz
    done
    #echo "all db now"
    mysqldump --opt -u${MYSQL_USER} -p${MYSQL_PASSWORD} --events --ignore-table=mysql.event --all-databases | gzip > ${WORKING_DIR}/mysql/all_databases_$(date +%F_%T).sql.gz
fi

# Backup domains
mkdir ${WORKING_DIR}/domains
for folder in $(find ${DOMAINS_FOLDER} -mindepth 1 -maxdepth 1 -type d)
do
    cd $(dirname ${folder})
    tar cJf ${WORKING_DIR}/domains/$(basename ${folder}).tar.xz $(basename ${folder})
    cd - > /dev/null
done

# Backup latest logs. I know it's not the most 'elegant way'.
mkdir ${WORKING_DIR}/logs

# grab top-level log directories plus the non-rotated *.log files in one find
for fol in $(find ${LOG_FOLDER} -mindepth 1 -maxdepth 1 \( -type d -o -name "*.log" \))
do
    cd $(dirname ${fol})
    tar cJf ${WORKING_DIR}/logs/$(basename ${fol}).tar.xz $(basename ${fol})
    cd - > /dev/null
done

##################################
# Workaround to prevent dbus error messages
export $(dbus-launch)

# Create base backup folder
[ -z "$(megals --reload /Root/backup_${SERVER})" ] && megamkdir /Root/backup_${SERVER}

# Remove old logs
while [ $(megals --reload /Root/backup_${SERVER} | grep -E "/Root/backup_${SERVER}/[0-9]{4}-[0-9]{2}-[0-9]{2}$" | wc -l) -gt ${DAYS_TO_BACKUP} ]
do
    TO_REMOVE=$(megals --reload /Root/backup_${SERVER} | grep -E "/Root/backup_${SERVER}/[0-9]{4}-[0-9]{2}-[0-9]{2}$" | sort | head -n 1)
    megarm ${TO_REMOVE}
done

# Create remote folder
curday=$(date +%F)
megamkdir /Root/backup_${SERVER}/${curday} 2> /dev/null

# Backup now!!!
#megasync --reload --no-progress -l ${WORKING_DIR} -r /Root/backup_${SERVER}/${curday} > /dev/null
megacopy --reload --no-progress --local ${WORKING_DIR} -r /Root/backup_${SERVER}/${curday} > /dev/null

# Kill DBUS session daemon (workaround)
kill ${DBUS_SESSION_BUS_PID}
rm -f ${DBUS_SESSION_BUS_ADDRESS}

# Clean local environment
rm -rf ${WORKING_DIR}
exit 0

5. Install dependencies:

For "dbus-launch" you'll need the "dbus-x11" package from the repos:


apt install dbus-x11

6. Chmod the script

chmod +x backupscript.sh
chmod 750 backupscript.sh

At this point you can start your script with ./backupscript.sh and check if it's working. If it's fine, then you can edit your crontab and add the right entry.


nano /etc/crontab
04 04 * * * root /root/backupscript.sh
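
If you ever need to pull a backup back down, megatools also ships megaget; a quick sketch (the date folder is just an example):

megals /Root/backup_myservername
megaget /Root/backup_myservername/2017-01-01/etc.tar.xz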

Cheers,
Wizard

Zabbix + MariaDB + Nginx

Hello there my Friend,

This blog is a personal documentation of my work. So if you like it or you have any suggestions or questions, leave them in the comment section.

The idea: monitoring my server + my Raspberry Pi machine at home (temperature + availability, because of power shortages).
The hardware: at first I used a cheap VPS (1 core, 2G RAM, 15G HDD, 100 Mbps), which was totally fine. Now I'm documenting this and redoing it on a low end box (2 cores, 2G RAM, 500G HDD, 100 Mbps), just because of the migration.
OS of choice: Debian GNU/Linux 8 (jessie), just because I like it.
HTTP server of choice: Nginx, because I wanted to learn it.
DB of choice: MariaDB; I wanted to learn it and I like the 'idea' of it.

1) Get Zabbix packages

It's explained well enough here: just do the "installing repository configuration package" step and continue with:

apt install zabbix-frontend-php nginx-full php5-fpm zabbix-server-mysql php5-mysql php5 mariadb-client-10.0 mariadb-client-core-10.0 mariadb-server mariadb-server-10.0 mariadb-server-core-10.0 

2) Prepare the database

as root:

 mysql_secure_installation 

and follow the questions that appear.

After that we have to create a database and user that will operate on that DB.

login to the mysql shell as root:

 mysql -uroot -p 

Then we shall create the database; I'll call mine zabbix (it's easier to use the scripts afterwards):


CREATE DATABASE zabbix character set utf8 collate utf8_bin;

Then we create the user that will operate on the database. Mine is called zabbix and will be able to log in to the database from the host 'localhost'. The user is identified by the password set in the 'password' value.


CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'password';

and now we’ll allow the user ‘zabbix’ to have all possible privileges on the database ‘zabbix’


GRANT ALL PRIVILEGES ON zabbix.* TO 'zabbix'@'localhost' IDENTIFIED BY 'password'; 

Now we can 'quit;' the database shell and create the database structure using the script provided by the Zabbix crew!

As root we run:


zcat create.sql.gz | mysql -uzabbix -p<password> zabbix

This decompresses the schema from the gz archive and pipes it into mysql. The mysql command is quite simple:

-u is for user

-p is for password (you can leave it blank and the prompt will ask you for it, or fill it in if you're sure no one is peeking 😉 )

zabbix is the database name that will be populated
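
To quickly verify the import worked, you can count the tables that were created (the command will prompt for the zabbix user's password):

mysql -uzabbix -p zabbix -e 'show tables;' | wc -l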

3) The HTTP server

My conf.d for zabbix:


server {
    listen 80;
    root /var/www;
    index index.html index.php;
    server_name <name_of_your_server>;

    access_log /var/log/nginx/access.log;
    gzip on;
    gzip_min_length 1000;
    gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
    gzip_comp_level 4;
    gzip_proxied any;

    tcp_nopush on;
    tcp_nodelay on;

    #keepalive_timeout 0;
    keepalive_timeout 10;
    fastcgi_read_timeout 10m;
    proxy_read_timeout 10m;
    client_header_timeout 10m;
    client_body_timeout 10m;
    send_timeout 10m;
    large_client_header_buffers 16 32k;

    location / {
        root /var/www/html;
    }

    location /zabbix {
        # try_files $uri $uri/ /index.html = 404;

        index index.php index.html index.htm;
        #try_files $uri $uri/ index.php;
        expires 30d;
    }

    #error_page 500 502 503 504 /50x.html;
    #location = /50x.html {
    #    root /usr/share/nginx/html;
    #}

    location ~* \.php$ {
        expires off;
        if (!-f $request_filename) { return 404; }
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        #fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_index index.php;
        fastcgi_read_timeout 300;
        include fastcgi_params;
    }
}
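
Before reloading, it's worth letting nginx validate the config first:

nginx -t && systemctl reload nginx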

4) Edit the php.ini file

In Debian that file is located in: /etc/php5/fpm/php.ini

Set these values:

post_max_size = 16M
max_execution_time = 300
max_input_time = 300
date.timezone = Continent/City
always_populate_raw_post_data = -1

and restart php5-fpm

 systemctl restart php5-fpm 

5) I’ve copied the zabbix directory to /var/www/zabbix:

cp -R /usr/share/zabbix/ /var/www/zabbix 

6) After that, run the frontend installer in your browser, providing the data (database name, database username, database password, database host).

7) The default login credentials are: Admin / zabbix

Cheers,
Wizard