
Zabbix proxy on a Raspberry Pi

#Backing up databases on a Raspberry Pi in case of a power outage (no UPS / power bank attached)

#Create the database:
CREATE DATABASE zabbix_proxy character set utf8 collate utf8_bin;

#Create the zabbix_proxy user:
CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'your_zabbix_user_password';

#Grant privileges
GRANT ALL PRIVILEGES ON zabbix_proxy.* TO 'zabbix'@'localhost' IDENTIFIED BY 'your_zabbix_user_password';

#Create the database schema:
zcat /usr/share/doc/zabbix-proxy-mysql/schema.sql.gz | mysql -uzabbix -p zabbix_proxy
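To make sure the schema actually imported, a quick sanity check should list the Zabbix tables:

mysql -uzabbix -p zabbix_proxy -e 'SHOW TABLES;' | head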

#Create the data mountpoint

#In /etc/fstab
tmpfs /var/log tmpfs defaults,noatime,size=50m 0 0
tmpfs /var/db_storage tmpfs defaults,noatime,size=200m 0 0

#I also have a pendrive for backing up my data once in a while and for some swap:

UUID=f8671d68-403c-449b-94a7-9b80e2f7dd88 none swap sw,pri=5 0 0
UUID=d3f1712b-d53e-487a-9b2c-09d74bdc517c /mnt/data xfs defaults 0 0

#Short disclaimer:
#As I've killed two of my SD cards with the read/write operations, I've decided to go for tmpfs in memory.
#That's why I suggest a tmpfs for /var/log and a custom location for the databases (/var/db_storage).

#Of course you need to create the directory:
mkdir -p /var/db_storage
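The new fstab entries can be applied without a reboot. Something like this should do (assuming /mnt/data is used as the pendrive mountpoint, as above):

mkdir -p /mnt/data #mountpoint for the pendrive, if it's not there yet
mount -a
swapon -a
df -h /var/db_storage /mnt/data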

#You also need to change the datadir line in /etc/mysql/my.cnf:

#datadir = /var/lib/mysql
datadir = /var/db_storage
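Since /var/db_storage starts out empty, the existing MySQL data has to be copied over before the daemon is started again. A minimal sketch (and remember: tmpfs is wiped on every reboot, which is exactly why the dumps below exist):

service mysql stop
cp -a /var/lib/mysql/. /var/db_storage/ #copy data including permissions
chown -R mysql:mysql /var/db_storage
service mysql start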

#Now we need to secure our files with some scheduled backups.

crontab -e #edit the user's crontab file

#Every day at 2AM I'll get a dump of all my databases.
#The best and safest choice would be to create a dedicated user for backing up your data. Such a user should have read-only privileges on all databases.
#Instead of zabbix_proxy (the database name) you can use --all-databases and back up everything. This is fine as long as your databases are quite small. If it's a large database, then this will take ages on a machine like the Raspberry Pi.

0 2 * * * mysqldump -u zabbix -p'yoursuperhardpasswordfordbaccess' zabbix_proxy | gzip > /mnt/data/db_storage/zabbix_proxy_`date +"%Y-%m-%d"`.sql.gz
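If you go for the dedicated read-only user suggested above, the grants could look roughly like this (a sketch; the user name and password are placeholders):

CREATE USER 'backup'@'localhost' IDENTIFIED BY 'backup_user_password';
GRANT SELECT, LOCK TABLES, SHOW VIEW ON *.* TO 'backup'@'localhost';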

#Once every 15 minutes I'll zip all the files into one archive and keep it on the thumb drive. Just because I can.
*/15 * * * * zip -r /mnt/data/db_storage/db_storage.zip /var/db_storage/

#Another crontab rule: every day at 1:30AM, files older than two weeks will be deleted. This is done to save some space.

30 1 * * * find /mnt/data/db_storage/ -mindepth 1 -mtime +14 -delete
#This solution is better than -exec rm {} \; as it's less risky in case of a wrong path 😉
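For completeness, rolling back one of these dumps is just the pipe in reverse (the date in the filename is a placeholder for one of your dumps):

zcat /mnt/data/db_storage/zabbix_proxy_2021-01-01.sql.gz | mysql -uzabbix -p zabbix_proxy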

#Oh well and that’s about it.

Cheers,
TheWizard

Backing up your server with Mega.nz

This post was influenced by Matteo Mattei and most of the credit belongs to him. Be sure to check his tutorial over at http://www.matteomattei.com/backup-your-server-on-mega-co-nz-using-megatools/

So we've got some default monitoring, some services and a few databases. But if the server crashes, we are left with nothing. (In my case KS-1 is a cheap server with a single drive.)
A good idea is to prepare a backup and roll it back, just for the sake of checking that everything is all right.
Since Mega.nz is back, we can use its space (50 GB) to store our backups.

1. First, we need to get megatools. It's a pack of multiple tools for working with Mega.nz: you can upload and download files, register accounts and more.
Unfortunately, Debian stable does not have a ready package yet. But no worries.
Just take a look at: https://packages.debian.org/search?keywords=megatools
I have chosen the more 'stable' version, the one from the testing branch of Debian.


wget http://ftp.bg.debian.org/debian/pool/main/m/megatools/megatools_1.9.97-1_amd64.deb

as root: dpkg -i megatools_1.9.97-1_amd64.deb

Oops, we still need the dependencies:

apt-get install glib-networking
apt-get -f install



and it should be running. 😉
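A quick check that the package and its binaries actually landed on the system:

dpkg -s megatools | grep -i '^version'
which megals megadf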

2. Register yourself an account on mega.nz, if you haven’t done that yet.
In the home folder of the user that runs the backups, create a credentials file called .megarc


.megarc
[Login]
Username = mega_username
Password = mega_password

Don't forget to change the permissions on that file.


chmod 640 /root/.megarc

3. Check your configuration with the "megals" command. If everything is OK and you have a clean account, then you should get
something like:


user@debian:~# megals
/Contacts
/Inbox
/Root
/Trash

4. The backup script.

The script was created by Matteo Mattei and all the credit goes to him. I only edited it a bit for my needs:
backing up the most recent logs and changing the way the files are uploaded to the server.


#!/bin/bash

SERVER="myservername"
DAYS_TO_BACKUP=7
WORKING_DIR="your_backup_tmp_dir"

BACKUP_MYSQL="true"
MYSQL_USER="username"
MYSQL_PASSWORD="password"

DOMAINS_FOLDER="/var/www/html"
LOG_FOLDER="/var/log"

# Original script: http://www.matteomattei.com/backup-your-server-on-mega-co-nz-using-megatools/
##################################################################
# Create local working directory and collect all data
rm -rf ${WORKING_DIR}
mkdir ${WORKING_DIR}
cd ${WORKING_DIR}

# Backup /etc folder
cd /
tar cJf ${WORKING_DIR}/etc.tar.xz etc
cd - > /dev/null

# Backup MySQL
if [ "${BACKUP_MYSQL}" = "true" ]
then
mkdir ${WORKING_DIR}/mysql
for db in $(mysql -u${MYSQL_USER} -p${MYSQL_PASSWORD} -e 'show databases;' | grep -Ev "^(Database|mysql|information_schema|performance_schema)$")
do
#echo "processing ${db}"
mysqldump --opt -u${MYSQL_USER} -p${MYSQL_PASSWORD} "${db}" | gzip > ${WORKING_DIR}/mysql/${db}_$(date +%F_%T).sql.gz
done
#echo "all db now"
mysqldump --opt -u${MYSQL_USER} -p${MYSQL_PASSWORD} --events --ignore-table=mysql.event --all-databases | gzip > ${WORKING_DIR}/mysql/all_databases_$(date +%F_%T).sql.gz
fi

# Backup domains
mkdir ${WORKING_DIR}/domains
for folder in $(find ${DOMAINS_FOLDER} -mindepth 1 -maxdepth 1 -type d)
do
cd $(dirname ${folder})
tar cJf ${WORKING_DIR}/domains/$(basename ${folder}).tar.xz $(basename ${folder})
cd - > /dev/null
done

# Backup latest logs. I know it's not the most 'elegant way'.
mkdir ${WORKING_DIR}/logs

for fol in $(find ${LOG_FOLDER} -mindepth 1 -maxdepth 1 \( -type d -o -type f -name "*.log" \))
do
cd $(dirname ${fol})
tar cJf ${WORKING_DIR}/logs/$(basename ${fol}).tar.xz $(basename ${fol})
cd - > /dev/null
done

##################################
# Workaround to prevent dbus error messages
export $(dbus-launch)

# Create base backup folder
[ -z "$(megals --reload /Root/backup_${SERVER})" ] && megamkdir /Root/backup_${SERVER}

# Remove old backup folders
while [ $(megals --reload /Root/backup_${SERVER} | grep -E "/Root/backup_${SERVER}/[0-9]{4}-[0-9]{2}-[0-9]{2}$" | wc -l) -gt ${DAYS_TO_BACKUP} ]
do
TO_REMOVE=$(megals --reload /Root/backup_${SERVER} | grep -E "/Root/backup_${SERVER}/[0-9]{4}-[0-9]{2}-[0-9]{2}$" | sort | head -n 1)
megarm ${TO_REMOVE}
done

# Create remote folder
curday=$(date +%F)
megamkdir /Root/backup_${SERVER}/${curday} 2> /dev/null

# Backup now!!!
#megasync --reload --no-progress -l ${WORKING_DIR} -r /Root/backup_${SERVER}/${curday} > /dev/null
megacopy --reload --no-progress --local ${WORKING_DIR} --remote /Root/backup_${SERVER}/${curday} > /dev/null

# Kill DBUS session daemon (workaround)
kill ${DBUS_SESSION_BUS_PID}
rm -f ${DBUS_SESSION_BUS_ADDRESS}

# Clean local environment
rm -rf ${WORKING_DIR}
exit 0

5. Install dependencies:

For "dbus-launch" you'll need the dbus-x11 package from the repo:


apt install dbus-x11

6. Set permissions on the script

chmod 750 backupscript.sh #750 already makes it executable and keeps the passwords inside it away from others

At this point you can start your script with ./backupscript.sh and check that it's working; a quick roll-back test is sketched below. If everything is fine, then you can edit your crontab and add the right entry.
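A manual roll-back test could look like this: fetch one archive from Mega and peek inside (the server name and date are placeholders for your own):

megaget /Root/backup_myservername/2021-01-01/etc.tar.xz
tar tJf etc.tar.xz | head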


nano /etc/crontab
04 04 * * * root /root/backupscript.sh
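If you'd like a trace of each run, redirecting the script's output to a log file is a one-line change (the log path is just a suggestion):

04 04 * * * root /root/backupscript.sh >> /var/log/backupscript.log 2>&1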

Cheers,
Wizard