Redirect sites to the rootfolder without using a new vhost on Nginx.

Map. It's a construct that maps a value to a variable.
map $http_host $name {
    hostnames;

    *.bob.com bob;
    default   bob2;
}

So if your hostname ($http_host) matches *.bob.com, then the variable $name will have its value set to bob. Otherwise it's set to bob2.
The "hostnames" parameter, according to the nginx documentation, indicates that source values can be hostnames with a prefix or suffix mask.

Real-life example:

You can map the matching domain name to the root folder name.

map $http_host $root {
    hostnames;

    .funnycats.com funny_cats;
    .funnydogs.com funny_dogs;

    default funny_animals;
}

In the "server" section you can then use (the variable name has to match the one defined in the map, $root here):

root /var/www/$root/;

Then reload nginx and you can check whether it's working with curl:

curl 127.0.0.1 -H "Host: bork.funnycats.com"
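Putting the two pieces together, a minimal sketch of the whole thing might look like this (the domains and folder names are this post's example values; note that map has to sit in the http context, outside any server block):

```nginx
map $http_host $root_folder {
    hostnames;

    .funnycats.com funny_cats;
    .funnydogs.com funny_dogs;

    default        funny_animals;
}

server {
    listen 80;
    server_name .funnycats.com .funnydogs.com;

    # Each matched hostname is served from its mapped folder.
    root /var/www/$root_folder/;
}
```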

Mind the gap! (haproxy)

HAProxy is awesome software. If you're not sure whether your config is correct, you can check it with:
haproxy -c -f /path/to/your/config.conf

-c is for check
-f is for file

BUT REMEMBER:

The check won’t notice if spaces are in the wrong place. So…

hdr(Host) and hdr (Host) both pass the HAProxy config check, but the one with the space won't actually work (well, it didn't work for me). hdr(Host) is the correct form: no space between hdr and (Host).
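For illustration, a minimal sketch of the correct form (the frontend and backend names here are made up for this example):

```haproxy
frontend http-in
    bind *:80
    # Correct: no space between the fetch name and its argument.
    acl is_cats hdr(host) -i funnycats.com
    use_backend cats_backend if is_cats
    default_backend misc_backend
```

With `acl is_cats hdr (host) -i funnycats.com` instead, `haproxy -c` may still report the file as valid, but the ACL will not match what you expect.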

Mad load caused by acpi_pad

If your machine hits a crazy load and your processor is an Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz,
check with top what's wrong. In my case I had a load of 44 without any obvious reason (on a non-production server). The culprit was acpi_pad.
Kernel: ~3.10, but Google says it also happens on newer Linux kernels (4.x).

Solution?
Dirty and nasty: remove the module and blacklist it:

rmmod acpi_pad
echo "blacklist acpi_pad" >> /etc/modprobe.d/blacklist-acpi_pad.conf

A nicer and suggested solution that I have to check: updating the bios of the server machine.

Oh, and the issue is not brand-related. Some people have it on Supermicro and Dell boards; I've had it on a Lenovo server.

Zabbix proxy on a Raspberry Pi

#Back up databases on a Raspberry Pi in case of a power outage (no UPS / power bank attached)

#Create the database:
CREATE DATABASE zabbix_proxy character set utf8 collate utf8_bin;

#Create the zabbix_proxy user:
CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'your_zabbix_user_password';

#Grant privileges
GRANT ALL PRIVILEGES ON zabbix_proxy.* TO 'zabbix'@'localhost' IDENTIFIED BY 'your_zabbix_user_password';

#Create the database schema:
zcat /usr/share/doc/zabbix-proxy-mysql/schema.sql.gz | mysql -uzabbix -p zabbix_proxy

#Create the data mountpoint

#In /etc/fstab
tmpfs /var/log tmpfs defaults,noatime,size=50m 0 0
tmpfs /var/db_storage tmpfs defaults,noatime,size=200m 0 0

#I also have a pendrive for backing up my data once in a while and for some swap:

UUID=f8671d68-403c-449b-94a7-9b80e2f7dd88 none swap sw,pri=5 0 0
UUID=d3f1712b-d53e-487a-9b2c-09d74bdc517c /mnt/data xfs defaults 0 0

#Short disclaimer:
#As I've killed two of my SD cards with read/write operations, I've decided to go for tmpfs in memory.
#That's why I suggest a tmpfs for /var/log and a custom location for the databases (/var/db_storage).

#Of course you need to create the directory:
mkdir -p /var/db_storage

#You also need to change this line in your /etc/mysql/my.cnf:

#datadir = /var/lib/mysql
datadir = /var/db_storage

#Now we need to secure our files.

crontab -e #edit the user's crontab file

# Every day at 2AM I’ll get a dump of all my databases
#The best and safest choice would be creating a dedicated user for backing up your data. Such a user should have read-only privileges on all databases.
#Instead of zabbix_proxy (the database name) you can use --all-databases and back up all your databases. This is fine as long as your databases are quite small; if you have a large database, this will take ages on a machine like a Raspberry Pi.

0 2 * * * mysqldump -u zabbix -p'yoursuperhardpasswordfordbaccess' zabbix_proxy | gzip > /mnt/data/db_storage/zabbix_proxy_$(date +\%Y-\%m-\%d).sql.gz
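One crontab gotcha worth knowing: % is special in crontab lines (cron cuts the command at the first unescaped %), so each % in the date format has to be written as \%. The date-stamped filename itself can be rehearsed outside cron; in this sketch echo stands in for mysqldump and the directory is a throwaway example, not the real /mnt/data path:

```shell
# Produce a gzip'd dump file with a date-stamped name.
outdir=$(mktemp -d)
echo "-- dump data" | gzip > "${outdir}/zabbix_proxy_$(date +%Y-%m-%d).sql.gz"
ls "${outdir}"
```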

#Every 15 minutes I'll zip all the files into one archive and keep it on the thumb drive. Just because I can.
*/15 * * * * zip -r /mnt/data/db_storage/db_storage.zip /var/db_storage/

#Another crontab rule: every day at 1:30AM, files older than two weeks will be deleted. This is done to save some space.

30 1 * * * find /mnt/data/db_storage/ -mindepth 1 -mtime +14 -delete
#this solution is safer than -exec rm {} \; since it's less risky in case of a wrong path 😉
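The retention rule can be rehearsed safely in a scratch directory before pointing it at the real backup path:

```shell
# Rehearse the cleanup: one old file, one fresh file.
demo=$(mktemp -d)
touch -d "20 days ago" "${demo}/old.sql.gz"   # pretend 20-day-old backup
touch "${demo}/new.sql.gz"                    # fresh backup

# Same rule as the cron job: delete entries older than 14 days.
find "${demo}" -mindepth 1 -mtime +14 -delete

ls "${demo}"   # only new.sql.gz should remain
```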

#Oh well and that’s about it.

Cheers,
TheWizard

Backing up your server with Mega.nz

This post was influenced by Matteo Mattei and most of the credit belongs to him. Be sure to check his tutorial over HERE.

So we've got some default monitoring, some services and a few databases. But if the server crashes we are left with nothing. (In my case KS-1 is a cheap server with a single drive.)
A good idea is to prepare a backup and roll it back, just for the sake of checking that everything is all right.
Since Mega.nz is back, we can use its space (50 GB) to store our backups.

1. First, we need to get megatools. It's a pack of multiple tools to work with Mega.nz: you can upload and download files, register accounts and more.
Unfortunately, Debian stable does not have a ready package yet. But no worries.
Just take a look at: https://packages.debian.org/search?keywords=megatools
I have chosen the more 'stable' version: the one for the testing branch of Debian.


wget http://ftp.bg.debian.org/debian/pool/main/m/megatools/megatools_1.9.97-1_amd64.deb

as root: dpkg -i megatools_1.9.97-1_amd64.deb

Oops, we still need the dependencies:

apt-get install glib-networking
apt-get -f install



and it should be running. 😉

2. Register yourself an account on mega.nz, if you haven’t done that yet.
In the home folder of the user doing the backups, create a credentials file called .megarc


.megarc
[Login]
Username = mega_username
Password = mega_password

Don't forget to change the permissions on that file.


chmod 640 /root/.megarc

3. Check your configuration with the "megals" command. If everything is OK and you have a clean account, then you should get something like:


user@debian:~$ megals
/Contacts
/Inbox
/Root
/Trash

4. The backup script.

The script was created by Matteo Mattei and all the credit goes to him. I only edited it a bit for my needs: backing up the most recent logs and changing the way the files are uploaded to the server.


#!/bin/bash

SERVER="myservername"
DAYS_TO_BACKUP=7
WORKING_DIR="your_backup_tmp_dir"

BACKUP_MYSQL="true"
MYSQL_USER="username"
MYSQL_PASSWORD="password"

DOMAINS_FOLDER="/var/www/html"
LOG_FOLDER="/var/log"

################################################################################
# Based on: http://www.matteomattei.com/backup-your-server-on-mega-co-nz-using-megatools/
################################################################################
# Create local working directory and collect all data
rm -rf ${WORKING_DIR}
mkdir ${WORKING_DIR}
cd ${WORKING_DIR}

# Backup /etc folder
cd /
tar cJf ${WORKING_DIR}/etc.tar.xz etc
cd - > /dev/null

# Backup MySQL
if [ "${BACKUP_MYSQL}" = "true" ]
then
mkdir ${WORKING_DIR}/mysql
for db in $(mysql -u${MYSQL_USER} -p${MYSQL_PASSWORD} -e 'show databases;' | grep -Ev "^(Database|mysql|information_schema|performance_schema)$")
do
#echo "processing ${db}"
mysqldump --opt -u${MYSQL_USER} -p${MYSQL_PASSWORD} "${db}" | gzip > ${WORKING_DIR}/mysql/${db}_$(date +%F_%T).sql.gz
done
#echo "all db now"
mysqldump --opt -u${MYSQL_USER} -p${MYSQL_PASSWORD} --events --ignore-table=mysql.event --all-databases | gzip > ${WORKING_DIR}/mysql/all_databases_$(date +%F_%T).sql.gz
fi

# Backup domains
mkdir ${WORKING_DIR}/domains
for folder in $(find ${DOMAINS_FOLDER} -mindepth 1 -maxdepth 1 -type d)
do
cd $(dirname ${folder})
tar cJf ${WORKING_DIR}/domains/$(basename ${folder}).tar.xz $(basename ${folder})
cd - > /dev/null
done

# Backup latest logs. I know it's not the most 'elegant way'.
mkdir ${WORKING_DIR}/logs

for fol in $(find ${LOG_FOLDER} -mindepth 1 -maxdepth 1 \( -type d -o -name "*.log" \))
do
cd $(dirname ${fol})
tar cJf ${WORKING_DIR}/logs/$(basename ${fol}).tar.xz $(basename ${fol})
cd - > /dev/null
done

##################################
# Workaround to prevent dbus error messages
export $(dbus-launch)

# Create base backup folder
[ -z "$(megals --reload /Root/backup_${SERVER})" ] && megamkdir /Root/backup_${SERVER}

# Remove old remote backups
while [ $(megals --reload /Root/backup_${SERVER} | grep -E "/Root/backup_${SERVER}/[0-9]{4}-[0-9]{2}-[0-9]{2}$" | wc -l) -gt ${DAYS_TO_BACKUP} ]
do
TO_REMOVE=$(megals --reload /Root/backup_${SERVER} | grep -E "/Root/backup_${SERVER}/[0-9]{4}-[0-9]{2}-[0-9]{2}$" | sort | head -n 1)
megarm ${TO_REMOVE}
done

# Create remote folder
curday=$(date +%F)
megamkdir /Root/backup_${SERVER}/${curday} 2> /dev/null

# Backup now!!!
#megasync --reload --no-progress -l ${WORKING_DIR} -r /Root/backup_${SERVER}/${curday} > /dev/null
megacopy --reload --no-progress --local ${WORKING_DIR} -r /Root/backup_${SERVER}/${curday} > /dev/null

# Kill DBUS session daemon (workaround)
kill ${DBUS_SESSION_BUS_PID}
rm -f ${DBUS_SESSION_BUS_ADDRESS}

# Clean local environment
rm -rf ${WORKING_DIR}
exit 0

5. Install dependencies:

For "dbus-launch" you'll need the "dbus-x11" package from the repo.


apt install dbus-x11

6. Make the script executable (750 already gives the owner execute permission, so a separate chmod +x is not needed)

chmod 750 backupscript.sh

At this point you can start your script with ./backupscript.sh and check that it's working. If it's fine, then you can edit your crontab and add the right entry.


nano /etc/crontab
04 04 * * * root /root/backupscript.sh

Cheers,
Wizard

Zabbix + MariaDB + Nginx

Hello there my Friend,

This blog is a personal documentation of my work. So if you like it or you have any suggestions or questions, leave them in the comment section.

The idea: monitoring my server plus my Raspberry Pi machine at home (temperature plus availability, because of power shortages).
The hardware: at first I used a cheap VPS (1 core, 2 GB RAM, 15 GB HDD, 100 Mbps) which was totally fine. Now I'm documenting this and redoing it on a low end box (2 cores, 2 GB RAM, 500 GB HDD, 100 Mbps), just because of the migration.
OS of choice: Debian GNU/Linux 8 (jessie), just because I like it.
HTTP server of choice: Nginx, because I wanted to learn it.
DB of choice: MariaDB, because I wanted to learn it and I like the 'idea' of it.

 

1) Get Zabbix packages

It's explained well enough here: just do the "installing repository configuration package" step and continue with:

apt install zabbix-frontend-php nginx-full php5-fpm zabbix-server-mysql php5-mysql php5 mariadb-client-10.0 mariadb-client-core-10.0 mariadb-server mariadb-server-10.0 mariadb-server-core-10.0 

2) Prepare the database

as root:

 mysql_secure_installation 

and follow the questions that appear.

After that we have to create a database and user that will operate on that DB.

login to the mysql shell as root:

 mysql -uroot -p 

then we shall create the database; I'll call mine zabbix (it makes it easier to use the scripts afterwards)


CREATE DATABASE zabbix character set utf8 collate utf8_bin;

Then we create the user that will operate on the database. Mine is called zabbix and will be able to log in to the database from the host 'localhost'. The user is identified by the password set in the 'password' value.


CREATE USER 'zabbix'@'localhost' IDENTIFIED BY 'password';

and now we’ll allow the user ‘zabbix’ to have all possible privileges on the database ‘zabbix’


GRANT ALL PRIVILEGES ON zabbix.* TO 'zabbix'@'localhost' IDENTIFIED BY 'password'; 

now we can ‘quit;’ the database shell and create the database structure using the script provided by the Zabbix crew!

As root we do such a command:


zcat create.sql.gz | mysql -uzabbix -p<password> zabbix

This will decompress the SQL from the gz archive and pipe it into mysql. The mysql command is quite simple:

-u is for user

-p is for password (you can leave it blank and the prompt will ask you for it, or fill it in if you're sure no one is peeking 😉 )

zabbix is the database name that will be populated
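As a hedged illustration of that pipe (with a throwaway file instead of the Zabbix schema, and cat playing the role of the mysql client):

```shell
# Build a tiny gzip'd "schema" and stream it the same way the
# create.sql.gz import works: decompress, then pipe downstream.
echo "CREATE TABLE demo (id INT);" | gzip > /tmp/create_demo.sql.gz
zcat /tmp/create_demo.sql.gz | cat
```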

3) The Http server

My conf.d for zabbix:


server {
listen 80;
root /var/www;
index index.html index.php;
server_name <name_of_your_server>;

access_log /var/log/nginx/access.log;
gzip on;
gzip_min_length 1000;
gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
gzip_comp_level 4;
gzip_proxied any;

tcp_nopush on;
tcp_nodelay on;

#keepalive_timeout 0;
keepalive_timeout 10;
fastcgi_read_timeout 10m;
proxy_read_timeout 10m;
client_header_timeout 10m;
client_body_timeout 10m;
send_timeout 10m;
large_client_header_buffers 16 32k;

location / {
root /var/www/html;
}

location /zabbix {
# try_files $uri $uri/ /index.html = 404;

index index.php index.html index.htm;
#try_files $uri $uri/ index.php;
expires 30d;
}

#error_page 500 502 503 504 /50x.html;
#location = /50x.html {
# root /usr/share/nginx/html;
#}

location ~* \.php$ {
expires off;
if (!-f $request_filename) { return 404; }
fastcgi_split_path_info ^(.+\.php)(/.+)$;
#fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_index index.php;
fastcgi_read_timeout 300;
include fastcgi_params;
}
}

4) Edit the php.ini file

In Debian that file is located in: /etc/php5/fpm/php.ini

Set these values:

post_max_size = 16M
max_execution_time = 300
max_input_time = 300
date.timezone = Continent/City
always_populate_raw_post_data = -1 

and restart php5-fpm

 systemctl restart php5-fpm 

5) I’ve copied the zabbix directory to /var/www/zabbix:

cp -R /usr/share/zabbix/ /var/www/zabbix 

6) After that, run the web frontend installer, providing the data (database name, database username, database password, database host)

7) The default login credentials are: Admin / zabbix

 

Cheers,
Wizard

SunwellCore Server

SunwellCore is an early fork of TrinityCore, a World of Warcraft emulator for one of the expansions, Wrath of the Lich King. It's mainly known for being used on an international server, Sunwell, which was closed somewhere at the end of 2015 (or maybe early 2016, I can't remember). But enough of the whining about the server, let's get down to business.

For this server you’ll need (the same as for TrinityCore):
– the code (I just git cloned https://github.com/Kittnz/Sunwell.git)

– a database – Sunwell Core Crew have done it on MySQL, I’m doing it on MariaDB;

– a machine that will handle everything;

– A will to make it work.

  1. Meet the requirements for TrinityCore
  2. Get the code
    git clone https://github.com/Kittnz/Sunwell.git 
  3. For SunwellCore you should also get libACE, libtool, autoconf and the packages that are in the "dep" folder; in my case I had to
    apt install libace-dev autoconf libtool libemalloc-dev libgsoap-dev libutfcpp-dev libg3d-dev
  4. To compile the source with map tools you need libmpq. It's provided in Sunwell/dep/libmpq. I didn't get this part working, so I compiled my version without the map tools.
    You need to run:

    sh ./autoconf.sh
    ./configure
    make -j $(nproc)
    and as root, to install for the whole system: make install
  5. In the cloned repo create a build folder and change dir into it:
     mkdir build && cd build 
  6. After that we have to prepare our files for make and make install. To do that I’ve used:
    cmake ../ -DCMAKE_INSTALL_PREFIX=/home/<username>/sunwell_server -DCONF_DIR=/home/<username>/sunwell

    (I dropped the -DTOOLS=1 flag here, since the map tools would not build for me; see step 4.)
  7. I’ve had some problems with the cmake so I made a small change in the file:
    /home/<username>/Sunwell/src/server/shared/Packets/ByteBuffer.h

    I’ve edited the line

     return uint32(mktime(&lt) + _timezone);

    to

    return uint32(mktime(&lt)); 
  8.  If we’ve done everything correctly we should be able to compile and install our files. If you have a multicore machine you can use
    make -j $(nproc) 

    to use all the cores. Otherwise you can use

     make 

    or if you don’t want all the cores to be involved, you can use

     make -j <numberofcores> 

    and set the desired number of cores. Of course the same thing works for

     make install 

    or

     make -j $(nproc) 
  9. If you have problems with swap and RAM during compilation, keep on reading; otherwise continue to the next step. During the install of my machine I made my swap partition too small. Apparently that caused problems during compilation (all RAM and swap were full), so I created a swap file.

All these commands should be run as root or with sudo.

9.1 Create a file. I’ve created a 1 gigabyte file:

 dd if=/dev/zero of=/tmp/swap bs=1M count=1000


9.2 Format the swap file:

 mkswap /tmp/swap 

9.3  Change the chmod to 0600

 chmod 0600 /tmp/swap 

9.4 Add the file to /etc/fstab

/tmp/swap swap swap defaults 0 0 

9.5 Run

 swapon -a 

to activate the swap


9.6 Redo the compilation & installation process.
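All of the swap-file steps above can be collected into one small script (a sketch: the /tmp/swap path is the post's example value, and the commented lines need root):

```shell
#!/bin/sh
# Steps 9.1-9.4 in one place. The post uses count=1000 for a 1 GB
# file; count=100 (100 MB) keeps this sketch quick.
SWAPFILE=/tmp/swap

dd if=/dev/zero of="${SWAPFILE}" bs=1M count=100   # 9.1: create the file
mkswap "${SWAPFILE}"                               # 9.2: write the swap signature
chmod 0600 "${SWAPFILE}"                           # 9.3: owner-only access

# 9.4/9.5, as root: make it permanent, then enable it.
#   echo "${SWAPFILE} swap swap defaults 0 0" >> /etc/fstab
#   swapon -a
```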

10. Database

The fabulous creators of SunwellCore have actually provided us with the database. It's full of their fixes (or hacks and temp solutions, as the creators of TC like to call them on their IRC channel).

First of all you need to create a user and some databases; I've changed the script provided by the TC crew a bit:


GRANT USAGE ON * . * TO 'sunwell'@'localhost' IDENTIFIED BY 'password' WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0 MAX_UPDATES_PER_HOUR 0 ;

CREATE DATABASE `sc_world` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;

CREATE DATABASE `sc_characters` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;

CREATE DATABASE `sc_auth` DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;

GRANT ALL PRIVILEGES ON `sc_world` . * TO 'sunwell'@'localhost' WITH GRANT OPTION;

GRANT ALL PRIVILEGES ON `sc_characters` . * TO 'sunwell'@'localhost' WITH GRANT OPTION;

GRANT ALL PRIVILEGES ON `sc_auth` . * TO 'sunwell'@'localhost' WITH GRANT OPTION;

After that you need to fill the databases with data. The Sunwell crew provided all the changes, and all you need to do is:

 mysql -usunwell -p databasename < file_to_import.sql 

or if you have more files you can also use:

 cat *.sql | mysql -usunwell -p databasename 

As you can guess, you need to import the files into the right database. Most of them go into the *world database.

11. Maps

Because I wasn't able to compile the core with the tools, I had to use the provided maps. All you need to do is unpack them into the right folder (pointed to in worldserver.conf). Here are the files: Dbc, Maps, MMaps, VMaps.

12. Config

Once again, the Sunwell crew thought about us and prepared configs. They're located in the extras folder of the cloned git repo. After compilation, copy them into the /etc folder. You need to edit the part about connecting to the database (in both worldserver and authserver), RealmName (in worldserver), DataDir (well, you can leave it, but I create a separate one) and your LogsDir (also in worldserver). For examples, check the TrinityCore documentation.

13. Hey ho, let’s go!

1. Get screen
2. Go to your compiled folder
3. Start, in separate screen sessions:

 bin/authserver

and

bin/worldserver 

If you have done everything right, your own server should start. If you want to make it public / LAN-accessible, then follow the Realmlist Table paragraph.

This little tutorial is not perfect. It might contain mistakes. But I hope it helped you.

If you have any questions or suggestions, post them in the comment section. I'm not a developer. I'm not an expert. But I'm someone who spent some time on this server, and I am very grateful to Xinef, Pussywizard, Jajcer and the rest of the crew for allowing me to meet all the awesome people and spend great nights and evenings. I hope you guys are doing well.

Cheers,
Wizard