The point of no return

Redmine Installation on CentOS7 with Postgres10

6th December 2018 by Ali Erdinç Köroğlu

It’s been a long time since I last posted anything about free software, so let me explain how to install Redmine on CentOS 7.
By the way, Nginx will connect to Puma via a Unix socket, and Ruby will also connect to PostgreSQL via a Unix socket to get rid of the TCP overhead.

From Wikipedia:

Redmine is a free and open source, web-based project management and issue tracking tool. It allows users to manage multiple projects and associated subprojects. It features per project wikis and forums, time tracking, and flexible, role-based access control. It includes a calendar and Gantt charts to aid visual representation of projects and their deadlines. Redmine integrates with various version control systems and includes a repository browser and diff viewer.

The design of Redmine is significantly influenced by Trac, a software package with some similar features.

Redmine is written using the Ruby on Rails framework. It is cross-platform and cross-database and supports 34 languages.

So let’s begin, but first disable SELinux :)

/etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
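
The config file takes effect after a reboot; to stop SELinux enforcing right away in the current session you can also run setenforce (this switches to permissive mode until reboot):

[root@redmine ~]# setenforce 0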

Update your CentOS

[root@redmine ~]# yum -y update

Install EPEL repository

[root@redmine ~]# yum -y install epel-release

We will use the official PostgreSQL repository

[root@redmine ~]# yum -y install https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm

Exclude the official CentOS PostgreSQL packages..

/etc/yum.repos.d/CentOS-Base.repo
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
exclude=postgre* 
#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
exclude=postgre*
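
To verify that the PostgreSQL packages now resolve from the PGDG repository rather than base, something like this should show pgdg10 in the repo column:

[root@redmine ~]# yum list postgresql10-server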

Install required packages

yum -y install zlib-devel curl-devel openssl-devel ftp wget ImageMagick-devel gcc-c++ patch readline-devel libyaml-devel libffi-devel bzip2 autoconf automake libtool bison subversion git glibc-headers glibc-devel nginx postgresql10-server postgresql10 postgresql10-devel sqlite-devel

Let’s start PostgreSQL and create the redmine user and database

[root@redmine ~]# /usr/pgsql-10/bin/postgresql-10-setup initdb
Initializing database ... OK
 
[root@redmine ~]# systemctl enable postgresql-10
Created symlink from /etc/systemd/system/multi-user.target.wants/postgresql-10.service to /usr/lib/systemd/system/postgresql-10.service.
[root@redmine ~]# systemctl start postgresql-10
[root@redmine ~]# su - postgres -c "psql"
psql (10.6)
Type "help" for help.
 
postgres=# CREATE ROLE redmine LOGIN ENCRYPTED PASSWORD 'redredmine' NOINHERIT VALID UNTIL 'infinity';
CREATE ROLE
postgres=# CREATE DATABASE redmine WITH ENCODING='UTF8' OWNER=redmine;
CREATE DATABASE
postgres=# \q
[root@redmine ~]#
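
To verify the role and database (an optional quick check; psql -l lists all databases with their owners):

[root@redmine ~]# su - postgres -c "psql -l" | grep redmine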

And the redmine user..

[root@redmine ~]# adduser -d /opt/red -s /bin/bash -c 'Redmine user' red
[root@redmine ~]# install -d -m 755 -o red -g red /opt/red

Let the rock-off begin..

[root@redmine ~]# su -l red
[red@redmine ~]$ curl -sSL https://rvm.io/mpapis.asc | gpg --import -
gpg: directory '/opt/red/.gnupg' created
gpg: new configuration file '/opt/red/.gnupg/gpg.conf' created
gpg: WARNING: options in '/opt/red/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring '/opt/red/.gnupg/secring.gpg' created
gpg: keyring '/opt/red/.gnupg/pubring.gpg' created
gpg: /opt/red/.gnupg/trustdb.gpg: trustdb created
gpg: key D39DC0E3: public key "Michal Papis (RVM signing) <mpapis@gmail.com>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg: no ultimately trusted keys found
[red@redmine ~]$ curl -sSL https://get.rvm.io | bash -s stable --ruby
...
..
.
  * To start using RVM you need to run `source /opt/red/.rvm/scripts/rvm`
    in all your open shell windows, in rare cases you need to reopen all shell windows.
[red@redmine ~]$ source ~/.rvm/scripts/rvm
[red@redmine ~]$ rvm --default use ruby
Using /opt/red/.rvm/gems/ruby-2.5.1
[red@redmine ~]$ cd && svn co http://svn.redmine.org/redmine/branches/3.4-stable redmine
...
..
.
 U   redmine
Checked out revision 17690.
[red@redmine ~]$ mkdir -p ./redmine/tmp/pids ./redmine/public/plugin_assets
[red@redmine ~]$ cp ./redmine/config/configuration.yml.example ./redmine/config/configuration.yml
[red@redmine ~]$ cp ./redmine/config/database.yml.example ./redmine/config/database.yml

Database connection for Redmine (PS: comment out all the other database adapter configuration lines)

/opt/red/redmine/config/database.yml
# PostgreSQL configuration
production:
  adapter: postgresql
  database: redmine
  host: /var/run/postgresql/.s.PGSQL.5432
  username: redmine
  password: "redredmine"
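
Note: libpq treats a host value beginning with a slash as the directory that contains the socket, not the socket file itself, so if the connection fails with the full socket path, try the directory form:

  host: /var/run/postgresql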

Let’s continue

[red@redmine config]$ cd /opt/red/redmine
[red@redmine redmine]$ echo "gem 'puma'" >> Gemfile.local
[red@redmine redmine]$ echo "gem: --no-ri --no-rdoc" >> ~/.gemrc
[red@redmine redmine]$ gem install bundler
Fetching: bundler-1.17.1.gem (100%)
Successfully installed bundler-1.17.1
1 gem installed
[red@redmine redmine]$ bundle install --without development test mysql sqlite

You’ll get an error like this..

Installing pg 0.18.4 with native extensions
Gem::Ext::BuildError: ERROR: Failed to build gem native extension.
 
    current directory: /opt/red/.rvm/gems/ruby-2.5.1/gems/pg-0.18.4/ext
/opt/red/.rvm/rubies/ruby-2.5.1/bin/ruby -r ./siteconf20181206-7476-1d3eg27.rb extconf.rb
checking for pg_config... no
No pg_config... trying anyway. If building fails, please try again with
 --with-pg-config=/path/to/pg_config
checking for libpq-fe.h... no
Can't find the 'libpq-fe.h header

Let’s fix that..

[red@redmine redmine]$ gem install pg -v '0.18.4' --source 'https://rubygems.org/' -- --with-pg-config=/usr/pgsql-10/bin/pg_config
Building native extensions with: '--with-pg-config=/usr/pgsql-10/bin/pg_config'
This could take a while...
Successfully installed pg-0.18.4
1 gem installed
[red@redmine redmine]$ bundle install --without development test mysql sqlite
...
..
.
Bundle complete! 32 Gemfile dependencies, 56 gems now installed.
Gems in the groups development, test, mysql and sqlite were not installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.
[red@redmine redmine]$ rake generate_secret_token

Now let’s change PostgreSQL’s Unix socket authentication method

/var/lib/pgsql/10/data/pg_hba.conf
--- pg_hba.conf.org	2018-12-06 12:02:22.214000000 +0300
+++ pg_hba.conf	2018-12-06 13:24:50.045000000 +0300
@@ -77,7 +77,7 @@
 # TYPE  DATABASE        USER            ADDRESS                 METHOD
 
 # "local" is for Unix domain socket connections only
-local   all             all                                     peer
+local   all             all                                     md5
 # IPv4 local connections:
 host    all             all             127.0.0.1/32            ident
 # IPv6 local connections:

Restart the PostgreSQL server and migrate Redmine

[root@redmine ~]# systemctl restart postgresql-10
[root@redmine ~]# su -l red
[red@redmine ~]$ cd /opt/red/redmine
[red@redmine redmine]$ RAILS_ENV=production rake db:migrate
...
..
.
** Invoke db:_dump (first_time)
** Execute db:_dump
** Invoke db:schema:dump (first_time)
** Invoke environment 
** Invoke db:load_config 
** Execute db:schema:dump
[red@redmine redmine]$
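
Optionally, load Redmine’s default data set (roles, trackers, issue statuses, workflows); this is the standard Redmine rake task for it:

[red@redmine redmine]$ RAILS_ENV=production REDMINE_LANG=en bundle exec rake redmine:load_default_data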

Puma configuration

/opt/red/redmine/config/puma.rb
#!/usr/bin/env puma
application_path = '/opt/red/redmine'
directory application_path
environment 'production'
daemonize false
pidfile "#{application_path}/tmp/pids/puma.pid"
state_path "#{application_path}/tmp/pids/puma.state"
stdout_redirect "#{application_path}/log/puma.stdout.log", "#{application_path}/log/puma.stderr.log"
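
Before wiring Puma into systemd, you can start it by hand once to make sure the application boots (same flags the unit file below will use); stop it with Ctrl-C afterwards:

[red@redmine redmine]$ bundle exec puma --config /opt/red/redmine/config/puma.rb -b unix:///opt/red/redmine.sock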

Redmine systemd service

/lib/systemd/system/redmine.service
[Unit]
Description=Redmine server
After=syslog.target
After=network.target
 
[Service]
Type=simple
WorkingDirectory=/opt/red
User=red
Group=red
ExecStart=/bin/bash -c 'source /opt/red/.rvm/scripts/rvm && rvm --default use ruby && cd /opt/red/redmine/ && bundle exec puma --config /opt/red/redmine/config/puma.rb -b unix:///opt/red/redmine.sock'
TimeoutSec=300
 
[Install]
WantedBy=multi-user.target
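
Reload systemd and start the service (add systemctl enable redmine as well if you want it started at boot):

[root@redmine red]# systemctl daemon-reload
[root@redmine red]# systemctl start redmine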
[root@redmine red]# systemctl status redmine
● redmine.service - Redmine server
   Loaded: loaded (/usr/lib/systemd/system/redmine.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2018-12-06 14:08:11 +03; 1min 5s ago
 Main PID: 21783 (bash)
   CGroup: /system.slice/redmine.service
           ├─21783 /bin/bash -c source /opt/red/.rvm/scripts/rvm && rvm --default use ruby && cd /opt/red/redmine/ && bundle exec puma --config /opt/red/redmine/config/puma.rb -b unix:///opt/red/redmine.sock
           └─22214 puma 3.12.0 (unix:///opt/red/redmine.sock) [redmine]
 
Dec 06 14:08:11 redmine systemd[1]: Started Redmine server.
Dec 06 14:08:12 redmine bash[21783]: Using /opt/red/.rvm/gems/ruby-2.5.1
Dec 06 14:08:14 redmine bash[21783]: Puma starting in single mode...
Dec 06 14:08:14 redmine bash[21783]: * Version 3.12.0 (ruby 2.5.1-p57), codename: Llamas in Pajamas
Dec 06 14:08:14 redmine bash[21783]: * Min threads: 0, max threads: 16
Dec 06 14:08:14 redmine bash[21783]: * Environment: production
Dec 06 14:08:18 redmine bash[21783]: * Listening on unix:///opt/red/redmine.sock
Dec 06 14:08:18 redmine bash[21783]: Use Ctrl-C to stop

Redmine Nginx configuration

/etc/nginx/conf.d/default.conf
upstream redmine {
        server unix:///opt/red/redmine.sock;
}
 
server {
        listen 80;
        listen [::]:80;
        server_name red.redmine.com.tr;
 
        #access_log /var/log/nginx/redmine-access.log;
        access_log off;
        error_log /var/log/nginx/redmine.error error;
 
        location / {
                proxy_http_version 1.1;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_pass http://redmine/;
 
                # 502..
                proxy_buffers 8 32k;
                proxy_buffer_size 128k;
                proxy_busy_buffers_size 128k;
                proxy_read_timeout 90;
        }
}
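
Validate the configuration and (re)start Nginx; assuming red.redmine.com.tr resolves to this box, a quick curl should return the Redmine login page:

[root@redmine ~]# nginx -t
[root@redmine ~]# systemctl enable nginx && systemctl restart nginx
[root@redmine ~]# curl -sI http://red.redmine.com.tr/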

Redmine Login Page

Bon Appétit..

Centralized Supervisor Interface: Cesi

28th December 2014 by Ali Erdinç Köroğlu

I already mentioned Supervisor here. But how about managing all your supervisors from one web interface, with authorization and advanced process-management filtering? As you know, Supervisord provides a basic web interface to monitor and restart processes on the host where it is installed, but the XML-RPC interface and API allow you to control a remote supervisord and the programs it runs. So it’s possible.. One UI to rule them all..

Here is Cesi (Centralized Supervisor Interface): a web interface to manage supervisors from a single UI, developed by Gülşah Köse and Kaan Özdinçer.

Application Dependencies

  1. Python : a programming language :)
  2. Flask : a microframework for Python
  3. SQLite : a self-contained, serverless, zero-configuration, transactional SQL database engine

I’ll cover everything step by step for a CentOS 7 minimal installation..
Since Flask is not in the CentOS 7 or EPEL repositories, I’ll install it via pip. So what is pip? Pip is a tool for installing and managing Python packages from the Python Package Index repository. We also need Git (for cesi) and Nginx (you can get Nginx from EPEL or from the official Nginx repository).

Required CentOS packages

[root@supervisord ~]# yum install python-pip sqlite git nginx

Flask installation

[root@supervisord ~]# pip install flask

Getting a clone from GitHub

[root@supervisord ~]# cd /opt/
[root@supervisord opt]# git clone https://github.com/Gamegos/cesi

Initial database operations

[root@supervisord opt]# cd cesi
[root@supervisord cesi]# sqlite3 /opt/cesi/cesi/userinfo.db < userinfo.sql
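
You can confirm the schema was created with sqlite3’s .tables meta-command:

[root@supervisord cesi]# sqlite3 /opt/cesi/cesi/userinfo.db ".tables"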

Conf file should be in /etc

[root@supervisord cesi]# mv /opt/cesi/cesi/cesi.conf /etc

Let us define some remote supervisors

/etc/cesi.conf
[node:srv4]
username = superv
password = superv1q2w3e
host = 192.168.9.4
port = 9001
 
[node:srv10]
username = superv
password = superv
host = 192.168.9.10
port = 9001
 
[environment:glassfish]
members = srv4, srv10
 
[cesi]
database = /opt/cesi/cesi/userinfo.db
activity_log = /opt/cesi/cesi/cesi_activity.log

Since cesi will run as the nginx user, /opt/cesi should be accessible to it

[root@supervisord opt]# chown -R nginx:nginx /opt/cesi

We’ll run cesi via supervisord :)

/etc/supervisor.d/cesi.ini
[program:cesi]
command=/bin/python /opt/cesi/cesi/web.py
process_name=%(program_name)s
user=nginx
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/cesi-stdout.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
stderr_logfile=/var/log/cesi-stderr.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
stderr_events_enabled=false
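
Tell supervisord about the new program definition; reread and update are the standard supervisorctl commands for this:

[root@supervisord cesi]# supervisorctl reread
[root@supervisord cesi]# supervisorctl update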

In debug mode Flask initialises the application twice (the reloader spawns a child process), so when you try to stop it via supervisord it will not stop completely.

root     17614  0.0  0.3 227168 12632 ?        Ss   21:05   0:00 /usr/bin/python /usr/bin/supervisord
nginx    17723  6.5  0.4 235768 16684 ?        S    21:36   0:00 /bin/python /opt/cesi/cesi/web.py
nginx    17728  6.5  0.4 309592 16936 ?        Sl   21:36   0:00 /bin/python /opt/cesi/cesi/web.py
[root@supervisord cesi]# supervisorctl stop cesi
cesi: stopped
[root@supervisord cesi]# ps aux
.
.
root     17614  0.0  0.3 227320 12664 ?        Ss   21:05   0:00 /usr/bin/python /usr/bin/supervisord
nginx    17728  0.5  0.4 309592 16936 ?        Sl   21:36   0:01 /bin/python /opt/cesi/cesi/web.py
root     17738  0.0  0.0 123356  1380 pts/0    R+   21:40   0:00 ps aux
[root@supervisord cesi]# supervisorctl stop cesi
cesi: ERROR (not running)

As you can see it has not stopped, so the simplest fix is to disable use_reloader in web.py

/opt/cesi/cesi/web.py
--- web.py.org	2014-12-27 21:23:37.625143414 +0200
+++ web.py	2014-12-27 21:23:48.222118215 +0200
@@ -531,7 +531,7 @@
 
 try:
     if __name__ == '__main__':
-        app.run(debug=True, use_reloader=True)
+        app.run(debug=True, use_reloader=False)
 except xmlrpclib.Fault as err:
     print "A fault occurred"
     print "Fault code: %d" % err.faultCode

Working like a charm..

[root@supervisord cesi]# supervisorctl start cesi
cesi: started
[root@supervisord cesi]# ps aux| grep cesi
nginx    17704  0.2  0.4 237048 18232 ?        S    21:30   0:00 /bin/python /opt/cesi/cesi/web.py

The system side is OK, so let’s configure Nginx..

/etc/nginx/conf.d/default.conf
server {
        server_name localhost 192.168.9.240 212.213.214.215;
 
        location / {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://127.0.0.1:5000;
        }
 
        location /static {
                root /opt/cesi/cesi/;
        }
}

Let’s check TCP sockets

[root@supervisord cesi]# netstat -anp| grep LISTEN       
tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN      17761/python        
tcp        0      0 192.168.9.240:9001      0.0.0.0:*               LISTEN      17614/python        
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      17842/nginx: master 
tcp        0      0 192.168.9.240:22        0.0.0.0:*               LISTEN      852/sshd

It’s time to log in..


Login screen


Dashboard


All supervisors in one screen


A node


A process log

Kudos goes to Gülşah and Kaan and that’s all folks..

A New Supervisord Plugin for Nagios

23rd December 2014 by Ali Erdinç Köroğlu

In my last entry I mentioned Supervisord and how beautiful it is. While looking for a way to integrate Supervisord with Nagios I couldn’t find what I was looking for, so I decided to write a new Supervisord plugin for Nagios. What I wanted was a plugin that runs on the Nagios server (so there is no need to install anything on the remote servers) and gets information about processes from remote supervisords. So here it is.. check_supervisord

Usage: check_supervisord.py -H 192.168.1.1 -P 9001 -u superv -p superv -a glassfish
 
Options:
  -h, --help     
  -H HOSTNAME, --hostname=HOSTNAME (Supervisord hostname)
  -P PORT, --port=PORT (Supervisord port)
  -u USERNAME, --username=USERNAME (Supervisord username)
  -p PASSWORD, --password=PASSWORD (Supervisord password)
  -a PROCNAME, --process-name=PROCNAME (Process name defined in /etc/supervisor.d/*.ini or supervisorctl status)

Console Output:

[root@nagios ~]# /usr/lib64/nagios/plugins/check_supervisord.py -H 192.168.1.1 -P 9001 -u superv -p superv -a glassfish
glassfish OK: 12 day(s) 17 hour(s) 29 minute(s)

You should add the new command definition for check_supervisord into Nagios’s commands.cfg

/etc/nagios/objects/commands.cfg
define command{
    command_name    check_supervisord
    command_line    $USER1$/check_supervisord.py -H $HOSTADDRESS$ -P $ARG1$ -u $ARG2$ -p $ARG3$ -a $ARG4$
}

And you can use it in a service definition like this :)

/etc/nagios/conf.d/services/glassfish.cfg
define service {
    use generic-service
    host_name   srv1
    service_description Glassfish
    check_command   check_supervisord!9001!superv!superv!glassfish
}


A screenshot from the Nagios web interface

Best Way to Daemonize Applications on Linux

22nd December 2014 by Ali Erdinç Köroğlu

I tried to explain how to daemonize applications before, but how about monitoring and even starting/stopping/restarting processes locally or remotely? Well, here is Supervisor. Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.

As you know, we need to write rc.d or systemd scripts for every single process instance. They are hard to write and maintain, and they cannot automatically restart a crashed process. So supervisord is the solution: it’s simple, efficient, centralized, extensible etc.. etc..

Supervisor has two components: supervisord and supervisorctl. Supervisord is the server side of Supervisor and is responsible for starting child programs, controlling, logging and handling events. It also provides a web interface to view and control process status, and an XML-RPC interface to control Supervisor and the programs it runs. Supervisorctl, on the other hand, provides a shell-like interface for connecting to supervisord. But first let us install it on our CentOS 7 server. Supervisor is available in the EPEL7 repository; if you don’t know how to add the EPEL repository, please read this.

Installation is easy

[root@Neverland ~]# yum install supervisor

Please don’t forget to enable supervisor for systemd

[root@Neverland ~]# systemctl enable supervisord

It begins with a config file :)

/etc/supervisor.conf
[unix_http_server]
file=/var/tmp/supervisor.sock; (the path to the socket file)
 
[inet_http_server]      ; inet (TCP) server disabled by default 
port=192.168.1.1:9001   ; (ip_address:port specifier, *:port for all iface)
username=superv         ; (default is no username (open server))
password=superv         ; (default is no password (open server))
 
[supervisord]
logfile=/var/log/supervisor/supervisord.log  ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB       ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10          ; (num of main logfile rotation backups;default 10)
loglevel=info               ; (log level;default info; others: debug,warn,trace)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false              ; (start in foreground if true;default false)
minfds=1024                 ; (min. avail startup file descriptors;default 1024)
minprocs=200                ; (min. avail process descriptors;default 200)
 
[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface
 
[supervisorctl]
serverurl=unix:///var/tmp/supervisor.sock ; use a unix:// URL  for a unix socket
 
[include]
files = supervisord.d/*.ini
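
You can sanity-check the configuration by running supervisord once in the foreground; -n keeps it out of daemon mode and -c points it at the config file:

[root@Neverland ~]# supervisord -n -c /etc/supervisor.conf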

This process config file includes details such as directory, command, process name, process owner, logging etc. If you want to know more, please read this.

/etc/supervisor.d/fixtures.ini
[program:fixtures]
directory=/opt/pronet/fixtures
command=/usr/java/jdk1.7.0_71/bin/java -Dfile.encoding=UTF-8 -Dproject.properties=/opt/pronet/fixtures/fixtures.properties -Dlog4j..
process_name=%(program_name)s
user=pronet
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/pronet/fixtures-stdout.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
stderr_logfile=/var/log/pronet/fixtures-stderr.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
stderr_events_enabled=false

So when you start or restart supervisord, the fixtures process will start or restart too.. (depending on your process config)

[root@Neverland ~]# systemctl start supervisord
[root@Neverland ~]# supervisorctl status fixtures
fixtures                         RUNNING    pid 5786, uptime 0:00:03

And you can monitor or control remotely..

[root@nagios ~]# supervisorctl -s http://192.168.1.1:9001 -u superv -p superv status fixtures
fixtures                         RUNNING    pid 5786, uptime 0:04:20
[root@nagios ~]# supervisorctl -s http://192.168.1.1:9001 -u superv -p superv restart fixtures
fixtures: stopped
fixtures: started

There are some centralized supervisord web interfaces but I’ll cover them later :)

How to sync time properly: ntpdate or ntpd?

7th November 2013 by Ali Erdinç Köroğlu

Previously I explained how to install a chrooted NTP server, but the question is how you’re going to sync your server’s time with an NTP server. There are two options: ntpdate and ntpd.

Ntpdate is for one-time synchronization only.
Ntpd (network time protocol daemon) automatically syncs the system time with a remote reference time server.

There are many examples around of adding cron jobs to run ntpdate hourly, daily, weekly etc. The main difference between ntpd and ntpdate: ntpd runs all the time and continuously adjusts the system time when the clock drifts, while ntpdate does not. Also keep in mind that ntpdate has been deprecated since September 2012.

So why do we need ntpdate at all?
In ancient times it was important to set the system time before starting ntpd, and this was usually done by ntpdate. Over time ntpd evolved, and it is no longer necessary to set the time before starting it.
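
If you still need a one-shot sync, modern ntpd can do that itself; -q sets the clock once and exits, and -g allows a large initial correction:

ntpd -g -q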

To sum up: if you’re running time-sensitive operations like application servers, database servers, email servers, clusters etc., ntpd is what you need.

Installation

Since the NTP package is in the base repository, there is no need to add an extra repository.

yum install ntp
chkconfig ntpd on

Configuration

/etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
 
server 192.168.100.254          # your NTP server
server 0.tr.pool.ntp.org        # region related ntp.org server
server ntp.ulakbim.gov.tr       # local authority

Since this will not be an NTP server for others, there is no need to listen on all interfaces.

/etc/sysconfig/ntpd
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -I eth0"

Starting..

[root@cache ~]# /etc/init.d/ntpd start
Starting ntpd:                                             [  OK  ]

NTP query result and network time synchronisation status

[root@cache ~]# ntpstat 
synchronised to NTP server (192.168.100.254) at stratum 4 
   time correct to within 108 ms
   polling server every 64 s
[root@cache ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*192.168.100.254 82.94.167.75     3 u    5   64  377    0.276  -21.198  25.027

And as you can see, everything is OK..

/var/log/messages
Nov  7 13:48:51 cache ntpd[44248]: ntpd 4.2.4p8@1.1612-o Fri Feb 22 11:23:27 UTC 2013 (1)
Nov  7 13:48:51 cache ntpd[44249]: precision = 0.079 usec
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #0 wildcard, 0.0.0.0#123 Disabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #1 wildcard, ::#123 Disabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #2 lo, ::1#123 Enabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #3 eth0, fe80::20c:29ff:febd:d65f#123 Enabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #4 eth1, fe80::20c:29ff:febd:d669#123 Disabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #5 lo, 127.0.0.1#123 Enabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #6 eth0, 192.168.100.1#123 Enabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #7 eth1, 192.168.101.1#123 Disabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on routing socket on fd #29 for interface updates
Nov  7 13:48:51 cache ntpd[44249]: kernel time sync status 2040

WordPress Flexform Theme Revslider space fix

23rd August 2013 by Ali Erdinç Köroğlu

If you’re seeing a 5 or 10px empty space at the bottom of your Revolution Slider after updating Flexform (v1.5) and Revslider (3.0.7) to the latest versions, you can apply this patch.

--- wordpress/wp-content/themes/flexform/page.php.old	2013-08-09 01:02:04.000000000 +0200
+++ wordpress/wp-content/themes/flexform/page.php	2013-08-23 17:48:53.000000000 +0200
@@ -106,7 +106,7 @@
 
 		<div class="page-content clearfix">
 			<?php the_content(); ?>			
-			<div class="link-pages"><?php wp_link_pages(); ?></div>
+			<!--<div class="link-pages"><?php wp_link_pages(); ?></div>-->
 		</div>
 
 		<?php } ?>
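
Assuming you saved the diff above as flexform-revslider.patch (a name I made up), apply it from the directory above wordpress/:

patch -p0 < flexform-revslider.patch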

Chrooted NTP Server on CentOS 6

27th July 2013 by Ali Erdinç Köroğlu

What is NTP? Network Time Protocol (NTP) is used to automatically sync system time with a remote reference time server. Why is time synchronization important? Because every aspect of managing, securing, planning, and debugging a network involves determining when events happen. Think about time-based AAA authentication and authorization, billing services, financial services, fault analysis.. Time management is a crucial component of a healthy and secure network.

Why chroot? Security precaution :)

Scenario

We’ll create an NTP server for 2 different LANs (192.168.100.0/24 & 192.168.101.0/24), synced with pool.ntp.org and Turkish Academic Network and Information Center time servers.

Installation

Since the NTP package is in the base repository, there is no need to add an extra repository.

yum install ntp
chkconfig ntpd on

Chroot Structure

There is no chrooted NTP package, so we must prepare the chroot environment ourselves.

mkdir /chroot
mkdir /chroot/ntp
mkdir /chroot/ntp/dev
mknod -m 666 /chroot/ntp/dev/null c 1 3
mknod -m 666 /chroot/ntp/dev/zero c 1 5
mknod -m 444 /chroot/ntp/dev/random c 1 8
mkdir /chroot/ntp/etc
mkdir /chroot/ntp/proc
mkdir /chroot/ntp/var
mkdir /chroot/ntp/var/lib
mkdir /chroot/ntp/var/lib/ntp
mv /var/lib/ntp/drift /chroot/ntp/var/lib/ntp/
chown -R ntp:ntp /chroot/ntp/var/lib/ntp
mkdir /chroot/ntp/var/log
mkdir /chroot/ntp/var/log/ntpstats
chown -R ntp:ntp /chroot/ntp/var/log/ntpstats
mv /etc/ntp.conf /chroot/ntp/etc
ln -s /chroot/ntp/etc/ntp.conf /etc/ntp.conf

Structure looks like this..

[root@firewall ~]# tree /chroot/ntp/
/chroot/ntp/
├── dev
│   ├── null
│   ├── random
│   └── zero
├── etc
│   └── ntp.conf
├── proc
└── var
    ├── lib
    │   └── ntp
    │       └── drift
    └── log
        └── ntpstats

Configuration

/chroot/ntp/etc/ntp.conf
server 0.tr.pool.ntp.org
server ntp.ulakbim.gov.tr
server 127.127.1.0
fudge 127.127.1.0 stratum 10
 
restrict 192.168.100.0 mask 255.255.255.0 nomodify notrap
restrict 192.168.101.0 mask 255.255.255.0 nomodify notrap
restrict 127.0.0.1
 
driftfile /var/lib/ntp/drift
logfile /var/log/ntp.log

/etc/sysconfig/ntpd
OPTIONS="-i /chroot/ntp -u ntp:ntp -p /var/run/ntpd.pid -g"

NTP requires the proc file system inside the chroot environment; you could mount it manually, but I modified the ntpd initscript instead.

diff -u /etc/init.d/ntpd.org /etc/init.d/ntpd
--- /etc/init.d/ntpd.org	2013-07-22 18:33:23.553385624 +0300
+++ /etc/init.d/ntpd	2013-07-24 11:22:47.594735735 +0300
@@ -30,6 +30,27 @@
 
 prog=ntpd
 lockfile=/var/lock/subsys/$prog
+chroot=/chroot/ntp
+
+mount_proc() {
+        echo -n $"Binding proc to chroot environment: "
+        ret=0
+        mount --bind /proc $chroot/proc
+        let ret+=$?;
+        [ $ret -eq 0 ] && success || failure
+        echo
+        return $ret
+}
+
+umount_proc (){
+        echo -n $"Unmounting proc from chroot environment: "
+        ret=0
+       umount $chroot/proc
+        let ret+=$?;
+        [ $ret -eq 0 ] && success || failure
+        echo
+        return $ret
+}
 
 start() {
        [ "$EUID" != "0" ] && exit 4
@@ -38,6 +59,9 @@
        [ -f /etc/sysconfig/ntpd ] || exit 6
        . /etc/sysconfig/ntpd
 
+       # Mounting proc into chroot
+       mount_proc
+
         # Start daemons.
         echo -n $"Starting $prog: "
         daemon $prog $OPTIONS
@@ -54,6 +78,10 @@
        RETVAL=$?
         echo
        [ $RETVAL -eq 0 ] && rm -f $lockfile
+
+       #Unmount proc from chroot
+       umount_proc
+
        return $RETVAL
 }

Let’s start the server..

[root@firewall ntp]# /etc/init.d/ntpd start
Binding proc to chroot environment:                        [  OK  ]
Starting ntpd:                                             [  OK  ]

Just to make sure everything is OK :)

[root@firewall ntpstats]# ps aux | grep ntpd
root     23824  0.0  0.0 103236   852 pts/0    S+   13:15   0:00 grep ntpd
ntp      25301  0.0  0.0  30164  1628 ?        Ss   Jul24   0:01 ntpd -i /chroot/ntp -u ntp:ntp -p /var/run/ntpd.pid -g
[root@firewall ntpstats]# cat /proc/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
devtmpfs /dev devtmpfs rw,relatime,size=1953976k,nr_inodes=488494,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/sda1 / ext4 rw,noatime,barrier=1,data=ordered 0 0
/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
proc /chroot/ntp/proc proc rw,relatime 0 0

NTP query result and network time synchronisation status

[root@firewall ntp]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*195.50.171.101  145.253.2.212    2 u  420 1024  377   69.880   -0.031   0.006
+samur.ulak.net. 131.188.3.221    2 u  352 1024  377   30.842   -2.137   3.257
 LOCAL(0)        .LOCL.          10 l   27   64  377    0.000    0.000   0.000
[root@firewall ntp]# ntpstat 
synchronised to NTP server (195.50.171.101) at stratum 3 
   time correct to within 84 ms
   polling server every 1024 s
