The point of no return

Centralized Supervisor Interface: Cesi

28th December 2014 by Ali Erdinç Köroğlu

I already mentioned Supervisor here. But how about managing all of your supervisors from one web interface, with authorization and advanced process-management filtering? As you know, Supervisord provides a basic web interface to monitor and restart processes on the host where it's installed, but its XML-RPC interface and API allow you to control a remote supervisor and the programs it runs. So it's possible.. One UI to rule them all..

Here is Cesi (Centralized Supervisor Interface): a web interface to manage supervisors from a single UI, developed by Gülşah Köse and Kaan Özdinçer.
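Under the hood this is all supervisord's XML-RPC API. A minimal sketch of how a tool like Cesi talks to a node (Python 3's xmlrpc.client, which was xmlrpclib in the Python 2 of this era; the address and credentials are just the sample values from the config in this post):

```python
from xmlrpc.client import ServerProxy


def connect(host, port, username, password):
    """Open an XML-RPC proxy to one remote supervisord node."""
    return ServerProxy("http://%s:%s@%s:%d/RPC2" % (username, password, host, port))


def list_processes(proxy):
    """Return (name, statename) pairs for every program the node supervises."""
    return [(p["name"], p["statename"])
            for p in proxy.supervisor.getAllProcessInfo()]


# Usage against the srv4 node defined in /etc/cesi.conf:
# proxy = connect("192.168.9.4", 9001, "superv", "superv1q2w3e")
# print(list_processes(proxy))
```

`getAllProcessInfo` is part of supervisord's stock RPC interface, so nothing extra needs to be installed on the nodes.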

Application Dependencies

  1. Python : a programming language :)
  2. Flask : a microframework for Python
  3. SQLite : a self-contained, serverless, zero-configuration, transactional SQL database engine

I'll cover everything step by step for a CentOS7 minimal installation..
Since Flask is not in the CentOS7 or EPEL repositories, I'll install it via pip. So what is pip? Pip is a tool for installing and managing Python packages from the Python Package Index. We also need Git (for cesi) and Nginx (you can get Nginx from EPEL or from the official Nginx repository).

Required CentOS packages

[root@supervisord ~]# yum install python-pip sqlite git nginx

Flask installation

[root@supervisord ~]# pip install flask

Cloning the repository from GitHub

[root@supervisord ~]# cd /opt/
[root@supervisord opt]# git clone https://github.com/Gamegos/cesi

Initial database operations

[root@supervisord cesi]# sqlite3 /opt/cesi/cesi/userinfo.db < userinfo.sql

The config file should live in /etc

[root@supervisord cesi]# mv /opt/cesi/cesi/cesi.conf /etc

Let us define some remote supervisors

/etc/cesi.conf
[node:srv4]
username = superv
password = superv1q2w3e
host = 192.168.9.4
port = 9001
 
[node:srv10]
username = superv
password = superv
host = 192.168.9.10
port = 9001
 
[environment:glassfish]
members = srv4, srv10
 
[cesi]
database = /opt/cesi/cesi/userinfo.db
activity_log = /opt/cesi/cesi/cesi_activity.log

Since cesi will run as the nginx user, /opt/cesi must be accessible to it

[root@supervisord opt]# chown -R nginx:nginx /opt/cesi

We’ll run cesi via supervisord :)

/etc/supervisord.d/cesi.ini
[program:cesi]
command=/bin/python /opt/cesi/cesi/web.py
process_name=%(program_name)s
user=nginx
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/cesi-stdout.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
stderr_logfile=/var/log/cesi-stderr.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
stderr_events_enabled=false

In debug mode Flask's reloader starts a second copy of the application, and when you try to stop it via supervisord the child process keeps running.

root     17614  0.0  0.3 227168 12632 ?        Ss   21:05   0:00 /usr/bin/python /usr/bin/supervisord
nginx    17723  6.5  0.4 235768 16684 ?        S    21:36   0:00 /bin/python /opt/cesi/cesi/web.py
nginx    17728  6.5  0.4 309592 16936 ?        Sl   21:36   0:00 /bin/python /opt/cesi/cesi/web.py
[root@supervisord cesi]# supervisorctl stop cesi
cesi: stopped
[root@supervisord cesi]# ps aux
.
.
root     17614  0.0  0.3 227320 12664 ?        Ss   21:05   0:00 /usr/bin/python /usr/bin/supervisord
nginx    17728  0.5  0.4 309592 16936 ?        Sl   21:36   0:01 /bin/python /opt/cesi/cesi/web.py
root     17738  0.0  0.0 123356  1380 pts/0    R+   21:40   0:00 ps aux
[root@supervisord cesi]# supervisorctl stop cesi
cesi: ERROR (not running)

As you can see it's not stopped, so the simplest fix is to change use_reloader in web.py

/opt/cesi/cesi/web.py
--- web.py.org	2014-12-27 21:23:37.625143414 +0200
+++ web.py	2014-12-27 21:23:48.222118215 +0200
@@ -531,7 +531,7 @@
 
 try:
     if __name__ == '__main__':
-        app.run(debug=True, use_reloader=True)
+        app.run(debug=True, use_reloader=False)
 except xmlrpclib.Fault as err:
     print "A fault occurred"
     print "Fault code: %d" % err.faultCode

Working like a charm..

[root@supervisord cesi]# supervisorctl start cesi
cesi: started
[root@supervisord cesi]# ps aux| grep cesi
nginx    17704  0.2  0.4 237048 18232 ?        S    21:30   0:00 /bin/python /opt/cesi/cesi/web.py

The system side is OK, so let's configure Nginx..

/etc/nginx/conf.d/default.conf
server {
        server_name localhost 192.168.9.240 212.213.214.215;
 
        location / {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://127.0.0.1:5000;
        }
 
        location /static {
                root /opt/cesi/cesi/;
        }
}

Let’s check TCP sockets

[root@supervisord cesi]# netstat -anp| grep LISTEN       
tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN      17761/python        
tcp        0      0 192.168.9.240:9001      0.0.0.0:*               LISTEN      17614/python        
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      17842/nginx: master 
tcp        0      0 192.168.9.240:22        0.0.0.0:*               LISTEN      852/sshd

It's time to log in..


Login screen


Dashboard


All supervisors in one screen


A node


A process log

Kudos go to Gülşah and Kaan, and that's all folks..

A New Supervisord Plugin for Nagios

23rd December 2014 by Ali Erdinç Köroğlu

In my last entry I mentioned Supervisord and how beautiful it is. While looking for a way to integrate Supervisord with Nagios I couldn't find what I wanted, so I decided to write a new Supervisord plugin for Nagios. What I was looking for was a plugin that runs on the Nagios server (so there's no need to install anything on the remote servers) and gets information about processes from remote supervisords. So here it is.. check_supervisord

Usage: check_supervisord.py -H 192.168.1.1 -P 9001 -u superv -p superv -a glassfish
 
Options:
  -h, --help     
  -H HOSTNAME, --hostname=HOSTNAME (Supervisord hostname)
  -P PORT, --port=PORT (Supervisord port)
  -u USERNAME, --username=USERNAME (Supervisord username)
  -p PASSWORD, --password=PASSWORD (Supervisord password)
  -a PROCNAME, --process-name=PROCNAME (Process name defined in /etc/supervisor.d/*.ini or supervisorctl status)

Console Output:

[root@nagios ~]# /usr/lib64/nagios/plugins/check_supervisord.py -H 192.168.1.1 -P 9001 -u superv -p superv -a glassfish
glassfish OK: 12 day(s) 17 hour(s) 29 minute(s)
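The core of such a check is small. Here is a hypothetical sketch of the idea, not the plugin's actual source; the dictionary fields come from supervisord's standard getProcessInfo RPC call:

```python
from xmlrpc.client import ServerProxy

OK, CRITICAL = 0, 2  # standard Nagios plugin exit codes


def format_uptime(seconds):
    """Render an uptime the way the console output above shows it."""
    days, rem = divmod(int(seconds), 86400)
    hours, rem = divmod(rem, 3600)
    return "%d day(s) %d hour(s) %d minute(s)" % (days, hours, rem // 60)


def check(proxy, procname):
    """Return an (exit_code, message) pair for one supervised process."""
    info = proxy.supervisor.getProcessInfo(procname)
    if info["statename"] == "RUNNING":
        # 'now' and 'start' are Unix timestamps reported by supervisord
        return OK, "%s OK: %s" % (procname, format_uptime(info["now"] - info["start"]))
    return CRITICAL, "%s CRITICAL: %s" % (procname, info["statename"])


# proxy = ServerProxy("http://superv:superv@192.168.1.1:9001/RPC2")
# code, message = check(proxy, "glassfish")
```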

You should add the new command definition for check_supervisord to Nagios's commands.cfg

/etc/nagios/objects/commands.cfg
define command{
    command_name    check_supervisord
    command_line    $USER1$/check_supervisord.py -H $HOSTADDRESS$ -P $ARG1$ -u $ARG2$ -p $ARG3$ -a $ARG4$
}

And you can use it in a service definition like this :)

/etc/nagios/conf.d/services/glassfish.cfg
define service {
    use generic-service
    host_name   srv1
    service_description Glassfish
    check_command   check_supervisord!9001!superv!superv!glassfish
}


A screenshot from nagios web interface

Best Way to Daemonize Applications on Linux

22nd December 2014 by Ali Erdinç Köroğlu

I tried to explain how to daemonize applications before, but what about monitoring and even starting/stopping/restarting processes locally or remotely? Well, here is Supervisor. Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.

As you know, we otherwise need to write rc.d or systemd scripts for every single process instance. They're hard to write and maintain, and they can't automatically restart a crashed process. Supervisord is the solution: it's simple, efficient, centralized, extensible, etc.. etc..

Supervisor has two components: supervisord and supervisorctl. Supervisord is the server side of Supervisor and is responsible for starting child programs, controlling them, logging, and handling events. It also provides a web interface to view and control process status, and an XML-RPC interface to control supervisor and the programs it runs. Supervisorctl, on the other hand, provides a shell-like interface for connecting to supervisord. But first let's install it on our CentOS7 server. Supervisor is available in the EPEL7 repository; if you don't know how to add the EPEL repository, please read this.

Installation is easy

[root@Neverland ~]# yum install supervisor

Please don’t forget to enable supervisor for systemd

[root@Neverland ~]# systemctl enable supervisord

It begins with a config file :)

/etc/supervisord.conf
[unix_http_server]
file=/var/tmp/supervisor.sock; (the path to the socket file)
 
[inet_http_server]      ; inet (TCP) server disabled by default 
port=192.168.1.1:9001   ; (ip_address:port specifier, *:port for all iface)
username=superv         ; (default is no username (open server))
password=superv         ; (default is no password (open server))
 
[supervisord]
logfile=/var/log/supervisor/supervisord.log  ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB       ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10          ; (num of main logfile rotation backups;default 10)
loglevel=info               ; (log level;default info; others: debug,warn,trace)
pidfile=/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false              ; (start in foreground if true;default false)
minfds=1024                 ; (min. avail startup file descriptors;default 1024)
minprocs=200                ; (min. avail process descriptors;default 200)
 
[rpcinterface:supervisor]
supervisor.rpcinterface_factory=supervisor.rpcinterface:make_main_rpcinterface
 
[supervisorctl]
serverurl=unix:///var/tmp/supervisor.sock ; use a unix:// URL  for a unix socket
 
[include]
files = supervisord.d/*.ini

The process config file includes details such as directory, command, process name, process owner, logging, etc. If you want to know more, please read this.

/etc/supervisord.d/fixtures.ini
[program:fixtures]
directory=/opt/pronet/fixtures
command=/usr/java/jdk1.7.0_71/bin/java -Dfile.encoding=UTF-8 -Dproject.properties=/opt/pronet/fixtures/fixtures.properties -Dlog4j..
process_name=%(program_name)s
user=pronet
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/pronet/fixtures-stdout.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
stderr_logfile=/var/log/pronet/fixtures-stderr.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB
stderr_events_enabled=false

So when you start or restart supervisord, the fixtures process will start or restart too.. (depending on your process config)

[root@Neverland ~]# systemctl start supervisord
[root@Neverland ~]# supervisorctl status fixtures
fixtures                         RUNNING    pid 5786, uptime 0:00:03

And you can monitor or control it remotely..

[root@nagios ~]# supervisorctl -s http://192.168.1.1:9001 -u superv -p superv status fixtures
fixtures                         RUNNING    pid 5786, uptime 0:04:20
[root@nagios ~]# supervisorctl -s http://192.168.1.1:9001 -u superv -p superv restart fixtures
fixtures: stopped
fixtures: started
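supervisorctl is not the only remote client: the same stop/start round-trip can be scripted against the XML-RPC interface mentioned above. A sketch in Python 3, where the URL mirrors the inet_http_server settings from the config:

```python
from xmlrpc.client import ServerProxy


def restart(proxy, name):
    """Stop then start a program, mirroring `supervisorctl restart`."""
    proxy.supervisor.stopProcess(name)   # blocks until the process has stopped
    proxy.supervisor.startProcess(name)  # blocks until the process has started
    return proxy.supervisor.getProcessInfo(name)["statename"]


# proxy = ServerProxy("http://superv:superv@192.168.1.1:9001/RPC2")
# print(restart(proxy, "fixtures"))
```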

There are some centralized supervisord web interfaces but I’ll cover them later :)

I already explained how to daemonize Java applications on a SysV-style system here. Since CentOS7/RHEL7 comes with systemd, a system and service manager for Linux, we'll migrate the old init scripts to the new system.

Again we'll use our best buddy daemonize, but this time we're going to compile it from source: although it's marked as approved in the Fedora package DB, daemonize is not in the EPEL7 repository for now. I won't go into the details of compiling and installing it; I'll just assume you've installed daemonize into /usr/local/sbin.

So we need to create two files: /etc/sysconfig/fixtures and /lib/systemd/system/fixtures.service.
The first file is where we define the Java-related variables such as user, Java path, arguments, log files, etc..

/etc/sysconfig/fixtures
# Config for fixtures service
 
JAVA_USER="pronet"
JAVA_STDOUT="/var/log/pronet/fixtures.log"
JAVA_STDERR="/var/log/pronet/fixtures-error.log"
JAVA_BIN="/usr/java/jdk1.7.0_71/bin/java"
JAVA_APPDIR="/opt/pronet/fixtures"
ARG1="-Dfile.encoding=UTF-8 -Dproject.properties=/opt/pronet/fixtures/fixtures.properties"
ARG2="-Dlog4j.configuration=file:/opt/pronet/fixtures/fixtures-log.properties"
ARG3="-jar /opt/pronet/fixtures/fixtures.jar"

The second file is the service file for fixtures, where we define the systemd-related settings. There are plenty of documents in the Freedesktop systemd wiki; if you want to know more, I advise you to read them. Roughly: [Unit] holds information about a service, a socket, a device, etc.; [Service] describes a process controlled and supervised by systemd; and [Install] carries installation information for the unit.

/lib/systemd/system/fixtures.service
[Unit]
Description=Fixtures Service
After=syslog.target
After=network.target
 
[Service]
Type=forking
EnvironmentFile=-/etc/sysconfig/fixtures
ExecStart=/usr/local/sbin/daemonize -u $JAVA_USER -o $JAVA_STDOUT -e $JAVA_STDERR -c $JAVA_APPDIR $JAVA_BIN $ARG1 $ARG2 $ARG3
ExecStop=/bin/kill -TERM $MAINPID
TimeoutSec=300
 
[Install]
WantedBy=multi-user.target

Let’s start and stop the service

[root@Srv25 pronet]# systemctl start fixtures
[root@Srv25 pronet]# systemctl stop fixtures

If something goes wrong: all service files (and Docker containers) insert data into the systemd journal, and we can read the journal :)

[root@Srv25 pronet]# journalctl -u fixtures.service

Checking the service status

[root@Srv25 pronet]# systemctl status fixtures
fixtures.service - Fixtures Service
   Loaded: loaded (/usr/lib/systemd/system/fixtures.service; disabled)
   Active: active (running) since Wed 2014-10-29 21:21:49 EET; 13min ago
 Main PID: 28859 (java)
   CGroup: /system.slice/fixtures.service
           └─28859 /usr/java/jdk1.7.0_71/bin/java -Dfile.encoding=UTF-8 -Dproject.properties=/opt/pronet/fixtures/fixtures.properties -Dlog4j.configuration=file:...
 
Oct 29 21:21:49 Srv25 systemd[1]: Started Fixtures Service.

Enable the service to be started on bootup

[root@Srv25 pronet]# systemctl enable fixtures.service
ln -s '/usr/lib/systemd/system/fixtures.service' '/etc/systemd/system/multi-user.target.wants/fixtures.service'
[root@Srv25 pronet]#

So that’s how it works..

How to install IBM Director Common Linux Agent 6.3.5 on CentOS6

29th October 2014 by Ali Erdinç Köroğlu

If you have IBM hardware you already know about IBM Director; if you don't, please leave now :)
As you may know, IBM only supports IBM Systems Director Common Agent 6.3.5 on the RHEL5, RHEL6, SLES10 and SLES11 distributions. So what about CentOS6 users ??

Don't panic..

1. Discover your X86 server from Director Inventory
2. Give root permission for SSH access to Director and finish request access stage
3. Download the latest common agent from IBM website into the server where IBM Director is running!
4. Import the latest agent from Director > Release Management > Agents
5. Goto System x Management > View System x servers and operating systems
6. Find your server's IP address and right-click it
7. Choose Release Management > Install Agent
8. Follow the Agent Installation steps, select CommonAgent 6.3.5 xLinux, and finish the agent installation.

Don't worry, you'll get errors :)
At this stage IBM Director has copied the related files into your server's /tmp/commonagent_tmp directory, so let's begin..

[root@dbs ~]# cd /tmp/commonagent_tmp/
[root@dbs commonagent_tmp]# mkdir rpm
[root@dbs commonagent_tmp]# ./dir6.3.5_commonagent_linux_x86 -x rhel6 -p rpm/
[1-Agree|0-Disagree]: 1
Extracting RPM files to rpm/
.............................
654801 blocks

Now we have the rhel6-related RPMs..

[root@dbs commonagent_tmp]# cd rpm/
[root@dbs rpm]# ls
brocade_adapter_cimprovider-3.2.3.0-lsb.i386.rpm     ibmcim-baseserver-6.3.5-rhel6.i386.rpm        ibmcim-snmp-5.2.1-rhel6.i386.rpm
cassite-linux-x86.zip                                ibmcim-baseserver-mof-6.3.5-1.i386.rpm        ibmcim-snmpextensions-6.3.5-rhel6.i386.rpm
diruninstall.agent                                   ibmcim-icu-36.0-rhel6.i386.rpm                ibmcim-ssl-1.0.1.7-rhel6.i386.rpm
dsa-2.26-rhel6.i386.rpm                              ibmcim-instrumentation-6.3.5-rhel6.i386.rpm   install
emulex_fc_provider_ibm-10.2.261.13-rhel6.i686.rpm    ibmcim-network-6.3.5-rhel6.i386.rpm           ISDCommonAgent-6.3.5-1.noarch.rpm
emulex_ucna_provider_ibm-10.2.261.13-rhel6.i686.rpm  ibmcim-network-mof-6.3.5-1.i386.rpm           Lib_Utils-1.00-09.noarch.rpm
ibmcim-6.3.5-rhel6.i386.rpm                          ibmcim-objectmanager-6.3.5-rhel6.i386.rpm     lsi_mr_hhr-00.50.0506-rhel6.i386.rpm
ibmcim-agentextensions-6.3.5-rhel6.i386.rpm          ibmcim-openslp-2.1.1636-rhel6.i386.rpm        qlogic_cna_providers-1.5.6-rhel6.i386.rpm
ibmcim-agentextensions-mof-6.3.5-1.i386.rpm          ibmcim-sblim-2.2.8-1rhel6.i386.rpm            setup.lin
ibmcim-baseos-6.3.5-rhel6.i386.rpm                   ibmcim-serviceprocessor-6.3.5-rhel6.i386.rpm
ibmcim-baseos-mof-6.3.5-1.i386.rpm                   ibmcim-serviceprocessor-mof-6.3.5-1.i386.rpm
[root@dbs rpm]#

Apply my patch to the install file..

--- install.org	2014-10-28 23:24:55.208104735 +0200
+++ install	2014-10-28 23:46:12.973181647 +0200
@@ -70,8 +70,8 @@
 RC_POWER_PAM_SLES_DEP_FAILED=70
 
 # Distribution list and versions
-DISTROS="redhat-release redhat-release-server redhat-release-workstation sled-release sles-release SUSE_SLES_SAP-release SLES-for-VMware-release release vmware-esx-vmware-release-4"
-SUPPORTED_RELEASES="rhel5 rhel6 sles10 sles11"
+DISTROS="redhat-release redhat-release-server redhat-release-workstation centos-release sled-release sles-release SUSE_SLES_SAP-release SLES-for-VMware-release release vmware-esx-vmware-release-4"
+SUPPORTED_RELEASES="rhel5 rhel6 centos6 sles10 sles11"
 
 # RPM attributes
 ARCH=i386
@@ -336,6 +336,13 @@
 	return 1
 }
 
+check_centos6()
+{
+        [ "${1}" == "centos-release-6" ] && \
+                RELEASE=rhel6 && BINARY_COMPAT_RELEASE=rhel6 && return 0
+        return 1
+}
+
 check_sles10()
 {
 	if [ "${1}" == "sles-release-10" ] || \
@@ -2453,6 +2460,8 @@
 	[ $1 == "redhat-release-server" ] || \
 	[ $1 == "redhat-release-workstation" ] && \
 	    RELEASE_RPM=$(rpm --qf '%{name}-%{version}\n' -q --whatprovides /etc/redhat-release)
+	[ $1 == "centos-release" ] && \
+            RELEASE_RPM=$(rpm --qf '%{name}-%{version}\n' -q --whatprovides /etc/redhat-release)
 	[ $1 == "sles-release" ] || \
 	[ $1 == "SUSE_SLES_SAP-release" ] || \
 	[ $1 == "SLES-for-VMware-release" ] || \

And then install the agent :)

[root@dbs rpm]# wget http://ae.koroglu.org/doc/install.patch
--2014-10-29 02:08:42--  http://ae.koroglu.org/doc/install.patch
Resolving ae.koroglu.org... 78.46.226.107
Connecting to ae.koroglu.org|78.46.226.107|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1413 (1.4K) [application/octet-stream]
Saving to: “install.patch”
 
100%[==============================================================================>] 1,413       --.-K/s   in 0s      
 
2014-10-29 02:08:47 (265 MB/s) - “install.patch” saved [1413/1413]
 
[root@dbs rpm]# patch -p0 < install.patch 
patching file install
[root@dbs rpm]# ./install -ivs -p /tmp/commonagent_tmp/rpm

PS: You may need to install pam.i686 before the installation.

We're using HSQLDB and GlassFish on a project. Our developers had found some scripts to run Java applications as daemons, and that was OK for a while, but when they started to report problems with the services I looked at what they were using: a lot of functions, writes, checks, fors, whiles, ifs, elses, etc.. Surely there's an easier way to do this? Guess what, I decided to make it simple. Yet another case of NIH syndrome? :) Let me try to explain.. By the way, we don't want to compile source code or add any 3rd-party repositories.

But first, what is a daemon? Daemon stands for Disk And Execution MONitor and is a long-running background process that answers requests for services. As explained in Wikipedia:

On a Unix-like system, the common method for a process to become a daemon, when the process is started from the command line or from a startup script such as an init script or a SystemStarter script, involves:

  1. Dissociating from the controlling tty
  2. Becoming a session leader
  3. Becoming a process group leader
  4. Executing as a background task by forking and exiting (once or twice). This is required sometimes for the process to become a session leader. It also allows the parent process to continue its normal execution.
  5. Setting the root directory (/) as the current working directory so that the process does not keep any directory in use that may be on a mounted file system (allowing it to be unmounted).
  6. Changing the umask to 0 to allow open(), creat(), et al. operating system calls to provide their own permission masks and not to depend on the umask of the caller
  7. Closing all inherited files at the time of execution that are left open by the parent process, including file descriptors 0, 1 and 2 for the standard streams (stdin, stdout and stderr). Required files will be opened later.
  8. Using a logfile, the console, or /dev/null as stdin, stdout, and stderr
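The steps above can be sketched in a few lines of Python. This is only an illustration of the ritual (the tools discussed below do all of it for you); the numbers in the comments refer to the list:

```python
import os
import sys


def daemonize(stdout="/dev/null", stderr="/dev/null"):
    """Turn the current process into a daemon, following the steps above."""
    if os.fork() > 0:        # 4. first fork; the parent returns to the shell
        os._exit(0)
    os.setsid()              # 1-3. detach from the tty, become session leader
    if os.fork() > 0:        # 4. second fork; the session leader exits, so the
        os._exit(0)          #    daemon can never reacquire a controlling tty
    os.chdir("/")            # 5. don't keep any mounted filesystem busy
    os.umask(0)              # 6. let open()/creat() choose their own modes
    sys.stdout.flush()
    sys.stderr.flush()
    with open("/dev/null", "rb") as devnull:   # 7-8. re-point the standard
        os.dup2(devnull.fileno(), 0)           #      streams at real files
    with open(stdout, "ab") as out:
        os.dup2(out.fileno(), 1)
    with open(stderr, "ab") as err:
        os.dup2(err.fileno(), 2)
```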

There are a few methods to run an application as a daemon on Linux, such as start-stop-daemon (part of the dpkg package), the Red Hat init.d functions, and nohup.

Start-stop-daemon: a good tool, but the EPEL dpkg package doesn't contain it; I don't know why?
Red Hat init.d functions: setting the application/working directory can be a problem, and it's not possible to capture stdout & stderr as log files
Nohup: no automation, you have to write a lot yourself.. nohup $command >>$log_file 2>&1 & echo \$! >$pid_file

No start-stop-daemon in the EPEL repository, the RHEL init.d daemon function is not enough, and nohup requires bash scripting knowledge.. OK, what now ??
Use Daemonize instead. Daemonize runs a command as a Unix daemon; it's a good alternative to start-stop-daemon, and the daemonize package is in the EPEL repository.

I wrote an init.d script (Jdis) to run Java applications as daemons with daemonize on RHEL and CentOS Linux. You can download Jdis from GitHub: java_daemon-init.sh

Hsqldb Init script example

An example from a real-world situation: HSQLDB. HyperSQL DataBase (HSQLDB) is relational database software written in Java. It offers a small, fast, multithreaded and transactional database engine with in-memory and disk-based tables, and supports embedded and server modes.

Let’s daemonize it..

We don't want to run the application as the root user

[root@zion ~]# useradd hsqldb -s /sbin/nologin

Change owner of the application directory

[root@zion ~]# chown -R hsqldb:hsqldb /opt/hsqldb
/etc/init.d/hsqldb
#!/bin/bash
#
# Jdis - Java daemon init-script for Red Hat / CentOS Linux
# Ali Erdinc Koroglu - http://ae.koroglu.org
# License : GNU GPL (http://www.gnu.org/licenses/gpl.html)
#
# You must install daemonize package from EPEL repository to use this script.
# How to add EPEL repository: http://ae.koroglu.org/docs/adding-epel-repository-on-centos/
#
# History:
# 2014-08-19 : First release
 
# chkconfig: 345 85 15
# description: Java daemon script
 
### BEGIN INIT INFO
# Provides:
# Required-Start: $local_fs $network $syslog $time
# Required-Stop: $local_fs $network $syslog $time
# Short-Description: start and stop Java daemons
# Description: Java daemon init script
### END INIT INFO
 
# source function library
. /etc/init.d/functions
 
# Java Home
java_home="/usr/java/jdk1.7.0_45"                                               # java path
 
# Service settings
service_name="hsqldb"                                                           # Service name
service_user="hsqldb"                                                           # User/group of process
pid_file="/var/run/$service_name.pid"                                           # Pid file
log_file="/var/log/$service_name/$service_name.log"                             # StdOut log file
errlog_file="/var/log/$service_name/$service_name-error.log"                    # StdErr log file
java="$java_home/bin/java"                                                      # Java binary
java_appdir="/opt/hsqldb/data"                                                  # Application path
java_applibdir="/opt/hsqldb/lib"                                                # Application Lib path
java_arg1="-server -Xms1G -Xmx6G -XX:NewSize=256m"                              # Argument 1
java_arg2="-XX:MaxNewSize=256m -XX:PermSize=512m -XX:MaxPermSize=512m"          # Argument 2
java_arg3="-Dproject.properties=$java_appdir/test.properties"                   # Argument 3
java_arg4="-classpath $java_applibdir/hsqldb.jar org.hsqldb.server.Server"      # Argument 4
java_args="$java_arg1 $java_arg2 $java_arg3 $java_arg4"                         # Java Arguments
 
RETVAL=0
start() {
    [ -x $java ] || exit 5
    echo -n $"Starting $service_name: "
    if [ $EUID -ne 0 ]; then
        RETVAL=1
        failure
    elif [ -s /var/run/$service_name.pid ]; then
        RETVAL=1
        echo -n $"already running.."
        failure
    else
        daemonize -u $service_user -p $pid_file -o $log_file -e $errlog_file -c $java_appdir $java $java_args && success || failure
        RETVAL=$?
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$service_name
    fi;
    echo
    return $RETVAL
}
 
stop() {
    echo -n $"Stopping $service_name: "
    if [ $EUID -ne 0 ]; then
        RETVAL=1
        failure
    else
        killproc -p $pid_file $service_name
        RETVAL=$?
        [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$service_name
    fi;
    echo
    return $RETVAL
}
 
restart(){
    stop
    start
}
 
case "$1" in
    start)
        start
        RETVAL=$?
        ;;
    stop)
        stop
        RETVAL=$?
        ;;
    restart)
        restart
        RETVAL=$?
        ;;
    status)
        status $service_name
        RETVAL=$?
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart}"
        RETVAL=2
esac
exit $RETVAL

Let's check..

[root@zion ~]# /etc/init.d/hsqldb start
Starting hsqldb:                                           [  OK  ]
[root@zion ~]# ps aux | grep hsqldb
hsqldb     463  178  1.0 8555172 349364 ?      Ssl  15:04   0:10 /usr/java/jdk1.7.0_45/bin/java -server -Xms1G -Xmx6G -XX:NewSize=256m -XX:MaxNewSize=256m -XX:PermSize=512m -XX:MaxPermSize=512m -Dproject.properties=/usr/share/pronet/hsqldb/data/test.properties -classpath /usr/share/pronet/hsqldb/lib/hsqldb.jar org.hsqldb.server.Server
root       485  0.0  0.0 103244   848 pts/0    R+   15:04   0:00 grep hsqldb
[root@zion ~]# /etc/init.d/hsqldb status
hsqldb (pid  463) is running...
[root@zion ~]# /etc/init.d/hsqldb stop
Stopping hsqldb:                                           [  OK  ]

How to sync time properly: ntpdate or ntpd?

7th November 2013 by Ali Erdinç Köroğlu

Previously I explained how to install a chrooted NTP server, but the question is how you're going to sync your server's time with an NTP server. There are two options: ntpdate and ntpd.

Ntpdate is for one-time synchronization only.
Ntpd (the network time protocol daemon) automatically syncs the system time with a remote reference time server.

There are many examples out there of adding cron jobs to run ntpdate hourly, daily, weekly, etc. The main difference between ntpd and ntpdate: ntpd runs all the time and continuously adjusts the system time when clocks drift, while ntpdate does not. Also keep in mind that ntpdate has been deprecated since September 2012.

So why do we need ntpdate at all?
In ancient times it was important to set the system time before starting ntpd, and this was usually done with ntpdate. Over time ntpd evolved, and it's no longer necessary to set the time before starting it.

To sum up: if you're running time-sensitive workloads like application servers, database servers, email servers, clusters, etc., ntpd is what you need.
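For the curious, the one-shot query that ntpdate performs boils down to very little. This sketch speaks plain SNTP and is only an illustration, not part of the setup below:

```python
import socket
import struct

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01


def transmit_time(packet):
    """Extract the server's transmit timestamp (bytes 40-47) as a Unix time."""
    seconds, frac = struct.unpack("!II", packet[40:48])
    return seconds - NTP_EPOCH_OFFSET + frac / 2.0 ** 32


def query(server, timeout=2.0):
    """Send a single SNTP client request and return the server's clock."""
    request = b"\x1b" + 47 * b"\x00"   # LI=0, VN=3, Mode=3 (client)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(request, (server, 123))
        packet, _ = sock.recvfrom(48)
    finally:
        sock.close()
    return transmit_time(packet)


# print(query("0.tr.pool.ntp.org"))  # seconds since the Unix epoch
```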

Installation

Since the NTP package is in the base repository, there's no need to add an extra one.

yum install ntp
chkconfig ntpd on

Configuration

/etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
 
server 192.168.100.254          # your NTP server
server 0.tr.pool.ntp.org        # region-related ntp.org server
server ntp.ulakbim.gov.tr       # local authority

Since this machine will not be an NTP server for others, there's no need to listen on all interfaces.

/etc/sysconfig/ntpd
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g -I eth0"

Starting..

[root@cache ~]# /etc/init.d/ntpd start
Starting ntpd:                                             [  OK  ]

NTP query result and network time synchronisation status

[root@cache ~]# ntpstat 
synchronised to NTP server (192.168.100.254) at stratum 4 
   time correct to within 108 ms
   polling server every 64 s
[root@cache ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*192.168.100.254 82.94.167.75     3 u    5   64  377    0.276  -21.198  25.027

And as you can see, everything is OK..

/var/log/messages
Nov  7 13:48:51 cache ntpd[44248]: ntpd 4.2.4p8@1.1612-o Fri Feb 22 11:23:27 UTC 2013 (1)
Nov  7 13:48:51 cache ntpd[44249]: precision = 0.079 usec
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #0 wildcard, 0.0.0.0#123 Disabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #1 wildcard, ::#123 Disabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #2 lo, ::1#123 Enabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #3 eth0, fe80::20c:29ff:febd:d65f#123 Enabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #4 eth1, fe80::20c:29ff:febd:d669#123 Disabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #5 lo, 127.0.0.1#123 Enabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #6 eth0, 192.168.100.1#123 Enabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on interface #7 eth1, 192.168.101.1#123 Disabled
Nov  7 13:48:51 cache ntpd[44249]: Listening on routing socket on fd #29 for interface updates
Nov  7 13:48:51 cache ntpd[44249]: kernel time sync status 2040
