诗人余光中

「余光中」

这世界,许多灵魂忙着来,许多灵魂忙着去

在所有的诗里你都预言 

云只开一个晴日 

虹只驾一个黄昏 

莲只开一个夏季 

说是人生无常,却也是人生之常

永恒,刹那,刹那,永恒 

一无所有 

却拥有一切

期待是一种半清醒半疯狂的燃烧

你来不来都一样

凡美妙的,听我说,都该有印痕 

月色与雪色之间,你是第三种绝色

你不是谁,你是一切

是暮色吗昏昏?

是夜色吗沉沉?

或者所谓春天

最后也不过就是这样子

敲叩着一个人的名字

但已经太迟了

 

让·佩尔杜的文学急救箱


                                          ——《小小巴黎书店》

从亚当斯到阿尼姆

药效快,针对思想和心灵被纷乱情感轻度或中度影响的患者。若无特殊医嘱,一次服用的剂量应易于消化(5~50页)。

如果情况允许,请在双脚温暖、膝头卧着猫咪时服用。

01——《银河系搭车客指南》(五部曲)

Douglas Adams. The Hitchhiker's Guide to the Galaxy.

[英国]道格拉斯·亚当斯 著 姚向辉 译

疗效:大量服用可有效治疗病态乐观主义以及幽默感缺乏症。

适用人群:非常适合桑拿室中有暴露狂倾向的人。

⊙副作用:厌恶拥有事物的感觉,可能出现整天穿着浴袍的慢性症状[1]

 

02——《刺猬的优雅》

Muriel Barbery. L'élégance du hérisson.

[法国]妙莉叶·芭贝里 著 史妍、刘阳 译

疗效:大量服用可有效治疗那种总把“如果当初……”挂在嘴边的症状。

适用人群:推荐怀才不遇者、高智商电影爱好者和巴士司机厌恶者服用。

 

03——《堂吉诃德》

Miguel de Cervantes Saavedra. Don Quixote de la Mancha.

[西班牙]米格尔·德·塞万提斯·萨维德拉 著 杨绛 译

疗效:当理想与现实抵触时服用。

⊙副作用:对现代科技以及大机器带来的毁灭性影响感到焦虑,好像堂吉诃德大战风车一般与之搏斗。

 

04——《大机器停止》(原载于1909年《牛津剑桥评论》)

Edward Morgan Forster. "The Machine Stops", a short story first published in The Oxford and Cambridge Review, 1909.

[英国] E.M.福斯特 著 康苏埃拉 译(译言旗下品牌“东西文库”出版)

慎用此药!

疗效:这是针对“网络专家治国论”[2]与苹果手机迷信症的特效解药,对于脸书上瘾和《黑客帝国》依赖症[3]也有疗效。

◎使用方法:只有盗版党[4]成员及网络愤青需少量服用。

 

05——《黎明的承诺》(暂译,中文版未引进,曾在1970年被改编为电影《母子泪》)

Romain Gary. La Promesse de l'aube.

[法国]罗曼·加里 著

疗效:服用后能更好地了解母爱,增强对童年时光缅怀之情的抵抗力。

⊙副作用:白日梦,相思病。

 

06——《将女人扔下桥》(暂译,中文版未引进)

Gunter Gerlach. Frauen von Brücken werfen.

[德国]甘特·格拉克 著

适用人群:针对失去创作灵感的作家,还有那些认为犯罪小说中的谋杀案其实没那么重要的人。

⊙副作用:失去真实感,思维扩张。

 

07——《阶段》(小说《玻璃球游戏》中的一首诗)

Hermann Hesse. "Stufen", a poem in Das Glasperlenspiel.

[德国]赫尔曼·黑塞 著 张佩芬 译

疗效:治疗悲伤,鼓舞患者去相信。

 

08——《一只狗的研究》(作家生前未发表的作品,收录在《卡夫卡小说全集》中)

Franz Kafka. “Forschungen eines Hundes”.

[奥地利]弗兰茨·卡夫卡 著 王炳 译

疗效:治疗那种觉得谁都不理解自己的奇怪感受。

⊙副作用:悲观,渴望抚摩猫咪。

 

09——《埃里希·卡斯特纳医生的抒情药箱》(暂译,中文版未引进)

Erich Kästner. Das große Erich Kästner Lesebuch.

[德国]埃里希·卡斯特纳 著

疗效:据诗人医生卡斯特纳所说,这本书可以治疗多种病痛与不适,包括假装博学病、对于分手的冲动、日常烦躁与秋思怅惘。

 

10——《长袜子皮皮》

Astrid Lindgren. Pippi Långstrump.

[瑞典]阿斯特丽德·林德格伦 著 李之义 译

疗效:有效抑制后天悲观(而非先天悲观)以及对于奇迹的恐惧。

⊙副作用:算数变差[5],边洗澡边唱歌。

 

11——《权力的游戏》(《冰与火之歌》系列小说的第一部)

George R.R. Martin. A Game of Thrones. The first in a series of five novels.

[美国]乔治·R.R.马丁 著 屈畅、谭光磊 译

疗效:有助于戒除电视瘾,对付相思病、日常生活的烦恼与乏味的梦境。

⊙副作用:失眠,做令人不安的梦[6]

 

12——《白鲸记》(又名“莫比-迪克”)

Herman Melville. Moby-Dick; or, The Whale.

[美国]赫尔曼·麦尔维尔 著 曹庸 译

适用人群:针对素食主义者。

⊙副作用:怕水。

 

13——《欲望巴黎—凯瑟琳的性爱自传》(繁体版)

Catherine Millet. La Vie Sexuelle de Catherine M.

[法国]凯瑟琳·米勒 著 白马 译

疗效:针对那终极一问——这段感情是否发展得太快了?本书将协助患者作答。

⊙附注:情况永远可能会更糟。

 

14——《没有个性的人》

Robert Musil. Der Mann ohne Eigenschaften.

[奥地利]罗伯特·穆齐尔 著 张荣昌 译

适用人群:针对那些忘记了自己人生目标的人。

疗效:治疗漫无目的。

⊙副作用(症状会逐渐出现):过了两年,你的人生会被永远改变。主要的风险是你会与所有朋友疏离,出现愤世嫉俗的倾向,经常被一模一样的梦境所折磨。

 

15——《情迷维纳斯》(暂译,中文版未引进,曾在1995年被改编为同名电影)

Anaïs Nin.Delta of Venus.

[美国]阿娜伊斯·宁 著

疗效:治疗精神萎靡,服用几天后就能恢复性欲。

 

16——《一九八四》

George Orwell. 1984.

[英国]乔治·奥威尔 著 董乐山 译

疗效:消除轻信和冷漠,是治疗慢性乐观主义的祖传秘方,但已过保质期。

 

17——《汤姆的午夜花园》

Philippa Pearce. Tom's Midnight Garden.

[英国]菲莉帕·皮尔斯 著 马爱农 译

适用人群:有效治疗不快乐的恋人们。

又及:凡是不涉及爱情的书籍,不快乐的恋人们都可以阅读,例如血腥小说、惊悚小说和蒸汽朋克风格的小说[7]

 

18——“碟形世界”系列小说[包括41部小说,第一部是《魔法的颜色》(The Color of Magic);最后一部是《牧羊人的皇冠》(暂译)(The Shepherd’s Crown)]

Terry Pratchett. The Discworld novels.

[英国]特里·普拉切特 著 胡纾、马骁、马爽等 译

适用人群:针对厌世者与过于天真者,即使是那些很少读书的人,也会被这一系列小说深深吸引。

 

19——《黑质三部曲》

Philip Pullman. His Dark Materials trilogy.

[英国]菲利普·普尔曼 著 周景兴、周倩、陈俊群 译

适用人群:针对那些不时出现幻听症状,并坚信自己会有只动物灵魂伴侣的人。

 

20——《睡前小祈祷》(暂译,中文版未引进)

Joachim Ringelnatz. Kindergebetchen.

[德国]亚西姆·林格尔纳茨 著

适用人群:针对那些曾感动得祈祷,但仅此一次的不可知论者。

⊙副作用:儿时夜晚的记忆突然重现。

 

21——《失明症漫记》

José Saramago. Ensaio sobre a Cegueira.

[葡萄牙]若泽·萨拉马戈 著 范维信 译

疗效:有助于缓解过度劳累,分清事情的轻重缓急,找到人生的目标。

 

22——《德拉库拉》

Bram Stoker. Dracula.

[爱尔兰]布莱姆·斯托克 著 冷杉、姜莉莉 译

适用人群:推荐那些容易因为无聊的梦境而多愁善感的人,以及终日瘫在电话机旁琢磨“他究竟会不会打给我”的人服用。

 

23——《骨灰祭》(暂译,中文版未引进;2002年首次出版时名为“Lo libre dels rituals”)

(这是一首奥克语祷词,将死者的信息传递给生者。)

Surre-Garcia, Alem, Françoise Meyruels. The Ritual of the Ashes.

[法国]苏尔-贾西亚、阿勒姆、弗朗西斯·梅吕埃 著

疗效:有助于缓解对至爱之人无休无止的哀悼,对于那些不相信祈祷力量的人,这篇祷词可作为他们的墓畔俗世咒语。

 

24——《自由人》(暂译,中文版未引进)

Jac Toes. De vrije man.

[荷兰]雅克·托兹 著

适用人群:针对那些舞会之外的探戈舞者,以及那些害怕去爱的人。

⊙副作用:让你重新审视与周围之人的关系。

 

25——《汤姆·索亚历险记》

Mark Twain. The Adventures of Tom Sawyer.

[美国]马克·吐温 著 成时 译

疗效:有助于克服成年人的烦恼,重新发掘内心的童真。

 

26——《迷人的四月》(暂译,中文版未引进,曾在1992年被改编为同名电影)

Elizabeth von Arnim. The Enchanted April.

[英国]伊丽莎白·冯·阿尼姆 著

疗效:治疗优柔寡断,增强对朋友的信任。

⊙副作用:爱上意大利,渴望前往南方,正义感增强。

 

诗人聂鲁达

选自聂鲁达《二十首情诗和一支绝望的歌》,从每首诗中选取一到两句而成,最后两句来自其他诗选。

 

今夜我可以写下最哀伤的诗句

 

与你相关的回忆自围绕我的夜色中呈现

 

我在这里爱你,而且地平线徒然的隐藏你

 

每样事物都把我推得更远仿佛你就是白昼

 

窒闷的悲叹,折磨人的阴暗的希望 

 

那些寂寞的梦如何相信你会是我的

 

你不像任何人,因为我爱你

 

我以火的十字

 

像潮汐般永远的消逝

 

只有双眼藉晨露般张望

 

我喜欢你是寂静的,仿佛你消失了一样

 

有时候一片太阳,在我的双掌间如硬币燃烧

 

以灰而苦涩的声音,以及遭离弃而哀伤的浪水伪装自己

 

在我最荒瘠的土地上,你是最后的玫瑰

 

我的孤独在极度的光亮中绵延不绝,化为火焰

 

我深切的渴望朝彼处迁徙

 

所以你会听见我

 

在没有重量的物质里,在倾斜的火焰中

 

一阵狂热兴奋中,我释放所有的箭束

 

自你灵魂中迅速生长

 

夜以他毁灭般的侵袭笼罩我

 

当华美的叶片落尽,生命的脉络才历历可见

 

爱太短,而遗忘太长

 

 

飞鸟集

自译自娱自乐——泰戈尔《飞鸟集》节选前二十

 

1、Stray birds of summer come to my window to sing and fly away.And yellow leaves of autumn, which have no songs, flutter and fall there with a sigh.
夏日迷途之鸟,栖落窗前,温柔呢喃,飘然而去;
秋日枯黄之叶,无声无歌,蹁跹而落,空留叹息!

 

2、O Troupe of little vagrants of the world, leave your footprints in my words.
世人皆为旅者,唯有隽永恒存。

 

3、The world puts off its mask of vastness to its lover.It becomes small as one song, as one kiss of the eternal.
拭去情人神秘的面纱,世界小如一首欢歌,一个永恒热吻。

 

4、It is the tears of the earth that keep her smiles in bloom.
倘使生命有泪,仍要微笑如花。

 

5、The mighty desert is burning for the love of a blade of grass who shakes her head and laughs and flies away.
爱如沙漠之于草原,佳人摇头笑而离开。

 

6、If you shed tears when you miss the sun, you also miss the stars.
错过或过错,惜取眼前人。

 

7、The sands in your way beg for your song and your movement, dancing water. Will you carry the burden of their lameness?
沙之祈求,载歌而行,流水舞动,可愿艰难同路?

 

8、Her wishful face haunts my dreams like the rain at night.
佳人似梦,夜雨缠痴。

 

9、Once we dreamt that we were strangers.We wake up to find that we were dear to each other.
往昔若梦,相识之初;魂灵合一,相濡以沫!

 

10、Sorrow is hushed into peace in my heart like the evening among the silent trees.
明明心中悲伤暗涌,却如暗夜寂静之林。

 

11、Some unseen fingers, like an idle breeze, are playing upon my heart the music of the ripples.
似无形的纤手,或慵懒的微风,拨动我的心弦。

 

12、What language is thine, O sea?The language of eternal question.What language is thy answer, O sky?
The language of eternal silence.
大海的言语,永恒的疑问,天空的回答,无休止的沉默。

 

13、Listen, my heart, to the whispers of the world with which it makes love to you.
听,我心之世界的低语,是对你的爱。

 

14、The mystery of creation is like the darkness of night–it is great.Delusions of knowledge are like the fog of the morning.
造物之奇,暗夜之深,似懂非懂,清晨之雾。

 

15、Do not seat your love upon a precipice because it is high.
至高则危,悬崖边的爱。

 

16、I sit at my window this morning where the world like a passer-by stops for a moment, nods to me and goes.
我待世界如佳偶,世界待我如路人。

 

17、These little thoughts are the rustle of leaves; they have their whisper of joy in my mind.
这些微小的念头如落叶沙沙之声,在我心头欢快地低语。

 

18、What you are you do not see, what you see is your shadow.
你以为这是自己,其实只是你以为。

 

19、My wishes are fools, they shout across thy song, my Master.Let me but listen.
我的愿望如此痴傻,在你的歌里宣告“听天由命”。

 

20、I cannot choose the best.The best chooses me.
越美丽的东西越不可碰,被选择是最好的选择。

 

本来以为可以按照自己的理解翻译整本诗集,到这里发现没办法继续。

 

——美丽的风景抵不过悲哀的诗行——

ELRepo Project For Yum Update Kernel

Welcome to the ELRepo Project

“For the community, by the community.”

Welcome to ELRepo, an RPM repository for Enterprise Linux packages. ELRepo supports Red Hat Enterprise Linux (RHEL) and its derivatives (Scientific Linux, CentOS & others).

The ELRepo Project focuses on hardware related packages to enhance your experience with Enterprise Linux. This includes filesystem drivers, graphics drivers, network drivers, sound drivers, webcam and video drivers.

Get started

Import the public key:

Detailed info on the GPG key used by the ELRepo Project can be found at https://www.elrepo.org/tiki/key.
If you have a system with Secure Boot enabled, please see the SecureBootKey page for more information.

To install ELRepo for RHEL-7, SL-7 or CentOS-7:

To make use of our mirror system, please also install yum-plugin-fastestmirror.

To install ELRepo for RHEL-6, SL-6 or CentOS-6:

To make use of our mirror system, please also install yum-plugin-fastestmirror.
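The installation commands themselves are not reproduced above. A sketch of what they typically look like, based on the instructions published on elrepo.org — the release-package file names change as new versions are published, so verify the current ones on that site before running these:

```shell
# Import the ELRepo signing key (see https://www.elrepo.org/tiki/key):
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

# Install the repository release package.
# RHEL-7 / SL-7 / CentOS-7 (package version current at time of writing):
yum install https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm

# RHEL-6 / SL-6 / CentOS-6 (check elrepo.org for the current el6 package):
yum install https://www.elrepo.org/elrepo-release-6.el6.elrepo.noarch.rpm

# Optional, to make use of the mirror system:
yum install yum-plugin-fastestmirror
```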

 

elrepo-kernel

The elrepo-kernel channel provides both the long-term support kernels (which have been configured for RHEL-6 and RHEL-5) and the latest stable mainline kernels (which have been configured for RHEL-7 and RHEL-6) using sources available from the Linux Kernel Archives. Please see the kernel-lt or kernel-ml pages for further details. This channel may be enabled in the /etc/yum.repos.d/elrepo.repo file or used with `yum --enablerepo=elrepo-kernel`.

yum --enablerepo=elrepo-kernel -y install kernel-ml

yum --enablerepo=elrepo-kernel -y install kernel-lt
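Installing kernel-ml or kernel-lt does not automatically boot the system into the new kernel. On a GRUB2-based system such as CentOS/RHEL 7, a typical follow-up looks like this (the entry number and file path are assumptions; verify them on your system):

```shell
# Make the newest installed kernel (usually menu entry 0) the default:
grub2-set-default 0

# Regenerate the GRUB configuration so the change takes effect:
grub2-mkconfig -o /boot/grub2/grub.cfg

reboot

# After the reboot, confirm the running kernel version:
uname -r
```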

LOAD BALANCING WITH NGINX

Load Balancing in Nginx

       ——《Nginx From Beginner to Pro》

Now that you have learned about the basics of load balancing and advantages of using a software load balancer, let’s move forward and work on the Nginx servers you already created in the previous chapters.
Clean Up the Servers
Before setting up anything new, clean up the previous applications so that you can start afresh. This is to keep things simpler. You will be setting up applications in different ways in the upcoming sections of this chapter. The idea is to give you information about different scenarios from a practical perspective.
1. Log on to the WFE1 using ssh -p 3026 user1@127.0.0.1
2. Remove everything from the Nginx home directory.
sudo rm -rf /usr/share/nginx/html/*
3. Reset your configuration ( sudo vi /etc/nginx/nginx.conf ) to the following:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user - [$time_local] - $document_root - $document_uri - '
'$request - $status - $body_bytes_sent - $http_referer';
access_log /var/log/nginx/access.log main;
sendfile on;
keepalive_timeout 65;
index index.html index.htm;
include /etc/nginx/conf.d/*.conf;
}
4. Now, remove the entries in conf.d by using the following command:
sudo rm -f /etc/nginx/conf.d/*.conf
5. Repeat the steps for WFE2.

Create Web Content
Let’s create some content so that it is easy to identify which server served the request. In practical situations, the content on the WFE1 and WFE2 will be same for the same application. Run the following command on
both WFE1 and WFE2:
uname -n | sudo tee /usr/share/nginx/html/index.html
This command is pretty straightforward. It uses the output of uname -n and dumps it in a file called index.html in the default root location of Nginx. View the content and ensure that the output is different on both the servers.
$ cat /usr/share/nginx/html/index.html
wfe1.localdomain

Configure WFE1 and WFE2
The content is available on both servers now, but since you have already cleaned up the configuration you will need to re-create the configuration file by using the following command:
sudo cp /etc/nginx/conf.d/default.template /etc/nginx/conf.d/main.conf

The command will create a copy of the configuration for a default website. If you recall, the default.template contained the following text:
server {
listen 80;
server_name localhost;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
• Restart the service: sudo systemctl restart nginx.
• Repeat the steps on WFE2.
• Once done, you should be able to execute curl localhost on both servers, and you should get wfe1.localdomain and wfe2.localdomain as output respectively. Notice that even though the request is the same (curl localhost), the output is different. In practice, the output will be the same from both servers.
Set Up NLB Server
Setting up an NLB server is no different than setting up a regular web server. The installation steps are similar to what you have learned already. The configuration, however, is different and you will learn about it in the upcoming sections.
1. Create a new virtual machine called NLB.
2. Set up a NAT configuration as you have learned in previous chapters. It should look similar to Figure 8-4 .

3. Install Nginx (refer to chapter 2 ) on the NLB server.
4. Since it is a new server, when you execute curl localhost , you will see the
default welcome page. You can ignore it for the time being.
5. Open the configuration file ( /etc/nginx/conf.d/default.conf ) and make the
changes as follows:
upstream backend{
server 10.0.2.6;
server 10.0.2.7;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
6. Restart the service.
7. Try the following command a few times and notice how it gives you output from
WFE1 and WFE2 in an alternate fashion.
[root@nlb ~]# curl localhost
wfe1.localdomain
[root@nlb ~]# curl localhost
wfe2.localdomain
[root@nlb ~]# curl localhost
wfe1.localdomain
[root@nlb ~]# curl localhost
wfe2.localdomain

So, what just happened? Basically, you have set up a load balancer using Nginx and what you saw was the load balancer in action. It was extremely simple, right? There are a couple of directives at play here.
• upstream directive: The upstream directive defines a group of servers. Each server directive points to an upstream server. The server can listen on different ports if needed. You can also mix TCP and UNIX-domain sockets if required. You will learn more about it in the upcoming scenarios.
• proxy_pass directive: This directive sets the address of a proxied server. Notice that in this case, the address was defined as backend, which in turn contained multiple servers. By default, if a domain resolves to several addresses, all of them will be used in a round-robin fashion.
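As a sketch of the flexibility mentioned above, an upstream group can mix default ports, alternate ports, and UNIX-domain sockets in one block (the socket path here is purely illustrative, not part of the lab setup):

```nginx
upstream backend {
    server 10.0.2.6;                  # TCP, default port 80
    server 10.0.2.7:8080;             # TCP, alternate port
    server unix:/var/run/app.sock;    # UNIX-domain socket on the local host
}
```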
Load Balancing Algorithms
When a load balancer is configured, you need to think about various factors. It helps if you know the application and its underlying architecture. Once you have found the details, you will need to configure some parameters of Nginx so that you can route the traffic accordingly. There are various algorithms that you can use based on your need. You will learn about it next.
Round Robin
This is the default configuration. When the algorithm is not defined, the requests are served in round-robin fashion. At a glance, it might appear way too simple to be useful. But it is actually quite powerful. It ensures that your servers are equally balanced and each one is working equally hard.
Let’s assume that you have two servers, and due to the nature of your application you would like three requests to go to the first server (WFE1) and one request to the second server (WFE2). This way, you can route the traffic in a specific ratio to multiple servers. To achieve this, you can define weight to your server definitions in the configuration file as follows.
upstream backend{
server 10.0.2.6 weight=3;
server 10.0.2.7 weight=1;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
Reload Nginx configuration and try executing curl localhost multiple times. Note that three requests went to the WFE1 server, whereas one request went to WFE2.
[root@nlb ~]# curl localhost
wfe1.localdomain
[root@nlb ~]# curl localhost
wfe1.localdomain
[root@nlb ~]# curl localhost
wfe1.localdomain
[root@nlb ~]# curl localhost
wfe2.localdomain

In scenarios where you cannot easily determine the ratio or weight, you can simply use the least connected algorithm. It means that the request will be routed to the server with the least number of active connections.
This often leads to a good load-balanced performance. To configure this, you can use the configuration file like so:
upstream backend{
least_conn;
server 10.0.2.6 weight=1;
server 10.0.2.7 weight=1;
}
Without a load testing tool, it will be hard to determine the output using command line. But the idea is fairly simple. Apart from the least number of active connections, you can also apply weight to the servers, and it would work as expected.
IP Hash
There are quite a few applications that maintain state on the server, especially dynamic ones like PHP, Node, ASP.NET, and so on. To give a practical example, let's say the application creates a temporary file for a specific client and updates the client about the progress. If you use one of the round-robin algorithms, a subsequent request might land on another server, and the new server might have no clue about the file processing that started on the previous server. To avoid such scenarios, you can make the session sticky, so that once a request from a specific client has reached a server, Nginx continues to route that client's traffic to the same server. To achieve this, use the ip_hash directive like so:
upstream backend{
ip_hash;
server 10.0.2.6;
server 10.0.2.7;
}
The configuration above ensures that the request reaches only one specific server for the client based on the client’s IP hash key. The only exception is when the server is down, in which case the request can land on another server.
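One practical consequence: if you need to take an upstream server out of rotation temporarily, marking it down (rather than deleting its line) preserves the current mapping of client IP hashes to the remaining servers. A minimal sketch:

```nginx
upstream backend {
    ip_hash;
    server 10.0.2.6;
    server 10.0.2.7 down;   # temporarily out of rotation; other clients'
                            # IP-to-server mappings are preserved
}
```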
Generic Hash
A hash algorithm is conceptually similar to an IP hash. The difference here is that for each request the load
balancer calculates a hash that is based on the combination of text and Nginx variables that you can specify.
It sends all requests with that hash to a specific server. Take a look at the following configuration where hash
algorithm is used with variables $scheme (for http or https) and $request_uri (URI of the request):
upstream backend{
hash $scheme$request_uri;
server 10.0.2.6;
server 10.0.2.7;
}

Bear in mind that a hash algorithm will most likely not distribute the load evenly. The same is true for an IP hash. The reason you might still end up using it is your application's requirement for sticky sessions. Nginx PLUS offers more sophisticated configuration options when it comes to session persistence. The best use case for a hash is probably when you have a dynamic page that performs data-intensive operations whose results are cacheable. In this case, requests for that dynamic page can go to one server only, which caches the result and keeps serving the cached result, saving the effort required at the database side and on all the other servers.
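If remapping on topology changes is a concern, the hash directive also accepts a consistent parameter (available in open source Nginx since 1.7.2), which uses ketama consistent hashing so that adding or removing a server remaps only a small fraction of keys rather than most of them. A sketch:

```nginx
upstream backend {
    hash $scheme$request_uri consistent;   # ketama consistent hashing
    server 10.0.2.6;
    server 10.0.2.7;
}
```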
Least Time (Nginx PLUS), Optionally Weighted
Nginx PLUS has an additional algorithm that can be used. It is called the least time method where the load balancer mathematically combines two metrics for each server—the current number of active connections and a weighted average response time for past requests —and sends the request to the server with the lowest value. This is a smarter and more effective way of doing load balancing with heuristics.
You can choose the parameter on the least_time directive, so that either the time to receive the response
header or the time to receive the full response is considered by the directive. The configuration looks like so:
upstream backend{
least_time (header | last_byte);
server 10.0.2.6 weight=1;
server 10.0.2.7 weight=1;
}
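Note that (header | last_byte) above is syntax notation; in an actual configuration you pick one of the two parameters. For example (Nginx PLUS only):

```nginx
upstream backend {
    least_time header;         # route on fastest time-to-first-response-header
    server 10.0.2.6 weight=1;
    server 10.0.2.7 weight=1;
}
```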
Most Suitable Algorithm
There is no silver bullet or straightforward method to tell you which method will suit you best. There are plenty of variables that need to be carefully determined before you choose the most suitable method. In general, least connections and least time are considered to be best choices for the majority of the workloads.
Round robin works best when the servers have about the same capacity, host the same content, and the requests are pretty similar in nature. If the traffic volume pushes every server to its limit, round robin might push all the servers over the edge at roughly the same time, causing outages.
You should use load testing tools and various tests to figure out which algorithm works best for you. One thing that often helps you make good decision is the knowledge of the application’s underlying architecture.

If you are well aware about the application and its components, you will be more comfortable in doing
appropriate capacity planning.
You will learn about load testing tools, performance, and benchmarking in the upcoming chapters.
Load Balancing Scenarios
So far in this chapter you have seen an Nginx load balancer routing to the back-end Nginx servers. This is not a mandatory requirement. You can choose Nginx to route traffic to any other web server. As a matter of fact, that is what is done mostly in practical scenarios and as far as the request is HTTP based, it will just work.
Nginx routes the request based on the mapped URI. You can easily use Nginx to front-end PHP, ASP.NET, Node.js, or any other application for that matter and enjoy the benefits of Nginx, as you will see in the upcoming scenarios.

Nginx Routing Request to Express/Node.js
If you recall, in the previous chapter you configured Nginx for MEAN stack. Assuming WFE1 and WFE2 are hosting applications based on MEAN stack and the application is running on port 3000, your NLB server’s configuration will look like the following:
upstream nodeapp {
server 10.0.2.6:3000;
server 10.0.2.7:3000;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://nodeapp;
}
}
A common mistake that usually happens is that the additional ports are not opened in the firewall. So, you need to ensure that ports are opened explicitly by using the following command on both WFE1 and WFE2:
[user1@wfe1 ~]$ sudo firewall-cmd --permanent --add-port=3000/tcp
success
[user1@wfe1 ~]$ sudo firewall-cmd --reload
success
Once you have opened the ports, Nginx will start routing the request successfully. Note that the opened ports are not exposed to the Internet. It is just for Nginx that is load balancing the requests.
Passing the HOST Header
Since everything has been working in these simple demos, it might mislead you into thinking that all you need to pass to the back-end server is the URI. For real world applications you might have additional information in request headers that—if missed—will break the functionality of the application. In other words, the request coming from Nginx to the back-end servers will look different than a request coming directly from the client. This is because Nginx makes some adjustments to headers that it receives from the client. It is important that you are aware of these nuances.
• Nginx gets rid of any empty headers for performance reasons.
• Any header that contains an underscore is considered invalid and is eventually dropped from the headers collection. You can override this behavior by explicitly setting underscores_in_headers on;
• The "HOST" header is set to the value of $proxy_host, a variable that contains the domain name or IP address taken from the proxy_pass definition. In the configuration that follows, it will be backend.
• Connection header is added and set to close.

You can tweak the header information before passing on by using the proxy_set_header directive.
Consider the following configuration in the NLB:
upstream backend{
server 10.0.2.6;
server 10.0.2.7;
}
server {
listen 80;
location / {
proxy_set_header HOST $host;
proxy_pass http://backend;
}
}
In the previous configuration, an explicit HOST header has been set using proxy_set_header directive.
To view the effect, follow these steps:
• Ensure that your NLB configuration appears as the previous configuration block.
Restart Nginx service.
• On WFE1, change the nginx.conf ( sudo vi /etc/nginx/nginx.conf ) such that the
log_format has an additional field called $host as follows:
log_format main '$host - $remote_addr - $remote_user - [$time_local] - $document_root - $document_uri - $request - $status - $body_bytes_sent - $http_referer';
• Save the file and exit. Restart Nginx service.
• Switch back to NLB and make a few requests using curl localhost
• View the logs on the WFE1 using sudo tail /var/log/nginx/access.log -n 3.
[user1@wfe1 ~]$ sudo tail /var/log/nginx/access.log -n 3
localhost - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
localhost - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
localhost - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
• As you can see, the requests had localhost as the hostname and it is because you have used proxy_set_header HOST $host.
• To view what the result would have looked like without this header change, comment the line in NLB’s configuration:
location / {
# proxy_set_header HOST $host;
proxy_pass http://backend;
}

• Restart Nginx on NLB and retry curl localhost a few times.
• If you view the logs on WFE1 using the tail command, you should see an output
similar to this:
localhost - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
backend - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
backend - 10.0.2.9 - - - - /usr/share/nginx/html - /index.html - GET / HTTP/1.0 - 200 - 17 - -
• Notice the last couple of lines where the hostname appears as backend. This is the default behavior of Nginx if you don't set the HOST header explicitly. Based on your application, you might need to set this header explicitly or ignore it in the NLB configuration.
Forwarding IP Information
Since the requests are forwarded to the back end, the back-end servers have no information about where the requests actually came from; to them, the NLB is the client. There are scenarios where you might want to log information about the actual visitors. To do that, you can use proxy_set_header just as you did in the previous example, but with different variables like so:
location / {
proxy_set_header HOST $proxy_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://backend;
}
In this configuration apart from setting HOST header, you are also setting the following headers:
• X-Real-IP is set to $remote_addr variable that contains the actual client IP.
• X-Forwarded-For is another header set here, which contains $proxy_add_x_forwarded_for. This variable takes any incoming X-Forwarded-For header and appends $remote_addr to it, producing a comma-separated list of client IPs.
• To log the actual client IP, you should now modify the log_format to include $http_x_real_ip variable that contains the real client IP information.
• By default, X-Real-IP is stored in $http_x_real_ip. You can change this behavior by using real_ip_header X-Forwarded-For; in your http, server, or location block in order to use the value of the X-Forwarded-For header instead of the X-Real-IP header.
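Putting the bullets above together, a sketch of a back-end log_format that records the forwarded client IP. This assumes the NLB sets X-Real-IP as shown earlier; the format name realip is arbitrary:

```nginx
http {
    # $http_x_real_ip holds the actual visitor IP forwarded by the NLB;
    # $remote_addr here is the NLB's own address.
    log_format realip '$http_x_real_ip - $remote_addr - $remote_user - [$time_local] - $request - $status';
    access_log /var/log/nginx/access.log realip;
}
```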

Buffering
As you can guess, with an NLB in between the real back-end server, there are two hops for every request. This may adversely affect the client’s experience. If the buffers are not used, data that is sent from the back-end server immediately gets transmitted to the client. If the clients are fast, they can consume this immediately and buffering can be turned off. For practical purposes, the clients will typically not be as fast as the server in consuming the data. In that case, turning buffering on will tell Nginx to hold the back-end data temporarily, and feed that data to the client. This feature allows the back ends to be freed up quickly since they have to simply work and ensure that the data is fed to Nginx NLB. By default, buffering is on in Nginx
and controlled using the following directives:
• proxy_buffering: Default value is on, and it can be set in http, server, and location blocks.
• proxy_buffers number size: This directive allows you to set the number of buffers, along with their size, for a single connection. By default, the size is equal to one memory page, either 4K or 8K depending on the platform.
• proxy_buffer_size size: The headers of the response are buffered separately from the rest of the response. This directive sets that size, and defaults to the proxy_buffers size.
• proxy_max_temp_file_size size: If the response is too large, it can be stored in a temporary file. This directive sets the maximum size of the temporary file.
• proxy_temp_file_write_size size: This directive governs the size of data written to the file at a time. A value of 0 disables writing temporary files completely.
• proxy_temp_path path: This directive defines the directory where temporary files are written.
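A hypothetical location block pulling these directives together; the values are illustrative, not tuning recommendations:

```nginx
location / {
    proxy_buffering on;              # the default; shown here for clarity
    proxy_buffers 8 16k;             # 8 buffers of 16K each, per connection
    proxy_buffer_size 8k;            # separate buffer for the response headers
    proxy_max_temp_file_size 1024m;  # cap on the temporary spill file
    proxy_pass http://backend;
}
```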
Nginx Caching
Buffering in Nginx helps the back-end servers by offloading data transmission to the clients. But the request still reaches the back-end server to begin with. Quite often, you will have static content, like third-party JavaScript libraries, CSS, images, PDFs, etc., that doesn't change at all, or rarely changes. In these cases, it makes sense to keep a copy of the data on the NLB itself, so that subsequent requests can be served directly from the NLB instead of fetching the data every time from the back-end servers. This process is called caching.
To achieve this, you can use the proxy_cache_path directive like so in the HTTP block:
proxy_cache_path path levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m;
Before you use this directive, create the path as follows and set appropriate permissions:
mkdir -p /data/nginx/cache
chown nginx /data/nginx/cache
chmod 700 /data/nginx/cache
• Levels define the number of subdirectories Nginx will create to maintain the cached files. Having a large number of files in one flat directory slows down access, so it is recommended to have at least a two-level directory hierarchy.
• keys_zone defines the area in memory which contains information about cached file keys. In this case a 10MB zone is created and it should be able to hold about 80,000
keys (roughly).

• max_size is used to allocate 10GB space for the cached files. If the size increases, cache manager process trims it down by removing files that were used least recently.
• inactive=60m implies the number of minutes the cache can remain valid in case it is not used. Effectively, if the file is not used for 60 minutes, it will be purged from the cache automatically.
By default, Nginx caches all responses to requests made with the HTTP GET and HEAD methods. You can cache dynamic content too where the data is fetched from a dynamic content management system, but changes less frequently, using fastcgi_cache . You will learn about caching details in chapter 12 .
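Defining the cache path alone does not enable caching; a location must opt in with proxy_cache. A minimal sketch, assuming the my_cache zone above and the backend upstream from the earlier sections (the validity times are illustrative):

```nginx
server {
    listen 80;
    location / {
        proxy_cache my_cache;            # use the zone declared by proxy_cache_path
        proxy_cache_valid 200 302 10m;   # cache successful responses for 10 minutes
        proxy_cache_valid 404 1m;        # cache 404s only briefly
        proxy_pass http://backend;
    }
}
```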
Server Directive Additional Parameters
The server directive has more parameters that come in handy in certain scenarios. The parameters are fairly
straightforward to use and simply require you to use the following format:
server address [parameters]
You have already seen the server address in use with weight. Let’s learn more about some additional
parameters.
• max_fails=number: Sets the number of unsuccessful attempts before considering the server unavailable for a duration. If this value is set to 0, it disables the accounting of
attempts.
• fail_timeout=time: Sets the duration in which max_fails should happen. For example, if max_fails parameter is set to 3, and fail_timeout is set to 10 seconds, it would imply that there should be 3 failures in 10 seconds so that the server could be considered unavailable.
• backup: Marks the server as a backup server. It will be passed requests when the primary servers are unavailable.
• down: Marks the server as permanently unavailable.
• max_conns=number: Limits the maximum number of simultaneous active connections. Default value of 0 implies no limit.
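Combining these parameters, a sketch of an upstream block (the first two addresses reuse the earlier examples; the backup server's IP is hypothetical):

```nginx
upstream backend {
    server 10.0.2.6 max_fails=3 fail_timeout=10s;  # 3 failures within 10s marks it unavailable
    server 10.0.2.7 max_fails=3 fail_timeout=10s;
    server 10.0.2.8 backup;                        # receives traffic only when the primaries are down
}
```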
Configure Nginx (PLUS) for Health Checks
The free version of Nginx doesn't include a very important directive called health_check. This feature is available in Nginx PLUS, and enabling it gives you a lot of options related to the health of the upstream servers.
• interval=time: Sets the interval between two health checks. The default is 5 seconds, which means the upstream servers are checked every 5 seconds.
• fails=number: If the upstream server fails the check this many times, it is considered unhealthy. The default is 1.
• passes=number: Once considered unhealthy, the upstream server must pass the check this many times before it is considered healthy again. The default is 1.
• uri=path: Defines the URI used for health-check requests. The default is /.
• match=name: References a match block describing the expected response, which must be satisfied for the test to succeed. In the following configuration, the test ensures that the response has a status code of 200 and that the body contains "Welcome to nginx!".
http {
server {
location / {
proxy_pass http://backend;
health_check match=welcome;
}
}
match welcome {
status 200;
header Content-Type = text/html;
body ~ "Welcome to nginx!";
}
}
• If you specify multiple conditions, a single failure of any one of them causes the server to be considered unhealthy.
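The health-check parameters can be combined on a single directive; a sketch reusing the welcome match block from the example above (the uri value is illustrative):

```nginx
location / {
    proxy_pass http://backend;
    # probe /status every 10s; 3 failures mark the server unhealthy,
    # 2 subsequent passes bring it back
    health_check interval=10 fails=3 passes=2 uri=/status match=welcome;
}
```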
Activity Monitoring in Nginx (PLUS)
Nginx PLUS includes a real-time activity monitoring interface that provides load and performance metrics.
It uses a RESTful JSON interface, and hence it is very easy to customize. There are plenty of third-party monitoring tools that take advantage of the JSON interface to provide a comprehensive dashboard for performance monitoring.
You can also use the following configuration block to configure Nginx PLUS for status monitoring.
server {
listen 8080;
root /usr/share/nginx/html;
# Redirect requests for / to /status.html
location = / {
return 301 /status.html;
}
location = /status.html { }
location /status {
allow x.x.x.x/16; # permit access from local network
deny all; # deny access from everywhere else
status;
}
}

Status is a special handler in Nginx PLUS. The configuration here is using port 8080 to view the detailed status of Nginx requests. To give you a better idea of the console, the Nginx team has set up a live demo page that can be accessed at http://demo.nginx.com/status.html .
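Because the status handler speaks JSON, its output is easy to consume from scripts. A minimal sketch, using a hard-coded sample payload in place of a live `curl http://127.0.0.1:8080/status` call (the real document is much larger; the shape shown here is simplified):

```shell
# Sample stand-in for the JSON returned by the /status handler
sample='{"connections":{"accepted":1000,"dropped":0,"active":4}}'

# Extract the active connection count with sed (no external JSON tools needed)
active=$(printf '%s' "$sample" | sed -n 's/.*"active":\([0-9]*\).*/\1/p')
echo "active connections: $active"
```

In production you would feed the output of curl through a proper JSON parser such as jq instead of sed.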
Summary
In this chapter, you have learned about the basic fundamentals of high availability and why it matters. You should also be comfortable with the basic concepts about hardware and software load balancing. Nginx is an awesome product for software load balancing and you have learned about how easily you can set it up in your web farm. The architecture of Nginx allows you to have a very small touch point for front-end servers, and the flexibility ensures that you can customize it precisely based on your requirements. You can scale out your farm easily with Nginx, and use Nginx PLUS to achieve even more robustness in your production farm when the need arises.

Synchronizing the system clock with NTP and the chrony suite

—–《CentOS 7 Linux Server Cookbook, 2nd Edition》

In this recipe, we will learn how to synchronize the system clock with an external time server using the Network Time Protocol (NTP) and the chrony suite. From the need to time-stamp documents, e-mails, and log files, to securing, running, and debugging a network, or to simply interact with shared devices and services, everything on your server is dependent on maintaining an accurate system clock, and it is the purpose of this recipe to show you how this can be achieved.

Getting ready

To complete this recipe, you will require a working installation of the CentOS 7 operating system with root privileges, a console-based text editor of your choice, and a connection to the Internet to facilitate downloading additional packages.

How to do it…

In this recipe, we will use the chrony service to manage our time synchronization. As chrony is not installed by default on CentOS minimal, we will start this recipe by installing it:

  1. To begin, log in as root and install the chrony service, then start it and verify that it is running:

yum install -y chrony
systemctl start chronyd
systemctl status chronyd

  2. Also, if we want to use chrony permanently, we have to enable it at server startup:

systemctl enable chronyd

  3. Next, we need to check whether the system already uses NTP to synchronize the system clock over the network:

timedatectl | grep "NTP synchronized"

  4. If the output from the last step showed No for NTP synchronized, we need to enable it using:

timedatectl set-ntp yes

  5. If you run the command from step 3 again, you should see that NTP synchronized now shows Yes.
  6. The default installation of chrony uses public servers that have access to an atomic clock, but to optimize the service we will make a few simple changes to control which time servers are used. To do this, open the main chrony configuration file with your favorite text editor, as shown here:

vi /etc/chrony.conf

  7. In the file, scroll down and look for the lines containing the following:

server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst

  8. Replace the values shown with a list of preferred local time servers:

server 0.uk.pool.ntp.org iburst
server 1.uk.pool.ntp.org iburst
server 2.uk.pool.ntp.org iburst
server 3.uk.pool.ntp.org iburst

Note

Visit http://www.pool.ntp.org/ to obtain a list of servers geographically near your current location. Remember, using three or more servers tends to increase the accuracy of the NTP service.

  9. When complete, save and close the file, then restart the service using the systemctl command:

systemctl restart chronyd

  10. To check whether the modifications in the config file were successful, you can use the following command:

systemctl status chronyd

  11. To check whether chrony is taking care of your system time synchronization, use the following:

chronyc tracking

  12. To check the network sources chrony uses for synchronization, use the following:

chronyc sources
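The NTP check from step 3 can also be scripted. A small sketch that parses the relevant timedatectl line; the sample output below is hard-coded so the logic is visible (on a real system you would substitute the output of `timedatectl` itself):

```shell
# Hard-coded sample of the line timedatectl prints (illustrative)
sample="     NTP synchronized: no"

# Isolate the yes/no value after the colon
state=$(printf '%s\n' "$sample" | grep "NTP synchronized" | awk -F': ' '{print $2}')

if [ "$state" = "no" ]; then
    echo "NTP sync disabled - would run: timedatectl set-ntp yes"
else
    echo "NTP sync already enabled"
fi
```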

How it works…

Our CentOS 7 operating system’s time is set on every boot from the hardware clock, a small battery-driven clock located on the motherboard of your computer. Often, this clock is too inaccurate or has not been set correctly, so it is better to get your system time from a reliable source over the Internet (one that uses real atomic time). The chrony daemon, chronyd, sets and maintains the system time by synchronizing with a remote server, using NTP for communication.

So, what have we learned from this experience?

As a first step, we installed the chrony service, since it is not available by default on a CentOS 7 minimal installation. Afterwards, we enabled the synchronization of our system time with NTP using the timedatectl set-ntp yes command.

After that, we opened the main chrony configuration file, /etc/chrony.conf, and showed how to change the external time servers used. This is particularly useful if your server is behind a corporate firewall and you have your own NTP server infrastructure.

Having restarted the service, we then learned how to check and monitor our new configuration using the chronyc command. This is a useful command-line tool (the c stands for client) for interacting with and controlling a chrony daemon, locally or remotely. We used the tracking parameter with chronyc, which showed us detailed information about the current NTP synchronization with a specific server. Please refer to the man pages of the chronyc command if you need further help with the properties shown in the output (man chronyc).

We also used the sources parameter with the chronyc program, which showed us an overview of the used NTP time servers.

You can also use the older date command to validate correct time synchronization. Bear in mind that synchronizing your server may not be instantaneous, and it can take a while for the process to complete. However, you can relax in the full knowledge that you now know how to install, manage, and synchronize your time using the NTP protocol.

There’s more…

In this recipe, we set our system’s time using the chrony service and the NTP protocol. Usually, system time is set as Coordinated Universal Time (UTC) or world time, which means it is one standard time used across the whole world. From it, we need to calculate our local time using time zones. To find the right time zone, use the following command (read the Navigating textfiles with less recipe to work with the output):

timedatectl list-timezones

If you have found the right time zone, write it down and use it in the next command; for example, if you are located in Germany and are near the city of Berlin, use the following command:

timedatectl set-timezone Europe/Berlin

Use timedatectl again to check if your local time is correct now:

timedatectl | grep "Local time"
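The system clock itself stays in UTC; the time zone only changes how that instant is displayed. date(1) honors the TZ variable, which makes this easy to verify without touching the system default:

```shell
# Capture the same instant (as epoch seconds) under two zones
utc_epoch=$(TZ=UTC date +%s)
berlin_epoch=$(TZ=Europe/Berlin date +%s)

# Epoch seconds are zone-independent, so the two values (captured a moment
# apart) differ by at most one second
diff=$((berlin_epoch - utc_epoch))
echo "epoch difference across zones: $diff"

# Only the human-readable rendering changes
TZ=Europe/Berlin date
```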

Finally, if it is correct, you can synchronize your hardware clock with your system time to make it more precise:

hwclock --systohc

Setting up Nagios as a monitoring server

——–《Mastering CentOS 7 Linux Server》

For this chapter, we are going to work with Nagios as our best choice, considering its performance and the simplicity of its setup and configuration. As we have already mentioned, Nagios is open source software that can be installed on multiple Linux distributions; in our case, we will install it on CentOS 7. It is a network, infrastructure, and server monitoring tool that monitors switches, applications, and services. Its alerting feature informs users about issues that occur while the infrastructure is being monitored, and it also alerts them once the issues have been fixed.

Other than monitoring, Nagios also has the ability to identify system or network issues that could cause problems, with real-time problem notification. Furthermore, it has some security features, by virtue of which it can identify security breaches in the infrastructure.

In this section, we are going to install Nagios on a machine. It will act as our monitoring server. We need a test client to have it monitored. The client will have some common services; we will try to mess with them a little to test the Nagios notification service.

Let’s talk a bit about the things we need before we start our monitoring server installation. First, we need to have the Linux Apache MySQL PHP (LAMP) services installed on our machine. Since Nagios will be accessible via a web interface, having a web server installed is obviously required. For a more detailed and secure web server installation, you can go back and check out Chapter 3, Linux for Different Purposes.

Nagios won’t be installed from the CentOS 7 package manager. We have to download it and then compile it, so we need basic compiling tools and a downloading tool to download the Nagios source code archive. We will install these using Yum, the CentOS package manager:

$ sudo yum install gcc cpp glibc glibc-common glibc-devel glibc-headers gd gd-devel kernel-headers libgomp libmpc mpfr make net-snmp openssl-devel xinetd

We wait until the installation is done and then proceed to the next step of the preparation. In order to run the Nagios process, we need to create a Nagios user and give it a password:

$ sudo useradd nagios

$ sudo passwd nagios

We need to make sure that we use well-secured passwords whenever we create one. Next, we create a new group called nagcmd to allow external commands to be submitted through the web interface once it’s up and running. Then, we need to add both the nagios and apache users to this group:

$ sudo groupadd nagcmd

$ sudo usermod -a -G nagcmd nagios

$ sudo usermod -a -G nagcmd apache

We move on to the final step, which is downloading the source archive for the latest version of Nagios. To do the downloading, we will be using Wget, a tool that we have already installed.

During this tutorial, we will be using Nagios 4:

$ wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-4.1.1.tar.gz

After downloading the latest Nagios stable version, we need to extract it. Since Nagios will be installed where we extract its source, we should put it in an appropriate location. We have a choice between /usr/local and /opt; we need to copy the source package there and then extract it. For this example, we will go with /usr/local:

$ sudo cp nagios-4.1.1.tar.gz /usr/local/

$ cd /usr/local/

$ sudo tar xzvf nagios-4.1.1.tar.gz

After extracting the archive, a new folder is created, named after Nagios and the corresponding version. We need to go inside the folder to start compiling:

$  cd  nagios-4.1.1/

Just before we start the compiling process, we need to run the configuration script that will help run the compiling process with no error by configuring it to use the available compiling tools that we have installed previously:

$ sudo ./configure --with-command-group=nagcmd

This configure option sets the group we just created as the one that will run internal commands.

Now, we are actually able to start the compiling process:

$ sudo make all

This command can take a lot of time depending on the machine’s processing power. After doing this, we proceed to the installation phase. We need to install Nagios, its initialization scripts, some sample configuration files, and the Nagios web interface:

$ sudo make install

$ sudo make install-commandmode

$ sudo make install-init

$ sudo make install-config

$ sudo make install-webconf

Before moving on to the next step, we need to set up our Nagios administrator user and password to access the web interface:

$ sudo htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Then, we type the password twice to create and configure our web interface administrator.

After Nagios has been installed, we can add some useful plugins. First, we need to download the latest stable source version of these plugins. We go to the /usr/local folder and download the plugin source archive there; installing everything in one place keeps things organized for future diagnostics:

$ cd /usr/local

Then, we start the download using Wget:

$ sudo wget http://nagios-plugins.org/download/nagios-plugins-2.1.1.tar.gz

Note

We used the sudo command because, during the download, the file is written to a folder where a regular user has no write access.

After completing the download, we can start extracting the archive using the same command:

$ sudo tar xzvf nagios-plugins-2.1.1.tar.gz

Then, we enter the directory we just created:

$ cd nagios-plugins-2.1.1/

Again, we need to compile the source files. Just before compiling, we need to run the configuration script with some useful options, as follows:

$ sudo ./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl

For the configuration option, we set the user and group Nagios as the default to access and use the plugins. Also, we use OpenSSL to secure the plugin usage.

Then, we start compiling the plugins:

$ sudo make

After that, we can start the installation:

$ sudo make install

Once this command executes with no errors, our Nagios plugins are installed, and we can move on to setting up the Nagios Remote Plugin Executor (NRPE). This is a Nagios agent that simplifies remote system monitoring using scripts hosted on the remote systems. We need to download, configure, compile, and install it in the same way. We first find the latest stable version of the source package, and then download it to /usr/local:

$ cd /usr/local/

$ sudo wget http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz

Next, we extract it at the same location, and go inside the folder to start the compilation:

$ sudo tar xzvf nrpe-2.15.tar.gz

$ cd nrpe-2.15/

We start by running the NRPE configuration script, defining the user and group that will run the Nagios process and pointing the build at the SSL libraries:

$ sudo ./configure --enable-command-args --with-nagios-user=nagios --with-nagios-group=nagios --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu

Then, we run the compiling command, followed by the installation commands:

$ sudo make all

$ sudo make install

$ sudo make install-xinetd

$ sudo make install-plugin

$ sudo make install-daemon

$ sudo make install-daemon-config

Next, we configure the xinetd startup script:

$ sudo nano /etc/xinetd.d/nrpe

We need to look for the line that starts with only_from and then, add the IP address of the monitoring server. It can be a public or a private address depending on where we want to make the server accessible from:

only_from = 127.0.0.1 10.0.2.1

Then, we save the file so that only our Nagios server can communicate with NRPE. After that, we add the following line to define the port number for the NRPE service:

$ echo "nrpe 5666/tcp # NRPE" | sudo tee -a /etc/services
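A note on this step: with `sudo echo … >> /etc/services` the redirection is performed by the calling user's shell, not by root, so it fails unless you are already root; piping through `tee -a` (or using a root shell) avoids that. The sketch below also guards against duplicate entries, using a temp file as a stand-in for /etc/services:

```shell
services=$(mktemp)                  # stand-in for /etc/services in this sketch
echo "ssh 22/tcp" > "$services"

# Append a service entry only if it is not already present
add_service() {
    grep -q "^$2 " "$1" || echo "$2 $3 # $4" >> "$1"
}

add_service "$services" nrpe 5666/tcp NRPE
add_service "$services" nrpe 5666/tcp NRPE   # second call is a no-op

grep -c '^nrpe ' "$services"   # prints 1
```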

To have this configuration active and running, we need to restart xinetd to launch NRPE:

$ sudo service xinetd restart

Now, we have our Nagios monitoring server officially installed. We can proceed with the configuration steps. We go to the Nagios main configuration file and activate the folder that will store all the configuration files:

$ sudo nano /usr/local/nagios/etc/nagios.cfg

Then, we uncomment the following line, save the file, and exit:

cfg_dir=/usr/local/nagios/etc/servers

Note

This is just an example for a server. The same can be done for network equipment, workstations, or any other type of network-connected machine.

We create the configuration folder that will store the configuration file for each machine that will be monitored:

$ sudo mkdir /usr/local/nagios/etc/servers

Then, we move on to configure the Nagios contacts file to set the e-mail address associated with the Nagios administrator. Usually, it is used to receive alerts:

$ sudo nano /usr/local/nagios/etc/objects/contacts.cfg

Now, we need to change the administrator e-mail address. To do so, we need to type in the right one after the email option:

email    packtadmin@packt.co.uk    ; <<***** CHANGE THIS TO YOUR EMAIL ADDRESS ******

 

Then, we save the file and exit it.

Now, we proceed to the check_nrpe command configuration. We start by adding a new command to our Nagios server:

$ sudo nano /usr/local/nagios/etc/objects/commands.cfg

We add the following lines at the end:

define command{
    command_name check_nrpe
    command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

We save the file and exit to allow the new command to become usable.

Now, we go ahead and configure the access restriction to IP addresses that can access the Nagios web interface:

$ sudo nano /etc/httpd/conf.d/nagios.conf

We need to comment these two lines:

Order allow,deny
Allow from all

Next, we uncomment the following three lines:

#     Order deny,allow

#     Deny from all

#     Allow from 127.0.0.1

Note

These lines appear twice in the configuration file, so we need to do the same thing twice in the same file. This step is only for reinforcing Nagios security.

We can always add any network or address to allow it to have access to the monitoring server:

Allow from 127.0.0.1 10.0.2.0/24

We can always check whether there is any configuration error in the Nagios configuration file using the following command:

$ /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Just before starting Nagios, we need to make the Nagios CGI accessible by changing SELinux from enforcing mode to permissive:

$ sudo nano /etc/selinux/config

Then, we change this line to look like the following:

SELINUX=permissive

Now, we can restart the Nagios service and add it to the startup menu. We also need to restart the Apache service:

$ sudo systemctl start nagios.service

$ sudo systemctl enable nagios.service

$ sudo systemctl restart httpd.service

We can now access the Nagios server, either from the server itself or from a machine on a network that is allowed to reach it. In a web browser, we go to http://Nagios_server_IP_Address/nagios, then type the admin username, nagiosadmin, and the password defined earlier, to get access to the Nagios interface.

Now, we move on to our client server—the one that we want to monitor using Nagios. First, we need to install the required packages. For CentOS 7, we need to have the EPEL repository installed in order to get the required packages:

$ sudo yum install epel-release

Now, we can install the Nagios plugins and NRPE:

$ sudo yum install nrpe nagios-plugins-all openssl

Let’s start by updating the NRPE configuration file:

$ sudo nano /etc/nagios/nrpe.cfg

We have to find the line that starts with allowed_hosts and add the IP address of our monitoring server:

allowed_hosts=127.0.0.1,10.0.2.1

Then, we save and exit the file. To complete the configuration, we need to start the NRPE service and add it to the startup menu:

$ sudo systemctl start nrpe.service

$ sudo systemctl enable nrpe.service

Once we are done configuring the host that we want to monitor, we go to the Nagios server to add it to the configuration folder.

On the Nagios server, we need to create a file named after the machine. We can use the machine hostname or something that indicates the machine’s role:

$ sudo nano /usr/local/nagios/etc/servers/packtserver1.cfg

Then, we add the following lines, replacing host_name with the client hostname, the alias value with a short description of the server’s main job, and address with the server IP address:

define host {
    use                   linux-server
    host_name             packtserver1
    alias                 Packt Apache server
    address               10.0.2.12
    max_check_attempts    5
    check_period          24x7
    notification_interval 30
    notification_period   24x7
}

With this configuration saved, Nagios will only monitor whether the host is up or down. To make it do more, we need to add some services to monitor, such as HTTP and SSH. Also, we are adding the option to check whether the server is active. We need to open the same file and define a service block for each service that we want to monitor:

$ sudo nano /usr/local/nagios/etc/servers/packtserver1.cfg

define service {
    use                   generic-service
    host_name             packtserver1
    service_description   SSH
    check_command         check_ssh
    notifications_enabled 0
}

define service {
    use                   generic-service
    host_name             packtserver1
    service_description   HTTP
    check_command         check_http
    notifications_enabled 0
}

define service {
    use                   generic-service
    host_name             packtserver1
    service_description   PING
    check_command         check_ping!100.0,20%!500.0,60%
}

 

Then, we save the file and reload the Nagios service:

$ sudo systemctl reload nagios.service

We will see the new server on the host list and its services on the services list. To test whether Nagios is doing its job, we disable the SSH service:

$ sudo systemctl stop sshd.service

Then, on the web interface, we can watch the service go from green to red. The red signal means that the test for that service has failed or returned nothing, that is, the service is either disabled or inaccessible. An error notification e-mail will be sent to the Nagios administrator.

After that, we try the second test, to start the service:

$ sudo systemctl start sshd.service

To indicate that the service is back, another e-mail is received with the new status, and all the service’s information turns green again.

Now, after setting up the first server, we can go ahead and add all the machines we need to monitor, including switches, printers, and workstations. Also, to be practical, we should add only the services we care about. If a server runs a number of services and we use only two of them, it is pointless to add them all and overload the server dashboard and the administrator’s mailbox with alerts we don’t care about, which end up treated as spam.

Now, we will configure the NRPE daemon to receive information from the clients about their status. First, on the Nagios server, we edit the xinetd NRPE configuration file to specify which IP addresses the service should accept connections from:

$ sudo nano /etc/xinetd.d/nrpe

We need to add the IP address after the only_from option:

only_from = 127.0.0.1 10.0.2.1

Then, we need to add the NRPE service to the system services:

$ sudo nano /etc/services

We add the following line at the end of the file:

nrpe 5666/tcp # NRPE

To apply the change, we restart the xinetd service:

$ sudo systemctl restart xinetd

Then, we go to the client and make these modifications:

$ sudo /usr/lib/nagios/plugins/check_users -w 5 -c 10

$ sudo /usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20

$ sudo /usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/sda1

These three commands run the Nagios plugins that check logged-in users, system load, and disk usage. In our case, the disk is defined as sda1; we can check the naming of the disks using the lsblk command.
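What Nagios actually consumes from these plugins is their exit code. A minimal sketch of the convention (the sample return code below is illustrative, not a real plugin run):

```shell
# Nagios plugin exit-code convention: 0=OK, 1=WARNING, 2=CRITICAL, other=UNKNOWN
status_name() {
    case "$1" in
        0) echo OK ;;
        1) echo WARNING ;;
        2) echo CRITICAL ;;
        *) echo UNKNOWN ;;
    esac
}

rc=2   # e.g. what check_disk returns when usage crosses the -c 10% threshold
echo "check_disk state: $(status_name "$rc")"   # prints: check_disk state: CRITICAL
```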

Setting up a VPN server on a CentOS 7 server

——-《Mastering CentOS 7 Linux Server》

OpenVPN is an open source software application that implements virtual private network (VPN) techniques for creating secure point-to-point or site-to-site connections in routed or bridged configurations, as well as remote access facilities. As a requirement for this section, we need a CentOS 7 server with the capacity to install some packages and make some changes to the network configuration files (Internet and root access). At a later stage, we may need to create some authentication certificates; we will cover how to do that too. First, we will start with the installation of the required packages. OpenVPN isn’t available in the default CentOS standard repository, so we need to add the EPEL repository that contains the popular additional packages:

$ sudo yum install epel-release

After this command is done, we can install OpenVPN. We also need an RSA generator to create the SSL key pairs that we will use to secure the VPN connection:

$ sudo yum install openvpn easy-rsa

Once the command finishes, OpenVPN and easy-rsa are successfully installed on the system.

Now we move to the configuration part of OpenVPN. Since OpenVPN ships an example configuration file in its documentation directory, we are going to use the server.conf file as our initial configuration and build on that. To do so, we need to copy it to the /etc/openvpn directory:

$ sudo cp /usr/share/doc/openvpn-*/sample/sample-config-files/server.conf /etc/openvpn/

Then we can edit it to suit our needs:

$ sudo nano /etc/openvpn/server.conf

After opening the file, we need to uncomment some lines and make a few small changes, as follows (in nano, press Ctrl + W and type the text we are looking for). First, we need to set the Diffie-Hellman key length to 2048 bits, so we make sure that the option line indicating the DH filename looks like this:

dh  dh2048.pem

Note

Some articles suggest that a 1024-bit DH key is vulnerable, so we recommend using a DH key of 2048 bits or more for better security. The vulnerability is called Logjam; for more details, you can read more about it at: http://sourceforge.net/p/openvpn/mailman/message/34132515/

Then we need to uncomment the line push "redirect-gateway def1 bypass-dhcp", which tells the client to redirect all its traffic through OpenVPN.

Next we need to set a DNS server to the client, since it will not be able to use the one provided by the ISP. Again, I will go with the Google DNS 8.8.8.8 and 8.8.4.4:

push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"

Finally, so that OpenVPN runs smoothly, we need it to drop its privileges after startup. To do so, we run it as the unprivileged user and group called nobody:

user nobody
group nobody

Then save the file and exit.

By now, the configuration part of the OpenVPN service is done. We move on to the certificate and key generation, where we use the scripts provided by Easy RSA. We start by creating a directory for Easy RSA keys inside the OpenVPN configuration folder:

$ sudo mkdir -p /etc/openvpn/easy-rsa/keys

Then we need to populate the folder with the predefined scripts of Easy RSA that generate keys and certificates:

$ sudo cp -rf /usr/share/easy-rsa/2.0/* /etc/openvpn/easy-rsa/

To perform an easy VPN setup, we will start by typing our information once and for all in the vars file:

$ sudo nano /etc/openvpn/easy-rsa/vars

We basically change the lines that start with export KEY_ to update their values to match those of the desired organization; at some point we may need to uncomment them:

export KEY_COUNTRY="UK"
export KEY_PROVINCE="GL"
export KEY_CITY="London"
export KEY_ORG="City-Center"
export KEY_EMAIL="user@packt.co.uk"
export KEY_OU="PacktPublishing"

# X509 Subject Field
export KEY_NAME="server"
export KEY_CN="openvpn.packt.co.uk"

Then save the file and exit.

The field KEY_NAME represents the name of the files .key and .crt.

The field KEY_CN is where we should put the domain or the sub-domain that leads to our VPN server.
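The KEY_* variables end up as fields of the certificate's subject. Assembling the core of the subject string by hand shows the mapping (values taken from the vars file above; the full subject easy-rsa builds also includes OU, name, and emailAddress, which this sketch omits):

```shell
# Values as set in the vars file
KEY_COUNTRY="UK"; KEY_PROVINCE="GL"; KEY_CITY="London"
KEY_ORG="City-Center"; KEY_CN="openvpn.packt.co.uk"

# The C/ST/L/O/CN fields of the resulting X.509 subject
subj="/C=$KEY_COUNTRY/ST=$KEY_PROVINCE/L=$KEY_CITY/O=$KEY_ORG/CN=$KEY_CN"
echo "$subj"
```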

To make sure that no issues arise during our use of the OpenSSL configuration file due to a version update, we will remove the version from the filename:

$ sudo cp /etc/openvpn/easy-rsa/openssl-1.0.0.cnf /etc/openvpn/easy-rsa/openssl.cnf

Now we move on to creating the certificates and keys. We need to be in the /etc/openvpn/easy-rsa folder to run the scripts:

$ cd /etc/openvpn/easy-rsa

Then we source the variables. Note that source is a shell builtin, so it must run inside a root shell rather than through sudo:

# source ./vars

After that we clean any old generated keys and certificates:

$ sudo ./clean-all

Then we build the certification authority, which has its information already defined as default options:

$ sudo ./build-ca

Now we create the keys and certificates for our VPN server. We skip the challenge password phase by pressing Enter. Then we make sure to validate by typing Y for the last step:

$ sudo ./build-key-server server

When running this command, we should see the following message if it is running correctly:

Check that the request matches the signature
Signature ok
The Subject's Distinguished Name is as follows
countryName            :PRINTABLE:'UK'
stateOrProvinceName    :PRINTABLE:'GL'
localityName           :PRINTABLE:'London'
organizationName       :PRINTABLE:'City-Center'
organizationalUnitName :PRINTABLE:'PacktPublishing'
commonName             :PRINTABLE:'server'
name                   :PRINTABLE:'server'
emailAddress           :IA5STRING:'user@packt.co.uk'

Also, we need to generate the Diffie-Hellman (dh) key exchange. This may take a while longer, as compared to the other commands:

$  sudo  ./build-dh

After finishing this step, we will have all our keys and certificates ready. We need to copy them so they can be used by our OpenVPN service:

$   cd   /etc/openvpn/easy-rsa/keys

$  sudo  cp  dh2048.pem  ca.crt  server.crt  server.key  /etc/openvpn

All the clients of this VPN server need certificates to get authenticated, so we need to share keys and certificates with the desired clients. It is best to generate a separate key for each client that needs to connect.

For this example, we will only generate keys for one client:

$   cd   /etc/openvpn/easy-rsa

$  sudo  ./build-key  client
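When several clients need access, the same build-key script can simply be run once per client name. A minimal sketch, assuming hypothetical client names (the real run happens on the VPN server from /etc/openvpn/easy-rsa):

```shell
# Hypothetical client names; on the server each one would get its own
# .crt/.key pair under /etc/openvpn/easy-rsa/keys.
CLIENTS="laptop phone tablet"
GENERATED=""
for CLIENT in $CLIENTS; do
    # On the VPN server this line would be: sudo ./build-key "$CLIENT"
    GENERATED="$GENERATED $CLIENT"
done
echo "would generate keys for:$GENERATED"
```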

With this step, we can say that we are done with the certificates.

Now for the routing step. We will do the routing configuration using iptables directly, without needing firewalld.

To use only iptables, we first make sure that its services are installed:

$ sudo yum install iptables-services

Then disable the firewalld service:

$  sudo  systemctl  mask  firewalld

$  sudo  systemctl  enable  iptables

$  sudo  systemctl  stop  firewalld

$  sudo  systemctl  start  iptables

$ sudo iptables --flush

Then we add the rule to iptables that does the forwarding of the routing to the OpenVPN subnet:

$ sudo iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -o eth0 -j MASQUERADE

$   sudo   iptables-save   >   /etc/sysconfig/iptables

Then we need to enable IP forwarding in sysctl by editing the file sysctl.conf:

$ sudo nano /etc/sysctl.conf

Then add the following line:

net.ipv4.ip_forward = 1

Finally, restart the network service so this configuration can take effect:

$ sudo systemctl restart network.service

We can now start the OpenVPN service, but before we do this, we need to add it to

systemctl:

$ sudo systemctl -f enable openvpn@server.service

Then we can start the service:

$ sudo systemctl start openvpn@server.service

If we want to check whether the service is running, we can use the command systemctl:

$  sudo  systemctl  status  openvpn@server.service

We should see this message with the activity status active (running):

openvpn@server.service - OpenVPN Robust And Highly Flexible Tunneling Application On server
   Loaded: loaded (/usr/lib/systemd/system/openvpn@.service; enabled)
   Active: active (running) since Thu 2015-07-30 15:54:52 CET; 25s ago

After this check, we can say that our VPN server configuration is done. We can now go to the client configuration regardless of the operating system. We need to copy the certificates and the keys from the server. We need to copy these three files:

/etc/openvpn/easy-rsa/keys/ca.crt

/etc/openvpn/easy-rsa/keys/client.crt

/etc/openvpn/easy-rsa/keys/client.key

There are a variety of tools to copy these files from the server to any client. The easiest is scp, the secure copy command between two Unix machines over SSH. For Windows machines we can use folder-sharing tools such as Samba, or an SCP equivalent called WinSCP.

From the client machine, we start by copying the desired files:

$   scp   user@openvpn.packt.co.uk:/etc/openvpn/easy-rsa/keys/ca.crt     /home/user/

$  scp    user@openvpn.packt.co.uk:/etc/openvpn/easy-rsa/keys/client.crt    /home/user/

$  scp    user@openvpn.packt.co.uk:/etc/openvpn/easy-rsa/keys/client.key   /home/user/
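The three scp invocations can also be collapsed into one using bash brace expansion; a sketch with the same example user and host as above (the command is echoed first so the expansion can be inspected before running it for real):

```shell
# Bash expands the brace list into three remote paths for one scp run.
REMOTE="user@openvpn.packt.co.uk:/etc/openvpn/easy-rsa/keys"
echo scp "$REMOTE"/{ca.crt,client.crt,client.key} /home/user/
# scp "$REMOTE"/{ca.crt,client.crt,client.key} /home/user/
```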

After the copying is done we should create a file, client.ovpn, which is a configuration file for the OpenVPN client that helps set up the client to get connected to the VPN network provided by the server. The file should contain the following:

client

dev tun

proto udp

remote server.packt.co.uk 1194

resolv-retry    infinite

nobind

persist-key

persist-tun

comp-lzo

verb 3

ca  /home/user/ca.crt

cert   /home/user/client.crt

key /home/user/client.key

We need to make sure that the cert and key entries match the name used when generating the client's keys and certificate, that remote points to the public IP address or domain name of the server, and that the paths of the three files copied from the server are correct.
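Before connecting, it can help to confirm that the required directives are all present in the file; a minimal sketch (the check_ovpn helper is ours, not part of OpenVPN):

```shell
# Print any of the required client.ovpn directives that are missing.
check_ovpn() {
    MISSING=""
    for DIRECTIVE in client remote ca cert key; do
        grep -q "^$DIRECTIVE" "$1" || MISSING="$MISSING $DIRECTIVE"
    done
    echo "missing:$MISSING"
}
```

For example, check_ovpn /home/user/client.ovpn should print missing: with nothing after it when the file is complete.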

The file client.ovpn can be used with multiple VPN clients (the OpenVPN client for Linux, Tunnelblick for Mac OS X, OpenVPN Community Edition Binaries for Windows) to configure them to connect to the VPN.

On a CentOS 7 machine we will use the OpenVPN client. To use this configuration, we run the command openvpn --config:

$ sudo openvpn --config ~/path/to/client.ovpn

By getting the client connected to the VPN server, we can confirm that our VPN service is working well.

Security Baseline

IT Security Management
Create Date: 9 October, 2006
Last Update: 03 June, 2009
Version: 1.2
Contact: ITSM 02 6852

Document Version History

 

Version No. | Date       | Created By   | Detail          | Reviewed by | Authorized by
1.0         | 21/09/2007 | Suthinan A.  | Initial release | SMAD        | SMAD
1.1         | 28/07/2008 | Naraongsak V | Review          | ITSM        | ITSM
1.2         | 03/06/2009 | Naraongsak V | Review          | ITSM        | ITSM

Red Hat Enterprise Linux Security Baseline Checklist

Action Outcome and comments
Apply latest OS Patches
Validate your system before making changes
Configure SSH
Enable System Accounting
Remove unnecessary software package
Disable standard services
Disable telnet
Disable FTP
Disable rlogin/rsh/rcp
Disable TFTPServer
Disable IMAP
Disable POP
Set Daemon umask
Disable xinetd
Disable sendmail Server
Disable GUI Login
Disable X Font Server
Disable standard boot services
Disable SMB (Windows File Sharing) Processes
Disable NFS Server process
Disable NFS client processes
Disable NIS client processes
Disable NIS Server processes
Disable RPC Portmap process
Disable netfs script
Disable Printer Daemon
Disable Web Server processes
Disable SNMP
Disable DNS Server
Disable SQL Server processes
Disable Webmin
Disable Squid Cache Server.
Disable Kudzu Hardware Detection
Network Parameter Modifications
Additional Network Parameter Modifications
Capture messages sent to syslog AUTHPRIV facility
Turn on additional logging for FTP daemon
Confirm permissions on system log files
Configure syslogd to send logs to a remote LogHost
Add ‘nodev’ option to appropriate partitions in  /etc/fstab

 

Add ‘nosuid’ and ‘nodev’ Option For Removable Media In /etc/fstab
Disable User-Mounted Removable File Systems
Verify passwd, shadow, and group File Permissions
World-Writable Directories Should Have Their Sticky Bit Set
Find Unauthorized World-Writable Files
Find Unauthorized SUID/SGID System Executables
Find All Unowned Files
Disable USB Devices (AKA Hotplugger)
Remove .rhosts Support In PAM Configuration Files
Create ftpusers Files
Prevent X Server From Listening On Port 6000/tcp
Restrict at/cron To Authorized Users
Restrict Permissions On crontab Files
Configure xinetd Access Control
Restrict Root Logins To System Console
Set LILO/GRUB Password
Require Authentication For Single-User Mode
Restrict NFS Client Requests To Privileged Ports
Only Enable syslog To Accept Messages If Absolutely Necessary
Block System Accounts
Verify That There Are No Accounts With Empty Password Fields
Set Account Expiration Parameters On Active Accounts
Verify No Legacy ‘+’ Entries Exist In passwd, shadow, And group Files
Verify That No UID 0 Accounts Exist Other Than Root
No ‘.’ or Group/World-Writable Directory In Root’s $PATH
User Home Directories Should Be Mode 750 or More Restrictive
No User Dot-Files Should Be World-Writable
Remove User .netrc Files
Set Default umask For Users
Disable Core Dumps
Limit Access To The Root Account From su
Create Warnings For Network And Physical Access Services
Create Warnings For GUI-Based Logins
Create “authorized only” Banners For vsftpd, If Applicable


Red Hat Enterprise Linux and Fedora Core 1, 2, 3 & 4

Recommendation

Before performing the following step it is strongly  recommended that administrators make backup copies of critical configuration files that may get modified.

Action:

Create a shell script to back up the files, as below:

 

#!/bin/sh

 

ext=`date '+%Y%m%d-%H:%M:%S'`

 

for file in /etc/.login                /etc/X11/gdm/gdm.conf     \

/etc/cron.d/at.allow /etc/cron.d/at.deny              \

/etc/cron.d/cron.allow      /etc/cron.d/cron.deny     \

/etc/default/cron           /etc/default/inetinit     \

/etc/default/init           /etc/default/keyserv      \

/etc/default/login          /etc/default/passwd       \

/etc/default/syslogd                                 \

/etc/dt/config/*/Xresources                           \

/etc/dt/config/*/sys.resources                        \

/etc/dt/config/Xconfig                                \

/etc/dt/config/Xservers                               \

/etc/ftpd/banner.msg /etc/ftpd/ftpaccess              \

/etc/ftpd/ftpusers                                   \

/etc/hosts.allow            /etc/hosts.deny           \

/etc/init.d/netconfig /etc/issue                       \

/etc/mail/sendmail.cf /etc/motd                       \

/etc/pam.conf        /etc/passwd                     \

/etc/profile         /etc/rmmount.conf                \

/etc/security/audit_class                             \

/etc/security/audit_control                           \

/etc/security/audit_event                             \

/etc/security/audit_startup                           \

/etc/security/audit_user                              \

/etc/security/policy.conf                             \

/etc/shadow                                          \

/etc/ssh/ssh_config         /etc/ssh/sshd_config      \

/etc/syslog.conf            /etc/system               \

/usr/openwin/lib/app-defaults/XScreenSaver

do
    [ -f $file ] && cp -p $file $file-preAIS-$ext
done

mkdir -p -m 0700 /var/spool/cron/crontabs-preAIS-$ext
cd /var/spool/cron/crontabs
tar cf - * | (cd ../crontabs-preAIS-$ext; tar xfp -)

 

Red Hat Enterprise Linux Security Baseline detail

1.  Patches, Packages and Initial Lockdown

1.1. Apply latest OS Patches
1.2. Validate your system before making changes
1.3. Configure SSH

 

Action :

 

unalias cp rm mv
cd /etc/ssh

cp ssh_config ssh_config.tmp
awk '/^#? *Protocol/ { print "Protocol 2"; next };
{ print }' ssh_config.tmp > ssh_config
if [ "`egrep -l ^Protocol ssh_config`" == "" ]; then
    echo 'Protocol 2' >> ssh_config
fi
rm -f ssh_config.tmp

 

cp sshd_config sshd_config.tmp
awk '/^#? *Protocol/ { print "Protocol 2"; next };
/^#? *X11Forwarding/ \
{ print "X11Forwarding no"; next };
/^#? *IgnoreRhosts/ \
{ print "IgnoreRhosts yes"; next };
/^#? *RhostsAuthentication/ \
{ print "RhostsAuthentication no"; next };
/^#? *RhostsRSAAuthentication/ \
{ print "RhostsRSAAuthentication no"; next };
/^#? *HostbasedAuthentication/ \
{ print "HostbasedAuthentication no"; next };
/^#? *PermitRootLogin/ \
{ print "PermitRootLogin no"; next };
/^#? *PermitEmptyPasswords/ \
{ print "PermitEmptyPasswords no"; next };
/^#? *Banner/ \
{ print "Banner /etc/issue.net"; next };
{ print }' sshd_config.tmp > sshd_config
rm -f sshd_config.tmp

1.4. Enable System Accounting

Install the sysstat package (for example, with yum install sysstat).

1.5. Remove unnecessary software package

Action :

Use the chkconfig command.


2.  Minimize xinetd network services

 

You will need to unalias the mv and cp commands, as some commands overwrite files and you may otherwise be prompted numerous times about overwriting them:

unalias mv cp

2.1. Disable standard services

Note: Bastille configuration does not cover all of these services

Action:

cd /etc/xinetd.d

for FILE in chargen chargen-udp cups-lpd cups daytime \
    daytime-udp echo echo-udp eklogin finger gssftp imap \
    imaps ipop2 ipop3 krb5-telnet klogin kshell ktalk ntalk \
    pop3s rexec rlogin rsh rsync servers services sgi_fam \
    talk telnet tftp time time-udp vsftpd wu-ftpd; do
    CHK=`chkconfig --list | grep -w ${FILE}`
    if [ "$CHK" != "" ]; then
        chkconfig ${FILE} off
    fi
done

2.3. Disable telnet

Action : chkconfig telnet off

2.4. Disable FTP

chkconfig vsftpd off

2.5. Disable rlogin/rsh/rcp

Action:

chkconfig shell off
chkconfig rsh off
chkconfig login off
chkconfig rlogin off

2.6. Disable TFTPServer

Action:

chkconfig tftp off

2.7. Disable IMAP

Action:

chkconfig imaps off

2.8. Disable POP

Action:

chkconfig pop3s off


3.  Minimize boot services
3.1. Set Daemon umask

Action:

cd /etc/init.d
cp -f functions functions-preAIS
awk '($1=="umask") { if ($2 < "027") { $2="027";} }; \
{ print }' functions-preAIS > functions
if [ `grep -c umask functions` -eq 0 ]; then
    echo "umask 027" >> functions
fi
rm -f functions-preAIS

3.2. Disable xinetd

Action:

chkconfig --level 12345 xinetd off

3.3. Disable sendmail Server

Action:

cd /etc/sysconfig
if [ `grep -ci "DAEMON=no" sendmail` = "0" ]; then
    echo DAEMON=no >> sendmail
    echo QUEUE=1h >> sendmail
fi
chown root:root sendmail
chmod 644 sendmail
chkconfig sendmail off

3.4. Disable GUI Login

Action:

cp -f /etc/inittab /etc/inittab-preAIS
sed -e 's/id:5:initdefault:/id:3:initdefault:/' \
    < /etc/inittab-preAIS > /etc/inittab
chown root:root /etc/inittab
chmod 0600 /etc/inittab
rm -f /etc/inittab-preAIS

3.5. Disable X Font Server

Action:     

chkconfig xfs off

3.6. Disable standard boot services

Action:

for FILE in apmd canna FreeWnn gpm hpoj innd irda isdn \
    kdcrotate lvs mars-nwe oki4daemon privoxy rstatd \
    rusersd rwalld rwhod spamassassin wine; do
    service $FILE stop
    chkconfig $FILE off
done

for FILE in nfs nfslock autofs ypbind ypserv yppasswdd \
    portmap smb netfs lpd apache httpd tux snmpd \
    named postgresql mysqld webmin kudzu squid cups \
    ip6tables iptables pcmcia bluetooth mDNSResponder; do
    service $FILE stop
    chkconfig $FILE off
done

for USERID in rpc rpcuser lp apache http httpd named dns \
    mysql postgres squid news netdump; do
    usermod -L -s /sbin/nologin $USERID
done

 

3.7. Disable SMB (Windows File Sharing) Processes

Action:

chkconfig smb off

3.8. Disable NFS Server process

Action:

chkconfig --level 345 nfs off

3.9. Disable NFS client processes

Action:

chkconfig --level 345 nfslock off
chkconfig --level 345 autofs off

3.10. Disable NIS client processes

Action:

chkconfig ypbind off

3.11. Disable NIS Server processes

Action:

chkconfig ypserv off
chkconfig yppasswdd off

3.12. Disable RPC Portmap process

Action:

chkconfig --level 345 portmap off

3.13. Disable netfs script

If this machine is not sharing files via the NFS, Novell Netware or Windows File Sharing protocols, then proceed with the actions below.

Action:

chkconfig --level 345 netfs off

 

3.14. Disable Printer Daemon

Action:

chkconfig cups off
chkconfig hpoj off
chkconfig lpd off

3.15. Disable Web Server processes

Action:

chkconfig apache off
chkconfig httpd off
chkconfig tux off

3.16. Disable SNMP

If hosts are not at this site remotely monitored by a tool (e.g., HP Open View, MRTG, Cricket) that relies on SNMP, then proceed with the actions below.

Action:

chkconfig snmpd off

3.17. Disable DNS Server

Action:

chkconfig named off

3.18. Disable SQL Server processes

Action:

chkconfig postgresql off
chkconfig mysqld off

3.19. Disable Webmin

Action:

rpm -e webmin

3.20. Disable Squid Cache Server.

Action:

chkconfig squid off

3.21. Disable Kudzu Hardware Detection

Action:

chkconfig --level 345 kudzu off

 

 

4.  Kernel Tuning

 

4.1. Network Parameter Modifications

Action:

cat <<END_SCRIPT >> /etc/sysctl.conf
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
END_SCRIPT

chown root:root /etc/sysctl.conf
chmod 0600 /etc/sysctl.conf

4.2. Additional Network Parameter Modifications

Action:

cat <<END_SCRIPT >> /etc/sysctl.conf
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
END_SCRIPT

chown root:root /etc/sysctl.conf
chmod 0600 /etc/sysctl.conf

 

5.  Logging

5.1. Capture messages sent to syslog AUTHPRIV facility

Action:

if [ `grep -v '^#' /etc/syslog.conf | grep -c 'authpriv'` -eq 0 ]; then
    echo -e "authpriv.*\t\t\t\t/var/log/secure" >> /etc/syslog.conf
fi

touch /var/log/secure
chown root:root /var/log/secure
chmod 600 /var/log/secure

5.2. Turn on additional logging for FTP daemon

Action:

if [ -f /etc/vsftpd.conf ]; then
    FILE="/etc/vsftpd.conf"
else
    FILE="/etc/vsftpd/vsftpd.conf"
fi

if [ -f $FILE ]; then
    cp -f $FILE $FILE-preAIS
    awk '/^#?xferlog_std_format/ \
        { print "xferlog_std_format=NO"; next };
    /^#?log_ftp_protocol/ \
        { print "log_ftp_protocol=YES"; next };
        { print }' ${FILE}-preAIS > ${FILE}
    if [ `egrep -c log_ftp_protocol ${FILE}` == 0 ]; then
        echo "log_ftp_protocol=YES" >> ${FILE}
    fi
    rm -f $FILE-preAIS
    chmod 0600 $FILE
    chown root:root $FILE
fi

5.3. Confirm permissions on system log files

Action:

cd /var/log
chmod o-rwx boot.log* cron* dmesg ksyms* httpd/* \
    maillog* messages* news/* pgsql rpmpkgs* samba/* sa/* \
    scrollkeeper.log secure* spooler* squid/* vbox/* wtmp
chmod o-rx boot.log* cron* maillog* messages* pgsql \
    secure* spooler* squid/* sa/*
chmod g-w boot.log* cron* dmesg httpd/* ksyms* \
    maillog* messages* pgsql rpmpkgs* samba/* sa/* \
    scrollkeeper.log secure* spooler*
chmod g-rx boot.log* cron* maillog* messages* pgsql \
    secure* spooler*
chmod o-w gdm/ httpd/ news/ samba/ squid/ sa/ vbox/
chmod o-rx httpd/ samba/ squid/ sa/
chmod g-w gdm/ httpd/ news/ samba/ squid/ sa/ vbox/
chmod g-rx httpd/ samba/ sa/
chmod u-x kernel syslog loginlog
chown -R root:root .

chgrp utmp wtmp

[ -e news ] && chown -R news:news news

[ -e pgsql ] && chown postgres:postgres pgsql

 

chown -R squid:squid squid

5.4. Configure syslogd to send logs to a remote LogHost

Action:

In the script below, replace loghost with the proper name (FQDN, if necessary) of your loghost.

printf "kern.warning;*.err;authpriv.none\t@loghost\n\
*.info;mail.none;authpriv.none;cron.none\t@loghost\n\
*.emerg\t@loghost\n\
local7.*\t@loghost\n" >> /etc/syslog.conf

 

6.  File/Directory Permissions/Access

6.1. Add ‘nodev’ option to appropriate partitions in /etc/fstab

Action:

cp -p /etc/fstab /etc/fstab.tmp
awk '($3 ~ /^ext[23]$/ && $2 != "/") \
    { $4 = $4 ",nodev" }; \
    { print }' /etc/fstab.tmp > /etc/fstab
chown root:root /etc/fstab
chmod 0644 /etc/fstab
rm -f /etc/fstab.tmp

6.2. Add 'nosuid' and 'nodev' Option For Removable Media In /etc/fstab

Action:

cp -p /etc/fstab /etc/fstab.tmp
awk '($2 ~ /^\/m.*\/(floppy|cdrom)$/) && \
    ($4 !~ /,nodev,nosuid/) \
    { $4 = $4 ",nodev,nosuid" }; \
    { print }' /etc/fstab.tmp > /etc/fstab
chown root:root /etc/fstab
chmod 0644 /etc/fstab
rm -f /etc/fstab.tmp
chattr +i /etc/fstab

6.3. Disable User-Mounted Removable File Systems

If there is not a mission-critical reason to allow unprivileged users to mount CD-ROMs and floppy disk file systems on this system, then perform the action below.

Action:

cd /etc/security
cp -f console.perms console.perms-preAIS
awk '($1 == "<console>") && ($3 !~ \
    /sound|fb|kbd|joystick|v4l|mainboard|gpm|scanner/) \
    { $1 = "#<console>" }; \
    { print }' console.perms-preAIS > console.perms
rm -f console.perms-preAIS
chown root:root console.perms
chmod 0600 console.perms

6.4. Verify passwd, shadow, and group File Permissions

Action:

cd /etc

chown root:root passwd shadow group
chmod 644 passwd group

chmod 400 shadow

6.5. World-Writable Directories Should Have Their Sticky Bit Set

Action:

for PART in `awk '($3 == "ext2" || $3 == "ext3") \
    { print $2 }' /etc/fstab`; do
    find $PART -xdev -type d \
        \( -perm -0002 -a ! -perm -1000 \) -print
done

There should be no entries returned.

6.6. Find Unauthorized World-Writable Files

Action:

for PART in `grep -v ^# /etc/fstab | awk '($6 != "0") { print $2 }'`; do
    find $PART -xdev -type f \
        \( -perm -0002 -a ! -perm -1000 \) -print
done

There should be no entries returned. If grub.conf shows up, its permissions will be adjusted in step 7, System Access, Authentication, and Authorization (Set LILO/GRUB Password).

6.7. Find Unauthorized SUID/SGID System Executables

Action:

Administrators who wish to obtain a list of the set-UID and set-GID programs currently installed on the system may run the following commands:

 

for PART in `grep -v ^# /etc/fstab | awk '($6 != "0") { print $2 }'`; do
    find $PART \( -perm -04000 -o -perm -02000 \) \
        -type f -xdev -print
done

6.8. Find All Unowned Files

Action:

for PART in `grep -v ^# /etc/fstab | awk '($6 != "0") { print $2 }'`; do
    find $PART \( -nouser -o -nogroup \) -print
done

 

There should be no entries returned.
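The same check can be wrapped in a helper that reports only a count, which is easier to use from a cron-driven audit; a sketch (the function name is ours, and the parentheses make -print apply to both -nouser and -nogroup):

```shell
# Count files under the given mount point with no matching user or group.
unowned_count() {
    find "$1" -xdev \( -nouser -o -nogroup \) -print 2>/dev/null | wc -l
}
```

For example, unowned_count / should report 0 on a clean system.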

6.9. Disable USB Devices (AKA Hotplugger)

 

If there is not a mission-critical reason to allow use of PCMCIA or USB-based devices on this system, then perform the action below.

 

Action:

rpm -e pcmcia-cs

rpm -e kernel-pcmcia-cs

 

 

 

# All versions except RHEL 4 and Fedora Core 4:

rpm -e hotplug

7.  System Access, Authentication, and Authorization

1. Remove .rhosts Support In PAM Configuration Files

Action:

for FILE in /etc/pam.d/*; do
    grep -v rhosts_auth $FILE > ${FILE}.tmp
    mv -f ${FILE}.tmp $FILE
    chown root:root $FILE
    chmod 644 $FILE
done

2. Create ftpusers Files

Action:

for NAME in `cut -d: -f1 /etc/passwd`; do
    if [ `id -u $NAME` -lt 500 ]; then
        echo $NAME >> /etc/ftpusers
    fi
done

chown root:root /etc/ftpusers
chmod 600 /etc/ftpusers

if [ -e /etc/vsftpd.conf ] || \
   [ -e /etc/vsftpd/vsftpd.conf ]; then
    rm -f /etc/vsftpd.ftpusers
    cp -fp /etc/ftpusers /etc/vsftpd.ftpusers
fi

3. Prevent X Server From Listening On Port 6000/tcp

Action:

if [ -e /etc/X11/xdm/Xservers ]; then
    cd /etc/X11/xdm
    cp -f Xservers Xservers-preAIS
    awk '($1 !~ /^#/ && $3 == "/usr/X11R6/bin/X") \
        { $3 = $3 " -nolisten tcp" };
        { print }' Xservers-preAIS > Xservers
    rm -f Xservers-preAIS
    chown root:root Xservers
    chmod 444 Xservers
fi

 

if [ -e /etc/X11/gdm/gdm.conf ]; then
    cd /etc/X11/gdm
    cp -f gdm.conf gdm.conf-preAIS
    awk -F= '($2 ~ /\/X$/) \
        { printf("%s -nolisten tcp\n", $0); next };
        { print }' gdm.conf-preAIS > gdm.conf
    rm -f gdm.conf-preAIS
    chown root:root gdm.conf
    chmod 644 gdm.conf
fi

 

if [ -d /etc/X11/xinit ]; then
    cd /etc/X11/xinit
    if [ -e xserverrc ]; then
        cp -f xserverrc xserverrc-preAIS
        awk '/X/ && !/^#/ \
            { print $0 " :0 -nolisten tcp \$@"; next }; \
            { print }' xserverrc-preAIS > xserverrc
        rm -f xserverrc-preAIS
    else
        cat <<END > xserverrc
#!/bin/bash
exec X :0 -nolisten tcp \$@
END
    fi
    chown root:root xserverrc
    chmod 755 xserverrc
fi

 

4. Restrict at/cron To Authorized Users

Action:

cd /etc/
rm -f cron.deny at.deny
echo root > cron.allow
echo root > at.allow
chown root:root cron.allow at.allow
chmod 400 cron.allow at.allow

5. Restrict Permissions On crontab Files

Action:

chown root:root /etc/crontab
chmod 400 /etc/crontab
chown -R root:root /var/spool/cron
chmod -R go-rwx /var/spool/cron

cd /etc

ls | grep cron | xargs chown -R root:root

ls | grep cron | xargs chmod -R go-rwx

6. Configure xinetd Access Control

Action:

Insert the following line into the “defaults” block in

/etc/xinetd.conf:

only_from = <net>/<num_bits> <net>/<num_bits>

where each <net>/<num_bits> combination represents one network block in use by your organization. For example:

only_from = 192.168.1.0/24

would restrict connections to only the 192.168.1.0/24 network, with the netmask 255.255.255.0.

Note: There are two <TAB>s between only_from and the = in the above line.

7. Restrict Root Logins To System Console

Action:

for i in `seq 1 6`; do

echo tty$i >> /etc/securetty

done

for i in `seq 1 11`; do

echo vc/$i >> /etc/securetty

done

echo console >> /etc/securetty
chown root:root /etc/securetty
chmod 400 /etc/securetty

 

8. Set LILO/GRUB Password

Action: (if you have an /etc/lilo.conf file):

1.  Add the following lines to the beginning of /etc/lilo.conf

restricted
password=<password>

Replace <password> with an appropriate password for your organization.

2.  Execute the following commands as root:

chown root:root /etc/lilo.conf
chmod 600 /etc/lilo.conf

lilo

 

Action (if you have an /etc/grub.conf file):

1.  Add this line to /etc/grub.conf before the first uncommented line.

password <password>

Replace <password> with an appropriate password for your organization.

2.  Execute the following commands as root:

chown root:root /etc/grub.conf

chmod 600 /etc/grub.conf
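With GRUB legacy, the password line can also hold an MD5 hash instead of plaintext, so the bootloader configuration does not expose the password itself; a sketch (the hash below is a placeholder, generate a real one interactively with grub-md5-crypt):

```
# Generate the hash (prompts twice for the password):
#   grub-md5-crypt
#
# /etc/grub.conf fragment, placed before the first uncommented line;
# replace the placeholder with the hash printed by grub-md5-crypt:
password --md5 <hash-from-grub-md5-crypt>
```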

9. Require Authentication For Single-User Mode

Action:

cd /etc
if [ "`grep -l sulogin inittab`" = "" ]; then
    awk '{ print }; /^id:[0123456sS]:initdefault:/ \
        { print "~~:S:wait:/sbin/sulogin" }' \
        inittab > inittab.tmp
    mv -f inittab.tmp inittab
    chown root:root inittab
    chmod 644 inittab
fi

10. Restrict NFS Client Requests To Privileged Ports

Action:

Add the secure option to all entries in the /etc/exports file. The following Perl code

will perform this action automatically.

if [ -s /etc/exports ]; then
    perl -i.orig -pe \
        'next if (/^\s*#/ || /^\s*$/);
        ($res, @hst) = split(" ");
        foreach $ent (@hst) {
            undef(%set);
            ($optlist) = $ent =~ /\((.*?)\)/;
            foreach $opt (split(/,/, $optlist)) {
                $set{$opt} = 1;
            }
            delete($set{"insecure"});
            $set{"secure"} = 1;
            $ent =~ s/\(.*?\)//;
            $ent .= "(" . join(",", keys(%set)) . ")";
        }
        $hst[0] = "(secure)" unless (@hst);
        $_ = "$res\t" . join(" ", @hst) . "\n";' \
        /etc/exports
fi

 

 

 

11. Only Enable syslog To Accept Messages If Absolutely Necessary

If this machine is a log server, or needs to receive syslog messages via the network from other systems, then perform the action below.

Action:

Read the syslogd manpage for the -l, -r and -s options.

Edit /etc/init.d/syslog and look for the line that says:

SYSLOGD_OPTIONS="-m 0"

and add the entries that are appropriate for your site. An example entry would look like this:

SYSLOGD_OPTIONS="-m 0 -l loghost -r -s mydomain.com"

8.  User Accounts and Environment

1. Block System Accounts

Action:

for NAME in `cut -d: -f1 /etc/passwd`; do
    MyUID=`id -u $NAME`
    if [ $MyUID -lt 500 -a $NAME != 'root' ]; then
        usermod -L -s /sbin/nologin $NAME
    fi
done

2. Verify That There Are No Accounts With Empty Password Fields

Action:

The command:

awk -F: '($2 == "") { print $1 }' /etc/shadow

should return no lines of output.

3. Set Account Expiration Parameters On Active Accounts

Action:

cd /etc
cp -f login.defs login.defs-preAIS
awk '($1 ~ /^PASS_MAX_DAYS/) { $2="90" }
    ($1 ~ /^PASS_MIN_DAYS/) { $2="7" }
    ($1 ~ /^PASS_WARN_AGE/) { $2="28" }
    ($1 ~ /^PASS_MIN_LEN/) { $2="6" }
    { print }' login.defs-preAIS > login.defs
chown root:root login.defs
chmod 640 login.defs
rm -f login.defs-preAIS
useradd -D -f 7

for NAME in `cut -d: -f1 /etc/passwd`; do
    uid=`id -u $NAME`
    if [ $uid -ge 500 -a $uid != 65534 ]; then
        chage -m 7 -M 90 -W 28 -I 7 $NAME
    fi
done

4. Verify No Legacy ‘+’ Entries Exist In passwd, shadow, And group Files

Action:

The command:

grep ^+: /etc/passwd /etc/shadow /etc/group

should return no lines of output.

5. Verify That No UID 0 Accounts Exist Other Than Root

Action:

The command:

awk -F: '($3 == 0) { print $1 }' /etc/passwd

should return only the word “root”.

 

6. No ‘.’ or Group/World-Writable Directory In Root’s $PATH

Action:

The automated testing tool supplied with this baseline will alert the administrator if action is required. To find '.' in $PATH:

echo $PATH | egrep '(^|:)(\.|:|$)'

To find group- or world-writable directories in $PATH:

find `echo $PATH | tr ':' ' '` -type d \
    \( -perm -002 -o -perm -020 \) -ls

These commands should produce no output.

7. User Home Directories Should Be Mode 750 or More Restrictive

Action:

for DIR in \
    `awk -F: '($3 >= 500) { print $6 }' /etc/passwd`; do
    chmod g-w $DIR
    chmod o-rwx $DIR
done

8. No User Dot-Files Should Be World-Writable

Action:

for DIR in \
    `awk -F: '($3 >= 500) { print $6 }' /etc/passwd`; do
    for FILE in $DIR/.[A-Za-z0-9]*; do
        if [ ! -h "$FILE" -a -f "$FILE" ]; then
            chmod go-w "$FILE"
        fi
    done
done

9. Remove User .netrc Files

Action:

find / -name .netrc

for DIR in `cut -f6 -d: /etc/passwd`; do
    if [ -e $DIR/.netrc ]; then
        echo "Removing $DIR/.netrc"
        rm -f $DIR/.netrc
    fi
done

 

Remarks:

.netrc files may contain unencrypted passwords which may be used to attack other systems. While the above modifications are relatively benign, making global modifications to user home directories without alerting the user community can result in unexpected outages and unhappy users. If the first command returns any results, carefully evaluate the ramifications of removing those files before executing the remaining commands, as you may end up impacting an application that has not had time to revise its architecture to a more secure design.

 

10. Set Default umask For Users

Action:

cd /etc
for FILE in profile csh.login csh.cshrc bashrc; do
    if ! egrep -q 'umask.*77' $FILE; then
        echo "umask 077" >> $FILE
    fi
    chown root:root $FILE
    chmod 444 $FILE
done
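The effect of the 077 umask is easy to confirm in a throwaway directory (stat -c is the GNU coreutils form of the flag):

```shell
# Create a file under umask 077 and read back its mode; owner keeps
# read/write, group and other get nothing, so the mode is 600.
DEMO=$(mktemp -d)
cd "$DEMO"
umask 077
touch newfile
MODE=$(stat -c '%a' newfile)
echo "mode with umask 077: $MODE"
cd /
rm -rf "$DEMO"
```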

 

cd /root
for FILE in .bash_profile .bashrc .cshrc .tcshrc; do
    if ! egrep -q 'umask.*77' $FILE; then
        echo "umask 077" >> $FILE # See description
    fi
    chown root:root $FILE
done

11. Disable Core Dumps

If you don't have developers who need to debug crashed programs or send low-level debugging information to software developers/vendors, then perform the action below.

Action:

cd /etc/security

cat <<END_ENTRIES >> limits.conf
*  soft core 0
*  hard core 0
END_ENTRIES

12. Limit Access To The Root Account From su

Action:

WARNING: If you do not have immediate physical access to the server, ensure you have a user in the wheel group before running the below script. Failure to do so will prevent you from using su to become root.

 

cd /etc/pam.d/
cp -f su /etc/pam.d-preAIS/su
awk '($1=="#auth" && $2=="required" && \
    $3=="/lib/security/$ISA/pam_wheel.so") \
    { print "auth required /lib/security/$ISA/pam_wheel.so use_uid"; next };
    { print }' /etc/pam.d-preAIS/su > su
rm -f /etc/pam.d-preAIS/su

 

9.  Warning Banners

1. Create Warnings For Network And Physical Access Services

Action:

1.1 Edit the banner currently in /etc/issue. This was created by Bastille and may need to be changed for your Enterprise. Leave the words "its owner", as they will be replaced in the next step with the name of your organization.

1.2 Create banners for console access:

unalias cp mv
cd /etc

# Remove OS indicators from banners
for FILE in issue motd; do
    cp -f ${FILE} ${FILE}.tmp
    egrep -vi "redhat|kernel|fedora" ${FILE}.tmp > ${FILE}
    rm -f ${FILE}.tmp
done

done

 

COMPANYNAME="AIS"
cp -f issue issue.tmp
sed -e "s/its owner/${COMPANYNAME}/g" issue.tmp > issue
rm -f issue.tmp

 

if [ "`grep -i authorized /etc/issue`" == "" ]; then
    echo " Any access to the AIS computer system or data must be authorized and shall comply with the AIS policies, regulations, criteria and/or memorandum regarding IT Security (\"IT Rules\"). Any breach of IT Rules will be punished and is subject to criminal prosecution. AIS may monitor, intercept, record, read, copy, or capture and disclose any use of the computer system or data stored in any type of media by the users." >> /etc/issue
fi

 

if [ "`grep -i authorized /etc/motd`" == "" ]; then
    echo " Any access to the AIS computer system or data must be authorized and shall comply with the AIS policies, regulations, criteria and/or memorandum regarding IT Security (\"IT Rules\"). Any breach of IT Rules will be punished and is subject to criminal prosecution. AIS may monitor, intercept, record, read, copy, or capture and disclose any use of the computer system or data stored in any type of media by the users." >> /etc/motd
fi

 

1.3  Create banners for network access:

cp -fp /etc/issue /etc/issue.net

if [ "`grep -i authorized /etc/issue.net`" == "" ]; then
echo " Any access to the AIS computer system or data must be authorized and shall comply with the AIS policies, regulations, criteria and/or memorandum regarding IT Security (\"IT Rules\"). Any breach of IT Rules will be punished and is subject to criminal prosecution. AIS may monitor, intercept, record, read, copy, or capture and disclose any use of the computer system or data stored in any type of media by the users." >> /etc/issue.net
fi

 

1.4 Protect banner:

chown root:root /etc/motd /etc/issue /etc/issue.net
chmod 644 /etc/motd /etc/issue /etc/issue.net
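A quick post-check confirms the intended 644 mode. This is a sketch, exercised on a scratch file so the real banners stay untouched:

```shell
tmp=$(mktemp)
chmod 644 "$tmp"
# GNU stat prints the octal mode with -c '%a'; BSD/macOS stat
# uses -f '%Lp' instead, hence the fallback.
mode=$(stat -c '%a' "$tmp" 2>/dev/null || stat -f '%Lp' "$tmp")
rm -f "$tmp"
```

For the real files, run the same stat call against /etc/motd, /etc/issue, and /etc/issue.net and verify each reports 644.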

2. Create Warnings For GUI-Based Logins

Action:

if [ -e /etc/X11/xdm/Xresources ]; then
cd /etc/X11/xdm
cp -f Xresources Xresources-preAIS
awk '/xlogin*greeting:/ \
{ print "xlogin*greeting: Authorized uses only"; next };
{ print }' Xresources-preAIS > Xresources
rm -f Xresources-preAIS
chown root:root Xresources
chmod 644 Xresources
fi

 

if [ -e /etc/X11/xdm/kdmrc ]; then
cd /etc/X11/xdm
cp -f kdmrc kdmrc-preAIS
awk '/GreetString=/ \
{ print "GreetString=Authorized uses only"; next };
{ print }' kdmrc-preAIS > kdmrc
rm -f kdmrc-preAIS
chown root:root kdmrc
chmod 644 kdmrc
fi

 

if [ -e /etc/X11/gdm/gdm.conf ]; then
cd /etc/X11/gdm
cp -pf gdm.conf gdm.conf.tmp
awk '/^Greeter=/ && /gdmgreeter/ \
{ printf("#%s\n", $0); next };
/^#Greeter=/ && /gdmlogin/ \
{ $1 = "Greeter=gdmlogin" };
/Welcome=/ \
{ print "Welcome=Authorized uses only"; next };
{ print }' gdm.conf.tmp > gdm.conf
rm -f gdm.conf.tmp
chown root:root gdm.conf
chmod 644 gdm.conf
fi
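The gdm.conf pass makes three rewrites at once: it comments out a gdmgreeter Greeter line, re-enables a commented-out gdmlogin one, and replaces the Welcome string. A sketch on a scratch file (the file contents are made-up examples) shows all three:

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
Greeter=/usr/bin/gdmgreeter
#Greeter=/usr/bin/gdmlogin
Welcome=Welcome to %n
EOF
# Same awk program as the hardening step, applied to the scratch copy.
awk '/^Greeter=/ && /gdmgreeter/ { printf("#%s\n", $0); next };
/^#Greeter=/ && /gdmlogin/ { $1 = "Greeter=gdmlogin" };
/Welcome=/ { print "Welcome=Authorized uses only"; next };
{ print }' "$tmp" > "$tmp.new"
out=$(cat "$tmp.new")
rm -f "$tmp" "$tmp.new"
```

The result is the graphical greeter disabled (`#Greeter=/usr/bin/gdmgreeter`), the plain login greeter enabled (`Greeter=gdmlogin`), and the greeting replaced with `Welcome=Authorized uses only`.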

3. Create "authorized only" Banners For vsftpd, If Applicable

Action:

cd /etc
if [ -d vsftpd ]; then
cd vsftpd
fi

if [ -e vsftpd.conf ]; then
echo "ftpd_banner=Any access to the AIS computer system or data must be authorized and shall comply with the AIS policies, regulations, criteria and/or memorandum regarding IT Security (\"IT Rules\"). Any breach of IT Rules will be punished and is subject to criminal prosecution. AIS may monitor, intercept, record, read, copy, or capture and disclose any use of the computer system or data stored in any type of media by the users." >> vsftpd.conf
fi

 

4. Reboot

Action:

init 6