How to Check Integrity of File and Directory Using “AIDE” in Linux

In our mega guide to hardening and securing CentOS 7, under the section “protect system internally”, one of the useful security tools we listed for internal system protection against viruses, rootkits, malware, and detection of unauthorized activities is AIDE.

AIDE (Advanced Intrusion Detection Environment) is a small yet powerful, free and open source intrusion detection tool that uses predefined rules to check file and directory integrity on Unix-like operating systems such as Linux. It ships as an independent static binary, which simplifies client/server monitoring configurations.

It is feature-rich: it uses plain text configuration files and a plain text database, making it easy to use; it supports several message digest algorithms, including md5, sha1, rmd160, and tiger; it checks common file attributes; and it supports powerful regular expressions to selectively include or exclude the files and directories to be scanned.

It can also be compiled with support for Gzip compression, POSIX ACLs, SELinux, XAttrs, and extended file system attributes.


AIDE works by creating a database (simply a snapshot of selected parts of the file system) from the regular expression rules defined in its configuration file(s). Once this database is initialized, you can verify the integrity of the system files against it. This guide will show you how to install and use AIDE in Linux.

How to Install AIDE in Linux

AIDE is packaged in the official repositories of mainstream Linux distributions; to install it, run the command for your distribution using its package manager.

# apt install aide [On Debian/Ubuntu]
# yum install aide [On RHEL/CentOS]
# dnf install aide [On Fedora 22+]
# zypper install aide [On openSUSE]
# emerge aide [On Gentoo]

After installing it, you will find the main configuration file at /etc/aide.conf. To view the installed version as well as the compile-time parameters, run the command below in your terminal:

# aide -v
Sample Output
Aide 0.14
Compiled with the following options:
WITH_MMAP
WITH_POSIX_ACL
WITH_SELINUX
WITH_PRELINK
WITH_XATTR
WITH_LSTAT64
WITH_READDIR64
WITH_ZLIB
WITH_GCRYPT
WITH_AUDIT
CONFIG_FILE = "/etc/aide.conf"

You can open the configuration using your favorite editor.

# vi /etc/aide.conf

It contains directives that define the database location, report location, default rules, and the directories/files to be included in the database.

Understanding Default Aide Rules

AIDE Default Rules

Using the above default rules, you can define new custom rules in the aide.conf file, for example:

PERMS = p+u+g+acl+selinux+xattrs

The PERMS rule is used for access control only; it will detect changes to files or directories based on file/directory permissions, user, group, access control permissions, SELinux context, and file attributes.

The CONTENT rule below checks only file content and file type.

CONTENT = sha256+ftype

CONTENT_EX is an extended version of the previous rule; it checks content, file type, and access-related attributes.

CONTENT_EX = sha256+ftype+p+u+g+n+acl+selinux+xattrs

The DATAONLY rule below will help detect any changes to the data inside all files/directories.

DATAONLY = p+n+u+g+s+acl+selinux+xattrs+sha256
Configure Aide Rules

Defining Rules to Watch Files and Directories

Once you have defined the rules, you can specify the files and directories to watch. Considering the PERMS rule above, the definition below will check the permissions of all hidden files in the /root directory (the regular expression matches names beginning with a dot).

/root/\..* PERMS

This will check all files in the /root directory for any changes.

/root/ CONTENT_EX

To detect any changes to the data inside all files/directories under /etc/, use this:

/etc/ DATAONLY 
Configure Aide Rules for Filesystem
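Since selection lines support regular expressions for exclusion as well, you can prefix a line with ! to keep frequently changing paths out of the database. A small sketch (the paths below are only examples, not part of the default configuration):

!/etc/mtab$
!/var/log/.*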

Using AIDE to Check File and Directory Integrity in Linux

Start by constructing a database of the checks that will be performed, using the --init flag. This is expected to be done before your system is connected to a network.

The command below will create a database that contains all of the files that you selected in your configuration file.

# aide --init
Initialize Aide Database

Then rename the database to /var/lib/aide/aide.db.gz before proceeding, using this command.

# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

It is recommended to move the database to a secure location, ideally on read-only media or on another machine, but make sure you update the configuration file so that AIDE reads it from there.
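In /etc/aide.conf, the database and database_out directives control where AIDE reads and writes its database. A minimal sketch, assuming you copied the verified database to read-only media mounted at the hypothetical path /mnt/secure:

database=file:/mnt/secure/aide.db.gz
database_out=file:/var/lib/aide/aide.db.new.gz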

After the database is created, you can now check the integrity of the files and directories using the --check flag.

# aide --check

It reads the snapshot in the database and compares it to the files/directories found on your system disk. If it finds changes in places that you might not expect, it generates a report which you can then review.

Run File Integrity Check

Since no changes have been made to the file system, you will only get output similar to the one above. Now try creating some files in the file system, in areas defined in the configuration file.

# vi /etc/script.sh
# touch all.txt

Then run a check once more, which should report the files added above. The output of this command depends on the parts of the file system you configured for checking, and it can grow lengthy over time.

# aide --check
Check File System Changes

You need to run AIDE checks regularly, and whenever already-selected files change or you add new file definitions to the configuration file, update the database using the --update option:

# aide --update

After running a database update, to use the new database for future scans, always rename it to /var/lib/aide/aide.db.gz:

# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
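To schedule the check, you can add a cron entry; for example, a nightly run at 2 AM in /etc/crontab (the aide binary path may differ on your distribution, so verify it with which aide):

0 2 * * * root /usr/sbin/aide --check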

That’s all for now! But take note of these important points:

  • One characteristic of most intrusion detection systems, AIDE included, is that they will not provide solutions to most security loopholes on a system. They do, however, ease the intrusion response process by helping system administrators examine any changes to system files/directories. So you should always be vigilant and keep updating your current security measures.
  • It is highly recommended to keep the newly created database, the configuration file, and the AIDE binary in a secure location such as read-only media (possible if you install from source).
  • For additional security, consider signing the configuration and/or database.

For additional information and configurations, see its man page or check out the AIDE Homepage: http://aide.sourceforge.net/

A Shell Script to Send Email Alert When Memory Gets Low

A powerful aspect of Unix/Linux shell programs such as bash is their support for common programming constructs that enable you to make decisions, execute commands repeatedly, create new functions, and so much more. You can write commands in a file, known as a shell script, and execute them collectively.

This offers you a reliable and effective means of system administration. You can write scripts to automate tasks such as daily backups and system updates, create new custom commands/utilities/tools, and more. You can also write scripts to help you keep up with what's unfolding on a server.

One of the critical components of a server is memory (RAM); it greatly impacts the overall performance of a system.

In this article, we will share a small but useful shell script to send an alert email to one or more system administrator(s), if server memory is running low.


This script is particularly useful for keeping an eye on Linux VPS (Virtual Private Servers) with a small amount of memory, say about 1GB (approximately 990MB).

Testing Environment Setup

  1. A CentOS/RHEL 7 production server with the mailx utility installed and a working Postfix mail server.

This is how the alertmemory.sh script works: first it checks the free memory size, then determines whether the amount of free memory is less than or equal to a specified size (100 MB for the purpose of this guide), used as a benchmark for the least acceptable free memory size.

If this condition is true, it will generate a list of the top 10 processes consuming server RAM and send an alert email to the specified email addresses.

Note: You will have to make a few changes to the script (especially the mail sender utility and its flags) to meet your Linux distribution's requirements.
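For instance, on distributions where the mail command comes from GNU mailutils rather than the mailx/s-nail family, the attachment flag is -A instead of -a. A hedged equivalent of the mail command used in the script below would be:

echo "Warning, server memory is running low!" | mail -A /tmp/top_processes_consuming_memory.txt -s "Server Memory Status Alert" admin@example.com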

Shell Script to Check Server Memory

#!/bin/bash
#######################################################################################
# Script Name : alertmemory.sh
# Description : send alert mail when server memory is running low
# Args        : none
# Author      : Aaron Kili Kisinga
# License     : GNU GPL-3
#######################################################################################

## declare mail variables
## email subject
subject="Server Memory Status Alert"
## sending mail as (replace these example addresses with your own)
from="server@example.com"
## sending mail to
to="admin@example.com"
## send carbon copy to
also_to="admin2@example.com"

## get total free memory size in megabytes (MB)
free=$(free -mt | grep Total | awk '{print $4}')

## check if free memory is less than or equal to 100MB
if [[ "$free" -le 100 ]]; then
	## get top processes consuming system memory and save to temporary file
	ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head > /tmp/top_processes_consuming_memory.txt
	file=/tmp/top_processes_consuming_memory.txt
	## send the alert email, attaching the process list
	echo -e "Warning, server memory is running low!\n\nFree memory: $free MB" | mailx -a "$file" -s "$subject" -r "$from" -c "$also_to" "$to"
fi
exit 0

After creating your script /etc/scripts/alertmemory.sh, make it executable and symlink it into cron.hourly. (Note: on distributions where cron.hourly is processed by run-parts, the link name must not contain a dot, so you may need to name it alertmemory instead.)

# chmod +x /etc/scripts/alertmemory.sh
# ln -s -t /etc/cron.hourly/ /etc/scripts/alertmemory.sh

This means that the above script will run every hour for as long as the server is running.

Tip: To test that it is working as intended, set the benchmark value a little higher so an email is easily triggered, and use a small interval of about 5 minutes.

Then keep checking from the command line using the free command pipeline used in the script. Once you confirm that it is working, define the actual values you would like to use.
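The free-memory figure the script tests against comes from this pipeline, which you can also run directly while testing:

$ free -mt | grep Total | awk '{print $4}'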

Below is a screenshot showing a sample alert email.

Linux Memory Email Alert

That’s all! In this article, we explained how to use a shell script to send alert emails to system administrators when server memory (RAM) is running low. You can share any thoughts relating to this topic with us via the feedback form below.

4 Ways to Speed Up SSH Connections in Linux

SSH is the most popular and secure method for managing Linux servers remotely. One of the challenges with remote server management is connection speeds, especially when it comes to session creation between the remote and local machines.

There are several bottlenecks in this process. One scenario is when you are connecting to a remote server for the first time; it normally takes a few seconds to establish a session. However, when you try to start multiple connections in succession, each one incurs this overhead (a combination of excess or indirect computation time, memory, bandwidth, or other resources needed to carry out the operation).

In this article, we will share four useful tips on how to speed up remote SSH connections in Linux.

1. Force SSH Connection Over IPV4

OpenSSH supports both IPv4 and IPv6, but at times IPv6 connections tend to be slower. So you can consider forcing ssh connections over IPv4 only, using the syntax below:

# ssh -4 user@server_ip


Alternatively, use the AddressFamily (specifies the address family to use when connecting) directive in your ssh configuration file /etc/ssh/ssh_config (global configuration) or ~/.ssh/config (user specific file).

The accepted values are any, inet (IPv4 only), or inet6 (IPv6 only).

$ vi ~/.ssh/config
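For example, to force IPv4 for all hosts, you would add these lines to the file:

Host *
AddressFamily inet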
Disable SSH Connections on ipv6

Here is a useful starter guide on configuring user specific ssh configuration file:

  1. How to Configure Custom SSH Connections to Simplify Remote Access

Additionally, on the remote machine, you can instruct the sshd daemon to accept connections over IPv4 only by using the above directive in the /etc/ssh/sshd_config file.

2. Disable DNS Lookup On Remote Machine

By default, the sshd daemon looks up the remote host name, and also checks that the resolved host name for the remote IP address maps back to the very same IP address. This can result in delays in connection establishment or session creation.

The UseDNS directive controls this functionality; to disable it, search for and uncomment it in the /etc/ssh/sshd_config file. If it is not set, add it with the value no.

UseDNS no
Disable SSH DNS Lookup

3. Reuse SSH Connection

The ssh client program is used to establish connections to an sshd daemon accepting remote connections. You can reuse an already-established connection when creating a new ssh session, which can significantly speed up subsequent sessions.

You can enable this in your ~/.ssh/config file.

Host *
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600

The above configuration (Host *) will enable connection re-use for all remote servers you connect to using these directives:

  • ControlMaster – enables the sharing of multiple sessions over a single network connection.
  • ControlPath – defines the path to the control socket used for connection sharing (this directory must exist; see the command below).
  • ControlPersist – if used together with ControlMaster, tells ssh to keep the master connection open in the background (waiting for future client connections) once the initial client connection has been closed.
Reuse SSH Connections
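Note that ssh does not create the ControlPath directory for you; create it once before the first connection:

$ mkdir -p ~/.ssh/sockets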

You can enable this for connections to a specific remote server, for instance:

Host server1
HostName www.example.com
IdentityFile ~/.ssh/webserver.pem
User username_here
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600

This way you only suffer the connection overhead for the first connection, and all subsequent connections will be much faster.

4. Use Specific SSH Authentication Method

Another way of speeding up ssh connections is to use a given authentication method for all ssh connections, and here we recommend configuring ssh passwordless login using ssh keygen in 5 easy steps.

Once that is done, use the PreferredAuthentications directive in your ssh_config files (global or user-specific). This directive defines the order in which the client should try authentication methods (you can specify a comma-separated list to use more than one method).

PreferredAuthentications=publickey 
SSH Authentication Method

Optionally, use the syntax below from the command line:

# ssh -o "PreferredAuthentications=publickey" user@server_ip

If you prefer password authentication, which is considered insecure, use this:

# ssh -o "PreferredAuthentications=password" user@server_ip

Finally, if you changed any sshd directives above (UseDNS, or AddressFamily in /etc/ssh/sshd_config), restart the sshd daemon for them to take effect.

# systemctl restart sshd #Systemd
# service sshd restart #SysVInit

For more information about the directives used here, see the ssh_config and sshd_config man pages.

# man ssh_config
# man sshd_config 

Also check out these useful guides for securing ssh on Linux systems:

  1. 5 Best Practices to Secure and Protect SSH Server
  2. How to Disconnect Inactive or Idle SSH Connections in Linux

That’s all for now! Do you have any tips/tricks for speeding up SSH connections? We would love to hear of other ways of doing this. Use the comment form below to share with us.

How to Configure Basic HTTP Authentication in Nginx

Basic HTTP authentication is a security mechanism to restrict access to your website/application or some parts of it by setting up simple username/password authentication. It can be used essentially to protect the whole HTTP server, individual server blocks (virtual hosts in Apache) or location blocks.

Read Also: How to Setup Name-based and IP-based Virtual Hosts (Server Blocks) with NGINX

As the name suggests, it is not a security method to rely on by itself; you should use it in conjunction with other, more reliable security measures. For instance, if your web application is running over plain HTTP, user credentials are transmitted in plain text, so you should also consider enabling HTTPS.

The purpose of this guide is to help you add a small but useful layer of security to protect private/privileged content on your web applications (such as, but not limited to, administration areas). You can also use it to prevent access to a website or application which is still in the development phase.

Requirements

  1. Install LEMP Stack in CentOS/RHEL 7
  2. Install LEMP Stack in Ubuntu/Debian

Create HTTP Authentication User File


You should start by creating a file that will store username:password pairs. We will use the htpasswd utility from Apache HTTP Server to create this file.

First check that apache2-utils (on Debian/Ubuntu) or httpd-tools (on RHEL/CentOS), the packages which provide the htpasswd utility, is installed on your system; otherwise run the appropriate command for your distribution to install it:

# yum install httpd-tools [RHEL/CentOS]
$ sudo apt install apache2-utils [Debian/Ubuntu]

Next, run the htpasswd command below to create the password file with the first user. The -c option tells htpasswd to create the file; once you hit [Enter], you will be asked to enter the user's password.

# htpasswd -c /etc/nginx/conf.d/.htpasswd developer

Add a second user, this time without the -c option, which would otherwise recreate (and overwrite) the existing file:

# htpasswd /etc/nginx/conf.d/.htpasswd admin

Now that you have the password file ready, proceed to configure the parts of your web server that you want to restrict access to. To view the password file content (which includes usernames and hashed passwords), use the cat command below.

# cat /etc/nginx/conf.d/.htpasswd 
View HTTP Password File
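Each line holds one username:hash pair; with htpasswd's default settings the hashes use Apache's $apr1$ MD5 format, so the file looks roughly like this (hashes truncated):

developer:$apr1$...
admin:$apr1$...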

Configure HTTP Authentication for Nginx

As we mentioned earlier, you can restrict access to your whole web server, a single website (using its server block), or a location directive. Two useful directives can be used to achieve this:

  • auth_basic – turns on validation of user name and password using the “HTTP Basic Authentication” protocol.
  • auth_basic_user_file – specifies the password file.

Password Protect Nginx Virtual Hosts

To implement basic authentication for the whole web server, which applies to all server blocks, open the /etc/nginx/nginx.conf file and add the lines below in the http context:

http {
    auth_basic "Restricted Access!";
    auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
    ...
}

Password Protect Nginx Website or Domain

To enable basic authentication for a particular domain or sub-domain, open its configuration file under /etc/nginx/conf.d/ or /etc/nginx/sites-available (depending on how you installed Nginx), then add the configuration below in the server block or context:

server {
    listen 80;
    server_name example.com;
    auth_basic "Restricted Access!";
    auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
    location / {
        ...
    }
    ...
}

Password Protect Web Directory in Nginx

You can also enable basic authentication within a location directive. In the example below, all users trying to access the /admin location block will be asked to authenticate.

server {
    listen 80;
    server_name example.com www.example.com;
    location / {
        ...
    }
    location /admin/ {
        auth_basic "Restricted Access!";
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
    }
    location /public/ {
        auth_basic off; # turns off basic HTTP authentication for this block
    }
    ...
}
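After any of these changes, validate the configuration and reload Nginx so they take effect:

# nginx -t
# systemctl reload nginx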

Once you have configured basic HTTP authentication, any user who tries to access your web server, a sub-domain, or a specific part of a site (depending on where you implemented it) will be asked for a username and password, as shown in the screenshot below.

Nginx Basic Authentication

In case of a failed user authentication, a “401 Authorization Required” error will be displayed as shown below.

401 Authorization Required Error
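You can also test the setup from the command line with curl: a request without credentials should return the 401, and the -u flag supplies a username (curl prompts for the password). The domain below is the example used earlier:

$ curl -i http://example.com/admin/
$ curl -u developer http://example.com/admin/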

You can find more information in the Nginx documentation on Restricting Access with Basic HTTP Authentication.

You might also like to read these following useful Nginx HTTP server related guides.

  1. How to Password Protect Web Directories in Nginx
  2. The Ultimate Guide to Secure, Harden and Improve Performance of Nginx
  3. Setting Up HTTPS with Let’s Encrypt SSL Certificate For Nginx

In this guide, we showed how to implement basic HTTP authentication in Nginx HTTP web server. To ask any questions, use the feedback form below.



How to Download and Extract Tar Files with One Command

Tar (Tape Archive) is a popular file archiving format in Linux. It can be used together with gzip (tar.gz) or bzip2 (tar.bz2) for compression. It is the most widely used command line utility for creating compressed archives (packages, source code, databases, and so much more) that can be transferred easily from one machine to another or over a network.

Read Also: 18 Tar Command Examples in Linux

In this article, we will show you how to download tar archives using two well-known command line downloaders, wget and cURL, and extract them with one single command.

How to Download and Extract File Using Wget Command

The example below shows how to download and unpack the latest GeoLite2 Country database (used by the GeoIP Nginx module) into the current directory.

# wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz -O - | tar -xz
Download and Extract File with Wget


The wget option -O specifies the file to which the output is written; here we use -, meaning the download is written to standard output and piped to tar. The tar flag -x enables extraction of the archive, and -z decompresses archives created by gzip.

To extract the tar files to a specific directory, /etc/nginx/ in this case, use the -C flag as follows.

Note: If extracting files to a particular directory that requires root permissions, use the sudo command to run tar.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz -O - | sudo tar -xz -C /etc/nginx/
Download and Extract File to Directory

Alternatively, you can use the following command; here, the archive file will be downloaded to your system before you extract it.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && tar -xzf GeoLite2-Country.tar.gz

To extract the compressed archive to a specific directory, use the following command.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && sudo tar -xzf GeoLite2-Country.tar.gz -C /etc/nginx/
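The same pattern works for the bzip2-compressed archives (tar.bz2) mentioned earlier; simply swap tar's -z flag for -j, as in this sketch with a hypothetical URL:

$ wget -c http://example.com/archive.tar.bz2 -O - | tar -xj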

How to Download and Extract File Using cURL Command

Considering the previous example, this is how you can use cURL to download and unpack archives in the current working directory.

$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz | tar -xz 
Download and Extract File with cURL

To extract the file to a different directory while downloading, use the following command.

$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz | sudo tar -xz -C /etc/nginx/
OR
$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && sudo tar -xzf GeoLite2-Country.tar.gz -C /etc/nginx/
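One caveat: unlike wget, curl does not follow HTTP redirects by default. If the download URL redirects (as many mirrors do), add the -L flag so the archive itself, rather than the redirect page, is piped to tar:

$ sudo curl -L http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz | tar -xz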

That’s all! In this short but useful guide, we showed you how to download and extract archive files in one single command. If you have any queries, use the comment section below to reach us.

5 new OpenStack resources

As OpenStack has continued to mature and move from the first stages of adoption to use in production clouds, the focus of the OpenStack community has shifted as well, with more focus than ever on integrating OpenStack with other infrastructure projects. Today’s cloud architects and engineers need to be familiar with a wide range of projects and how they might be of use in their data center, and OpenStack is often the glue stitching the different pieces together.

More on OpenStack

Keeping up with everything you need to know can be tough. Fortunately, learning new skills is made a little easier by the large number of resources available to help you. Along with project documentation, support from your vendors and the community at large, printed books and other publications, and certification and training programs, there are many wonderful community-created resources as well.

Every month we share some of the best OpenStack-related content we come across, from guides and tutorials to deep-dives and technical notes. Have a look at what we found this month.

  • Security is always important in cloud applications, but sometimes security protocols require conformance to certain exact specifications. In this guide on how to build security hardened images with volumes, learn how to take advantage of changes introduced in the Queens release of OpenStack which allow for using volumes for your images, giving you greater flexibility when resizing filesystems.

  • Real-time systems impose certain operating constraints, including determinism and guaranteed performance levels, which have been historically difficult to find in the cloud. This guide to deploying real-time OpenStack shows you how recent feature additions in Nova can allow for real-time applications in an OpenStack environment. While focused on CentOS and DevStack, with a few modifications this tutorial could be used on other installation profiles as well.

  • The rapid pace of development with OpenStack means an entirely new release becomes available every six months. But in a production environment running mission-critical systems, upgrading at that pace can be difficult. One approach to dealing with this issue is allowing for quick upgrades across multiple releases of OpenStack at a time. TripleO fast-forward upgrades allow this possibility, and this guide will walk you through a rough demo of how it works.

  • Have you wanted to try out the recently open sourced AWX, which is the upstream of Ansible Tower, for managing Ansible projects? You’re in luck. Here is a simple guide to deploying AWX to an OpenStack RDO cloud.

  • Finally this month, in case you missed it, earlier this month we ran a great tutorial for getting started with Gnocchi. Gnocchi is a tool which enables indexing and storage of time series data, and purpose-built for large-scale environments like clouds. While now cloud-agnostic, Gnocchi is commonly installed with OpenStack to manage logging and metrics needs.


Thanks for checking out this month’s roundup. If you’d like to learn more, take a look back at our entire collection of OpenStack guides, how-tos, and tutorials with more than three years of community-made content. Did we leave out a great guide or tutorial that you found? Let us know in the comments below, and we’ll consider putting it in our next edition.

How to Test Website Loading Speed in Linux Terminal

A website's response time can have a great impact on user experience, and if you are a web developer, or simply a server administrator responsible for putting the pieces together, then you have to make sure users don't feel frustrated while accessing your site; there really is a "need for speed".

Read Also: httpstat – A Curl Statistics Tool to Check Website Performance

This guide will show you how to test a website's response time from the Linux command line. Here, we will show how to check the time, in seconds, it takes:

  • to perform name resolution.
  • for TCP connection to the server.
  • for the file transfer to begin.
  • for the first byte to be transferred.
  • for the complete operation.

Additionally, for HTTPS-enabled sites, we will also see how to test the time, in seconds, it takes for a redirect and for the SSL connection/handshake to the server to be completed. Sounds good? Okay, let's get started.


cURL is a powerful command line tool for transferring data from or to a server, using protocols such as FILE, FTP, FTPS, HTTP, HTTPS, and many others. In most cases it is used as a command line downloader or for checking HTTP headers; here, however, we will describe one of its lesser-known functionalities.

cURL has a useful option, -w, for printing information on stdout after a completed operation. It supports a number of variables that we can use to test the different response times listed above for a website.

We will use some of the time-related variables, which can be passed in a given format as a literal string or inside a file.

So open your terminal and run the command below:

$ curl -s -w 'Testing Website Response Time for :%{url_effective}\n\nLookup Time:\t\t%{time_namelookup}\nConnect Time:\t\t%{time_connect}\nPre-transfer Time:\t%{time_pretransfer}\nStart-transfer Time:\t%{time_starttransfer}\n\nTotal Time:\t\t%{time_total}\n' -o /dev/null http://www.google.com
Test Website Loading Speed

The variables in the above format are:

  • time_namelookup – time, in seconds, it took from the start until the name resolving was completed.
  • time_connect – time, in seconds, it took from the start until the TCP connect to the remote host (or proxy) was completed.
  • time_pretransfer – time, in seconds, it took from the start until the file transfer was just about to begin.
  • time_starttransfer – time, in seconds, it took from the start until the first byte was just about to be transferred.
  • time_total – total time, in seconds, that the full operation lasted (millisecond resolution).

If the format is too long, you can write it in a file and use the syntax below to read it:

$ curl -s -w "@format.txt" -o /dev/null http://www.google.com

In the above command, the flag:

  • -s – tells curl to work silently.
  • -w – print the information on stdout.
  • -o – used to redirect output (here we discard the output by redirecting it to /dev/null).
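For reference, the format.txt used above could contain the same variables; curl expands the %{...} variables and interprets \n and \t escapes when reading the file:

Lookup Time:\t\t%{time_namelookup}\n
Connect Time:\t\t%{time_connect}\n
Pre-transfer Time:\t%{time_pretransfer}\n
Start-transfer Time:\t%{time_starttransfer}\n
Total Time:\t\t%{time_total}\n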

For HTTPS sites, you can run the command below:

$ curl -s -w 'Testing Website Response Time for :%{url_effective}\n\nLookup Time:\t\t%{time_namelookup}\nConnect Time:\t\t%{time_connect}\nAppCon Time:\t\t%{time_appconnect}\nRedirect Time:\t\t%{time_redirect}\nPre-transfer Time:\t%{time_pretransfer}\nStart-transfer Time:\t%{time_starttransfer}\n\nTotal Time:\t\t%{time_total}\n' -o /dev/null https://www.google.com
Test HTTPS Website Speed

In the above format, the new time variables are:

  • time_appconnect – time, in seconds, it took from the start until the SSL connect/handshake to the remote host was completed.
  • time_redirect – time, in seconds, it took for all redirection steps including name lookup, connect, pretransfer and transfer before the final transaction was started; it computes the full execution time for multiple redirections.

Some important points to note:

  • You will notice that the response time values keep changing (due to several factors) as you run different tests; it is therefore advisable to collect several values and work out an average speed, as in the loop shown after this list.
  • Secondly, from the results of the commands above, you can see that accessing a website over HTTP is faster than over HTTPS, since HTTPS adds the SSL handshake (and any redirect) on top of the same steps.
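A quick way to gather several samples is a small shell loop that prints only the total time of each run (the URL is just an example); averaging the figures is left to you:

$ for i in $(seq 1 5); do curl -s -w '%{time_total}\n' -o /dev/null http://www.google.com; done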

For more information, see the cURL man page:

$ man curl

Last but not least, if your results are not pleasing, then you have some adjustments to make on your server or within the code. You may consider using the following tutorials which explain programs and tips to make website(s) load faster in Linux:

  1. Install Nginx with Ngx_Pagespeed (Speed Optimization) on Debian and Ubuntu
  2. Speed Up Nginx Performance with Ngx_Pagespeed on CentOS 7
  3. Learn How to Speed Up Websites Using Nginx and Gzip Module
  4. How to Boost Linux Server Internet Speed with TCP BBR

That’s all! Now you know how to test website response time from the command line. You can ask questions via the feedback form below.

How to Install Zen Cart E-commerce Shopping Store in Linux

This topic covers the step-by-step installation of the Zen Cart open source e-commerce platform on Debian-based Linux distributions and on RHEL and CentOS 7 Linux operating systems.

Zen Cart is a popular, easy-to-manage shopping CMS platform, written in the PHP server-side programming language and deployed on top of a LAMP stack, that is mainly used to create online stores for advertising products and merchandise.

Requirements

  1. LAMP stack installed in CentOS 7
  2. LAMP stack installed in Ubuntu
  3. LAMP stack installed in Debian

Step 1: Install System Pre-Requirements for Zen Cart

1. In the first step, log in to your server console and issue the following command to install the zip, unzip, and curl utilities on your system.

# yum install unzip zip curl [On CentOS/RHEL]
# apt install zip unzip curl [On Debian/Ubuntu]

2. The Zen Cart online e-commerce platform is very often installed on top of a LAMP stack on Linux systems. If a LAMP stack is already installed on your machine, you should also make sure the following PHP extensions required by the Zen Cart e-commerce application are installed, by issuing the command for your distribution.

------------------ On CentOS/RHEL ------------------
# yum install epel-release
# yum install php-curl php-xml php-gd php-mbstring
------------------ On Debian/Ubuntu ------------------
# apt install php7.0-curl php7.0-xml php7.0-gd php7.0-mbstring


3. After all the required PHP modules are installed on your system, open the default PHP configuration file specific to your Linux distribution and update the PHP settings below.

Issue the command below, according to your distribution, to open and edit the PHP configuration file.

# vi /etc/php.ini [On CentOS/RHEL]
# nano /etc/php/7.0/apache2/php.ini [On Debian/Ubuntu]

Search for and update the following PHP settings as shown in the excerpt below:

file_uploads = On
allow_url_fopen = On
memory_limit = 64M
upload_max_filesize = 64M
date.timezone = Europe/Bucharest

Visit the official PHP time zone list to find the correct timezone for your server's geographical location.

4. After you've updated the PHP configuration file with the required settings, save and close the file, then restart the Apache service to re-read the configuration by issuing the following command.

# systemctl restart httpd [On CentOS/RHEL]
# systemctl restart apache2 [On Debian/Ubuntu]
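You can then confirm that the new values are active. A quick check from the CLI is shown below, though note the CLI may read a different php.ini than Apache, so a phpinfo() page is the definitive test:

# php -i | grep -E 'memory_limit|upload_max_filesize'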

5. The Zen Cart e-commerce platform needs an RDBMS database to store application data. To create a Zen Cart database, log in to the MySQL server console and issue the commands below to create the database and the credentials needed to access it.

Replace the database name, user and password variables with your own settings.

# mysql -u root -p
MariaDB [(none)]> create database zencart_shop;
MariaDB [(none)]> grant all privileges on zencart_shop.* to 'your_user'@'localhost' identified by 'your_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit

Step 2: Install Zen Cart in CentOS, Debian and Ubuntu

6. In order to install the Zen Cart e-commerce application, first download the latest Zen Cart zip archive to your system by issuing the command below.

# wget https://sourceforge.net/projects/zencart/files/CURRENT%20-%20Zen%20Cart%201.5.x%20Series/zen-cart-v1.5.5e-03082017.zip 

7. After the Zen Cart zip file finishes downloading, issue the following commands to extract the archive and copy the installation files to the web server document root path.

# unzip zen-cart-v1.5.5e-03082017.zip
# cp -rf zen-cart-v1.5.5e-03082017/* /var/www/html/

8. Next, issue the appropriate command below to grant the Apache HTTP server full write permissions over the Zen Cart installation files in the server's document root path.

# chown -R apache:apache /var/www/html/ [On CentOS/RHEL]
# chown -R www-data:www-data /var/www/html/ [On Debian/Ubuntu]

9. Next, open a browser, navigate to your server's IP address or domain name via the HTTP protocol, and hit the Click here link to start the Zen Cart installation process.

http://your_domain.tld/
ZenCart Installation Wizard

10. In the next step, the Zen Cart installer will inspect your system and report any problems in case the system configuration does not meet all the requirements for installing the shopping platform. If no warnings or errors are displayed, click on the Continue button to move to the next step.

ZenCart System Check

11. At the next installation stage, check the box to agree to the license terms and verify your store frontend URL addresses as illustrated in the screenshot below. Replace the IP address or domain name to match your server configuration. When you finish, hit the Continue button to move forward with the installation process.

ZenCart System Setup

12. Next, supply the MySQL database information (database host address, database name, and credentials), optionally check Load Demo Data into Zen Cart database, and select the database character set, database prefix, and SQL cache method as illustrated in the screenshot below. Click on the Continue button when you finish, in order to further configure Zen Cart.

ZenCart Database Setup

13. In the next installation screen, supply an Admin Superuser name that will be used to log in to the store backend, and an email address for the Superuser admin account. Write down or take a screenshot of the Admin temporary password and Admin directory name, then hit the Continue button to start the installation process.

ZenCart Admin Setup

14. Wait for the installation process to finish and you will be redirected to the Zen Cart final installation screen. Here you will find two links, one for the Zen Cart Admin Backend dashboard and one for Your Storefront, as illustrated in the screenshot below. Make sure you note the store admin backend address.

ZenCart Installation Finished

15. Now, before actually logging in to your store backend panel, first return to your server's bash console and issue the command below to delete the installation directory.

# rm -rf /var/www/html/zc_install/

16. Afterwards, go back to the browser and click on the Admin backend link to be redirected to the Zen Cart backend dashboard login page. Log in to the Zen Cart admin panel with the admin user and password configured earlier; you should be prompted to change the admin account's temporary password in order to secure your store.

ZenCart Admin Login

Setup ZenCart Admin Password

17. When you first log in to the Zen Cart backend panel, an initial setup wizard will be displayed on your screen. In this wizard, add your store name, owner, store owner email address, store country, store zone, and store address, then click on the Update button to save the changes. After completing this last step you can start managing your online store, configuring locations and taxes, and adding products.

ZenCart Initial Setup Wizard

18. Finally, in order to visit your Zen Cart frontend store, navigate to your server's IP address or domain name via the HTTP protocol, as illustrated in the screenshot below. This is the webpage where your advertised products will be displayed for your clients.

http://www.yourdomain.tld
ZenCart Store Frontend

Congratulations! You have successfully deployed the Zen Cart online e-commerce platform on your system.