How to Run Multiple Commands on Multiple Linux Servers

If you manage multiple Linux servers and need to run the same set of commands on all of them, there is no need to worry. In this simple server management guide, we will show you how to run multiple commands on multiple Linux servers simultaneously.

To achieve this, you can use the pssh (parallel ssh) program, a command line utility for executing ssh in parallel on a number of hosts. With it, you can send the same input, such as a shell script, to all of the ssh processes.

Requirements

  1. Pssh installed on the machine you will run the commands from (see the install sketch below this list).
  2. SSH passwordless (key-based) authentication configured for all remote servers.
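
If pssh is not yet installed, it is available in most distribution repositories or via pip; note that the package and command may be named pssh or parallel-ssh depending on your distribution, so treat the following as a rough sketch rather than exact package names:

# apt install pssh		[On Debian/Ubuntu]
# yum install pssh		[On RHEL/CentOS, from the EPEL repository]
# pip install pssh		[Using pip, on any distribution]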

Create a Shell Script

Start by preparing a script that contains the Linux commands you want to execute on the different servers. In this example, we will write a script that collects the following information from multiple servers:

  • Check uptime of servers
  • Check who is logged on and what they are doing
  • List top 5 running processes according to memory usage.

First create a script called commands.sh with your favorite editor.

# vi commands.sh


Next, add the following commands to the script as shown.

#!/bin/bash
###############################################################################
# Script Name  : commands.sh
# Description  : execute multiple commands on multiple servers
# Author       : Aaron Kili Kisinga
# Email        : [email protected]
###############################################################################
echo
# show system uptime
uptime
echo
# show who is logged on and what they are doing
who
echo
# show top 5 processes by RAM usage
ps -eo cmd,pid,ppid,%mem,%cpu --sort=-%mem | head -n 6
exit 0

Save the file and close it. Then make the script executable as shown.

# chmod +x commands.sh

Create PSSH Hosts File

Next, add the list of servers that you want to run the commands on to a hosts.txt file, in the format [user@]host[:port], or simply list the server IP addresses.

However, we suggest you use ssh aliases, which can be specified in the ~/.ssh/config file as explained in how to configure custom ssh connections to simplify remote access.

This method is more efficient and reliable; it allows you to specify configuration options (such as hostname, identity file, port, username, etc.) for each remote server.

The following is our sample ssh host aliases file, a.k.a. the user-specific ssh configuration file.

# vi ~/.ssh/config

SSH Hosts Aliases File
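
For reference, a minimal sketch of such a file; the host names, addresses and key paths below are purely illustrative and should be replaced with your own values:

Host server1
    HostName 192.168.0.101
    User root
    IdentityFile ~/.ssh/id_rsa

Host server2
    HostName 192.168.0.102
    User root
    IdentityFile ~/.ssh/id_rsa

Host server3
    HostName 192.168.0.103
    User root
    IdentityFile ~/.ssh/id_rsa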

Next, create a hosts.txt file; here you can simply specify the aliases (the names defined with the Host keyword in the ~/.ssh/config file) as shown.

# vi hosts.txt 

Add the server aliases.

server1
server2
server3

Run Commands via a Script on Multiple Linux Servers

Now run the following pssh command, specifying the hosts.txt file along with the script that contains the commands to run on the remote servers.

# pssh -h hosts.txt -P -I < ./commands.sh

Meaning of the flags used in the above command:

  • -h – reads the hosts file.
  • -P – tells pssh to display output as it arrives.
  • -I – reads input and sends to each ssh process.
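
Depending on your pssh version, a few other options can also be handy: -l sets the remote username and -t sets the per-host timeout in seconds (0 disables the timeout), which is useful when the script takes a while to finish. A hedged example:

# pssh -h hosts.txt -l root -t 0 -P -I < ./commands.sh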

Run Multiple Commands On Remote Servers

That’s It! In this article, we showed how to execute multiple commands on multiple servers in Linux. You can share any thoughts relating to this topic via the comment section below.


How to Install X-Cart Shopping Cart in Linux

X-Cart is a commercial open source e-commerce CMS platform written in PHP, used by businesses to create online stores and sell products.

In this topic we’ll learn how to install the X-Cart e-commerce platform on Debian 9, Ubuntu 16.04 or CentOS 7, in order to create an online business store.

Requirements

  1. LAMP stack installed in CentOS 7
  2. LAMP stack installed in Ubuntu
  3. LAMP stack installed in Debian

Step 1: Initial Configurations for X-Cart Installation

1. As a first step, install the zip and unzip utilities on your system by issuing the following command.

# yum install unzip zip [On CentOS/RHEL]
# apt install zip unzip [On Debian/Ubuntu]

2. X-Cart is a web-based e-commerce platform deployed on top of a LAMP stack in Linux. In order to install X-Cart on your system, first install all the PHP modules required by the application by issuing the following commands.

------------------ On CentOS/RHEL ------------------
# yum install epel-release
# yum install php-mbstring php-curl php-gd php-xml
------------------ On Debian/Ubuntu ------------------
# apt install php7.0-mbstring php7.0-curl php7.0-gd php7.0-xml


3. Next, update the following PHP variables in the default configuration file and set the PHP timezone to match your system’s geographical location. The list of time zones supported by PHP can be found on the official PHP timezones page.

Edit PHP configuration file by issuing the below commands according to your own distribution.

# vi /etc/php.ini [On CentOS/RHEL]
# nano /etc/php/7.0/apache2/php.ini [On Debian/Ubuntu]

Update the following variables in php.ini configuration file.

file_uploads = On
allow_url_fopen = On
memory_limit = 128M
upload_max_filesize = 64M
date.timezone = Europe/Bucharest

4. Save and close the PHP configuration file, then restart the Apache daemon to apply the changes by issuing the following command.

# systemctl restart httpd [On CentOS/RHEL]
# systemctl restart apache2 [On Debian/Ubuntu]
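
If you want to confirm that the new values were picked up, you can query them with the PHP CLI as a quick sanity check; keep in mind that on Debian/Ubuntu the CLI may read a different php.ini than Apache, so the authoritative values are those in the Apache-specific file edited above:

# php -i | grep -E 'memory_limit|upload_max_filesize|date.timezone'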

5. Next, log in to MariaDB/MySQL database console and create X-Cart application database with the proper credentials, by issuing the following commands.

Replace the database name, user and password with your own values.

# mysql -u root -p
MariaDB [(none)]> create database xcart;
MariaDB [(none)]> grant all privileges on xcart.* to 'xcartuser'@'localhost' identified by 'your_password';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit
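
Optionally, you can verify from the MariaDB console that the grant was applied as expected:

MariaDB [(none)]> SHOW GRANTS FOR 'xcartuser'@'localhost';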

Step 2: Install X-Cart in CentOS, Debian and Ubuntu

6. To install X-Cart, first go to the X-Cart download page from a desktop machine and download the latest zip package by filling in the required web form on their website.

Then, copy the downloaded zip file to the server’s /tmp directory via the scp command or the SFTP protocol, as illustrated in the examples below.

# scp x-cart-5.3.3.4-gb.zip root@your_server_IP:/tmp	[Using SCP]
# sftp root@your_server_IP:/tmp				[Using sFTP]

7. After you’ve copied the X-Cart zip archive to server /tmp directory, go back to server terminal and extract the archive by issuing the below command.

# cd /tmp
# unzip x-cart-5.3.3.4-gb.zip

8. Then, create a directory named shop under the /var/www/html/ path and copy the contents of the xcart directory to the shop directory in the web server document root by issuing the following commands. Also, copy the hidden .htaccess file to the webroot /shop directory.

# mkdir /var/www/html/shop
# cp -rf xcart/* /var/www/html/shop/
# cp xcart/.htaccess /var/www/html/shop/

9. Next, make sure all files in the webroot /shop directory are owned by the Apache user. Issue the ls command to list the /var/www/html/shop/ directory permissions.

# chown -R apache:apache /var/www/html/shop [On CentOS/RHEL]
# chown -R www-data:www-data /var/www/html/shop [On Debian/Ubuntu]
# ls -al /var/www/html/shop

10. Next, browse to your server IP address via the HTTP protocol at the /shop URL and hit the Click here link in order to start the installation process.

http://your_domain.tld/shop/

Install X-Cart Shopping Store

11. Next, check I accept the License Agreement and the Privacy policy and hit the Next button to accept the license and move to the next installation screen.


Accept X-Cart License Agreement

12. On the next screen, add your email address, set up a password for the admin account and hit the Next button to continue the installation process.


Create X-Cart Admin Account

13. Next, add the X-Cart MySQL database name and credentials created earlier, check Install a sample catalog and hit the Next button to continue.


Configure X-Cart Database Settings

14. Wait for the installation process to complete and you will see two links for accessing the X-Cart Administration zone (backoffice) panel and the X-Cart frontend (Customer zone) of your store, as illustrated in the image below.


X-Cart Installation Completed

15. Visit your X-Cart store frontend by hitting the Customer zone link. You can also reach the store frontend by navigating to your server IP address or domain name at the /shop URL, as shown in the example below.

http://yourdomain.tld/shop

X-Cart Shopping Store

16. Next, go back to the server console and secure your X-Cart backend admin panel by issuing the commands below:

# chown -R root /var/www/html/shop/etc/
# chown root /var/www/html/shop/config.php

17. Finally, access the X-Cart backend panel by hitting the Administrator zone (Backoffice) link or by navigating to your server IP address or domain name via HTTP to the /shop/admin.php URL, as shown in the example below.

http://your_domain.tld/shop/admin.php

X-Cart Admin Login

18. After logging in to the X-Cart backend admin panel with the credentials configured during the installation process, you should activate your X-Cart edition and start managing your online store.


X-Cart Admin Dashboard

Congratulations! You have successfully installed and configured the X-Cart e-commerce platform on your server.

12 MySQL/MariaDB Security Best Practices for Linux

MySQL is the world’s most popular open source database system and MariaDB (a fork of MySQL) is the world’s fastest growing open source database system. A freshly installed MySQL server is insecure in its default configuration, and securing it is one of the essential tasks in general database management.

Read Also: Learn MySQL/MariaDB for Beginners – Part 1

This will contribute to hardening and boosting of overall Linux server security, as attackers always scan vulnerabilities in any part of a system, and databases have in the past been key target areas. A common example is the brute-forcing of the root password for the MySQL database.

In this guide, we will explain useful MySQL/MariaDB security best practices for Linux.

1. Secure MySQL Installation


Running the mysql_secure_installation script is the first recommended step after installing MySQL server towards securing the database server. The script improves the security of your MySQL server by asking you to:

  • set a password for the root account, if you didn’t set it during installation.
  • disable remote root user login by removing root accounts that are accessible from outside the local host.
  • remove anonymous-user accounts and test database which by default can be accessed by all users, even anonymous users.

# mysql_secure_installation

After running it, set the root password and answer the series of questions by entering [Yes/Y] and pressing [Enter].


Secure MySQL Installation

2. Bind Database Server To Loopback Address

This configuration restricts access from remote machines; it tells the MySQL server to only accept connections from localhost. You can set it in the main configuration file.

# vi /etc/my.cnf				[RHEL/CentOS]
# vi /etc/mysql/my.cnf				[Debian/Ubuntu]
OR
# vi /etc/mysql/mysql.conf.d/mysqld.cnf		[Debian/Ubuntu]

Add the following line under the [mysqld] section.

bind-address = 127.0.0.1
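
After restarting the database service, you can confirm that MySQL/MariaDB is now listening only on the loopback address, for instance with the ss utility (or netstat on older systems):

# ss -ltnp | grep 3306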

3. Disable LOCAL INFILE in MySQL

As part of security hardening, you need to disable local_infile to prevent access to the underlying filesystem from within MySQL, using the following directive under the [mysqld] section.

local-infile=0

4. Change MYSQL Default Port

The port variable sets the port number that MySQL will listen on for TCP/IP connections. The default port number is 3306, but you can change it under the [mysqld] section as shown.

port=5000
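
Keep in mind that on SELinux-enabled systems (e.g. CentOS/RHEL with SELinux in enforcing mode), the new port must also be allowed in the SELinux policy, for example with semanage from the policycoreutils tools; a hedged example for the port used above:

# semanage port -a -t mysqld_port_t -p tcp 5000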

5. Enable MySQL Logging

Logs are one of the best ways to understand what happens on a server; in case of any attack, you can easily see intrusion-related activities in the log files. You can enable MySQL logging by adding the following variable under the [mysqld] section.

log=/var/log/mysql.log

6. Set Appropriate Permission on MySQL Files

Ensure that you have appropriate permissions set for all MySQL server files and data directories. The /etc/my.cnf file should only be writable by root. This blocks other users from changing database server configurations.

# chmod 644 /etc/my.cnf

7. Delete MySQL Shell History

All commands you execute in the MySQL shell are stored by the mysql client in a history file: ~/.mysql_history. This can be dangerous, because for any user accounts that you create, all usernames and passwords typed on the shell will be recorded in the history file.

# cat /dev/null > ~/.mysql_history
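
If you prefer that the history never be written at all, a common approach is to point the history file at /dev/null (this is a general shell trick rather than a MySQL-specific feature):

# ln -sf /dev/null ~/.mysql_history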

8. Don’t Type MySQL Passwords on the Command Line

As you already know, all commands you type on the terminal are stored in a history file, depending on the shell you are using (for example ~/.bash_history for bash). An attacker who manages to gain access to this history file can easily see any passwords recorded there.

It is strongly recommended not to type passwords on the command line, like this:

# mysql -u root -ppassword_

Connect MySQL with Password

When you check the last section of the command history file, you will see the password typed above.

# history 

Check Command History

The appropriate way to connect to MySQL is:

# mysql -u root -p
Enter password:
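
Alternatively, for frequent or scripted connections you can keep the credentials in a user-specific option file that only you can read, so nothing sensitive lands in the shell history. A minimal sketch of ~/.my.cnf (the password shown is a placeholder):

[client]
user=root
password=YourPasswordHere

Then lock down its permissions and simply run mysql without any credential flags:

# chmod 600 ~/.my.cnf
# mysql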

9. Define Application-Specific Database Users

For each application running on the server, only give access to the user who is in charge of that application’s database. For example, if you host an OsClass (or WordPress, etc.) site, create a dedicated database and a specific user for it as follows.

# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE osclass_db;
MariaDB [(none)]> CREATE USER 'osclassdmin'@'localhost' IDENTIFIED BY 'YourStrongPassword%!2';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON osclass_db.* TO 'osclassdmin'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit

Also remember to always remove user accounts that no longer manage any application database on the server.
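
Dropping such an account is straightforward (the account name below is just an example):

MariaDB [(none)]> DROP USER 'olduser'@'localhost';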

10. Use Additional Security Plugins and Libraries

MySQL includes a number of security plugins for authenticating client connection attempts, password validation and secure storage of sensitive information, all of which are available in the free version.

You can find more here: https://dev.mysql.com/doc/refman/5.7/en/security-plugins.html

11. Change MySQL Passwords Regularly

This is a common piece of information/application/system security advice. How often you do this will entirely depend on your internal security policy. However, it can prevent “snoopers” who might have been tracking your activity over a long period of time from gaining access to your MySQL server.

MariaDB [(none)]> USE mysql;
MariaDB [(none)]> UPDATE user SET password=PASSWORD('YourPasswordHere') WHERE User='root' AND Host = 'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;
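
Note that on recent MySQL and MariaDB releases, directly updating the mysql.user table in this way is deprecated or unsupported; there the preferred statement is ALTER USER, for example:

MariaDB [(none)]> ALTER USER 'root'@'localhost' IDENTIFIED BY 'YourPasswordHere';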

12. Update MySQL Server Package Regularly

It is highly recommended to upgrade MySQL/MariaDB packages regularly from the vendor’s repository to keep up with security updates and bug fixes. Packages in the default operating system repositories are often outdated.

# yum update
# apt update

After making any changes to the mysql/mariadb server, always restart the service.

# systemctl restart mariadb #RHEL/CentOS
# systemctl restart mysql #Debian/Ubuntu

Read Also: 15 Useful MySQL/MariaDB Performance Tuning and Optimization Tips

That’s all! We would love to hear from you via the comment form below. Do share with us any MySQL/MariaDB security tips missing from the above list.

How to Check Integrity of File and Directory Using “AIDE” in Linux

In our mega guide to hardening and securing CentOS 7, under the section “protect system internally”, one of the useful security tools we listed for internal system protection against viruses, rootkits, malware, and detection of unauthorized activities is AIDE.

AIDE (Advanced Intrusion Detection Environment) is a small yet powerful, free open source intrusion detection tool, that uses predefined rules to check file and directory integrity in Unix-like operating systems such as Linux. It is an independent static binary for simplified client/server monitoring configurations.

It is feature-rich: uses plain text configuration files and database making it easy to use; supports several message digest algorithms such as but not limited to md5, sha1, rmd160, tiger; supports common file attributes; also supports powerful regular expressions to selectively include or exclude files and directories to be scanned.

Also it can be compiled with exceptional support for Gzip compression, Posix ACL, SELinux, XAttrs and Extended file system attributes.


Aide works by creating a database (which is simply a snapshot of selected parts of the file system), from the regular expression rules defined in the configuration file(s). Once this database is initialized, you can verify the integrity of the system files against it. This guide will show how to install and use aide in Linux.

How to Install AIDE in Linux

AIDE is packaged in the official repositories of mainstream Linux distributions; to install it, run the appropriate command for your distribution using its package manager.

# apt install aide		[On Debian/Ubuntu]
# yum install aide		[On RHEL/CentOS]
# dnf install aide		[On Fedora 22+]
# zypper install aide		[On openSUSE]
# emerge aide			[On Gentoo]

After installing it, the main configuration file is /etc/aide.conf. To view the installed version as well as compile time parameters, run the command below on your terminal:

# aide -v
Sample Output
Aide 0.14
Compiled with the following options:
WITH_MMAP
WITH_POSIX_ACL
WITH_SELINUX
WITH_PRELINK
WITH_XATTR
WITH_LSTAT64
WITH_READDIR64
WITH_ZLIB
WITH_GCRYPT
WITH_AUDIT
CONFIG_FILE = "/etc/aide.conf"

You can open the configuration using your favorite editor.

# vi /etc/aide.conf

It has directives that define the database location, report location, default rules, the directories/files to be included in the database.

Understanding Default Aide Rules


AIDE Default Rules

Using the above default rules, you can define new custom rules in the aide.conf file, for example:

PERMS = p+u+g+acl+selinux+xattrs

The PERMS rule is used for access control monitoring only; it will detect any changes to files or directories based on file/directory permissions, user, group, access control lists, SELinux context and file attributes.

The following rule will only check file content and file type.

CONTENT = sha256+ftype

The next rule is an extended version of the previous one; it checks file content, file type and access attributes.

CONTENT_EX = sha256+ftype+p+u+g+n+acl+selinux+xattrs

The DATAONLY rule below will help detect any changes in data inside all files/directories.

DATAONLY = p+n+u+g+s+acl+selinux+xattrs+sha256

Configure Aide Rules

Defining Rules to Watch Files and Directories

Once you have defined rules, you can specify the files and directories to watch. Considering the PERMS rule above, the following definition will check permissions for the hidden files (dotfiles) in the root user’s home directory.

/root/\..* PERMS

This will check all files in the /root directory for any changes.

/root/ CONTENT_EX

To detect any changes to data inside all files/directories under /etc/, use this.

/etc/ DATAONLY 

Configure Aide Rules for Filesystem

Using AIDE to Check File and Directory Integrity in Linux

Start by constructing a database for the checks that will be performed, using the --init flag. This should be done before your system is connected to a network.

The command below will create a database that contains all of the files that you selected in your configuration file.

# aide --init

Initialize Aide Database

Then rename the database to /var/lib/aide/aide.db.gz before proceeding, using this command.

# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

It is recommended to move the database to a secure location, possibly on read-only media or on another machine, but ensure that you update the configuration file so AIDE reads it from there.
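
The relevant directives in /etc/aide.conf are database (where checks read the reference database from) and database_out (where a newly initialized or updated database is written). On many distributions they default to something like the following, possibly expressed via a DBDIR macro:

database=file:/var/lib/aide/aide.db.gz
database_out=file:/var/lib/aide/aide.db.new.gz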

After the database is created, you can now check the integrity of the files and directories using the --check flag.

# aide --check

It will read the snapshot in the database and compare it to the files/directories found on your system disk. If it finds changes in places that you might not expect, it generates a report which you can then review.


Run File Integrity Check

Since no changes have been made to the file system, you will only get an output similar to the one above. Now try to create some files in the file system, in areas defined in the configuration file.

# vi /etc/script.sh
# touch all.txt

Then run a check once more, which should report the files added above. The output of this command depends on the parts of the file system you configured for checking; it can become lengthy over time.

# aide --check

Check File System Changes

You need to run aide checks regularly, and in case of any changes to already selected files or addition of new file definitions in the configuration file, always update the database using the --update option:

# aide --update

After running a database update, to use the new database for future scans, always rename it to /var/lib/aide/aide.db.gz:

# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
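
Since the checks should run regularly, you might schedule them with cron; a minimal sketch of an entry dropped into /etc/cron.d (the schedule and binary path are assumptions, adjust them for your system):

# echo '0 3 * * * root /usr/sbin/aide --check' > /etc/cron.d/aide-check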

That’s all for now! But take note of these important points:

  • One characteristic of most intrusion detection systems, AIDE included, is that they do not provide solutions to security loopholes on a system. They do, however, ease the intrusion response process by helping system administrators examine any changes to system files/directories. So you should always be vigilant and keep updating your current security measures.
  • It is highly recommended to keep the newly created database, the configuration file and the AIDE binary in a secure location such as read-only media (possible if you install from source).
  • For additional security, consider signing the configuration and/or database.

For additional information and configurations, see its man page or check out the AIDE Homepage: http://aide.sourceforge.net/

A Shell Script to Send Email Alert When Memory Gets Low

A powerful aspect of Unix/Linux shell programs such as bash is their amazing support for common programming constructs that enable you to make decisions, execute commands repeatedly, create new functions, and so much more. You can write commands in a file known as a shell script and execute them collectively.

This offers you a reliable and effective means of system administration. You can write scripts to automate tasks, for instance daily backups, system updates, etc.; create new custom commands/utilities/tools; and more. You can also write scripts to help you keep up with what’s unfolding on a server.

One of the critical components of a server is memory (RAM); it greatly impacts the overall performance of a system.

In this article, we will share a small but useful shell script to send an alert email to one or more system administrator(s), if server memory is running low.


This script is particularly useful for keeping an eye on Linux VPS (Virtual Private Servers) with a small amount of memory, say about 1GB (approximately 990MB).

Testing Environment Setup

  1. A CentOS/RHEL 7 production server with the mailx utility installed and a working Postfix mail server.

This is how the alertmemory.sh script works: first it checks the free memory size, then determines whether the amount of free memory is less than or equal to a specified size (100 MB for the purpose of this guide), used as a benchmark for the least acceptable free memory size.

If this condition is true, it will generate a list of the top processes consuming server RAM and send an alert email to the specified email addresses.

Note: You will have to make a few changes to the script (especially the mail sender utility; use the appropriate flags) to meet your Linux distribution’s requirements.

Shell Script to Check Server Memory

#!/bin/bash
#######################################################################################
# Script Name  : alertmemory.sh
# Description  : send alert mail when server memory is running low
# Args         :
# Author       : Aaron Kili Kisinga
# Email        : [email protected]
# License      : GNU GPL-3
#######################################################################################
## declare mail variables
## email subject
subject="Server Memory Status Alert"
## sending mail as
from="[email protected]"
## sending mail to
to="[email protected]"
## send carbon copy to
also_to="[email protected]"

## get total free memory size in megabytes (MB)
free=$(free -mt | grep Total | awk '{print $4}')

## check if free memory is less than or equal to 100MB
if [[ "$free" -le 100 ]]; then
	## get top processes consuming system memory and save to a temporary file
	ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head > /tmp/top_processes_consuming_memory.txt
	file=/tmp/top_processes_consuming_memory.txt
	## send email if system memory is running low
	echo -e "Warning, server memory is running low!\n\nFree memory: $free MB" | mailx -a "$file" -s "$subject" -r "$from" -c "$to" "$also_to"
fi

exit 0

After creating your script /etc/scripts/alertmemory.sh, make it executable and symlink it into /etc/cron.hourly/.

# chmod +x /etc/scripts/alertmemory.sh
# ln -s -t /etc/cron.hourly/ /etc/scripts/alertmemory.sh

This means that the above script will run every hour as long as the server is running.

Tip: To test whether it is working as intended, set the benchmark value a little higher to easily trigger an email, and temporarily schedule it at a smaller interval of about 5 minutes.

Then keep checking from the command line using the free command used in the script. Once you confirm that it is working, define the actual values you would like to use.
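
For reference, the exact value the script compares against can be printed directly with the same pipeline the script uses:

# free -mt | grep Total | awk '{print $4}'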

Below is a screenshot showing a sample alert email.


Linux Memory Email Alert

That’s all! In this article, we explained how to use a shell script to send alert emails to system administrators in case server memory (RAM) is running low. You can share any thoughts relating to this topic with us via the feedback form below.

4 Ways to Speed Up SSH Connections in Linux

SSH is the most popular and secure method for managing Linux servers remotely. One of the challenges with remote server management is connection speeds, especially when it comes to session creation between the remote and local machines.

There are several bottlenecks to this process. One scenario is connecting to a remote server for the first time; it normally takes a few seconds to establish a session. However, when you try to start multiple connections in succession, this overhead (the extra computation time, memory, bandwidth, or other resources needed to carry out the operation) adds up.

In this article, we will share four useful tips on how to speed up remote SSH connections in Linux.

1. Force SSH Connection Over IPV4

OpenSSH supports both IPv4 and IPv6, but at times IPv6 connections tend to be slower. So you can consider forcing ssh connections over IPv4 only, using the syntax below:

# ssh -4 user@example.com


Alternatively, use the AddressFamily (specifies the address family to use when connecting) directive in your ssh configuration file /etc/ssh/ssh_config (global configuration) or ~/.ssh/config (user specific file).

The accepted values are “any”, “inet” for IPv4 only, or “inet6”.

$ vi ~/.ssh/config 

Disable SSH Connections on ipv6
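
In text form, the setting referred to above would look something like this in your ~/.ssh/config:

Host *
    AddressFamily inet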

Here is a useful starter guide on configuring user specific ssh configuration file:

  1. How to Configure Custom SSH Connections to Simplify Remote Access

Additionally, on the remote machine, you can also instruct the sshd daemon to consider connections over IPv4 by using the above directive in the /etc/ssh/sshd_config file.

2. Disable DNS Lookup On Remote Machine

By default, the sshd daemon looks up the remote host name, and also checks that the resolved host name for the remote IP address maps back to the very same IP address. This can result in delays in connection establishment or session creation.

The UseDNS directive controls the above functionality; to disable it, search and uncomment it in the /etc/ssh/sshd_config file. If it’s not set, add it with the value no.

UseDNS no

Disable SSH DNS Lookup

3. Reuse SSH Connection

An ssh client program is used to establish connections to an sshd daemon accepting remote connections. You can reuse an already-established connection when creating a new ssh session and this can significantly speed up subsequent sessions.

You can enable this in your ~/.ssh/config file.

Host *
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600

The above configuration (Host *) will enable connection re-use for all remote servers you connect to using these directives:

  • ControlMaster – enables the sharing of multiple sessions over a single network connection.
  • ControlPath – defines a path to the control socket used for connection sharing.
  • ControlPersist – if used together with ControlMaster, tells ssh to keep the master connection open in the background (waiting for future client connections) once the initial client connection has been closed.

Reuse SSH Connections
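
Note that the directory referenced by ControlPath is not created automatically, so make sure it exists before opening the first connection:

$ mkdir -p ~/.ssh/sockets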

You can enable this for connections to a specific remote server, for instance:

Host server1
HostName www.example.com
IdentityFile ~/.ssh/webserver.pem
User username_here
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600

This way you only suffer the connection overhead for the first connection, and all subsequent connections will be much faster.

4. Use Specific SSH Authentication Method

Another way of speeding up ssh connections is to use a given authentication method for all ssh connections, and here we recommend configuring ssh passwordless login using ssh keygen in 5 easy steps.

Once that is done, use the PreferredAuthentications directive within the ssh_config files (global or user specific) above. This directive defines the order in which the client should try authentication methods (you can specify a comma-separated list to use more than one method).

PreferredAuthentications=publickey 

SSH Authentication Method

Optionally, use the syntax below from the command line.

# ssh -o "PreferredAuthentications=publickey" user@example.com

If you prefer password authentication, which is deemed insecure, use this.

# ssh -o "PreferredAuthentications=password" user@example.com

Finally, if you changed the server-side configuration (/etc/ssh/sshd_config), restart the sshd daemon for those changes to take effect; client-side changes in ~/.ssh/config apply to the next connection without a restart.

# systemctl restart sshd #Systemd
# service sshd restart #SysVInit

For more information about the directives used here, see the ssh_config and sshd_config man pages.

# man ssh_config
# man sshd_config 

Also check out these useful guides for securing ssh on Linux systems:

  1. 5 Best Practices to Secure and Protect SSH Server
  2. How to Disconnect Inactive or Idle SSH Connections in Linux

That’s all for now! Do you have any tips/tricks for speeding up SSH connections? We would love to hear of other ways of doing this. Use the comment form below to share with us.

How to Configure Basic HTTP Authentication in Nginx

Basic HTTP authentication is a security mechanism to restrict access to your website/application or some parts of it by setting up simple username/password authentication. It can be used essentially to protect the whole HTTP server, individual server blocks (virtual hosts in Apache) or location blocks.

Read Also: How to Setup Name-based and IP-based Virtual Hosts (Server Blocks) with NGINX

As the name suggests, it is not a security method to rely on alone; you should use it in conjunction with other, more reliable security measures. For instance, if your web application is running over HTTP, user credentials are transmitted in plain text, so you should consider enabling HTTPS.

The purpose of this guide is to help you add a small but useful layer of security to protect private/privileged content on your web applications (such as, but not limited to, administrator areas). You can also use it to prevent access to a website or application which is still in the development phase.

Requirements

  1. Install LEMP Stack in CentOS/RHEL 7
  2. Install LEMP Stack in Ubuntu/Debian

Create HTTP Authentication User File


You should start by creating a file that will store username:password pairs. We will use the htpasswd utility from Apache HTTP Server to create this file.

First check that apache2-utils or httpd-tools, the packages which provide the htpasswd utility, are installed on your system; otherwise run the appropriate command for your distribution to install it:

# yum install httpd-tools [RHEL/CentOS]
$ sudo apt install apache2-utils [Debian/Ubuntu]

Next, run the htpasswd command below to create the password file with the first user. The -c option tells htpasswd to create the specified password file; once you hit [Enter], you will be asked to enter the user’s password.

# htpasswd -c /etc/nginx/conf.d/.htpasswd developer

Add a second user, and do not use the -c option here.

# htpasswd /etc/nginx/conf.d/.htpasswd admin

Now that you have the password file ready, proceed to configure the parts of your web server that you want to restrict access to. To view the password file content (which includes usernames and encrypted passwords), use the cat command below.

# cat /etc/nginx/conf.d/.htpasswd 

View HTTP Password File

Configure HTTP Authentication for Nginx

As we mentioned earlier, you can restrict access to your web server, a single web site (using its server block), or a location block. Two useful directives can be used to achieve this.

  • auth_basic – turns on validation of user name and password using the “HTTP Basic Authentication” protocol.
  • auth_basic_user_file – specifies the password file.

Password Protect Nginx Virtual Hosts

To implement basic authentication for the whole web server, which applies to all server blocks, open the /etc/nginx/nginx.conf file and add the lines below in the http context:

http {
	auth_basic "Restricted Access!";
	auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
	...
}

Password Protect Nginx Website or Domain

To enable basic authentication for a particular domain or sub-domain, open its configuration file under /etc/nginx/conf.d/ or /etc/nginx/sites-available (depending on how you installed Nginx), then add the configuration below in the server block or context:

server {
	listen 80;
	server_name example.com;
	auth_basic "Restricted Access!";
	auth_basic_user_file /etc/nginx/conf.d/.htpasswd;

	location / {
		...
	}
	...
}

Password Protect Web Directory in Nginx

You can also enable basic authentication within a location directive. In the example below, all users trying to access the /admin location block will be asked to authenticate.

server {
	listen 80;
	server_name example.com www.example.com;

	location / {
		...
	}
	location /admin/ {
		auth_basic "Restricted Access!";
		auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
	}
	location /public/ {
		auth_basic off;	#turns off basic http authentication for this block
	}
	...
}
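
After editing the configuration, it is a good idea to check the syntax and reload Nginx so the changes take effect (the commands below assume a systemd-based system):

# nginx -t
# systemctl reload nginx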

Once you have configured basic HTTP authentication, all users who try to access your web server, a sub-domain, or a specific part of a site (depending on where you implemented it) will be asked for a username and password, as shown in the screenshot below.


Nginx Basic Authentication

In case of a failed user authentication, a “401 Authorization Required” error will be displayed as shown below.


401 Authorization Required Error
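
You can also test the restriction from the command line with curl; the domain below is illustrative, and the -u option makes curl prompt for the given user’s password:

$ curl -u developer http://example.com/admin/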

You can find more information at restricting Access with Basic HTTP Authentication.

You might also like to read the following useful Nginx HTTP server related guides.

  1. How to Password Protect Web Directories in Nginx
  2. The Ultimate Guide to Secure, Harden and Improve Performance of Nginx
  3. Setting Up HTTPS with Let’s Encrypt SSL Certificate For Nginx

In this guide, we showed how to implement basic HTTP authentication in Nginx HTTP web server. To ask any questions, use the feedback form below.


How to Download and Extract Tar Files with One Command

Tar (Tape Archive) is a popular file archiving format in Linux. It can be used together with gzip (tar.gz) or bzip2 (tar.bz2) for compression. It is the most widely used command line utility to create compressed archive files (packages, source code, databases and so much more) that can be transferred easily from one machine to another or over a network.

Read Also: 18 Tar Command Examples in Linux

In this article, we will show you how to download tar archives using two well-known command line downloaders – wget and cURL – and extract them with a single command.

How to Download and Extract File Using Wget Command

The example below shows how to download and unpack the latest GeoLite2 Country database (used by the GeoIP Nginx module) into the current directory.

# wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz -O - | tar -xz

Download and Extract File with Wget


The wget option -O specifies a file to which the document is written; here we use -, meaning the download is written to standard output and piped to tar. The tar flag -x enables extraction of the archive, and -z decompresses compressed archives created by gzip.

To extract the tar files to a specific directory, /etc/nginx/ in this case, use the -C flag as follows.

Note: If extracting files to a particular directory that requires root permissions, use the sudo command to run tar.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz -O - | sudo tar -xz -C /etc/nginx/

Download and Extract File to Directory

Alternatively, you can use the following command; here, the archive file is downloaded to your system before it is extracted.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && tar -xzf GeoLite2-Country.tar.gz

To extract the compressed archive file to a specific directory, use the following command.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && sudo tar -xzf GeoLite2-Country.tar.gz -C /etc/nginx/
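
The same pattern works for bzip2-compressed archives (tar.bz2) by swapping the -z flag for -j; the URL below is purely illustrative:

$ wget -c http://example.com/archive.tar.bz2 -O - | tar -xj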

How to Download and Extract File Using cURL Command

Considering the previous example, this is how you can use cURL to download and unpack archives in the current working directory.

$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz | tar -xz 

Download and Extract File with cURL

To extract the file to a different directory while downloading, use one of the following commands.

$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz | sudo tar -xz -C /etc/nginx/
OR
$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && sudo tar -xzf GeoLite2-Country.tar.gz -C /etc/nginx/

That’s all! In this short but useful guide, we showed you how to download and extract archive files in one single command. If you have any queries, use the comment section below to reach us.