Cryptmount – A Utility to Create Encrypted Filesystems in Linux

Cryptmount is a powerful utility that allows any user to access encrypted filesystems on demand under GNU/Linux, without requiring root privileges. It requires Linux 2.6 or higher and handles both encrypted partitions and encrypted container files.

Compared with older approaches such as the cryptoloop device driver, it makes it much easier for ordinary users to access encrypted filesystems on demand through the newer device-mapper mechanism. Cryptmount also helps a system administrator create and manage encrypted filesystems based on the kernel's dm-crypt device-mapper target.

Cryptmount offers the following advantages:

  • access to enhanced functionality in the kernel.
  • support for filesystems stored on either raw disk partitions or loopback files.
  • separate encryption of filesystem access keys, allowing access passwords to be changed without re-encrypting the entire filesystem.
  • keeping various encrypted filesystems on a single disk partition, using a designated subset of blocks for each.
  • rarely used filesystems do not need to be mounted at system startup.
  • unmounting of a filesystem is restricted to the user that mounted it (or the root user).
  • encrypted filesystems compatible with cryptsetup.
  • support for encrypted swap partitions (superuser only).
  • support for creating encrypted filesystems or crypto-swap at system boot-up.

How to Install and Configure Cryptmount in Linux

On Debian/Ubuntu distributions, you can install Cryptmount using the apt command as shown.

$ sudo apt install cryptmount


On RHEL/CentOS/Fedora distributions, you can install it from source. First, install the packages required to build and use cryptmount.

# yum install device-mapper-devel

Then download the latest Cryptmount source files using the wget command and install it as shown.

# wget https://downloads.sourceforge.net/project/cryptmount/cryptmount/cryptmount-5.2/cryptmount-5.2.4.tar.gz
# tar -xzf cryptmount-5.2.4.tar.gz
# cd cryptmount-5.2.4
# ./configure
# make
# make install 

After a successful installation, it is time to configure cryptmount and create an encrypted filesystem using the cryptmount-setup utility as the superuser, or via the sudo command as shown.

# cryptmount-setup
OR
$ sudo cryptmount-setup

Running the above command asks a series of questions to set up a secure filesystem that will be managed by cryptmount: the target name for your filesystem, the user who should own it, the location and size of the filesystem, the absolute filename of your encrypted container, the location of the key, and the password for the target.

In this example, we are using the name 'tecmint' for the target filesystem. The following is sample output of the cryptmount-setup command.

Create Encrypted Filesystem in Linux
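For reference, the answers you give end up as a target definition in /etc/cryptmount/cmtab. A setup like the one above would typically produce an entry along these lines (the device, mount point and key paths below are purely illustrative and will match whatever you chose during setup):

tecmint {
    dev=/home/tecmint.fs
    dir=/home/crypt
    loop=auto
    fstype=ext4
    mountoptions=defaults
    keyfile=/etc/cryptmount/tecmint.key
}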

Once the new encrypted filesystem has been created, you can mount it as follows (using the name you specified for your target, tecmint); you will be prompted for the target's password.

# cryptmount tecmint
# cd /home/crypt

To unmount the target, first cd out of the encrypted filesystem, then use the -u switch as shown.

# cd
# cryptmount -u tecmint
Access Encrypted Filesystem

In case you have created more than one encrypted filesystem, use the -l switch to list them.

# cryptmount -l
List Encrypted Filesystems in Linux

To change the old password for a specific target (encrypted filesystem), use the -c flag as shown.

# cryptmount -c tecmint
Change Encrypted Filesystem Password

Take note of the following important points while using this tool.

  • Do not forget your password; it is not recoverable.
  • It is strongly recommended to save a backup copy of the key-file. Deleting or corrupting the key-file implies that the encrypted filesystem will in effect be impossible to access.
  • If you forget the password or delete the key, you can remove the encrypted filesystem entirely and start over; however, you will lose your data, which is not recoverable.

If you wish to use more advanced setup options, which depend on your host system, refer to the cryptmount and cmtab man pages, or visit the cryptmount homepage under the “files” section for a comprehensive guide.

# man cryptmount
# man cmtab
Conclusion

cryptmount enables the management and user-mode mounting of encrypted filesystems on GNU/Linux systems. In this article, we explained how to install and use it on various Linux distributions. You can ask questions or share your thoughts with us via the comment section below.

How to Change FTP Port in Linux

FTP, or File Transfer Protocol, is one of the oldest network protocols still used today for standard file transfers over computer networks. FTP uses port 21/TCP as its command port. Although there are many server-side implementations of FTP on Linux, in this guide we’ll cover how to change the port number in the Proftpd implementation.

To change the Proftpd service's default port in Linux, first open the main Proftpd configuration file with your favorite text editor by issuing the command below. The file's path differs depending on your Linux distribution, as follows.

# nano /etc/proftpd.conf [On CentOS/RHEL]
# nano /etc/proftpd/proftpd.conf [On Debian/Ubuntu]

In the proftpd.conf file, search for the line that begins with Port 21 and comment it out by adding a hash (#) in front of it.

Then, below this line, add a new Port directive with the new port number. You can use any non-standard TCP port between 1024 and 65535, provided the new port is not already taken by another application bound to it.
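Before settling on a port number, you can quickly confirm that nothing is already listening on it; for example, no output from the following command means port 2121 is free:

# ss -tlpn | grep 2121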


In this example we’ll bind FTP service on port 2121/TCP.

#Port 21
Port 2121
Change FTP Port in Debian & Ubuntu

In RHEL-based distributions, the Port line is not present in the Proftpd configuration file. To change the port, simply add a new Port line at the top of the configuration file, as illustrated in the excerpt below.

Port 2121
Change FTP Port in CentOS & RHEL

After you’ve changed the port number, restart the Proftpd daemon to apply the changes and issue the netstat or ss command to confirm that the FTP service listens on the new 2121/TCP port.

# systemctl restart proftpd
# netstat -tlpn | grep ftp
OR
# ss -tlpn | grep ftp
Confirm FTP Port

Under CentOS or RHEL based distributions, install the policycoreutils package and add the SELinux rule below so that the FTP daemon is allowed to bind to port 2121.

# yum install policycoreutils
# semanage port -a -t ftp_port_t -p tcp 2121     [add the port to the FTP port type]
# semanage port -m -t ftp_port_t -p tcp 2121     [or modify it, if it is already defined]
# systemctl restart proftpd

Finally, update your distribution's firewall rules to allow inbound traffic on the new FTP port, as shown below. Also check the FTP server's passive port range and make sure the firewall rules cover it as well.
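For example, with firewalld on CentOS/RHEL or UFW on Debian/Ubuntu, the new command port can be opened as follows (adapt these to the firewall you actually use):

# firewall-cmd --permanent --add-port=2121/tcp && firewall-cmd --reload   [On CentOS/RHEL]
$ sudo ufw allow 2121/tcp                                                 [On Debian/Ubuntu]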

How to Block Ping ICMP Requests to Linux Systems

System administrators often block ICMP messages to their servers in order to hide their Linux boxes from the outside world on rough networks, or to mitigate certain kinds of IP flooding and denial-of-service attacks.

The simplest method to block the ping command on Linux systems is to add an iptables rule, as shown in the example below. Iptables is part of the Linux kernel's netfilter framework and is usually installed by default in most Linux environments.

# iptables -A INPUT --proto icmp -j DROP
# iptables -L -n -v [List Iptables Rules]

Another general method of blocking ICMP messages on your Linux system is to set the kernel variable below, which drops all ping packets.

# echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all

To make the above rule permanent, append the following line to the /etc/sysctl.conf file and, subsequently, apply the rule with the sysctl command.

# echo "net.ipv4.icmp_echo_ignore_all = 1" >> /etc/sysctl.conf
# sysctl -p


In Debian-based Linux distributions that ship with UFW application firewall, you can block ICMP messages by adding the following rule to /etc/ufw/before.rules file, as illustrated in the below excerpt.

-A ufw-before-input -p icmp --icmp-type echo-request -j DROP
Block Ping ICMP Request in UFW Firewall

Restart UFW firewall to apply the rule, by issuing the below commands.

# ufw disable && ufw enable

In CentOS or Red Hat Enterprise Linux distributions that use the Firewalld interface to manage iptables rules, add the ICMP block rule below to drop ping messages.

# firewall-cmd --zone=public --add-icmp-block={echo-request,echo-reply,timestamp-reply,timestamp-request} --permanent
# firewall-cmd --reload

To test whether the firewall rules have been successfully applied in any of the cases discussed above, try to ping your Linux machine's IP address from a remote system. If ICMP messages are blocked, you should get a “Request timed out” or “Destination Host unreachable” message on the remote machine.
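For example, from a remote machine (replace the IP address below with that of your own Linux box):

$ ping -c 4 192.168.1.100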

How to Check and Patch Meltdown CPU Vulnerability in Linux

Meltdown is a chip-level security vulnerability that breaks the most fundamental isolation between user programs and the operating system. It allows a program to access the operating system kernel’s and other programs’ private memory areas, and possibly steal sensitive data, such as passwords, crypto-keys and other secrets.

Spectre is a chip-level security flaw that breaks the isolation between different programs. It enables a hacker to trick error-free programs into leaking their sensitive data.

These flaws affect mobile devices, personal computers and cloud systems; depending on the cloud provider’s infrastructure, it might be possible to access/steal data from other customers.

We came across a useful shell script that scans your Linux system to verify whether your kernel has the known correct mitigations in place against Meltdown and Spectre attacks.


spectre-meltdown-checker is a simple shell script that checks whether your Linux system is vulnerable to the three “speculative execution” CVEs (Common Vulnerabilities and Exposures) that were made public early in 2018. Once you run it, it inspects your currently running kernel.

Optionally, if you have installed multiple kernels and you’d like to inspect a kernel you’re not running, you can specify a kernel image on the command line.

It tries hard to detect mitigations, including backported non-vanilla patches, regardless of the kernel version number advertised by the system. Note that you should launch this script with root privileges to get accurate information, using the sudo command.

$ git clone https://github.com/speed47/spectre-meltdown-checker.git
$ cd spectre-meltdown-checker/
$ sudo ./spectre-meltdown-checker.sh
Check Meltdown and Spectre Vulnerabilities
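If you want to inspect an installed kernel other than the one currently running, as mentioned above, you can point the script at its image with the --kernel option (the path below is only an example):

$ sudo ./spectre-meltdown-checker.sh --kernel /boot/vmlinuz-4.15.0-20-generic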

From the results of the above scan, our test kernel is vulnerable to the 3 CVEs. In addition, here are a few important points to note about these processor bugs:

  • If your system has a vulnerable processor and runs an unpatched kernel, it is not safe to work with sensitive information, since it may be leaked.
  • Fortunately, there are software patches against Meltdown and Spectre, with details provided on the Meltdown and Spectre research homepage.

The latest Linux kernels include mitigations for these processor security bugs. Therefore, update your kernel and reboot the server to apply the updates, as shown.

$ sudo yum update [On CentOS/RHEL]
$ sudo dnf update [On Fedora]
$ sudo apt-get update && sudo apt-get upgrade [On Debian/Ubuntu]
# pacman -Syu [On Arch Linux]

After the reboot, make sure to scan again with the spectre-meltdown-checker.sh script.

You can find a summary of the CVEs in the spectre-meltdown-checker GitHub repository.

How to Check Integrity of File and Directory Using “AIDE” in Linux

In our mega guide to hardening and securing CentOS 7, under the section “protect system internally”, one of the useful security tools we listed for internal system protection against viruses, rootkits, malware, and detection of unauthorized activities is AIDE.

AIDE (Advanced Intrusion Detection Environment) is a small yet powerful, free open source intrusion detection tool, that uses predefined rules to check file and directory integrity in Unix-like operating systems such as Linux. It is an independent static binary for simplified client/server monitoring configurations.

It is feature-rich: it uses plain text configuration files and a database, making it easy to use; supports several message digest algorithms such as md5, sha1, rmd160 and tiger; supports common file attributes; and also supports powerful regular expressions to selectively include or exclude files and directories to be scanned.

It can also be compiled with optional support for Gzip compression, POSIX ACLs, SELinux, XAttrs and extended filesystem attributes.


Aide works by creating a database (which is simply a snapshot of selected parts of the file system), from the regular expression rules defined in the configuration file(s). Once this database is initialized, you can verify the integrity of the system files against it. This guide will show how to install and use aide in Linux.

How to Install AIDE in Linux

Aide is packaged in the official repositories of mainstream Linux distributions; to install it, run the command for your distribution.

# apt install aide [On Debian/Ubuntu]
# yum install aide [On RHEL/CentOS]
# dnf install aide [On Fedora 22+]
# zypper install aide [On openSUSE]
# emerge aide [On Gentoo]

After installing it, the main configuration file is /etc/aide.conf. To view the installed version as well as compile time parameters, run the command below on your terminal:

# aide -v
Sample Output
Aide 0.14
Compiled with the following options:
WITH_MMAP
WITH_POSIX_ACL
WITH_SELINUX
WITH_PRELINK
WITH_XATTR
WITH_LSTAT64
WITH_READDIR64
WITH_ZLIB
WITH_GCRYPT
WITH_AUDIT
CONFIG_FILE = "/etc/aide.conf"

You can open the configuration using your favorite editor.

# vi /etc/aide.conf

It has directives that define the database location, report location, default rules, and the directories/files to be included in the database (see the example below).
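For instance, the database-related directives usually look something like this (the exact paths and form vary between distributions, so treat the values below as illustrative):

# database read by aide --check
database=file:/var/lib/aide/aide.db.gz
# database written by aide --init and aide --update
database_out=file:/var/lib/aide/aide.db.new.gz
# compress the database that is written out
gzip_dbout=yes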

Understanding Default Aide Rules

AIDE Default Rules

Using the above default rules, you can define new custom rules in the aide.conf file, for example:

PERMS = p+u+g+acl+selinux+xattrs

The PERMS rule above is used for access control only; it detects changes to files or directories based on permissions, user, group, ACLs, SELinux context and file attributes.

The next rule checks only file content and file type:

CONTENT = sha256+ftype

This is an extended version of the previous rule; it checks content, file type and access properties:

CONTENT_EX = sha256+ftype+p+u+g+n+acl+selinux+xattrs

The DATAONLY rule below helps detect any changes to data inside all files/directories:

DATAONLY = p+n+u+g+s+acl+selinux+xattrs+sha256
Configure Aide Rules

Defining Rules to Watch Files and Directories

Once you have defined rules, you can specify the files and directories to watch. Using the PERMS rule above, the following definition checks the permissions of all hidden (dot) files in the root user's home directory.

/root/\..* PERMS

This will check all files in the /root directory for any changes.

/root/ CONTENT_EX

To detect any changes to data inside all files/directories under /etc/, use this.

/etc/ DATAONLY 
Configure Aide Rules for Filesystem

Using AIDE to Check File and Directory Integrity in Linux

Start by initializing the database that future checks will be performed against, using the --init flag. This is expected to be done before your system is connected to a network.

The command below will create a database that contains all of the files that you selected in your configuration file.

# aide --init
Initialize Aide Database

Then rename the database to /var/lib/aide/aide.db.gz before proceeding, using this command.

# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

It is recommended to move the database to a secure location, possibly on read-only media or on another machine, but ensure that you update the configuration file to read it from there, as sketched below.
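A minimal example of that change in /etc/aide.conf, assuming the verified database copy was moved to read-only media mounted at /mnt/aide-ro (an illustrative path):

database=file:/mnt/aide-ro/aide.db.gz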

After the database is created, you can now check the integrity of the files and directories using the --check flag.

# aide --check

It reads the snapshot in the database and compares it to the files/directories found on your system disk. If it finds changes in places that you might not expect, it generates a report which you can then review.

Run File Integrity Check

Since no changes have been made to the file system, you will only get an output similar to the one above. Now try to create some files in the file system, in areas defined in the configuration file.

# vi /etc/script.sh
# touch all.txt

Then run a check once more, which should report the files added above. The output of this command depends on the parts of the filesystem you configured for checking, and it can become lengthy over time.

# aide --check
Check File System Changes

You need to run aide checks regularly, and in case of any changes to already selected files or addition of new file definitions in the configuration file, always update the database using the --update option:

# aide --update

After running a database update, to use the new database for future scans, always rename it to /var/lib/aide/aide.db.gz:

# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
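To run the integrity check regularly, as recommended above, you can schedule it from root's crontab; a minimal sketch, assuming the aide binary is in /usr/sbin and local mail delivery is configured so the report reaches root:

# crontab -e
0 2 * * * /usr/sbin/aide --check | mail -s "Daily AIDE report" root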

That’s all for now! But take note of these important points:

  • One characteristic of most intrusion detection systems, AIDE included, is that they do not provide solutions to most security loopholes on a system. They do, however, ease the intrusion response process by helping system administrators examine changes to system files/directories. So you should always be vigilant and keep updating your current security measures.
  • It is highly recommended to keep the newly created database, the configuration file and the AIDE binary in a secure location such as read-only media (possible if you install from source).
  • For additional security, consider signing the configuration and/or database.

For additional information and configurations, see its man page or check out the AIDE Homepage: http://aide.sourceforge.net/

How to Configure Basic HTTP Authentication in Nginx

Basic HTTP authentication is a security mechanism to restrict access to your website/application or some parts of it by setting up simple username/password authentication. It can be used essentially to protect the whole HTTP server, individual server blocks (virtual hosts in Apache) or location blocks.

Read Also: How to Setup Name-based and IP-based Virtual Hosts (Server Blocks) with NGINX

As the name suggests, it is not a security method to rely on alone; you should use it in conjunction with other, more reliable security measures. For instance, if your web application runs over plain HTTP, user credentials are transmitted in plain text, so you should consider enabling HTTPS.

The purpose of this guide is to help you add a small but useful layer of security to protect private/privileged content on your web applications (such as, but not limited to, administration areas). You can also use it to prevent access to a website or application which is still in the development phase.

Requirements

  1. Install LEMP Stack in CentOS/RHEL 7
  2. Install LEMP Stack in Ubuntu/Debian

Create HTTP Authentication User File


You should start by creating a file that will store username:password pairs. We will use the htpasswd utility from Apache HTTP Server, to create this file.

First check that apache2-utils or httpd-tools, the packages that provide the htpasswd utility, are installed on your system; otherwise run the appropriate command for your distribution to install one:

# yum install httpd-tools [RHEL/CentOS]
$ sudo apt install apache2-utils [Debian/Ubuntu]

Next, run the htpasswd command below to create the password file with the first user. The -c option specifies the password file; once you hit [Enter], you will be asked to enter the user's password.

# htpasswd -c /etc/nginx/conf.d/.htpasswd developer

Add a second user, and do not use the -c option here.

# htpasswd /etc/nginx/conf.d/.htpasswd admin

Now that the password file is ready, proceed to configure the parts of your web server that you want to restrict access to. To view the password file content (usernames and their hashed passwords), use the cat command below.

# cat /etc/nginx/conf.d/.htpasswd 
View HTTP Password File

Configure HTTP Authentication for Nginx

As we mentioned earlier on, you can restrict access to your webserver, a single web site (using its server block) or a location directive. Two useful directives can be used to achieve this.

  • auth_basic – turns on validation of user name and password using the “HTTP Basic Authentication” protocol.
  • auth_basic_user_file – specifies the password file.

Password Protect Nginx Virtual Hosts

To implement basic authentication for the whole web server, which applies to all server blocks, open the /etc/nginx/nginx.conf file and add the lines below in the http context:

http {
        auth_basic           "Restricted Access!";
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
        ...
}

Password Protect Nginx Website or Domain

To enable basic authentication for a particular domain or sub-domain, open its configuration file under /etc/nginx/conf.d/ or /etc/nginx/sites-available (depending on how you installed Nginx), then add the configuration below in the server block or context:

server {
        listen 80;
        server_name example.com;
        auth_basic           "Restricted Access!";
        auth_basic_user_file /etc/nginx/conf.d/.htpasswd;

        location / {
                ...
        }
        ...
}

Password Protect Web Directory in Nginx

You can also enable basic authentication within a location directive. In the example below, all users trying to access the /admin location block will be asked to authenticate.

server {
        listen 80;
        server_name example.com www.example.com;

        location / {
                ...
        }
        location /admin/ {
                auth_basic           "Restricted Access!";
                auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
        }
        location /public/ {
                auth_basic off;   #turns off basic http authentication for this block
        }
        ...
}
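Whichever level you apply it at, test the configuration and reload Nginx for the changes to take effect (the commands below assume a systemd-based system):

# nginx -t
# systemctl reload nginx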

Once you have configured basic HTTP authentication, all users who try to access your web server, sub-domain, or the specific part of a site (depending on where you implemented it) will be asked for a username and password, as shown in the screenshot below.

Nginx Basic Authentication

In case of a failed user authentication, a “401 Authorization Required” error will be displayed as shown below.

401 Authorization Required Error

You can find more information at Restricting Access with Basic HTTP Authentication.

You might also like to read these following useful Nginx HTTP server related guides.

  1. How to Password Protect Web Directories in Nginx
  2. The Ultimate Guide to Secure, Harden and Improve Performance of Nginx
  3. Setting Up HTTPS with Let’s Encrypt SSL Certificate For Nginx

In this guide, we showed how to implement basic HTTP authentication in Nginx HTTP web server. To ask any questions, use the feedback form below.

How to Lock User Accounts After Failed Login Attempts

This guide will show how to lock a system user’s account after a specifiable number of failed login attempts in CentOS, RHEL and Fedora distributions. Here, the focus is to enforce simple server security by locking a user’s account after a given number of consecutive unsuccessful authentications.

Read Also: Use Pam_Tally2 to Lock and Unlock SSH Failed Login Attempts

This can be achieved using the pam_faillock module, which temporarily locks user accounts after multiple failed authentication attempts and keeps a record of these events. Failed login attempts are stored in per-user files in the tally directory, which is /var/run/faillock/ by default.

pam_faillock is part of Linux PAM (Pluggable Authentication Modules), a dynamic mechanism for implementing authentication services in applications and various system services which we briefly explained under configuring PAM to audit user login shell activity.

How to Lock User Accounts After Consecutive Failed Authentications


You can configure the above functionality in the /etc/pam.d/system-auth and /etc/pam.d/password-auth files, by adding the entries below to the auth section.

auth required pam_faillock.so preauth silent audit deny=3 unlock_time=300
auth [default=die] pam_faillock.so authfail audit deny=3 unlock_time=300

Where:

  • audit – enables user auditing.
  • deny – used to define the number of attempts (3 in this case), after which the user account should be locked.
  • unlock_time – sets the time (300 seconds = 5 minutes) for which the account should remain locked.

Note that the order of these lines is very important; a wrong configuration can cause all user accounts to be locked.

The auth section in both files should have the content below arranged in this order:

auth required pam_env.so
auth required pam_faillock.so preauth silent audit deny=3 unlock_time=300
auth sufficient pam_unix.so nullok try_first_pass
auth [default=die] pam_faillock.so authfail audit deny=3 unlock_time=300
auth requisite pam_succeed_if.so uid >= 1000 quiet_success
auth required pam_deny.so

Now open these two files with your choice of editor.

# vi /etc/pam.d/system-auth
# vi /etc/pam.d/password-auth 

The default entries in the auth section of both files look like this.

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required pam_env.so
auth sufficient pam_fprintd.so
auth sufficient pam_unix.so nullok try_first_pass
auth requisite pam_succeed_if.so uid >= 1000 quiet
auth required pam_deny.so

After adding the above settings, it should appear as follows.

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth required pam_env.so
auth required pam_faillock.so preauth silent audit deny=3 unlock_time=300
auth sufficient pam_fprintd.so
auth sufficient pam_unix.so nullok try_first_pass
auth [default=die] pam_faillock.so authfail audit deny=3 unlock_time=300
auth requisite pam_succeed_if.so uid >= 1000 quiet
auth required pam_deny.so

Then add the following pam_faillock.so entry (the last line below) to the account section in both of the above files.

account required pam_unix.so
account sufficient pam_localuser.so
account sufficient pam_succeed_if.so uid < 500 quiet
account required pam_permit.so
account required pam_faillock.so

How to Lock Root Account After Failed Login Attempts

To lock the root account after failed authentication attempts, add the even_deny_root option to the lines in both files in the auth section like this.

auth required pam_faillock.so preauth silent audit deny=3 even_deny_root unlock_time=300
auth [default=die] pam_faillock.so authfail audit deny=3 even_deny_root unlock_time=300

Once you have configured everything, restart remote access services such as sshd for the above policy to take effect, that is, if users will employ SSH to connect to the server.

# systemctl restart sshd [On SystemD]
# service sshd restart [On SysVInit]

How to Test SSH User Failed Login Attempts

From the above settings, we configured the system to lock a user’s account after 3 failed authentication attempts.

In this scenario, the user tecmint is trying to switch to the user aaronkilik, but after 3 incorrect logins caused by a wrong password (indicated by the “Permission denied” message), aaronkilik's account is locked, as shown by the “authentication failure” message on the fourth attempt.

Test User Failed Login Attempts

The root user is also notified of the failed login attempts on the system, as shown in the screen shot below.

Failed Login Attempts Message

How to View Failed Authentication Attempts

You can see all failed authentication logs using the faillock utility, which is used to display and modify the authentication failure log.

You can view failed login attempts for a particular user like this.

# faillock --user aaronkilik
View User Failed Login Attempts

To view all unsuccessful login attempts, run faillock without any argument like so:

# faillock 

To clear a user’s authentication failure logs, run this command.

# faillock --user aaronkilik --reset
OR
# faillock --reset     #clears all authentication failure records

Lastly, to tell the system not to lock certain users' accounts after several unsuccessful login attempts, add the pam_succeed_if.so entry shown below, just above where pam_faillock is first called under the auth section in both files (/etc/pam.d/system-auth and /etc/pam.d/password-auth), as follows.

Simply list the usernames, separated by colons, in the user in option.

auth required pam_env.so
auth [success=1 default=ignore] pam_succeed_if.so user in tecmint:aaronkilik
auth required pam_faillock.so preauth silent audit deny=3 unlock_time=300
auth sufficient pam_unix.so nullok try_first_pass
auth [default=die] pam_faillock.so authfail audit deny=3 unlock_time=300
auth requisite pam_succeed_if.so uid >= 1000 quiet_success
auth required pam_deny.so

For more information, see the pam_faillock and faillock man pages.

# man pam_faillock
# man faillock 

You might also like to read these following useful articles:

  1. TMOUT – Auto Logout Linux Shell When There Isn’t Any Activity
  2. Single User Mode: Resetting/Recovering Forgotten Root User Account Password
  3. 5 Best Practices to Secure and Protect SSH Server
  4. How to Get Root and User SSH Login Email Alerts

That’s all! In this article, we showed how to enforce simple server security by locking a user’s account after x number of incorrect logins or failed authentication attempts. Use the comment form below to share your queries or thoughts with us.

How to Enable or Disable SELinux Boolean Values


How To Protect Hard and Symbolic Links in CentOS/RHEL 7

In Linux, hard and soft links are references to files, and they are very important; if not protected well, vulnerabilities in how they are handled can be exploited by malicious system users or attackers.

A common vulnerability is the symlink race, a security flaw that comes about when a program creates files (especially temporary files) insecurely, allowing a malicious system user to create a symbolic (soft) link to such a file.

Read Also: How to Create Hard and Symbolic Link in Linux

This is how it happens in practice: a program checks whether a temp file exists; if it doesn't, the program creates it. But in the short window between checking for the file and creating it, an attacker can create a symbolic link pointing to a file he or she is not permitted to access.


So when the program, running with valid privileges, creates the file with the same name as the one created by the attacker, it actually writes to the target (linked-to) file the attacker intended to access. This could give the attacker a pathway to steal sensitive information from the root account or execute a malicious program on the system.

Therefore, in this article, we will show you how to secure hard and symbolic links from malicious users or hackers in CentOS/RHEL 7 distributions.

CentOS/RHEL 7 includes a vital security feature that permits links to be created or followed by programs only if certain conditions are satisfied, as described below.

For Hard Links

For a system user to create a hard link, one of the following conditions has to be fulfilled (see the example after this list).

  • the user can only link to files that he or she owns.
  • the user must have read and write access to the file that he or she wants to link to.
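For example, with this protection enabled, an unprivileged user cannot hard-link a file they have no read/write access to; the attempt fails with an error similar to the following (exact wording may differ between coreutils versions):

$ ln /etc/shadow /tmp/shadow-link
ln: failed to create hard link '/tmp/shadow-link' => '/etc/shadow': Operation not permitted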

For Symbolic Links

Processes are only allowed to follow symbolic links that are outside of sticky, world-writable directories (directories other users are allowed to write to), or one of the following has to be true.

  • the process following the symbolic link is the owner of the symbolic link.
  • the owner of the directory is also the owner of the symbolic link.

Enable or Disable Protection on Hard and Symbolic Links

Importantly, by default, this feature is enabled using the kernel parameters in the file /usr/lib/sysctl.d/50-default.conf (value of 1 means enable).

fs.protected_hardlinks = 1
fs.protected_symlinks = 1
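You can confirm the values currently in effect at any time with sysctl:

# sysctl fs.protected_hardlinks fs.protected_symlinks
fs.protected_hardlinks = 1
fs.protected_symlinks = 1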

However, if for one reason or another you want to disable this security feature, create a file called /etc/sysctl.d/51-no-protect-links.conf with the kernel options below (a value of 0 means disable).

Take note of the 51 in the filename (51-no-protect-links.conf); the file has to be read after the default file in order to override the default settings.

fs.protected_hardlinks = 0
fs.protected_symlinks = 0

Save and close the file. Then use the command below to apply the above changes (this command loads settings from every system configuration file).

# sysctl --system
OR
# sysctl -p #on older systems

You might also like to read these following articles.

  1. How to Password Protect a Vim File in Linux
  2. 5 ‘chattr’ Commands to Make Important Files IMMUTABLE (Unchangeable) in Linux

That’s all! You can post your queries or share any thoughts relating to this topic via the feedback form below.

TMOUT – Auto Logout Linux Shell When There Isn’t Any Activity
