Configuration of Zone Minder on Debian 9

In an earlier article, the installation of the security monitoring system Zone Minder on Debian 9 was covered. The next step in getting Zone Minder working is to configure storage. By default Zone Minder will store camera information in /var/cache/zoneminder/*. This could be problematic for systems that don’t have large amounts of local storage.

This part of the configuration is primarily important for individuals wishing to offload the storage of the recorded imagery to a secondary storage system. The system being set up in this lab has approximately 140GB of local storage. Depending on the amount, quality, and retention of the videos/images taken by Zone Minder, this small amount of storage space can quickly be exhausted.

Zone Minder Lab Environment

While this is a simplification of most IP camera installations, the concepts will still work assuming that the cameras have network connectivity to the Zone Minder server.

Zone Minder Lab Setup Diagram

Server Specifications:

Since Zone Minder will potentially be saving lots of video/images, the biggest components necessary for this server will be network and storage capacity. Other items to take into consideration are the number of cameras, the quality of the images/video being sent to the server, the number of users connecting to the Zone Minder system, and whether streams will be viewed live through the Zone Minder system.


Important: The server being used in this guide, while old, is not the typical home user system. Please make sure to thoroughly evaluate usage requirements before setting up a Zone Minder system.

Zone Minder wiki article for Specs: https://wiki.zoneminder.com/How_Many_Cameras

System Specs:

  • 1 HP DL585 G1 (4 x dual-core CPUs)
  • RAM: 18 GB
  • 1 x 1Gbps network connection for IP cameras
  • 1 x 1Gbps network connection for management
  • Local Storage: 4 x 72GB in RAID 10 (OS only; ZM images/video will be offloaded later)
  • 1 x 1.2 TB HP MSA20 (Storage of Images/Videos)

Changing ZoneMinder Image/Video Storage Location

Important: This step is only necessary for those wishing to move the storage of the images/videos that Zone Minder captures to another location. If this is not desired, skip to the next article: Setting up Monitors [Coming Soon].

As mentioned in the lab setup, this particular box has very little local storage but does have a large external storage array attached for video and images. In this case, the images and videos will be offloaded to that larger storage location. The image below shows the lab server’s setup.

List ZoneMinder Devices

From the output of ‘lsblk’, two sets of hard drives can be seen. The second disk array (c1d0) is the large storage shelf attached to this server and ultimately where Zone Minder will be instructed to store images/videos.

To start the process, Zone Minder needs to be stopped using the following command.

# systemctl stop zoneminder.service

Once Zone Minder has been stopped, the storage location needs to be partitioned and prepared. Many tools can accomplish this task but this guide will use ‘cfdisk’.

The drive can be set up to use the whole space as one mount point, or a separate partition can be used for each of the two Zone Minder directories. This guide will walk through using two partitions. (Be sure to change the ‘/dev/cciss/c1d0’ portion of the commands below to the proper device path for different environments.)

# cfdisk /dev/cciss/c1d0

Once in the ‘cfdisk’ utility, select the partitioning type (dos is usually sufficient). The next prompt will display the current partitions on the disk.

In this case, there aren’t any so they will need to be created. Planning ahead, video from the cameras is likely to take up more space than images and with 1.1 Terabytes available, a 75/25 or so split should be more than sufficient for this system.

Partition 1: ~825GB
Partition 2: ~300GB
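As a quick sanity check, the split above can be computed with shell arithmetic (the exact figures land slightly under the rounded values listed above):

```shell
# Rough sketch of the 75/25 split, assuming ~1100 GB of usable capacity.
total_gb=1100
events_gb=$(( total_gb * 75 / 100 ))   # larger partition, for recorded events
images_gb=$(( total_gb - events_gb ))  # remainder, for still images
echo "events=${events_gb}GB images=${images_gb}GB"
# prints: events=825GB images=275GB
```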
cfdisk Partition Utility

Cfdisk is text/keyboard based; use the arrow keys to highlight the ‘[ New ]’ menu option and hit the ‘Enter’ key. This will prompt the user for the size of the new partition.

ZoneMinder New Partition Size

The next prompt will be for the partition type. Since only two partitions will be needed in this install, ‘Primary’ will be sufficient.

Set ZoneMinder Partition Type Primary

Once the partition type has been selected, cfdisk will refresh the display with the current changes waiting to be written to the disk. The remaining free space needs to be partitioned as well by highlighting the free space and then selecting the ‘[ New ]’ menu option again.

Cfdisk Partition Menu

Cfdisk will automatically place the remaining free space amount in the size prompt. In this example the rest of the disk space is going to be the second partition anyway, so press the ‘Enter’ key and cfdisk will use the rest of the storage capacity.

ZoneMinder Second Partition

Since there will only be 2 partitions on this particular unit, another primary partition can be used. Simply press the ‘Enter’ key to continue selecting a primary partition.

Once cfdisk has completed updating the changes to the partitions, the changes will need to actually be written to the disk. In order to accomplish this, there is a ‘[ Write ]’ menu option down at the bottom of the screen.

Use the arrows to move over to highlight this option and hit the ‘Enter’ key. Cfdisk will prompt for confirmation so simply type ‘yes’ and hit the ‘Enter’ key one more time.

Write Changes to Partitions

Once confirmed, highlight and select ‘[ Quit ]’ to exit out of cfdisk. Cfdisk will exit, and it is suggested that users double-check the partitioning process with the ‘lsblk’ command.

Notice in the image below the two partitions, ‘c1d0p1’ and ‘c1d0p2’, show up in the output of lsblk confirming that the system recognizes the new partitions.

# lsblk
Confirm ZoneMinder Partitions

Now that the partitions are ready, they need to have a filesystem written to them and be mounted on the Zone Minder system. The filesystem type chosen is user preference, but many people have opted to use non-journaled filesystems like ext2 and accept the potential loss of data for the speed increase.

This guide will use ext4 due to the addition of a journal and its reasonable write performance and superior read performance over ext2/3. Both partitions can be formatted with the ‘mkfs’ tool using the following commands:

# mkfs.ext4 -L "ZM_Videos" /dev/cciss/c1d0p1
# mkfs.ext4 -L "ZM_Images" /dev/cciss/c1d0p2

The next step in the process is to persistently mount the new partitions so Zone Minder can use the space to store images and videos. In order to make the storage available at boot time, entries will need to be added to the ‘/etc/fstab’ file.

To accomplish this task, the ‘blkid’ command with root privileges will be used.

# blkid /dev/cciss/c1d0p1 >> /etc/fstab
# blkid /dev/cciss/c1d0p2 >> /etc/fstab

Important: Make ABSOLUTELY sure the double ‘>>’ symbol is used! This will write the correct UUID information to the persistent mounts file.

This output will need some cleanup though. Open the file with a text editor to clean up the necessary information. The information in red is what ‘blkid’ inserted into the file. As it stands initially, the formatting won’t be correct for the system to properly mount the directories.

ZoneMinder Partitions Mounted

The items in red are what the two ‘blkid’ commands above placed into the file. The important parts of this output are the UUID and TYPE strings. The format of the fstab file is very specific and will need to be as follows:

<UUID> <mount point> <Filesystem type> <Options> <Dump> <fsck>

For this instance, the mount points will be the two Zone Minder directories for images and recorded events, the filesystem type is ext4, the options are ‘defaults’, 0 for dump, and 2 for the filesystem check.
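As a sketch of the finished entries (the UUID values below are placeholders; use the actual strings that ‘blkid’ wrote into the file):

```
UUID=<uuid-of-c1d0p1>  /var/cache/zoneminder/events  ext4  defaults  0  2
UUID=<uuid-of-c1d0p2>  /var/cache/zoneminder/images  ext4  defaults  0  2
```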

The image below illustrates how this particular system’s fstab file is setup. Pay attention to the removed double quotes around the file-system type and UUID!

Zone Minder Directories

The first directory ‘/var/cache/zoneminder/events’ is the larger partition on this system and will be used for recorded events. The second directory ‘/var/cache/zoneminder/images’ will be used for still images. Once the proper changes have been made to this file, save the changes and exit the text editor.

Zone Minder will have already created these folders during installation so they should be removed before mounting the new partitions.

Caution: if following this article on an already running/configured Zone Minder system, this command will remove ALL imagery already stored! It is suggested that the files be moved instead.

Remove these directories with the following command:

# rm -rf /var/cache/zoneminder/{events,images}

Once the directories have been removed, the folders need to be created and mounted on the new disk space. The permissions also need to be set to allow Zone Minder to read/write to the new storage locations. Use the following commands to accomplish this:

# mkdir /var/cache/zoneminder/{images,events}
# mount -a
# chown www-data:www-data /var/cache/zoneminder/{images,events}
# chmod 750 /var/cache/zoneminder/{images,events}
Create Zone Minder Directories
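To confirm the ownership and mode took effect, the same check can be rehearsed against a throwaway directory (shown here so the commands are safe to run anywhere; on the real system, point ‘stat’ at the two Zone Minder directories):

```shell
# Illustrative permission check using a temporary directory as a stand-in
# for /var/cache/zoneminder/{events,images}.
d=$(mktemp -d)
chmod 750 "$d"
stat -c '%a' "$d"   # prints: 750
rm -r "$d"
```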

The final step is to start the Zone Minder process again and begin further configuration of the system! Use the following command to start Zone Minder again and pay attention to any errors that may display.

# systemctl start zoneminder.service

At this point, Zone Minder will be storing the images/events to the much larger MSA storage system attached to this server. Now it is time to begin further configuration of Zone Minder.

The next article will look at how to configure Zone Minder monitors to interface with the IP cameras in this lab setup.

How to Block Ping ICMP Requests to Linux Systems

Some system administrators block ICMP messages to their servers in order to hide their Linux boxes from the outside world on hostile networks, or to prevent certain kinds of IP flooding and denial-of-service attacks.

The simplest method to block the ping command on Linux systems is to add an iptables rule, as shown in the below example. Iptables is part of the Linux kernel netfilter framework and is usually installed by default in most Linux environments.

# iptables -A INPUT --proto icmp -j DROP
# iptables -L -n -v [List Iptables Rules]

Another general method of blocking ICMP messages on your Linux system is to set the below kernel variable, which will drop all ping packets.

# echo "1" > /proc/sys/net/ipv4/icmp_echo_ignore_all

In order to make the above rule permanent, append following line to /etc/sysctl.conf file and, subsequently, apply the rule with sysctl command.

# echo "net.ipv4.icmp_echo_ignore_all = 1" >> /etc/sysctl.conf
# sysctl -p
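Either way, the current state of the setting can be checked read-only at any time (0 means echo requests are answered, 1 means they are ignored):

```shell
# Read-only check of the kernel toggle; no root privileges required.
cat /proc/sys/net/ipv4/icmp_echo_ignore_all
```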


In Debian-based Linux distributions that ship with UFW application firewall, you can block ICMP messages by adding the following rule to /etc/ufw/before.rules file, as illustrated in the below excerpt.

-A ufw-before-input -p icmp --icmp-type echo-request -j DROP
Block Ping ICMP Request in UFW Firewall

Restart UFW firewall to apply the rule, by issuing the below commands.

# ufw disable && ufw enable

In CentOS or Red Hat Enterprise Linux distribution that use Firewalld interface to manage iptables rules, add the below rule to drop ping messages.

# firewall-cmd --zone=public --add-icmp-block={echo-request,echo-reply,timestamp-reply,timestamp-request} --permanent
# firewall-cmd --reload

In order to test whether the firewall rules have been successfully applied in any of the cases discussed above, try to ping your Linux machine’s IP address from a remote system. If ICMP messages are blocked to your Linux box, you should get a “Request timed out” or “Destination Host Unreachable” message on the remote machine.


Install ZoneMinder – Video Surveillance Software on Debian 9

Whether it’s in the home or the enterprise, physical security is always a foundational component of an all-encompassing security policy. The use of security cameras tends to be a cornerstone of a physical security monitoring solution.

One of the biggest challenges with cameras tends to be the management and the storage of the video feeds/images. One of the best known open source solutions for addressing this task is Zone Minder.

Zone Minder presents users with a large number of solutions for monitoring, managing, and analyzing the video feeds from security cameras. Some of the highlights of Zone Minder include:

  • Free, open source, and constantly updated.
  • Works with most IP cameras (even ones with special functionality like PTZ, night vision, and 4k resolutions).
  • Web based management console.
  • Android and iOS applications for monitoring from anywhere.

To see more features of Zone Minder please visit the project’s home page at: https://zoneminder.com/features/


This article will cover the installation of Zone Minder on Debian 9 Stretch and another article will cover the configuration of Zone Minder to monitor security camera feeds.

Zone Minder Lab Setup Diagram

While this is a simplification of most IP camera installations, the concepts will still work assuming that the cameras have network connectivity to the Zone Minder server.

This article will assume that the reader already has a minimal base install of Debian 9 Stretch up and running. A bare install with SSH connectivity is all that is assumed.

A graphical environment is not needed on the server as everything will be served through the Apache web server to the clients connecting to the Zone Minder web interface.

Please see this article on Tecmint for installing Debian 9: https://www.tecmint.com/installation-of-debian-9-minimal-server/.

Server Specifications:

Since Zone Minder will potentially be saving lots of video/images, the biggest components necessary for this server will be network and storage capacity. Other items to take into consideration are the number of cameras, the quality of the images/video being sent to the server, the number of users connecting to the Zone Minder system, and whether streams will be viewed live through the Zone Minder system.

Important: The server being used in this guide, while old, is not the typical home user system. Please make sure to thoroughly evaluate usage requirements before setting up a Zone Minder system.

Zone Minder wiki article for Specs: https://wiki.zoneminder.com/How_Many_Cameras

System Specs:

  • 1 HP DL585 G1 (4 x dual-core CPUs)
  • RAM: 18 GB
  • 1 x 1Gbps network connection for IP cameras
  • 1 x 1Gbps network connection for management
  • Local Storage: 4 x 72GB in RAID 10 (OS only; ZM images/video will be offloaded later)
  • 1 x 1.2 TB HP MSA20 (Storage of Images/Videos)

Installation of Zone Minder

The installation of Zone Minder is very straightforward and assumes root or sudo access on the server on which Zone Minder is being installed.

Debian Stretch doesn’t ship Zone Minder 1.30.4 in its default repositories. Luckily, this newer version of Zone Minder is available in Debian Stretch backports.

To enable backports in a clean installation of Debian, issue the following command:

# echo -e "\ndeb http://ftp.debian.org/debian stretch-backports main" >> /etc/apt/sources.list

Once backports have been enabled, the system will likely have a series of updates that will need to occur. Run the following commands to update the packages in preparation for the rest of this article.

# apt-get update
# apt-get upgrade
# apt-get dist-upgrade

The first step in the installation and configuration of Zone Minder is to install the necessary dependencies with the following commands:

# apt-get install php mariadb-server php-mysql libapache2-mod-php7.0 php7.0-gd
# apt-get install -t stretch-backports zoneminder

During this installation process, the MariaDB server installation may prompt the user to configure a root password for the database, **DO NOT FORGET THIS PASSWORD**.

Once the installation is complete, it is strongly suggested that the database be secured using the following command:

# mysql_secure_installation

The above command may prompt for the root password created during the MariaDB installation first and then will ask the user several security questions about disabling a test user, remote root login to the database, and removing testing databases. It is safe and suggested that ‘Yes’ be the answer to all of these questions.

Now the database needs to be prepared and a Zone Minder user created for the database. The Zone Minder package provides the necessary schema for import. The import will create the user ‘zmuser’ and the database ‘zm’, and set up a default password on the system (see below on how to change this).

The following commands will prompt the user for the MariaDB database root user password.

# mariadb -u root -p < /usr/share/zoneminder/db/zm_create.sql
# mariadb -u root -p -e "grant all on zm.* to 'zmuser'@localhost identified by 'zmpass';"

This part is only needed if the user wants to change the default user/password for the database! It may be desirable to change the database name, username, or password for the database.

For example, say the admin wanted to use a different user/password combination:

User: zm_user_changed
Password: zmpass-test

This would change the above MariaDB user command to:

# mariadb -u root -p -e "grant all on zm.* to 'zm_user_changed'@localhost identified by 'zmpass-test';"

By doing this though, Zone Minder will need to be made aware of the changed database and user name. Make the proper changes in the ZM configuration file at ‘/etc/zm/zm.conf’.

Locate and change the following lines:

  • ZM_DB_USER = zmuser → Change ‘zmuser’ to the new user above (‘zm_user_changed’).
  • ZM_DB_PASS = zmpass → Change ‘zmpass’ to the new password used above (‘zmpass-test’).

The next step is to fix ownership of the Zone Minder configuration file so that it can be read by the apache user (www-data) using the following command:

# chgrp www-data /etc/zm/zm.conf

The www-data user also needs to be a part of the ‘video’ group on this system. To accomplish this the following command should be used:

# usermod -aG video www-data

It is also necessary to set the proper time zone in the php.ini file located at ‘/etc/php/7.0/apache2/php.ini’. Determine the proper time zone and then, using a text editor, locate the following line and append the time zone information.

# nano /etc/php/7.0/apache2/php.ini

Change the line ‘;date.timezone =‘ to ‘date.timezone = America/New_York’.
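For those preferring a non-interactive edit, a ‘sed’ one-liner can make the same change; the temporary file below is a stand-in for the real php.ini so the sketch is safe to run anywhere:

```shell
# Hypothetical stand-in for /etc/php/7.0/apache2/php.ini.
ini=$(mktemp)
echo ';date.timezone =' > "$ini"
# Uncomment the directive and set the desired zone.
sed -i 's|^;date.timezone =|date.timezone = America/New_York|' "$ini"
grep '^date.timezone' "$ini"   # prints: date.timezone = America/New_York
rm "$ini"
```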

Now Apache needs to be configured to serve out the Zone Minder web interface. The first step is to disable the default Apache page and enable the Zone Minder configuration file.

# a2dissite 000-default.conf
# a2enconf zoneminder

There are also some Apache modules that need to be enabled for Zone Minder to function properly. This can be accomplished with the following commands:

# a2enmod cgi
# a2enmod rewrite

The final steps are to enable and start Zone Minder! Use the following commands to accomplish this:

# systemctl enable zoneminder.service
# systemctl restart apache2.service
# systemctl start zoneminder.service

Now if everything went well, navigating to the server’s IP and Zone Minder directory should yield the Zone Minder management console as such:

http://10.0.0.10/zm
Zone Minder Dashboard

Congratulations! Zone Minder is now up and running on Debian 9. In upcoming articles, we will walk through the configuration of storage, cameras, and alerts within the Zone Minder console.

An overview of the Perl 5 engine

As I described in “My DeLorean runs Perl,” switching to Perl has vastly improved my development speed and possibilities. Here I’ll dive deeper into the design of Perl 5 to discuss aspects important to systems programming.

Some years ago, I wrote “OpenGL bindings for Bash” as sort of a joke. The implementation was simply an X11 program written in C that read OpenGL calls on stdin (yes, as text) and emitted user input on stdout. Then I had a little bash include file that would declare all the OpenGL functions as Bash functions, which echoed the name of the function into a pipe, starting the GL interpreter process if it wasn’t already running. The point of the exercise was to show that OpenGL (the 1.4 API, not the newer shader stuff) could render a lot of graphics with just a few calls per frame by using GL display lists. The OpenGL library did all the heavy lifting, and Bash just printed a few dozen lines of text per frame.

In the end though, Bash is a really horrible glue language, both from high overhead and limited available operations and syntax. Perl, on the other hand, is a great glue language.

Syntax aside…

If you’re not a regular Perl user, the first thing you probably notice is the syntax.

Perl 5 is built on a long legacy of awkward syntax, but more recent versions have removed the need for much of the punctuation. The remaining warts can mostly be avoided by choosing modules that give you domain-specific “syntactic sugar,” which even alter the Perl syntax as it is parsed. This is in stark contrast to most other languages, where you are stuck with the syntax you’re given, and infinitely more flexible than C’s macros. Combined with Perl’s powerful sparse-syntax operators, like map, grep, sort, and similar user-defined operators, I can almost always write complex algorithms more legibly and with less typing using Perl than with JavaScript, PHP, or any compiled language.

So, because syntax is what you make of it, I think the underlying machine is the most important aspect of the language to consider. Perl 5 has a very capable engine, and it differs in interesting and useful ways from other languages.

A layer above C

I don’t recommend anyone start working with Perl by looking at the interpreter’s internal API, but a quick description is useful. One of the main problems we deal with in the world of C is acquiring and releasing memory while also supporting control flow through a chain of function calls. C has a rough ability to throw exceptions using longjmp, but it doesn’t do any cleanup for you, so it is almost useless without a framework to manage resources. The Perl interpreter is exactly this sort of framework.

Perl provides a stack of variables independent from C’s stack of function calls on which you can mark the logical boundaries of a Perl scope. There are also API calls you can use to allocate memory, Perl variables, etc., and tell Perl to automatically free them at the end of the Perl scope. Now you can make whatever C calls you like, “die” out of the middle of them, and let Perl clean everything up for you.

Although this is a really unconventional perspective, I bring it up to emphasize that Perl sits on top of C and allows you to use as much or as little interpreted overhead as you like. Perl’s internal API is certainly not as nice as C++ for general programming, but C++ doesn’t give you an interpreted language on top of your work when you’re done. I’ve lost track of the number of times that I wanted reflective capability to inspect or alter my C++ objects, and following that rabbit hole has derailed more than one of my personal projects.

Lisp-like functions

Perl functions take a list of arguments. The downside is that you have to do argument count and type checking at runtime. The upside is you don’t end up doing that much, because you can just let the interpreter’s own runtime check catch those mistakes. You can also create the effect of C++’s overloaded functions by inspecting the arguments you were given and behaving accordingly.

Because arguments are a list, and return values are a list, this encourages Lisp-style programming, where you use a series of functions to filter a list of data elements. This “piping” or “streaming” effect can result in some really complicated loops turning into a single line of code.

Every function is available to the language as a coderef that can be passed around in variables, including anonymous closure functions. Also, I find sub {} more convenient to type than JavaScript’s function(){} or C++11’s [&](){}.

Generic data structures

The variables in Perl are either “scalars,” references, arrays, or “hashes” … or some other stuff that I’ll skip.

Scalars act as a string/integer/float hybrid and are automatically typecast as needed for the purpose you are using them. In other words, instead of determining the operation by the type of variable, the type of operator determines how the variable should be interpreted. This is less efficient than if the language knows the type in advance, but not as inefficient as, for example, shell scripting because Perl caches the type conversions.

Perl scalars may contain null characters, so they are fully usable as buffers for binary data. The scalars are mutable and copied by value, but optimized with copy-on-write, and substring operations are also optimized. Strings support unicode characters but are stored efficiently as normal bytes until you append a codepoint above 255.

References (which are considered scalars as well) hold a reference to any other variable; hashrefs and arrayrefs are most common, along with the coderefs described above.

Arrays are simply a dynamic-length array of scalars (or references).

Hashes (i.e., dictionaries, maps, or whatever you want to call them) are a performance-tuned hash table implementation where every key is a string and every value is a scalar (or reference). Hashes are used in Perl in the same way structs are used in C. Clearly a hash is less efficient than a struct, but it keeps things generic so tasks that require dozens of lines of code in other languages can become one-liners in Perl. For instance, you can dump the contents of a hash into a list of (key, value) pairs or reconstruct a hash from such a list as a natural part of the Perl syntax.

Object model

Any reference can be “blessed” to make it into an object, granting it a multiple-inheritance method-dispatch table. The blessing is simply the name of a package (namespace), and any function in that namespace becomes an available method of the object. The inheritance tree is defined by variables in the package. As a result, you can make modifications to classes or class hierarchies or create new classes on the fly with simple data edits, rather than special keywords or built-in reflection APIs. By combining this with Perl’s local keyword (where changes to a global are automatically undone at the end of the current scope), you can even make temporary changes to class methods or inheritance!

Perl objects only have methods, so attributes are accessed via accessors like the canonical Java get_ and set_ methods. Perl authors usually combine them into a single method of just the attribute name and differentiate get from set by whether a parameter was given.

You can also “re-bless” objects from one class to another, which enables interesting tricks not available in most other languages. Consider state machines, where each method would normally start by checking the object’s current state; you can avoid that in Perl by swapping the method table to one that matches the object’s state.

Visibility

While other languages spend a bunch of effort on access rules between classes, Perl adopted a simple “if the name begins with underscore, don’t touch it unless it’s yours” convention. Although I can see how this could be a problem with an undisciplined software team, it has worked great in my experience. The only thing C++’s private keyword ever did for me was impair my debugging efforts, yet it felt dirty to make everything public. Perl removes my guilt.

Likewise, an object provides methods, but you can ignore them and just access the underlying Perl data structure. This is another huge boost for debugging.

Garbage collection via reference counting

Although reference counting is a rather leak-prone form of memory management (it doesn’t detect cycles), it has a few upsides. It gives you deterministic destruction of your objects, like in C++, and never interrupts your program with a surprise garbage collection. It strongly encourages module authors to use a tree-of-objects pattern, which I much prefer vs. the tangle-of-objects pattern often seen in Java and JavaScript. (I’ve found trees to be much more easily tested with unit tests.) But, if you need a tangle of objects, Perl does offer “weak” references, which won’t be considered when deciding if it’s time to garbage-collect something.

On the whole, the only time this ever bites me is when making heavy use of closures for event-driven callbacks. It’s easy to have an object hold a reference to an event handle holding a reference to a callback that references the containing object. Again, weak references solve this, but it’s an extra thing to be aware of that JavaScript or Python don’t make you worry about.

Parallelism

The Perl interpreter runs as a single thread, although modules written in C can use threads of their own internally, and Perl is often built with support for running multiple interpreters within the same process.

Although this is a large limitation, knowing that a data structure will only ever be touched by one thread is nice, and it means you don’t need locks when accessing them from C code. Even in Java, where locking is built into the syntax in convenient ways, it can be a real time sink to reason through all the ways that threads can interact (and especially annoying that they force you to deal with that in every GUI program you write).

There are several event libraries available to assist in writing event-driven callback programs in the style of Node.js to avoid the need for threads.

Access to C libraries

Aside from directly writing your own C extensions via Perl’s XS system, there are already lots of common C libraries wrapped for you and available on Perl’s CPAN repository. There is also a great module, Inline::C, that takes most of the pain out of bridging between Perl and C, to the point where you just paste C code into the middle of a Perl module. (It compiles the first time you run it and caches the .so shared object file for subsequent runs.) You still need to learn some of the Perl interpreter API if you want to manipulate the Perl stack or pack/unpack Perl’s variables other than your C function arguments and return value.

Memory usage

Perl can use a surprising amount of memory, especially if you make use of heavyweight libraries and create thousands of objects, but with the size of today’s systems it usually doesn’t matter. It also isn’t much worse than other interpreted systems. My personal preference is to only use lightweight libraries, which also generally improve performance.

Startup speed

The Perl interpreter starts in under five milliseconds on modern hardware. If you take care to use only lightweight modules, you can use Perl for anything you might have used Bash for, like hotplug scripts.

Regex implementation

Perl provides the mother of all regex implementations… but you probably already knew that. Regular expressions are built into Perl’s syntax rather than being an object-oriented or function-based API; this helps encourage their use for any text processing you might need to do.

Ubiquity and stability

Perl 5 is installed on just about every modern Unix system, and the CPAN module collection is extensive and easy to install. There’s a production-quality module for almost any task, with solid test coverage and good documentation.

Perl 5 has nearly complete backward compatibility across two decades of releases. The community has embraced this as well, so most of CPAN is pretty stable. There’s even a crew of testers who run unit tests on all of CPAN on a regular basis to help detect breakage.

The toolchain is also pretty solid. The documentation syntax (POD) is a little more verbose than I’d like, but it yields much more useful results than doxygen or Javadoc. You can run perldoc FILENAME to instantly see the documentation of the module you’re writing. perldoc Module::Name shows you the specific documentation for the version of the module that you would load from your include path and can likewise show you the source code of that module without needing to browse deep into your filesystem.

The testcase system (the prove command and Test Anything Protocol, or TAP) isn’t specific to Perl and is extremely simple to work with (as opposed to unit testing based around language-specific object-oriented structure, or XML). Modules like Test::More make writing the test cases so easy that you can write a test suite in about the same time it would take to test your module once by hand. The testing effort barrier is so low that I’ve started using TAP and the POD documentation style for my non-Perl projects as well.

In summary

Perl 5 still has a lot to offer despite the large number of newer languages competing with it. The frontend syntax hasn’t stopped evolving, and you can improve it however you like with custom modules. The Perl 5 engine is capable of handling most programming problems you can throw at it, and it is even suitable for low-level work as a “glue” layer on top of C libraries. Once you get really familiar with it, it can even be an environment for developing C code.

My DeLorean runs Perl

My signature hobby project these days is a computerized instrument cluster for my car, which happens to be a DeLorean. But, whenever I show it to someone, I usually have to give them a while to marvel at the car before they even notice that there’s a computer screen in the dashboard. There’s a similar problem when I start describing the software; programmers immediately get hung up on “Why Perl???” when they learn that the real-time OpenGL rendering of instrument data is all coded in Perl. So, any discussion of my project usually starts with the history of the DeLorean or a discussion of the merits of Perl vs. other, more-likely tools.

I started the project in 2010 with the concept of integrating a computer into the dashboard to act as a personal assistant, but it quickly became a project about replacing the stock instrument cluster with something software-rendered. Based on the level of processing I wanted (I dream big) and the size of screen I wanted, I decided against the usual high-end microcontrollers people might use and instead went with a full Linux PC and desktop monitor, with a low-end microcontroller to read the analog measurements from the car. I was doing OpenGL and C++ at work at the time, so that was my first pick for software. I could write multiple articles about hardware selection, but I’ll try to stay focused on the software for this one. (You can find more of that story on my website, nrdvana.net.)

After several years of effort, it became apparent that C++ is not a good fit for my large-scale personal projects. Although C++ yields great performance and low resource usage, the biggest resource shortage I had was time and “mental state.” Sometimes I would be away from the project for an entire month, and when I finally had a single day of free time to work on it, I spent it trying to remember where I left off. The worst aspect was that I usually couldn’t finish refactoring my design in a single session, so when I came back to it weeks later, I wasn’t catching all the places where the design change had broken the code. Also, while C++ is generally better than C for catching bugs, I would still end up with occasional memory corruption that could eat up hours of debugging time. There’s also just a lot of development overhead to write the logging and debugging routines needed to diagnose a real-time, multi-threaded application.

Meanwhile, my day job had shifted to working on Perl. I didn’t seek Perl on my own; it was just sort of thrust my way along with urgent projects. However, within a few months I was intrigued by its possibilities, and now it’s my favorite language.

Enter Perl

In 2014, I took the plunge and rewrote the instrument cluster software in Perl. After years of trudging along with C++, I was able to get a working prototype (of the software, at least) within a few months, and moved on to completing the hardware and microcontroller in 2015.

My little Perl success story is primarily about agility. I’m not really a buzzword fan or the kind of guy who reads books about methodologies, but “agile” definitely means something to me now. I feel like Perl hits a magic sweet spot of providing enough structure to build a correct, performant program, while being minimal and generic enough to plug things together with ease, and even offering enough syntax features to express complex operations in terse but readable code. (If you aren’t familiar with Perl’s capabilities, see my companion article “Perl from a Systems Programmer Perspective,” which elaborates on how Perl can be suited for systems work.)

The main, ongoing benefit is the ability to make ad-hoc changes. Because I don’t have a lot of time to plan out the full requirements of my objects, it has been a great boost to productivity to just toss in an additional few attributes on unsuspecting objects, or quickly sort through a list of objects based on criteria that would require awkward reflection code in Java or C++. If I decide I like the change, I go back and rewrite it with properly declared attributes and interfaces. I’ve found I can author a new graphic widget, complete with animations, in less than an hour.

Toolchain

One of the real killers for the C++ version of my project was keeping all the binary-level code in sync. The various components (rendering, message bus, logic core, microcontroller firmware, control tools, debug tools) were all sharing binary data structures, and keeping the dependencies straight in the makefile was a headache. I’m personally sour toward the automake family of tools, so whenever I needed to do something odd (like compile the microcontroller code using avr-gcc), I would risk getting frustrated and detouring into a new grand scheme to create a replacement for autotools (certainly a thing I don’t need to waste time on).

During my change to Perl, I converted the microcontroller to show up as a Linux serial device and changed the protocol to strings of short text notation. (The messages are actually smaller than the binary packet structs I had been using before.) This let me debug it with a simple socat on /dev/ttyS0. It also simplified the daemon that talks to the microcontroller. The C++ version was written with two threads, since I was using libusb, and its easiest mode of operation has a blocking read method. The Perl version simply opens a stream to the character device and reads lines of text.

I made a similar change to the host-side communication and had the daemon generate lines of JSON instead of binary packets. Since it is so incredibly easy to implement this in Perl with libraries like AnyEvent, I ditched the “message bus” idea entirely and just had each program create its own Unix socket, to which other programs can connect as needed. Debugging a single thread is much less painful, and there wasn’t even much debugging to do anyway, because AnyEvent does most of the work for me.

With everything passed around as JSON, there are no longer any message structs to worry about. None of my Perl programs requires a make process anymore, so the only piece of the project that still has a makefile is the microcontroller firmware, and it is simple enough that I just wrote it out by hand.

Performance

Processing low-level math directly with Perl can be slow, but the best way to use Perl where performance counts is to glue together C libraries. Perl has an extension system called XS to help you bind C code to Perl functions, but even better, there’s a CPAN repository module called Inline, which lets you paste C or C++ (and others) directly into a Perl module, and it compiles the first time the module is loaded. (But, yes, I pre-compile them before building the firmware image for the car.)

Thanks to Inline, I can move code back and forth from Perl to C as needed without messing around with library versions. I was able to bring over some of my C++ classes directly into the new Perl version of the instrument cluster. I was also able to wrap the C++ objects of the FreeType for OpenGL (FTGL) library, which is an important piece I didn’t want to have to re-invent.

The CPU usage of the system was about 15% with the C++ implementation. With Perl it’s about 40%. Almost all of that is the rendering code, so if I need to I can always push more of it back into C++. But, I could also just upgrade the computer, and 40% isn’t even a problem because I’m maintaining a full 60 frames per second (and I’m running a 6.4-watt processor).

Broader horizons

Perl’s CPAN public package repository is especially large, documented, tested, and stable compared to other languages. Naturally this depends on the individual authors (and there are plenty of counter-examples), but I’ve been impressed with the pervasive culture of test coverage and helpful documentation. Installing and using new Perl modules is also ridiculously easy. Not only do I avoid the toolchain efforts of C/C++, I get the advantage of Perl authors who have already overcome conflicting thread models or event loops or logging systems to give me a plugin experience.

With everything written in Perl, I can just grab anything I like off CPAN. For instance, I could have the car send me emails or text messages, host a web app for controlling features via phone, write Excel files of fuel mileage, and so on. I haven’t started on these features yet, but it feels nice that the barriers are gone.

Contributing back

In a decade of doing C++, I never once released a library for public consumption. A lot of it is due to the extreme awkwardness of autotools, and the fact that just creating a system-installed C++ library is a royal pain even without packaging it up properly for distribution.

Perl makes module authoring and testing and documentation extremely easy. It is so easy that I wrote test cases and documentation for my Math-InterpolationCompiler for my own benefit, and then published them on CPAN because, “why not?” I also became maintainer of X11-Xlib and greatly expanded its API, and then wrote X11-GLX so that I could finally have all my OpenGL setup code in proper order. (This was also part of my attempt to make the instrument renderer into a compositing window manager, which turned out to be much harder than I expected.) Currently, I’m working on making my maps/navigation database a CPAN module as well.

But why not…

“But, why not Language X?” you say, with “Python” a common value for X. Well, for one, I know a lot more Perl than Python. I’m using a lot of deep and advanced Perl features, so picking up Python would be another large learning curve. I’m also partial to Perl’s toolchain, especially elements like prove and perldoc. I suspect it’s possible to do it all in Python as well, but I have no compelling reason to switch. For any other language X… well, no other language can match the wealth of packages that Perl or Python offer, so I’m less inclined to experiment with them. I could mix languages, since my project consists of multiple processes, but having everything in the same language means I can more easily share code between programs.

“Why not Android?” is another common question. Indeed, a tablet is a much more embeddable device than a whole PC, and it comes with access to mapping apps. The obvious first problem is, I’d be back on Java and lose most of my prized agility. Second, I’m not aware of any way to merge the graphics of separate apps (such as using Google Maps as a texture within the dashboard), although there might be one. And third, I’ve been working on a feature to take video feeds and tie them directly into the graphics as textures. I don’t know of any tablets that could capture video from external sources in real time at a low enough latency, much less directly into a graphics texture buffer. Linux desktop software is much more open to this sort of deep mangling, so I’ll probably continue with it.

On the whole, I’m just happy I’ve finished enough that I can drive my DeLorean.

How to Block USB Storage Devices in Linux Servers

To protect servers against sensitive data extraction by users who have physical access to the machines, it’s a best practice to disable all USB storage support in the Linux kernel.

To disable USB storage support, we first need to determine whether the storage driver is loaded into the Linux kernel and, if so, the name of the responsible driver (module).

Run the lsmod command to list all loaded kernel drivers, and filter the output through the grep command for the string “usb_storage”.

# lsmod | grep usb_storage
List USB Storage Drivers

From the lsmod output, we can see that the usb_storage module is in use by the uas module. Next, unload both USB storage modules from the kernel and verify that the removal completed successfully, by issuing the below commands.

# modprobe -r usb_storage
# modprobe -r uas
# lsmod | grep usb


Next, list the contents of the USB storage modules directory for the running kernel by issuing the below command, and identify the usb-storage driver file. This module is usually named usb-storage.ko.xz or usb-storage.ko.

# ls /lib/modules/`uname -r`/kernel/drivers/usb/storage/

To block the usb-storage module from loading into the kernel, change directory to the kernel’s USB storage modules path and rename the usb-storage.ko.xz module to usb-storage.ko.xz.blacklist, by issuing the below commands.

# cd /lib/modules/`uname -r`/kernel/drivers/usb/storage/
# ls
# mv usb-storage.ko.xz usb-storage.ko.xz.blacklist
Block USB Storage in Linux

On Debian-based Linux distributions, issue the below commands to block the usb-storage module from loading into the kernel.

# cd /lib/modules/`uname -r`/kernel/drivers/usb/storage/
# ls
# mv usb-storage.ko usb-storage.ko.blacklist
Block USB in Debian and Ubuntu

Now, whenever you plug in a USB storage device, the kernel will fail to load the driver for it. To revert the change, just rename the blacklisted module back to its old name.

# cd /lib/modules/`uname -r`/kernel/drivers/usb/storage/
# mv usb-storage.ko.xz.blacklist usb-storage.ko.xz

However, this method applies only to the running kernel’s modules. If you want to blacklist USB storage modules for all kernels installed on the system, enter each kernel version’s module directory and rename usb-storage.ko.xz to usb-storage.ko.xz.blacklist.
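The rename approach also silently comes undone whenever a kernel package update restores the module files. As an alternative sketch (the drop-in filename is my own choice; the /etc/modprobe.d mechanism itself is standard), a single modprobe configuration file blocks the modules for every installed kernel:

```shell
# /etc/modprobe.d/blacklist-usb-storage.conf
# "blacklist" stops automatic loading by alias;
# "install ... /bin/false" makes even an explicit modprobe attempt fail.
blacklist usb-storage
blacklist uas
install usb-storage /bin/false
install uas /bin/false
```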

Firefox Quantum Eats RAM Like Chrome

For a long time, Mozilla’s Firefox has been my web browser of choice. I have always preferred it to using Google’s Chrome, because of its simplicity and reasonable system resource (especially RAM) usage. On many Linux distributions such as Ubuntu, Linux Mint and many others, Firefox even comes installed by default.

Recently, Mozilla released a new, powerful and faster version of Firefox called Quantum. According to the developers, it comes with a “powerful engine that’s built for rapid-fire performance, better, faster page loading that uses less computer memory.”

Read Also: How to Install Firefox Quantum in Linux

However, after I updated to Firefox Quantum, by far the biggest update to Firefox, I noticed two significant changes: first, it is fast, I mean really fast; and second, it is greedy for RAM just like Chrome, as you open more tabs and continue to use it for a long time.


Therefore I carried out a simple investigation to examine Quantum’s memory usage, and also tried to compare it to Chrome’s memory usage, using the following testing environment:

  • Operating system – Linux Mint 18.0
  • CPU model – Intel(R) Core(TM) i3-3120M CPU @ 2.50GHz
  • RAM – 4 GB (3.6 GB usable)

Firefox Quantum Eats RAM With Many Tabs Opened

If you open Quantum with just a few tabs, let’s say up to 5, you’ll notice that Firefox’s memory consumption is fairly good, but as you open more tabs and continue to use it for a long time, it tends to eat up RAM.

I performed a few tests using glances – a real-time Linux system monitoring tool – to view the top processes by RAM usage. Within glances, press the m key to sort processes by memory usage.

I started by running glances and sorting processes by highest RAM usage before launching Firefox, as shown in the screenshot below.

$ glances 
Glances – Processes Memory Usage

After launching Firefox and using it for close to half an hour with less than 8 tabs open, I captured a screenshot of glances with processes sorted by RAM usage shown below.

Glances – Firefox Memory Usage Monitoring

As I continued using Firefox through the day, the memory usage was steadily increasing as seen in the next screen shot.

Glances – Firefox Memory Usage Increasing

At the end of the day, Firefox had already consumed more than 70% of my system RAM, as shown by the red warning indicator in the following screen shot.

Note that during the test, I did not run any other RAM-hungry applications apart from Firefox itself (so it was definitely the application consuming the most RAM).

Glances – Firefox High Memory Usage

From the results above, Mozilla was rather misleading in telling users that Quantum uses less computer memory.

Having long known Chrome for eating RAM, the following day I decided to compare Quantum’s memory usage with Chrome’s, as explained in the next section.

Firefox Quantum Vs Chrome: RAM Usage

Here, I started my test by launching both browsers with the same number of tabs and opening the same sites in corresponding tabs as seen in the screen shot below.

Opened Same Tabs on Firefox and Chrome

Then from glances, I watched their RAM usage (with processes sorted by memory usage as before). As you can see in this screenshot, considering all Chrome and Firefox processes (parent and child), Chrome on average still consumes a higher percentage of RAM than Quantum.

Compare Chrome and Firefox Memory Usage

To better understand memory usage by the two browsers, we need to clearly interpret the meaning of the %MEM, VIRT and RES columns from the process-list headers:

  • VIRT – the total amount of memory a process can access at the moment, including RAM, swap and any shared memory being accessed.
  • RES – an accurate representation of how much resident (actual physical) memory a process is consuming.
  • %MEM – the percentage of physical (resident) memory used by the process.

From the explanation and values in the screenshots above, Chrome still eats more physical memory than Quantum.
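If you want to reproduce the comparison without glances, summing RSS across all of a browser's processes can be sketched in shell (the process names are assumptions; multiprocess browsers spawn many children, and the comm field is truncated to 15 characters):

```shell
# Sum resident memory (RSS, in KiB) across every process whose name matches a pattern.
total_rss() {
  ps -eo comm=,rss= | awk -v pat="$1" '$1 ~ pat { s += $2 } END { print s + 0 }'
}

total_rss firefox   # all Firefox processes combined
total_rss chrome    # all Chrome processes combined
```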

All in all, I suppose Quantum’s speedy new engine, which ships with many other performance improvements, accounts for its high memory utilization. But is it worth it? I would love to hear from you, via the comment form below.

How to Install Tripwire IDS (Intrusion Detection System) on Linux

Tripwire is a popular Linux Intrusion Detection System (IDS) that runs on systems to detect whether unauthorized filesystem changes have occurred over time.

In CentOS and RHEL distributions, tripwire is not part of the official repositories. However, the tripwire package can be installed from the EPEL repository.

To begin, first enable the EPEL repository on your CentOS or RHEL system by issuing the below command.

# yum install epel-release

After you’ve enabled the EPEL repository, make sure you update the system with the following command.

# yum update


After the update process finishes, install Tripwire IDS software by executing the below command.

# yum install tripwire

Fortunately, tripwire is part of the default Ubuntu and Debian repositories and can be installed with the following commands.

$ sudo apt update
$ sudo apt install tripwire

On Ubuntu and Debian, the tripwire installer will ask you to choose and confirm a site key and local key passphrase. Tripwire uses these keys to secure its configuration files.

Create Tripwire Site and Local Key

On CentOS and RHEL, you need to create the tripwire keys with the below command and supply passphrases for the site key and local key.

# tripwire-setup-keyfiles
Create Tripwire Keys

To validate your system, you need to initialize the Tripwire database with the following command. Because the database hasn’t been initialized yet, tripwire will display a lot of false-positive warnings.

# tripwire --init
Initialize Tripwire Database

Finally, generate a tripwire system report in order to check the configurations by issuing the below command. Use the --help switch to list all options of the tripwire check command.

# tripwire --check --help
# tripwire --check

After the tripwire check command completes, review the report by opening the .twr file from the /var/lib/tripwire/report/ directory with your favorite text editor; but because the report is stored in a binary format, you first need to convert it to a text file.

# twprint --print-report --twrfile /var/lib/tripwire/report/tecmint-20170727-235255.twr > report.txt
# vi report.txt
Tripwire System Report
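Once the database is initialized, tripwire checks are typically automated rather than run by hand. A hedged sketch of a cron drop-in (the filename, schedule and mail recipient are my own assumptions, not from the tripwire documentation):

```shell
# /etc/cron.d/tripwire-check: run a nightly integrity check at 03:00
# and mail the output to root (requires a working local mail setup).
0 3 * * * root /usr/sbin/tripwire --check 2>&1 | mail -s "Tripwire report" root
```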

That’s it! You have successfully installed Tripwire on your Linux server. I hope you can now easily configure your Tripwire IDS.