Paying it forward at Finland's Aalto Fablab

Originating at MIT, a fab lab is a technology prototyping platform where learning, experimentation, innovation, and invention are encouraged through curiosity, creativity, hands-on making, and most critically, open knowledge sharing. Each fab lab provides a common set of tools (including digital fabrication tools like laser cutters, CNC mills, and 3D printers) and processes, so you can learn how to work in a fab lab anywhere and use those skills at any of the 1,000+ fab labs across the globe. There is probably a fab lab near you.

Fab labs can be found anywhere avant-garde makers and hackers live, but they have also cropped up at libraries and other public spaces. For example, the Aalto Fablab, the first fab lab in Finland, is in the basement of Aalto University’s library, in Espoo. Solomon Embafrash, the studio master, explains, “Aalto Fablab was in the Arabia campus with the School of Arts and Design since 2011. As Aalto decided to move all the activities concentrated in one campus (Otaniemi), we decided that a dedicated maker space would complement the state-of-the-art library in the heart of Espoo.”

The library, which is now a full learning center, sports a maker space that consists of a VR hub, a visual resources center, a studio, and of course, the Fablab. With the expansion of the Helsinki metro to a new station across the street from the Aalto Fablab, everyone in the region now has easy access to it.

The Fab Lab Charter states: “Designs and processes developed in fab labs can be protected and sold however an inventor chooses, but should remain available for individuals to use and learn from.” The “protected” part does not quite meet the requirements set by the Open Source Hardware Association’s definition of open source hardware; however, for those not involved in commercialization of products, the code is available for a wide range of projects created in fab labs (like the FabFi, an open source wireless network).

That means fab labs are effectively feeding the open source ecosystem that enables digitally distributed manufacturing of a wide range of products, as many designers choose to release their designs under fully free licenses. Even the plans for creating a fab lab are openly shared by the U.S. non-profit Fab Foundation.

All fab labs are required to provide open access to the community; however, some, like the Aalto Fablab, take that requirement one step further. The Aalto Fablab is free to use, but if you wish to use bulk materials from its stock for your project—for example, to make a new chair—you need to pay for them. You are also expected to respect the philosophy of open knowledge-sharing by helping others, documenting your work, and sharing what you have learned. Specifically, the Aalto Fablab asks that you “pay forward” what you have learned to other users, who may be able to build upon your work and help speed development.

Embafrash adds, “There is a very old tradition of free services in Finland, like the library service and education. We used to charge users a few cents for the material cost of the 3D prints, but we found that it makes a lot of sense to keep it free, as it encourages people to our core philosophy of Fablab, which is idea sharing and documentation.”

This approach has proven successful, fostering enormous interest in the local community for making and sharing. For example, the Unseen Art project, an open source platform that allows the visually impaired to enjoy 3D printed art, started in the Aalto Fablab.

Fablab members organize local Maker Faire events and work closely with the maker community, local schools, and other organizations. “The Fablab has open days, which are very popular times that people from outside the university get access to the resources, and our students get the exposure to work with people outside the school community,” Embafrash says.

In this way, the more they share, the more their university benefits.

This article was supported by Fulbright Finland, which is currently sponsoring my research in open source scientific hardware in Finland as the Fulbright-Aalto University Distinguished Chair.

How to Check Integrity of File and Directory Using “AIDE” in Linux

In our mega guide to hardening and securing CentOS 7, under the section “protect system internally”, one of the useful security tools we listed for protecting the system internally against viruses, rootkits, and malware, and for detecting unauthorized activities, is AIDE.

AIDE (Advanced Intrusion Detection Environment) is a small yet powerful, free, open source intrusion detection tool that uses predefined rules to check file and directory integrity in Unix-like operating systems such as Linux. It is an independent static binary, which simplifies client/server monitoring configurations.

It is feature-rich: it uses plain text configuration files and a plain text database, making it easy to use; it supports several message digest algorithms (including but not limited to md5, sha1, rmd160, and tiger); it supports common file attributes; and it supports powerful regular expressions to selectively include or exclude files and directories from scanning.

It can also be compiled with support for Gzip compression, POSIX ACLs, SELinux, XAttrs, and extended file system attributes.


AIDE works by creating a database (which is simply a snapshot of selected parts of the file system) from the regular expression rules defined in the configuration file(s). Once this database is initialized, you can verify the integrity of the system files against it. This guide will show how to install and use AIDE in Linux.

How to Install AIDE in Linux

AIDE is packaged in the official repositories of mainstream Linux distributions. To install it, run the appropriate command for your distribution's package manager.

# apt install aide	[On Debian/Ubuntu]
# yum install aide	[On RHEL/CentOS]
# dnf install aide	[On Fedora 22+]
# zypper install aide	[On openSUSE]
# emerge aide	[On Gentoo]

After installation, the main configuration file is /etc/aide.conf. To view the installed version as well as compile-time parameters, run the command below in your terminal:

# aide -v
Sample Output
Aide 0.14
Compiled with the following options:
WITH_MMAP
WITH_POSIX_ACL
WITH_SELINUX
WITH_PRELINK
WITH_XATTR
WITH_LSTAT64
WITH_READDIR64
WITH_ZLIB
WITH_GCRYPT
WITH_AUDIT
CONFIG_FILE = "/etc/aide.conf"

You can open the configuration using your favorite editor.

# vi /etc/aide.conf

It has directives that define the database location, report location, default rules, and the directories/files to be included in the database.
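
For instance, the top of a typical aide.conf contains directives like the following (a sketch; exact paths may differ by distribution):

database=file:/var/lib/aide/aide.db.gz
database_out=file:/var/lib/aide/aide.db.new.gz
gzip_dbout=yes
report_url=file:/var/log/aide/aide.log
report_url=stdout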

Understanding Default Aide Rules

AIDE Default Rules

Using the default rules above, you can define new custom rules in the aide.conf file, for example:

PERMS = p+u+g+acl+selinux+xattrs

The PERMS rule is used for access control monitoring only; it will detect changes to files or directories based on file/directory permissions, user, group, access control permissions, SELinux context, and file attributes.

The following rule will only check file content and file type.

CONTENT = sha256+ftype

This is an extended version of the previous rule; it checks extended content, file type, and access.

CONTENT_EX = sha256+ftype+p+u+g+n+acl+selinux+xattrs

The DATAONLY rule below will help detect any changes in data inside all files/directories.

DATAONLY = p+n+u+g+s+acl+selinux+xattrs+sha256
Configure Aide Rules

Defining Rules to Watch Files and Directories

Once you have defined rules, you can specify the files and directories to watch. Considering the PERMS rule above, the definition below will check permissions for all hidden files in root's home directory.

/root/\..* PERMS

This will check all files in the /root directory for any changes.

/root/ CONTENT_EX

To help you detect any changes in data inside all files/directories under /etc/, use this.

/etc/ DATAONLY 
Configure Aide Rules for Filesystem

Using AIDE to Check File and Directory Integrity in Linux

Start by constructing a baseline database for the checks that will be performed, using the --init flag. This is expected to be done before your system is connected to a network.

The command below will create a database that contains all of the files that you selected in your configuration file.

# aide --init
Initialize Aide Database

Then rename the database to /var/lib/aide/aide.db.gz before proceeding, using this command.

# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

It is recommended to move the database to a secure location, possibly on read-only media or on another machine, but ensure that you update the configuration file to read it from there.

After the database is created, you can now check the integrity of the files and directories using the --check flag.

# aide --check

It will read the snapshot in the database and compare it to the files/directories found on your system disk. If it finds changes in places that you might not expect, it generates a report which you can then review.

Run File Integrity Check

Since no changes have been made to the file system, you will only get an output similar to the one above. Now try to create some files in the file system, in areas defined in the configuration file.

# vi /etc/script.sh
# touch all.txt

Then run a check once more, which should report the files added above. The output of this command depends on the parts of the file system you configured for checking; it can grow lengthy over time.

# aide --check
Check File System Changes

You need to run AIDE checks regularly, and in case of any changes to already-selected files, or after adding new file definitions to the configuration file, always update the database using the --update option:

# aide --update

After running a database update, to use the new database for future scans, always rename it to /var/lib/aide/aide.db.gz:

# mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
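
To automate the regular checks, you could schedule AIDE from root's crontab, for example (a sketch; adjust the schedule and binary path for your system):

# crontab -e

0 2 * * * /usr/sbin/aide --check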

That’s all for now! But take note of these important points:

  • One characteristic of most intrusion detection systems, AIDE included, is that they will not provide solutions to most security loopholes on a system. They do, however, ease the intrusion response process by helping system administrators examine any changes to system files/directories. So you should always be vigilant and keep updating your current security measures.
  • It is highly recommended to keep the newly created database, the configuration file, and the AIDE binary in a secure location, such as read-only media (possible if you install from source).
  • For additional security, consider signing the configuration and/or database.

For additional information and configurations, see its man page or check out the AIDE Homepage: http://aide.sourceforge.net/

A Shell Script to Send Email Alert When Memory Gets Low

A powerful aspect of Unix/Linux shell programs such as bash is their support for common programming constructs that enable you to make decisions, execute commands repeatedly, create new functions, and so much more. You can write commands in a file known as a shell script and execute them collectively.

This offers you a reliable and effective means of system administration. You can write scripts to automate tasks, for instance daily backups and system updates; create new custom commands/utilities/tools; and beyond. You can write scripts to help you keep up with what’s unfolding on a server.

One of the critical components of a server is memory (RAM); it greatly impacts the overall performance of a system.

In this article, we will share a small but useful shell script to send an alert email to one or more system administrator(s), if server memory is running low.


This script is particularly useful for keeping an eye on a Linux VPS (Virtual Private Server) with a small amount of memory, say about 1GB (approximately 990MB).

Testing Environment Setup

  1. A CentOS/RHEL 7 production server with the mailx utility installed and a working Postfix mail server.

This is how the alertmemory.sh script works: first it checks the free memory size, then determines whether the amount of free memory is less than or equal to a specified size (100 MB for the purpose of this guide), used as a benchmark for the least acceptable free memory size.

If this condition is true, it will generate a list of the top 10 processes consuming server RAM and send an alert email to the specified email addresses.

Note: You will have to make a few changes to the script (especially the mail sender utility and the appropriate flags) to meet your Linux distribution's requirements.

Shell Script to Check Server Memory

#!/bin/bash
#######################################################################################
#Script Name    :alertmemory.sh
#Description    :send alert mail when server memory is running low
#Args           :
#Author         :Aaron Kili Kisinga
#Email          :[email protected]
#License        :GNU GPL-3
#######################################################################################

## declare mail variables
## email subject
subject="Server Memory Status Alert"
## sending mail as
from="[email protected]"
## sending mail to
to="[email protected]"
## send carbon copy to
also_to="[email protected]"

## get total free memory size in megabytes (MB)
free=$(free -mt | grep Total | awk '{print $4}')

## check if free memory is less than or equal to 100MB
if [[ "$free" -le 100 ]]; then
    ## get top processes consuming system memory and save to temporary file
    ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head > /tmp/top_proccesses_consuming_memory.txt
    file=/tmp/top_proccesses_consuming_memory.txt
    ## send email if system memory is running low
    echo -e "Warning, server memory is running low!\n\nFree memory: $free MB" | mailx -a "$file" -s "$subject" -r "$from" -c "$to" "$also_to"
fi

exit 0
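
Note that attachment flags vary between mail utilities; on Debian, for example, bsd-mailx uses -a to append a header rather than attach a file. One alternative sketch, assuming the mutt client is installed, is to replace the mailx line with:

echo -e "Warning, server memory is running low!\n\nFree memory: $free MB" | mutt -s "$subject" -c "$to" -a "$file" -- "$also_to"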

After creating your script /etc/scripts/alertmemory.sh, make it executable and symlink it into /etc/cron.hourly/.

# chmod +x /etc/scripts/alertmemory.sh
# ln -s /etc/scripts/alertmemory.sh /etc/cron.hourly/alertmemory.sh

This means that the above script will run every hour as long as the server is running.

Tip: To test that it is working as intended, set the benchmark value a little higher so an email is easily triggered, and schedule the script at a small interval of about 5 minutes.

Then keep checking from the command line, using the same free command used in the script. Once you confirm that it is working, define the actual values you would like to use.
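
For example, while testing you could temporarily run the script every 5 minutes from root's crontab instead of the hourly symlink (a sketch; remove the entry once testing is done):

# crontab -e

*/5 * * * * /etc/scripts/alertmemory.sh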

Below is a screenshot showing a sample alert email.

Linux Memory Email Alert

That’s all! In this article, we explained how to use a shell script to send alert emails to system administrators when server memory (RAM) is running low. You can share any thoughts relating to this topic with us via the feedback form below.

What goes into protecting your credit card information on the web?

*  This post was originally posted on November 28, 2014, and has been updated for accuracy. 

Purchases happen with the click of a button, a swipe of a finger, or simply no human interaction at all. Whether it’s our monthly subscription to Netflix, the plane tickets that just went on flash sale, or the book that we purchased with Prime shipping, our demand for immediacy and automation has placed our credit card information all over the web. Though that sounds scary, the Payment Card Industry Security Standards Council has developed a set of data security standards that merchants storing credit card information on servers need to abide by. Luckily, for hosting providers using cPanel servers, we’ve already loaded you with the equipment to better ensure your information is secure, your customers’ information is protected, and your customers’ customers have secure transactions on the web.

What is PCI Compliance?
Established by the major credit card providers, Visa, MasterCard, American Express, Discover, and JCB International, the Payment Card Industry Security Standards Council was launched as an independent body in 2006 to focus and advise on the rapidly evolving landscape of the payment transaction process. What resulted was an evolving set of criteria, with twelve major tenets, called the Payment Card Industry Data Security Standards (PCI DSS).

The Big 12

  1. Install/Maintain firewall configuration that will protect cardholder data
  2. Do not use vendor-supplied defaults for system passwords or any other security parameter
    • Many switches, routers (e.g., wireless), and applications have a default admin account that uses a default password. Remove such accounts if possible, or at least change the password to something very complex
  3. Protect stored cardholder data
    • Disable direct root logins. A simple configuration file that is in a publicly accessible directory can still cause issues, even if the permissions on the directory forbid direct access. Storing the data in a database is an added level of security, especially if encrypted and hashed.
  4. Encrypt transmission of cardholder data across open, public networks
    • Keep the cardholder data being sent across networks to a minimum and encrypt with the highest possible strength
  5. Use and regularly update antivirus software
    • The antivirus database needs to be up-to-date to ensure any threats created/surfaced after last manual update can be caught.
  6. Develop/Maintain secure systems and applications
  7. Restrict access to cardholder data
    • Machines holding card info should be available on the private network only, and two-factor authentication or a higher security level should be required for access.
  8. Assign a unique ID to each person with computer access
  9. Restrict physical access to cardholder data
  10. Track/Monitor all access to network resources and cardholder data
    • Audit access logs frequently.
  11. Regularly test security systems and processes
  12. Maintain a policy that addresses information security
    • Create a system of internal policies to ensure the proper, regimented handling of secured information.

While cPanel isn’t PCI compliant right out of the box, configuring strong SSL ciphers along with a few other features, and keeping your software up to date, should have you ready to accept and administer transactions on your cPanel server.

To find out more about PCI, check out these slides from the cPanel Conference Session PCI Talk, or contact session author Ryan Sherer for more info.

4 Ways to Speed Up SSH Connections in Linux

SSH is the most popular and secure method for managing Linux servers remotely. One of the challenges with remote server management is connection speeds, especially when it comes to session creation between the remote and local machines.

There are several bottlenecks in this process. One scenario is when you are connecting to a remote server for the first time; it normally takes a few seconds to establish a session. However, when you try to start multiple connections in succession, the overhead (the combination of excess or indirect computation time, memory, bandwidth, or other related resources needed to carry out each operation) adds up.

In this article, we will share four useful tips on how to speed up remote SSH connections in Linux.

1. Force SSH Connection Over IPV4

OpenSSH supports both IPv4 and IPv6, but at times IPv6 connections tend to be slower. So you can consider forcing ssh connections over IPv4 only, using the syntax below:

# ssh -4 user@server_ip


Alternatively, use the AddressFamily (specifies the address family to use when connecting) directive in your ssh configuration file /etc/ssh/ssh_config (global configuration) or ~/.ssh/config (user specific file).

The accepted values are “any”, “inet” for IPv4 only, or “inet6”.

$ vi ~/.ssh/config 
Disable SSH Connections on ipv6
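
For instance, a minimal user-specific configuration forcing IPv4 for all hosts might look like this:

Host *
AddressFamily inet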

Here is a useful starter guide on configuring a user-specific ssh configuration file:

  1. How to Configure Custom SSH Connections to Simplify Remote Access

Additionally, on the remote machine, you can also instruct the sshd daemon to accept connections over IPv4 only by using the above directive in the /etc/ssh/sshd_config file.

2. Disable DNS Lookup On Remote Machine

By default, the sshd daemon looks up the remote host name, and also checks that the resolved host name for the remote IP address maps back to the very same IP address. This can result in delays in connection establishment or session creation.

The UseDNS directive controls the above functionality; to disable it, search for it and uncomment it in the /etc/ssh/sshd_config file. If it’s not set, add it with the value no.

UseDNS no
Disable SSH DNS Lookup

3. Reuse SSH Connection

An ssh client program is used to establish connections to an sshd daemon accepting remote connections. You can reuse an already-established connection when creating a new ssh session, and this can significantly speed up subsequent sessions.

You can enable this in your ~/.ssh/config file.

Host *
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600

The above configuration (Host *) will enable connection re-use for all remote servers you connect to using these directives:

  • ControlMaster – enables the sharing of multiple sessions over a single network connection.
  • ControlPath – defines a path to the control socket used for connection sharing.
  • ControlPersist – if used together with ControlMaster, tells ssh to keep the master connection open in the background (waiting for future client connections) once the initial client connection has been closed.
Reuse SSH Connections
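
Note that the directory referenced by ControlPath must exist before the first connection; create it once, and you can later confirm a master connection is running with the -O control option (a sketch, using a placeholder host):

$ mkdir -p ~/.ssh/sockets
$ ssh -O check user@remote-server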

You can enable this for connections to a specific remote server, for instance:

Host server1
HostName www.example.com
IdentityFile ~/.ssh/webserver.pem
User username_here
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600

This way you only suffer the connection overhead for the first connection, and all subsequent connections will be much faster.
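
You can verify the effect by timing a second connection, which should complete almost instantly while the master connection is up (a sketch):

$ time ssh server1 true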

4. Use Specific SSH Authentication Method

Another way of speeding up ssh connections is to use a given authentication method for all ssh connections, and here we recommend configuring ssh passwordless login using ssh keygen in 5 easy steps.

Once that is done, use the PreferredAuthentications directive, within ssh_config files (global or user specific) above. This directive defines the order in which the client should try authentication methods (you can specify a comma-separated list to use more than one method).

PreferredAuthentications=publickey 
SSH Authentication Method

Alternatively, use the syntax below from the command line.

# ssh -o "PreferredAuthentications=publickey" user@server_ip

If you prefer password authentication, which is deemed insecure, use this.

# ssh -o "PreferredAuthentications=password" user@server_ip

Finally, if you modified the sshd configuration, you need to restart the sshd daemon for the changes to take effect.

# systemctl restart sshd #Systemd
# service sshd restart #SysVInit

For more information about the directives used here, see the ssh_config and sshd_config man pages.

# man ssh_config
# man sshd_config 

Also check out these useful guides for securing ssh on Linux systems:

  1. 5 Best Practices to Secure and Protect SSH Server
  2. How to Disconnect Inactive or Idle SSH Connections in Linux

That’s all for now! Do you have any tips/tricks for speeding up SSH connections? We would love to hear of other ways of doing this. Use the comment form below to share with us.

How to Configure Basic HTTP Authentication in Nginx

Basic HTTP authentication is a security mechanism to restrict access to your website/application or some parts of it by setting up simple username/password authentication. It can be used essentially to protect the whole HTTP server, individual server blocks (virtual hosts in Apache) or location blocks.

Read Also: How to Setup Name-based and IP-based Virtual Hosts (Server Blocks) with NGINX

As the name suggests, it is not a security method to rely on by itself; you should use it in conjunction with other, more reliable security measures. For instance, if your web application is running over plain HTTP, user credentials are transmitted in plain text, so you should consider enabling HTTPS.

The purpose of this guide is to help you add a small but useful layer of security to protect private/privileged content on your web applications (such as, but not limited to, administrator areas). You can also use it to prevent access to a website or application which is still in the development phase.

Requirements

  1. Install LEMP Stack in CentOS/RHEL 7
  2. Install LEMP Stack in Ubuntu/Debian

Create HTTP Authentication User File


You should start by creating a file that will store username:password pairs. We will use the htpasswd utility from Apache HTTP Server to create this file.

First check that apache2-utils (Debian/Ubuntu) or httpd-tools (RHEL/CentOS), the packages which provide the htpasswd utility, are installed on your system; otherwise run the appropriate command for your distribution to install it:

# yum install httpd-tools [RHEL/CentOS]
$ sudo apt install apache2-utils [Debian/Ubuntu]

Next, run the htpasswd command below to create the password file with the first user. The -c option is used to create the password file; once you hit [Enter], you will be asked to enter the user password.

# htpasswd -c /etc/nginx/conf.d/.htpasswd developer

Add a second user; do not use the -c option here, since the file already exists.

# htpasswd /etc/nginx/conf.d/.htpasswd admin

Now that you have the password file ready, proceed to configure the parts of your web server that you want to restrict access to. To view the password file content (which includes usernames and encrypted passwords), use the cat command below.

# cat /etc/nginx/conf.d/.htpasswd 
View HTTP Password File
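
The output should look something like this (hashes truncated here as placeholders):

developer:$apr1$...
admin:$apr1$...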

Configure HTTP Authentication for Nginx

As we mentioned earlier on, you can restrict access to your web server, a single web site (using its server block), or a location directive. Two useful directives can be used to achieve this.

  • auth_basic – turns on validation of user name and password using the “HTTP Basic Authentication” protocol.
  • auth_basic_user_file – specifies the password file.

Password Protect Nginx Virtual Hosts

To implement basic authentication for the whole web server, which applies to all server blocks, open the /etc/nginx/nginx.conf file and add the lines below in the http context:

http {
	auth_basic "Restricted Access!";
	auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
	...
}

Password Protect Nginx Website or Domain

To enable basic authentication for a particular domain or sub-domain, open its configuration file under /etc/nginx/conf.d/ or /etc/nginx/conf/sites-available (depending on how you installed Nginx), then add the configuration below in the server block or context:

server {
	listen 80;
	server_name example.com;
	auth_basic "Restricted Access!";
	auth_basic_user_file /etc/nginx/conf.d/.htpasswd;

	location / {
		...
	}
	...
}

Password Protect Web Directory in Nginx

You can also enable basic authentication within a location directive. In the example below, all users trying to access the /admin location block will be asked to authenticate.

server {
	listen 80;
	server_name example.com www.example.com;

	location / {
		...
	}
	location /admin/ {
		auth_basic "Restricted Access!";
		auth_basic_user_file /etc/nginx/conf.d/.htpasswd;
	}
	location /public/ {
		auth_basic off;	#turns off basic HTTP authentication for this block
	}
	...
}

If you have configured basic HTTP authentication, all users who try to access your web server, or a sub-domain or specific part of a site (depending on where you implemented it), will be asked for a username and password, as shown in the screenshot below.

Nginx Basic Authentication

In case of a failed user authentication, a “401 Authorization Required” error will be displayed as shown below.

401 Authorization Required Error
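
You can also test from the command line with curl, for example against the protected /admin/ location above; curl will prompt for the user's password:

$ curl -u developer http://example.com/admin/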

You can find more information in the Nginx documentation on Restricting Access with Basic HTTP Authentication.

You might also like to read these following useful Nginx HTTP server related guides.

  1. How to Password Protect Web Directories in Nginx
  2. The Ultimate Guide to Secure, Harden and Improve Performance of Nginx
  3. Setting Up HTTPS with Let’s Encrypt SSL Certificate For Nginx

In this guide, we showed how to implement basic HTTP authentication in Nginx HTTP web server. To ask any questions, use the feedback form below.

How to Download and Extract Tar Files with One Command

Tar (Tape Archive) is a popular file archiving format in Linux. It can be used together with gzip (tar.gz) or bzip2 (tar.bz2) for compression. It is the most widely used command line utility to create compressed archive files (packages, source code, databases, and so much more) that can be transferred easily from one machine to another or over a network.

In this article, we will show you how to download tar archives using two well-known command line downloaders – wget and cURL – and extract them with one single command.

How to Download and Extract File Using Wget Command

The example below shows how to download and unpack the latest GeoLite2 Country database (used by the GeoIP Nginx module) into the current directory.

# wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz -O - | tar -xz
Download and Extract File with Wget

The wget option -O specifies the file to which the download is written; here we use -, meaning the archive is written to standard output and piped to tar. The tar flag -x enables extraction of archive files, and -z decompresses compressed archive files created by gzip.

To extract the tar files to a specific directory, /etc/nginx/ in this case, use the -C flag as follows.

Note: If extracting to a directory that requires root permissions, use the sudo command to run tar.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz -O - | sudo tar -xz -C /etc/nginx/
Download and Extract File to Directory

Alternatively, you can use the following command; here, the archive file will be downloaded to your system before you extract it.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && tar -xzf  GeoLite2-Country.tar.gz

To extract the compressed archive file to a specific directory, use the following command.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && sudo tar -xzf  GeoLite2-Country.tar.gz -C /etc/nginx/

How to Download and Extract File Using cURL Command

Considering the previous example, this is how you can use cURL to download and unpack archives in the current working directory.

$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz | tar -xz 
Download and Extract File with cURL

To extract the file to a different directory while downloading, use the following command.

$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz | sudo tar -xz  -C /etc/nginx/
OR
$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && sudo tar -xzf GeoLite2-Country.tar.gz -C /etc/nginx/
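
The same approach works for bzip2-compressed archives (.tar.bz2); simply swap tar's -z flag for -j. A sketch, using a hypothetical archive URL:

$ wget -c http://example.com/archive.tar.bz2 -O - | tar -xj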

That’s all! In this short but useful guide, we showed you how to download and extract archive files in one single command. If you have any queries, use the comment section below to reach us.


5 new OpenStack resources

As OpenStack has continued to mature and move from the first stages of adoption to use in production clouds, the focus of the OpenStack community has shifted as well, with more focus than ever on integrating OpenStack with other infrastructure projects. Today’s cloud architects and engineers need to be familiar with a wide range of projects and how they might be of use in their data center, and OpenStack is often the glue stitching the different pieces together.

Keeping up with everything you need to know can be tough. Fortunately, learning new skills is made a little easier by the large number of resources available to help you. Along with project documentation, support from your vendors and the community at large, printed books and other publications, and certification and training programs, there are many wonderful community-created resources as well.

Every month we share some of the best OpenStack-related content we come across, from guides and tutorials to deep-dives and technical notes. Have a look at what we found this month.

  • Security is always important in cloud applications, but sometimes security protocols require conformance to certain exact specifications. In this guide on how to build security hardened images with volumes, learn how to take advantage of changes introduced in the Queens release of OpenStack which allow for using volumes for your images, giving you greater flexibility when resizing filesystems.

  • Real-time systems impose certain operating constraints, including determinism and guaranteed performance levels, which have been historically difficult to find in the cloud. This guide to deploying real-time OpenStack shows you how recent feature additions in Nova can allow for real-time applications in an OpenStack environment. While focused on CentOS and DevStack, with a few modifications this tutorial could be used on other installation profiles as well.

  • The rapid pace of development with OpenStack means an entirely new release becomes available every six months. But in a production environment running mission-critical systems, upgrading at that pace can be difficult. One approach to dealing with this issue is allowing for quick upgrades across multiple releases of OpenStack at a time. TripleO fast-forward upgrades allow this possibility, and this guide will walk you through a rough demo of how it works.

  • Have you wanted to try out the recently open sourced AWX, which is the upstream of Ansible Tower, for managing Ansible projects? You’re in luck. Here is a simple guide to deploying AWX to an OpenStack RDO cloud.

  • Finally, in case you missed it, earlier this month we ran a great tutorial for getting started with Gnocchi. Gnocchi is a tool that enables indexing and storage of time series data, purpose-built for large-scale environments like clouds. While now cloud-agnostic, Gnocchi is commonly installed with OpenStack to manage logging and metrics needs.


Thanks for checking out this month’s roundup. If you’d like to learn more, take a look back at our entire collection of OpenStack guides, how-tos, and tutorials with more than three years of community-made content. Did we leave out a great guide or tutorial that you found? Let us know in the comments below, and we’ll consider putting it in our next edition.