How to generate webpages using CGI scripts

Back in the stone age of the Internet, when I created my first business website, life was good.

I installed Apache and created a few simple HTML pages that stated a few important things about my business and gave important information like an overview of my product and how to contact me. It was a static website because the content seldom changed. Maintenance was simple because of the unchanging nature of my site.

Static content

Static content is easy and still common. Let’s take a quick look at a couple of sample static web pages. You don’t need a working website to perform these little experiments. Just place the files in your home directory and open them with your browser. You will see exactly what you would see if the file were served to your browser by a web server.

The first thing you need on a static website is the index.html file, which is usually located in the /var/www/html directory. This file can be as simple as a text phrase such as “Hello world” without any HTML markup at all. This would simply display the text string. Create index.html in your home directory and add “Hello world” (without the quotes) as its only content. Open index.html in your browser using a file:// URL.
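From the shell, that experiment looks like this (the firefox command is just one example of a browser you might use):

```shell
# create the one-line page in your home directory
echo "Hello world" > "$HOME/index.html"

# then open it with a file:// URL, for example:
# firefox "file://$HOME/index.html"
```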


So HTML is not required, but if you had a large amount of text that needed formatting, the results of a web page with no HTML coding would be incomprehensible with everything running together.

So the next step is to make the content more readable by using a bit of HTML coding to provide some formatting. The following command creates a page with the absolute minimum markup required for a static web page with HTML. You could also use your favorite editor to create the content.

echo "<h1>Hello World</h1>" > index.html

Now view index.html and see the difference.

Of course, you can put a lot of additional HTML around the actual content line to make a more complete and standard web page. That more complete version will still display the same results in the browser, but it also forms the basis for a more standardized website. Go ahead and use this content for your index.html file and display it in your browser.
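The fuller page can be created from the shell as well; the markup below is my own sketch of a common minimal skeleton, not necessarily the article’s exact original:

```shell
# write a minimal but standard HTML page to index.html in your home directory
cat > "$HOME/index.html" <<'EOF'
<!DOCTYPE html>
<html>
  <head>
    <title>Hello World</title>
  </head>
  <body>
    <h1>Hello World</h1>
  </body>
</html>
EOF
```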

I built a couple static websites using these techniques, but my life was about to change.

Dynamic web pages for a new job

I took a new job in which my primary task was to create and maintain the CGI (Common Gateway Interface) code for a very dynamic website. In this context, dynamic means that the HTML needed to produce the web page on a browser was generated from data that could be different every time the page was accessed. This includes input from the user on a web form that is used to look up data in a database. The resulting data is surrounded by appropriate HTML and displayed on the requesting browser. But it does not need to be that complex.

Using CGI scripts for a website allows you to create simple or complex interactive programs that can be run to provide a dynamic web page that can change based on input, calculations, current conditions in the server, and so on. There are many languages that can be used for CGI scripts. We will look at two of them, Perl and Bash. Other popular CGI languages include PHP and Python.

This article does not cover installation and setup of Apache or any other web server. If you have access to a web server that you can experiment with, you can directly view the results as they would appear in a browser. Otherwise, you can still run the programs from the command line and view the HTML that would be created. You can also redirect that HTML output to a file and then display the resulting file in your browser.

Using Perl

Perl is a very popular language for CGI scripts. Its strength is that it is a very powerful language for the manipulation of text.

To get CGI scripts to execute, you need the following line in the httpd.conf file for the website you are using. This tells the web server where your executable CGI files are located. For this experiment, let’s not worry about that.

ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

Add the following Perl code to the file index.cgi, which should be located in your home directory for your experimentation. Set the ownership of the file to apache.apache when you use a web server, and set the permissions to 755 because it must be executable no matter where it is located.

#!/usr/bin/perl
print "Content-type: text/html\n\n";
print "<html><body>\n";
print "<h1>Hello World</h1>\n";
print "Using Perl<p>\n";
print "</body></html>\n";

Run this program from the command line and view the results. It should display the HTML code it will generate.

Now view the index.cgi in your browser. Well, all you get is the contents of the file. Browsers really need to have this delivered as CGI content. Apache does not know that it needs to run the file as a CGI program unless the Apache configuration for the website includes the “ScriptAlias” definition as shown above. Without that bit of configuration, Apache simply sends the data in the file to the browser. If you have access to a web server, you could try this out with your executable index files in the /var/www/cgi-bin directory.

To see what this would look like in your browser, run the program again and redirect the output to a new file. Name it whatever you want. Then use your browser to view the file that contains the generated content.

The above CGI program is still generating static content because it always displays the same output. Add the following line to your CGI program immediately after the “Hello World” line. The Perl “system” command executes the commands following it in a system shell, and returns the result to the program. In this case, we simply grep the current RAM usage out of the results from the free command.

system "free | grep Mem\n";

Now run the program again and redirect the output to the results file. Reload the file in the browser. You should see an additional line that displays the system memory statistics. Run the program and refresh the browser a couple more times and notice that the memory usage should change occasionally.

Using Bash

Bash is probably the simplest language of all for use in CGI scripts. Its primary strength for CGI programming is that it has direct access to all of the standard GNU utilities and system programs.

Rename the existing index.cgi to Perl.index.cgi and create a new index.cgi with the following content. Remember to set the permissions correctly to executable.

#!/bin/bash
echo "Content-type: text/html"
echo ""
echo '<html>'
echo '<head>'
echo '<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">'
echo '<title>Hello World</title>'
echo '</head>'
echo '<body>'
echo '<h1>Hello World</h1><p>'
echo 'Using Bash<p>'
free | grep Mem
echo '</body>'
echo '</html>'
exit 0

Execute this program from the command line and view the output, then run it and redirect the output to the temporary results file you created before. Then refresh the browser to view what it looks like displayed as a web page.
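As a condensed, self-contained sketch of those steps (the script body is abbreviated from the version above):

```shell
# recreate a shortened Bash CGI script and capture its output for the browser
cat > index.cgi <<'EOF'
#!/bin/bash
echo "Content-type: text/html"
echo ""
echo '<html><body>'
echo '<h1>Hello World</h1><p>'
echo 'Using Bash<p>'
free | grep Mem
echo '</body></html>'
exit 0
EOF
chmod 755 index.cgi
./index.cgi > results.html
```
Open results.html in your browser to see the generated page.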


It is actually very simple to create CGI programs that can be used to generate a wide range of dynamic web pages. This is a trivial example but you should now see some of the possibilities.  

Create custom wallpaper slideshows in GNOME

A very cool, yet lesser known, feature in GNOME is its ability to display a slideshow as your wallpaper. You can select a wallpaper slideshow from the background settings panel in the GNOME Control Center. Wallpaper slideshows can be distinguished from static wallpapers by a small clock emblem displayed in the lower-right corner of the preview.

Some distributions come with pre-installed slideshow wallpapers. For example, Ubuntu includes the stock GNOME timed wallpaper slideshow, as well as one made up of Ubuntu wallpaper contest winners.

What if you want to create your own custom slideshow to use as a wallpaper? While GNOME doesn’t provide a user interface for this, it’s easy to create one with a couple of XML files in your home directory. Fortunately, the background selection in the GNOME Control Center honors some common directory paths, which makes it easy to create a slideshow without having to edit anything provided by your distribution.

Getting started

Using your favorite text editor, create an XML file in $HOME/.local/share/gnome-background-properties/. Although the filename isn’t important, the directory name matters (and you’ll probably have to create the directory). For my example, I created /home/ken/.local/share/gnome-background-properties/osdc-wallpapers.xml with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE wallpapers SYSTEM "gnome-wp-list.dtd">
<wallpapers>
  <wallpaper deleted="false">
    <name>Wallpapers</name>
    <!-- the path below is an example; point it at your own slideshow XML -->
    <filename>/home/ken/Pictures/osdc.xml</filename>
  </wallpaper>
</wallpapers>

The above XML file needs a <wallpaper> stanza for each slideshow or static wallpaper you want to include in the backgrounds panel of the GNOME Control Center.

In this example, my osdc.xml file looks like this:

<?xml version="1.0" ?>
<background>
  <static>
    <!-- Duration in seconds to display the background -->
    <duration>30.0</duration>
    <!-- example image path -->
    <file>/home/ken/Pictures/1.png</file>
  </static>
  <transition>
    <!-- Duration of the transition in seconds, default is 2 seconds -->
    <duration>0.5</duration>
    <from>/home/ken/Pictures/1.png</from>
    <to>/home/ken/Pictures/2.png</to>
  </transition>
</background>

There are a few important pieces in the above XML. The <background> node in the XML is your outer node. Each background supports multiple <static> and <transition> nodes.

The <static> node defines an image to be displayed and the duration to display it, with <file> and <duration> nodes, respectively.

The <transition> node defines the <duration>, the <from> image, and the <to> image for each transition.

Changing wallpaper throughout the day

Another cool GNOME feature is time-based slideshows. You can define the start time for the slideshow and GNOME will calculate times based on it. This is useful for setting different wallpapers based on the time of day. For example, you could set the start time to 06:00 and display one wallpaper until 12:00, then change it for the afternoon, and again at 18:00.

This is accomplished by defining the <starttime> in your XML like this:

<background>
  <starttime>
    <!-- A start time in the past is fine -->
    <year>2017</year>
    <month>11</month>
    <day>21</day>
    <hour>6</hour>
    <minute>00</minute>
    <second>00</second>
  </starttime>
  <static>
    <duration>21600.0</duration>
    <!-- example image path -->
    <file>/home/ken/Pictures/morning.png</file>
  </static>
</background>

The above XML starts the animation at 06:00 on November 21, 2017, with a duration of 21,600.00 seconds, equal to six hours. This displays your morning wallpaper until 12:00, at which time it changes to your next wallpaper. You can continue in this manner to change the wallpaper at any intervals you’d like throughout the day, but ensure the total of all your durations is 86,400 seconds (equal to 24 hours).

GNOME will calculate the delta between the start time and the current time and display the correct wallpaper for the current time. For example, if you select your new wallpaper at 16:00, GNOME will display the proper wallpaper for 36,000 seconds past the start time of 06:00.
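That delta calculation is easy to sanity-check in the shell, treating times of day as seconds since midnight:

```shell
# 06:00 start time and 16:00 current time, expressed in seconds since midnight
start=$(( 6 * 3600 ))
now=$(( 16 * 3600 ))

# seconds past the start time, which GNOME uses to pick the wallpaper
echo $(( now - start ))   # → 36000
```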

For a complete example, see the adwaita-timed slideshow provided by the gnome-backgrounds package in most distributions. It’s usually found in /usr/share/backgrounds/gnome/adwaita-timed.xml.

For more information

Hopefully this encourages you to take a dive into creating your own slideshow wallpapers. If you would like to download complete versions of the files referenced in this article, they can be found on GitHub.

If you’re interested in utility scripts for generating the XML files, you can do an internet search for gnome-background-generator.

Paying it forward at Finland's Aalto Fablab

Originating at MIT, a fab lab is a technology prototyping platform where learning, experimentation, innovation, and invention are encouraged through curiosity, creativity, hands-on making, and most critically, open knowledge sharing. Each fab lab provides a common set of tools (including digital fabrication tools like laser cutters, CNC mills, and 3D printers) and processes, so you can learn how to work in a fab lab anywhere and use those skills at any of the 1,000+ fab labs across the globe. There is probably a fab lab near you.

Fab labs can be found anywhere avant-garde makers and hackers live, but they have also cropped up at libraries and other public spaces. For example, the Aalto Fablab, the first fab lab in Finland, is in the basement of Aalto University’s library, in Espoo. Solomon Embafrash, the studio master, explains, “Aalto Fablab was in the Arabia campus with the School of Arts and Design since 2011. As Aalto decided to move all the activities concentrated in one campus (Otaniemi), we decided that a dedicated maker space would complement the state-of-the-art library in the heart of Espoo.”

The library, which is now a full learning center, sports a maker space that consists of a VR hub, a visual resources center, a studio, and of course, the Fablab. With the expansion of the Helsinki metro to a new station across the street from the Aalto Fablab, everyone in the region now has easy access to it.

The Fab Lab Charter states: “Designs and processes developed in fab labs can be protected and sold however an inventor chooses, but should remain available for individuals to use and learn from.” The “protected” part does not quite meet the requirements set by the Open Source Hardware Association’s definition of open source hardware; however, for those not involved in commercialization of products, the code is available for a wide range of projects created in fab labs (like the FabFi, an open source wireless network).

That means fab labs are effectively feeding the open source ecosystem that allows digitally distributed manufacturing of a wide range of products, as many designers choose to release their designs with fully free licenses. Even the plans for creating a fab lab are openly shared by the U.S. non-profit Fab Foundation.

All fab labs are required to provide open access to the community; however, some, like the Aalto Fablab, take that requirement one step further. The Aalto Fablab is free to use, but if you wish to use bulk materials from its stock for your project—for example, to make a new chair—you need to pay for them. You are also expected to respect the philosophy of open knowledge-sharing by helping others, documenting your work, and sharing what you have learned. Specifically, the Aalto Fablab asks that you “pay forward” what you have learned to other users, who may be able to build upon your work and help speed development.

Embafrash adds, “There is a very old tradition of free services in Finland, like the library service and education. We used to charge users a few cents for the material cost of the 3D prints, but we found that it makes a lot of sense to keep it free, as it encourages people to our core philosophy of Fablab, which is idea sharing and documentation.”

This approach has proven successful, fostering enormous interest in the local community for making and sharing. For example, the Unseen Art project, an open source platform that allows the visually impaired to enjoy 3D printed art, started in the Aalto Fablab.

Fablab members organize local Maker Faire events and work closely with the maker community, local schools, and other organizations. “The Fablab has open days, which are very popular times that people from outside the university get access to the resources, and our students get the exposure to work with people outside the school community,” Embafrash says.

In this way, the more they share, the more their university benefits.

This article was supported by Fulbright Finland, which is currently sponsoring my research in open source scientific hardware in Finland as the Fulbright-Aalto University Distinguished Chair.

How to Download and Extract Tar Files with One Command

Tar (Tape Archive) is a popular file archiving format in Linux. It can be used together with gzip (tar.gz) or bzip2 (tar.bz2) for compression. It is the most widely used command-line utility to create compressed archive files (packages, source code, databases, and so much more) that can be transferred easily from one machine to another or over a network.

In this article, we will show you how to download tar archives using two well-known command-line downloaders, wget and cURL, and extract them with one single command.

How to Download and Extract File Using Wget Command

The example below shows how to download and unpack the latest GeoLite2 Country database (used by the GeoIP Nginx module) in the current directory.

# wget -c -O - | tar -xz
Download and Extract File with Wget

The wget option -O specifies a file to which the output is written; here we use -, meaning standard output, so the download is piped straight to tar. The tar flag -x enables extraction of archive files, and -z decompresses archives created by gzip.
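Because the download URL is omitted above, here is a self-contained sketch of the same pipe-to-tar pattern, using a locally built archive in place of the wget download (all file and directory names are arbitrary):

```shell
# build a sample .tar.gz, then stream it into tar exactly as "wget -O - | tar -xz" would
mkdir -p demo/src demo/out
echo "hello" > demo/src/file.txt
tar -czf demo/sample.tar.gz -C demo src

# stand-in for: wget -qO- URL | tar -xz -C demo/out
cat demo/sample.tar.gz | tar -xz -C demo/out

cat demo/out/src/file.txt   # → hello
```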

To extract tar files to a specific directory, /etc/nginx/ in this case, use the -C flag as follows.

Note: If extracting to a directory that requires root permissions, use the sudo command to run tar.

$ sudo wget -c -O - | sudo tar -xz -C /etc/nginx/
Download and Extract File to Directory

Alternatively, you can use the following command, where the archive file is downloaded to your system before you extract it.

$ sudo wget -c && tar -xzf  GeoLite2-Country.tar.gz

To extract compressed archive file to a specific directory, use the following command.

$ sudo wget -c && sudo tar -xzf  GeoLite2-Country.tar.gz -C /etc/nginx/

How to Download and Extract File Using cURL Command

Considering the previous example, this is how you can use cURL to download and unpack archives in the current working directory.

$ sudo curl | tar -xz 
Download and Extract File with cURL

To extract the file to a different directory while downloading, use one of the following commands.

$ sudo curl | sudo tar -xz  -C /etc/nginx/
$ sudo curl && sudo tar -xzf GeoLite2-Country.tar.gz -C /etc/nginx/

That’s all! In this short but useful guide, we showed you how to download and extract archive files in one single command. If you have any queries, use the comment section below to reach us.

5 new OpenStack resources

As OpenStack has continued to mature and move from the first stages of adoption to use in production clouds, the focus of the OpenStack community has shifted as well, with more focus than ever on integrating OpenStack with other infrastructure projects. Today’s cloud architects and engineers need to be familiar with a wide range of projects and how they might be of use in their data center, and OpenStack is often the glue stitching the different pieces together.

Keeping up with everything you need to know can be tough. Fortunately, learning new skills is made a little easier by the large number of resources available to help you. Along with project documentation, support from your vendors and the community at large, printed books and other publications, and certification and training programs, there are many wonderful community-created resources as well.

Every month we share some of the best OpenStack-related content we come across, from guides and tutorials to deep-dives and technical notes. Have a look at what we found this month.

  • Security is always important in cloud applications, but sometimes security protocols require conformance to certain exact specifications. In this guide on how to build security hardened images with volumes, learn how to take advantage of changes introduced in the Queens release of OpenStack which allow for using volumes for your images, giving you greater flexibility when resizing filesystems.

  • Real-time systems impose certain operating constraints, including determinism and guaranteed performance levels, which have been historically difficult to find in the cloud. This guide to deploying real-time OpenStack shows you how recent feature additions in Nova can allow for real-time applications in an OpenStack environment. While focused on CentOS and DevStack, with a few modifications this tutorial could be used on other installation profiles as well.

  • The rapid pace of development with OpenStack means an entirely new release becomes available every six months. But in a production environment running mission-critical systems, upgrading at that pace can be difficult. One approach to dealing with this issue is allowing for quick upgrades across multiple releases of OpenStack at a time. TripleO fast-forward upgrades allow this possibility, and this guide will walk you through a rough demo of how it works.

  • Have you wanted to try out the recently open sourced AWX, which is the upstream of Ansible Tower, for managing Ansible projects? You’re in luck. Here is a simple guide to deploying AWX to an OpenStack RDO cloud.

  • Finally this month, in case you missed it, earlier this month we ran a great tutorial for getting started with Gnocchi. Gnocchi is a tool that enables indexing and storage of time-series data and is purpose-built for large-scale environments like clouds. While now cloud-agnostic, Gnocchi is commonly installed with OpenStack to manage logging and metrics needs.

Thanks for checking out this month’s roundup. If you’d like to learn more, take a look back at our entire collection of OpenStack guides, how-tos, and tutorials with more than three years of community-made content. Did we leave out a great guide or tutorial that you found? Let us know in the comments below, and we’ll consider putting it in our next edition.

Part 1: How I Built a cPanel Hosting Environment on Amazon AWS

People argue for and against building a production hosting environment on top of cloud services such as Amazon’s AWS. I recently made the decision to migrate my entire hosting infrastructure from co-located dedicated hardware to a full implementation built entirely on top of Amazon’s Web Services.

I will be releasing a four-part series detailing the tricks I’ve learned in my own migration to AWS and walking you through setting up your own full-service hosting environment within the AWS ecosystem, all while still leveraging the power of cPanel, WHM, and DNSONLY.

I chose to use AWS, more specifically EC2, VPC and S3, for its rapid deployment, unlimited scaling, load balancing, and global distribution abilities. Working with AWS, I started to realize just how powerful it could become.

I started this challenge with a few key questions: What are the benefits and the challenges one would face working in an environment like this? All of our servers run instances of cPanel/WHM, so what are the difficulties in setting up cPanel in an AWS environment?

Amazon’s AWS platform is built behind a NAT infrastructure, so configuring cPanel for a NAT used to be an elaborate ballet of duct-taped scripts and hooks. However, with cPanel 11.39, I’ve been able to seamlessly migrate my entire infrastructure (30+ instances) from a dedicated environment to AWS without any misstep.

The result is a solid hosting architecture using Amazon VPC (Virtual Private Cloud), Amazon EC2 (Elastic Cloud Compute) and Amazon S3 (Simple Storage Service), built with cPanel/WHM/DNSONLY that not only works on AWS, but makes deployment and provisioning of new servers unbelievably rapid and simple.

Below is a quick overview of the architecture implemented as well as instance types used for provisioning instances. While I cannot link directly to specific AMIs (Amazon Machine Images), selecting your desired operating system and getting cPanel/WHM installed is a straightforward procedure.


  • First, you must have a working knowledge of the command line, networking, Amazon AWS, and cPanel/WHM/DNSONLY.
  • Second, this model will run two dedicated nameservers (cPanel DNSONLY), the node servers will not be running DNS and will be configured in a cluster.
  • Third, I won’t be going over the registration process of AWS, you need to already have an active account.

Some instructions below are borrowed from Amazon’s AWS User Guide.

A Representation of the Basic Network Architecture

This Lesson Includes

  • Creating a new Amazon VPC Instance
  • Defining subnet scope
  • Creating and defining Security Groups

Setup the VPC, Subnet, & Internet Gateway:

  1. Open the Amazon VPC console at
  2. Click “VPC Dashboard” in the navigation pane.
  3. Locate the "Your Virtual Private Cloud" area of the dashboard and click "Get started creating a VPC" if you have no VPC resources, or click "Start VPC Wizard".
  4. Select the first option, VPC with a Single Public Subnet Only, and then click Continue.

  5. The confirmation page shows the CIDR ranges and settings that you’ve chosen. Since this is going to be a small network, click "Edit VPC IP CIDR Block" and change the value to "". This gives us 251 usable IPs on the gateway.
  6. Click "Create VPC" to create your VPC, subnet, Internet gateway, and route table.

Create Security Groups

Security Groups are essentially Firewall Rules that can be applied on a per-instance basis. We are going to create two primary Security Groups, one for Name Servers and one for Web Servers. Of course, your specific scenario will differ from the one represented here, so feel free to create as many Security Groups as needed.

In my use case scenario, I established a Security Group for Name Servers, Shared Web Servers, and Dedicated VPSs. Again, tailor these to meet your needs.

  1. Open the Amazon VPC console at
  2. Click “Security Groups” in the navigation pane.
  3. Click the “Create Security Group” button.
  4. Specify NS_SG as the name of the security group, and provide a description. Select the ID of your VPC from the "VPC" menu, and then click "Yes, Create".
  5. Click the “Create Security Group” button.
  6. Specify VS_SG as the name of the security group, and provide a description. Select the ID of your VPC from the "VPC" menu, and then click "Yes, Create".
  7. Select the “NS_SG” security group that you just created. The details pane includes a tab for information about the security group, plus tabs for working with its inbound rules and outbound rules.

On the “Inbound” tab, do the following:

  1. Select "All Traffic" from the Create a new rule list, make sure that Source is "", and then click "Add Rule".
  2. Click “Apply Rule Changes” to apply these inbound rules.

On the “Outbound” tab, do the following:

  1. "All Traffic" is allowed by default; we will temporarily keep this rule.

Complete the same steps above for the “VS_SG” you created.

If you’ve made it this far, you’re probably halfway to a panic attack wondering why we’ve opened up all inbound and outbound ports. Each environment’s needs for port availability will obviously be unique, but for most standard cPanel/WHM installations, you can have a look at this informative article, Getting The Most Out of Your System’s Firewall, detailing commonly used ports by cPanel and its bundled services, and then choose to open or close the ports at the firewall level accordingly.

Alternately, you can keep all inbound/outbound traffic at the firewall level as pass-through (as detailed above) and handle your firewall at the instance level with a software based firewall.

cPanel supports numerous software-based firewalls that are freely available to download and install. Personally, I use and highly recommend ConfigServer Security & Firewall. It’s dead simple to install, and I recommend running the security scan once you have it configured to ensure you’ve taken extra steps in hardening your systems.

Up Next

  • Creating and Launching Name Server Instances Into Your New VPC
  • Configuring your Name Server
  • Basic Cluster Configuration


Getting started with .NET for Linux

When you know a software developer’s preferred operating system, you can often guess what programming language(s) they use. If they use Windows, the language list includes C#, JavaScript, and TypeScript. A few legacy devs may be using Visual Basic, and the bleeding-edge coders are dabbling in F#. Even though you can use Windows to develop in just about any language, most stick with the usuals.

If they use Linux, you get a list of open source projects: Go, Python, Ruby, Rails, Grails, Node.js, Haskell, Elixir, etc. It seems that as each new language—Kotlin, anyone?—is introduced, Linux picks up a new set of developers.

So leave it to Microsoft (Microsoft?!?) to throw a wrench into this theory by making the .NET framework, dubbed .NET Core, open source and available to run on any platform: Windows, Linux, MacOS, and even a television OS, Samsung’s Tizen. Add in Microsoft’s other .NET flavors, including Xamarin, and you can add the iOS and Android operating systems to the list. (Seriously? I can write a Visual Basic app to run on my TV? What strangeness is this?)

Given this situation, it’s about time Linux developers get comfortable with .NET Core and start experimenting, perhaps even building production applications. Pretty soon you’ll meet that person: “I use Linux … I write C# apps.” Brace yourself: .NET is coming.

How to install .NET Core on Linux

The list of Linux distributions on which you can run .NET Core includes Red Hat Enterprise Linux (RHEL), Ubuntu, Debian, Fedora, CentOS, Oracle, and SUSE.

Each distribution has its own installation instructions. For example, consider Fedora 26:

Step 1: Add the dotnet product feed.

        sudo rpm --import
        sudo sh -c 'echo -e "[packages-microsoft-com-prod]\nname=packages-microsoft-com-prod\nbaseurl=\nenabled=1\ngpgcheck=1\ngpgkey=" > /etc/yum.repos.d/dotnetdev.repo'

Step 2: Install the .NET Core SDK.

        sudo dnf update
        sudo dnf install libunwind libicu compat-openssl10
        sudo dnf install dotnet-sdk-2.0.0

Creating the Hello World console app

Now that you have .NET Core installed, you can create the ubiquitous “Hello World” console application before learning more about .NET Core. After all, you’re a developer: You want to create and run some code now. Fair enough; this is easy. Create a directory, move into it, create the code, and run it:

mkdir helloworld && cd helloworld
dotnet new console
dotnet run

You’ll see the following output:

$ dotnet run
Hello World!

What just happened?

Let’s take what just happened and break it down. We know what the mkdir and cd did, but after that?

dotnet new console

As you no doubt have guessed, this created the “Hello World!” console app. The key things to note are: The project name matches the directory name (i.e., “helloworld”); the code was built using a template (console application); and the project’s dependencies were automatically retrieved by the dotnet restore command, which pulls from

If you view the directory, you’ll see these files were created:


Program.cs is the C# console app code. Go ahead and take a look inside (you already did … I know … because you’re a developer), and you’ll see what’s going on. It’s all very simple.

Helloworld.csproj is the MSBuild-compatible project file. In this case there’s not much to it. When you create a web service or website, the project file will take on a new level of significance.

dotnet run

This command did two things: It built the code, and it ran the newly built code. Whenever you invoke dotnet run, it will check to see if the *.csproj file has been altered and will run the dotnet restore command. It will also check to see if any source code has been altered and will, behind the scenes, run the dotnet build command which—you guessed it—builds the executable. Finally, it will run the executable.

Sort of.

Where is my executable?

Oh, it’s right there. Just run which dotnet and you’ll see (on RHEL): 


That’s your executable.

Sort of.

When you create a dotnet application, you’re creating an assembly … a library … yes, you’re creating a DLL. If you want to see what is created by the dotnet build command, take a peek at bin/Debug/netcoreapp2.0/. You’ll see helloworld.dll, some JSON configuration files, and a helloworld.pdb (debug database) file. You can look at the JSON files to get some idea as to what they do (you already did … I know … because you’re a developer).

When you run dotnet run, the process that runs is dotnet. That process, in turn, invokes your DLL file and it becomes your application.

It’s portable

Here’s where .NET Core really starts to depart from the Windows-only .NET Framework: The DLL you just created will run on any system that has .NET Core installed, whether it be Linux, Windows, or macOS. It’s portable. In fact, it is literally called a “portable application.”

Forever alone

What if you want to distribute an application and don’t want to ask the user to install .NET Core on their machine? (Asking that is sort of rude, right?) Again, .NET Core has the answer: the standalone application.

Creating a standalone application means you can distribute the application to any system and it will run, without the need to have .NET Core installed. This means a faster and easier installation. It also means you can have multiple applications running different versions of .NET Core on the same system. It also seems like it would be useful for, say, running a microservice inside a Linux container. Hmmm…

What’s the catch?

Okay, there is a catch. For now. When you create a standalone application using the dotnet publish command, your DLL is placed into the target directory along with all of the .NET bits necessary to run your DLL. That is, you may see 50 files in the directory. This is going to change soon. An already-running-in-the-lab initiative, .NET Native, will soon be introduced with a future release of .NET Core. This will build one executable with all the bits included. It’s just like when you are compiling in the Go language, where you specify the target platform and you get one executable; .NET will do that as well.

You do need to build once for each target, which only makes sense. You simply include a runtime identifier and build the code, like this example, which builds the release version for RHEL 7.x on a 64-bit processor:

dotnet publish -c Release -r rhel.7-x64

Web services, websites, and more

So much more is included with the .NET Core templates, including support for F# and Visual Basic. To get a starting list of available templates that are built into .NET Core, use the command dotnet new --help.

Hint: .NET Core templates can be created by third parties. To get an idea of some of these third-party templates, check out these templates, then let your mind start to wander…

Like most command-line utilities, contextual help is always at hand by using the --help command switch. Now that you’ve been introduced to .NET Core on Linux, the help function and a good web search engine are all you need to get rolling.

Other resources

Ready to learn more about .NET Core on Linux? Check out these resources:

How the OpenType font system works

Digital typography is something that we use every day, but few of us understand how digital fonts work. This article gives a basic, quick-and-dirty, oversimplified (but hopefully useful) tour of OpenType: what it is and how you can use its powers with free, libre, and open source software (FLOSS). All the fonts mentioned here are FLOSS, too.

What is OpenType?

On the most basic level, a digital font is a “container” for different glyphs plus extra information about how to use them. Each glyph is represented by a series of points and rules to connect those points. I’ll not delve into the different ways to define those “connections” or how we arrived there (the history of software development can be messy), but basically there are two kinds of rules: parabolic segments (quadratic Bézier curves) or cubic functions (cubic Bézier curves).

The TTF file format, generally known as TrueType Font, can only use quadratic Bézier curves, whereas the OTF file format, known as OpenType Font, supports both.
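The difference between the two curve types is easier to see with numbers. Here is a minimal Python sketch that evaluates a point on each kind of curve; the control points are arbitrary examples:

```python
def quadratic_bezier(p0, p1, p2, t):
    """Point on a quadratic Bézier curve (TrueType outlines) at parameter t."""
    u = 1 - t
    return tuple(u*u*a + 2*u*t*b + t*t*c for a, b, c in zip(p0, p1, p2))

def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bézier curve (CFF-flavored OpenType outlines) at t."""
    u = 1 - t
    return tuple(u**3*a + 3*u*u*t*b + 3*u*t*t*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Both curves start at p0 and end at the last point; the intermediate
# control points only "pull" the curve toward them.
print(quadratic_bezier((0, 0), (1, 2), (2, 0), 0.5))      # (1.0, 1.0)
print(cubic_bezier((0, 0), (1, 2), (2, 2), (3, 0), 0.5))  # (1.5, 1.5)
```

The extra control point is what gives cubic outlines more expressive freedom per segment.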

Here is where we need to be careful about what we are talking about: The term “OpenType” refers not only to the file format, but also to the advanced properties of a typeface as a whole (i.e., the “extra information” mentioned earlier).

In fact, in addition to the OpenType file format, there are also substitution tables that, for example, tell the software using that font to substitute two characters with the corresponding typographical ligature; that the shape of a character needs to change according to the characters that surround it (its “contextual alternate”); or that when you write in Greek, a σ at the end of a word must be substituted with a ς. This is what the term “smart fonts” means.
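To make the idea concrete, here is a toy Python stand-in for two such rules, a ligature substitution and the Greek final-sigma rule. Real fonts encode these in the binary GSUB table and the text-shaping engine applies them automatically; this sketch is only illustrative:

```python
import re

# Toy "liga" table: letter pairs and their single-glyph ligatures.
LIGATURES = {"fi": "ﬁ", "fl": "ﬂ"}

def apply_ligatures(text):
    """Replace letter pairs with their ligature glyph."""
    for seq, lig in LIGATURES.items():
        text = text.replace(seq, lig)
    return text

def apply_final_sigma(text):
    """Contextual rule: a σ at the end of a word becomes ς."""
    return re.sub(r"σ\b", "ς", text)

print(apply_ligatures("final"))    # ﬁnal
print(apply_final_sigma("λόγοσ"))  # λόγος
```

The point is that the substitutions live in the font, so every well-behaved application gets them for free.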

And, to make things more confusing, it is possible to include OpenType tables in TrueType fonts, as happens with Junicode.

A quick example

Let’s see a quick example of smart fonts in use. Here is an example of Cormorant with (top) and without (bottom) OpenType features enabled:

Each OpenType property has its own “tag” that is used to activate those “specialties.” Some of these tags are enabled by default (like liga for normal ligatures or clig for contextual ligatures), whereas others must be enabled by hand.

A partial list of OpenType tags and names can be found in Dario Taraborelli’s Accessing OpenType font features in LaTeX.

Querying fonts

Finding out the characteristics of an OpenType font is simple. All you need is the otfinfo command, which is included in the lcdf-typetools package (on my openSUSE system, it’s installed as texlive-lcdftypetools). Using it is quite simple: On the command line, issue something like:

otfinfo [option] /path/to/the/font

The option -s provides the languages supported by the font, whereas -f tells us which OpenType options are available. Font license information is displayed with the -i option.

If the path to the font contains a space, escape that space with a backslash. For example, to know what Sukhumala Regular.otf offers when installed in the folder ~/.fonts/s/, simply write in the terminal:

otfinfo -f ~/.fonts/s/Sukhumala\ Regular.otf

Using OpenType tables on LibreOffice Writer

LibreOffice version 5.3 offers good support for OpenType. It is not exactly “user-friendly,” but it’s not that difficult to understand, and it provides so much typographical power that it shouldn’t be ignored.

To simultaneously activate “stylistic sets” 1 and 11 on Vollkorn (see screenshot below), in the font name box, write:

Vollkorn:ss01&ss11

The colon starts the “tag section” on the extended font name and the ampersand allows us to use several tags.

But there is more. You can also disable any default option. For example, the Sukhumala font has some strange contextual ligatures that turn aa into ā, ii into ī, and uu into ū. To disable contextual ligatures on Sukhumala, add a dash in front of the corresponding OpenType tag clig:

Sukhumala:-clig

And that’s it. As I said before, it’s not exactly user friendly, especially considering that the font name box is rather small, but it works!
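The extended-name syntax just described (a colon opens the tag section, ampersands chain tags, and a dash disables one) can be captured in a tiny illustrative parser. This is only a sketch of the syntax, not LibreOffice’s actual code, using Vollkorn’s stylistic sets and Sukhumala’s clig as examples:

```python
def parse_font_name(name):
    """Split an extended font name like 'Vollkorn:ss01&ss11' or
    'Sukhumala:-clig' into (family, {tag: enabled})."""
    family, _, tag_section = name.partition(":")
    features = {}
    if tag_section:
        for tag in tag_section.split("&"):
            enabled = not tag.startswith("-")   # leading dash disables the tag
            features[tag.lstrip("-")] = enabled
    return family, features

print(parse_font_name("Vollkorn:ss01&ss11"))
# ('Vollkorn', {'ss01': True, 'ss11': True})
print(parse_font_name("Sukhumala:-clig"))
# ('Sukhumala', {'clig': False})
```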

And don’t forget to use all of this within styles: Direct formatting is the enemy of good formatting. I mean, unless you are preparing a quick screenshot for a short article about typography. In that case it’s OK. But only in that case.

There’s more

One interesting OpenType tag that, sadly, does not work on LibreOffice yet is “size.” The size feature enables the automated selection of optical sizes: designs within a font family that are tuned for different point sizes. Few fonts offer this option (some GUST fonts like Latin Modern or Antykwa Półtawskiego; an interesting project in its initial stages of development called Coelacanth; or, to a lesser extent, EB Garamond), but they are all great. Right now, the only way to enjoy this property is through a more advanced layout system such as XeTeX. Using OpenType on XeTeX is a really big topic; the fontspec manual (the package that handles font selection and configuration on both XeTeX and LuaTeX) has more than 120 pages, so… not today.

And yes, version 1.5.3 of Scribus added support for OpenType (in addition to footnotes and other stuff), but that’s something I still need to explore.

How to align your team around microservices

Microservices have been a focus across the open source world for several years now. Although open source technologies such as Docker, Kubernetes, Prometheus, and Swarm make it easier than ever for organizations to adopt microservice architectures, getting your team on the same page about microservices remains a difficult challenge.

For a profession that stresses the importance of naming things well, we’ve done ourselves a disservice with microservices. The problem is that there is nothing inherently “micro” about microservices. Some can be small, but size is relative, and there’s no standard measurement unit across organizations. A “small” service at one company might be 1 million lines of code but far fewer at another organization.

Some argue that microservices aren’t a new thing at all, rather a rebranding of service-oriented architecture (SOA), whereas others view microservices as an implementation of SOA, similar to how Scrum is an implementation of Agile. (For more on the ambiguity of microservice definitions, check out the upcoming book Microservices for Startups.)

How do you get your team on the same page about microservices when no precise definition exists? The most important thing when talking about microservices is to ensure that your team is grounded in a common starting point. Ambiguous definitions don’t help. It would be like trying to put Agile into practice without context for what you are trying to achieve or an understanding of precise methodologies like Scrum.

Finding common ground

Knowing the dangers of too eagerly hopping on the microservices bandwagon, a team I worked on tried not to stall on definitions and instead focused on defining the benefits we were trying to achieve with microservices adoption. Following are the three areas we focused on and lessons learned from each piece of our microservices implementation.

1. Ability to ship software faster

Our main application was a large codebase with several small teams of developers trying to build features for different purposes. This meant that every change had to try to satisfy all the different groups. For example, a database change that served only one group had to be reviewed and accepted by other groups that didn’t have as much context. This was tedious and slowed us down.

Having different groups of developers sharing the same codebase also meant that the code continually grew more complex in undeliberate ways. As the codebase grew larger, no one on the team could own it and make sure all the parts were organized and fit together optimally. This made deployment a scary ordeal. A one-line change to our application required the whole codebase to be deployed in order to push out the change. Because deploying our large application was high risk, our quality assurance process grew and, as a result, we deployed less.

With a microservices architecture, we hoped to be able to divide our code up so different teams of developers could fully own parts. This would enable teams to innovate much more quickly without tedious design, review, and deployment processes. We also hoped that having smaller codebases worked on by fewer developers would make our codebases easier to develop, test, and keep organized.

2. Flexibility with technology choices

Our main application was large, built with Ruby on Rails plus a custom JavaScript framework and complex build processes. Several parts of our application hit major performance issues that were difficult to fix and brought down the rest of the application. We saw an opportunity to rewrite these parts of our application using a better approach, but our codebase was intertangled, which made rewriting feel extremely big and costly.

At the same time, one of our frontend teams wanted to pull away from our custom JavaScript framework and build product features with a newer framework like React. But mixing React into our existing application and complex frontend build process seemed expensive to configure.

As time went on, our teams grew frustrated with the feeling of being trapped in a codebase that was too big and expensive to fix or replace. By adopting a microservices architecture, we hoped that keeping individual services small would make the cost of replacing them with a better implementation much easier to manage. We also hoped to be able to pick the right tool for each job rather than being stuck with a one-size-fits-all approach. We’d have the flexibility to use multiple technologies across our different applications as we saw fit. If a team wanted to use something other than Ruby for better performance, or to switch from our custom JavaScript framework to React, it could do so.

3. Microservices are not a free lunch

In addition to outlining the benefits we hoped to achieve, we also made sure we were being realistic about the costs and challenges associated with building and managing microservices. Developing, hosting, and managing numerous services requires substantial overhead (and orchestrating a substantial number of different open source tools). A single, monolithic codebase running on a few processes can easily translate into a couple dozen processes across a handful of services, requiring load balancers, messaging layers, and clustering for resiliency. Managing all of this requires substantial skill and tooling.

Furthermore, microservices involve distributed systems that introduce a whole host of concerns such as network latency, fault tolerance, transactions, unreliable networks, and asynchronicity.
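To make the fault-tolerance point concrete, here is a minimal retry-with-backoff sketch of the kind of defensive code every service-to-service call ends up needing. It is a toy Python illustration, not a recommendation of any particular library; production systems also need timeouts, jitter, and circuit breakers:

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying on failure with exponential backoff.
    Illustrates the extra machinery that remote, unreliable calls
    require compared with in-process function calls."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                      # out of attempts, give up
            time.sleep(base_delay * (2 ** attempt))

# A flaky dependency that fails twice, then succeeds.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("network blip")
    return "ok"

print(call_with_retries(flaky))  # "ok" after two retries
```

In a monolith, this complexity simply does not exist; every service boundary you add brings some of it back.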

Setting your own microservices path

Once we defined the benefits and costs of microservices, we could talk about architecture without falling into counterproductive debates about who was doing microservices right or wrong. Instead of trying to find our way using others’ descriptions or examples of microservices, we instead focused on the core problems we were trying to solve.

  • How would having more services help us ship software faster in the next six to 12 months?
  • Were there strong technical advantages to using a specific tool for a portion of our system?
  • Did we foresee wanting to replace one of the systems with a more appropriate one down the line?
  • How did we want to structure our teams around services as we hired more people?
  • Was the productivity gain from having more services worth the foreseeable costs?

In summary, here are five recommended steps for aligning your team before jumping into microservices:

  1. Learn about microservices while agreeing that there is no “right” definition.
  2. Define a common set of goals and objectives to avoid counterproductive debates.
  3. Discuss and memorialize your anticipated benefits and costs of adopting microservices.
  4. Avoid too eagerly hopping on the microservices bandwagon; be open to creative ideas and spirited debate about how best to architect your systems.
  5. Stay rooted in the benefits and costs your team identified.

Focus on making sure the team has a concretely defined set of common goals to work from. It’s more valuable to discuss and define what you’d like to achieve with microservices than it is to try to pin down what a microservice actually is.

Flint OS, an operating system for a cloud-first world

Given the power of today’s browser platform technology and web frontend performance, it’s not surprising that most things we want to do with the internet can be accomplished through a single browser window. We are stepping into an era where installable apps will become history, where all our applications and services will live in the cloud.

The problem is that most operating systems weren’t designed for an internet-first world. Flint OS (soon to be renamed FydeOS) is a secure, fast, and productive operating system that was built to fill that gap. It’s based on the open source Chromium OS project that also powers Google Chromebooks. Chromium OS is based on the Linux kernel and uses Google’s Chromium browser as its principal user interface, therefore it primarily supports web applications.

Compared to older operating systems, Flint OS:

  • Boots up fast and never gets slow
  • Runs on full-fledged x86 laptops; on single-board computers (SBCs) such as the Raspberry Pi, the Asus Tinker Board, and boards with RK3288 and RK3399 chips; and more
  • Works with keyboard and mouse as well as touch and swipe
  • Has a simple architecture with sophisticated security to prevent viruses and malware
  • Avoids pausing work for updates due to its automated update mechanism
  • Is adding support for Android apps
  • Increases battery life for mobile devices by running applications in the cloud
  • Is familiar to users because it looks like Google Chrome

Downloading and installing Flint OS

Flint OS runs on a wide variety of hardware (Raspberry Pi, PC, Tinker Board, and VMware), and you can find information, instructions, and downloads for different versions on the Flint OS download page.

On PCs, Flint OS must be booted via a USB flash drive (8GB or larger). Make sure to back up your USB drive, since the flashing process will erase all data on it.

To flash Flint OS for PC to the USB drive, we recommend using a new, open source, multi-platform (Windows, macOS, and Linux) tool for USB drive and SD card burning called etcher. It is in beta; we use it to test our builds and absolutely love it.

Open the Flint OS .xz file in etcher; there is no need to rename or extract the image. Select your USB drive and click Flash; etcher will prompt you once the USB drive is ready.

To run Flint OS, first configure your computer to boot from USB media. Plug the USB drive into your PC, reboot, and you are ready to enjoy Flint OS on your PC.

Installing Flint OS as dual boot (beta) is an option, but configuring it requires some knowledge of a Linux environment. (We are working on a simpler GUI version, which will be available in the near future.) If setting up Flint OS as dual boot is your preference, see our dual-boot installation instructions.

Flint OS screenshots

Here are examples of what you can expect to see once Flint OS is up and running.

Contributing to Flint OS

We’ve spent some time cleaning up Flint OS’s Raspberry Pi (RPi) build system and codebase, both based on users’ requests and so we can create a public GitHub for our Raspberry Pi images.

In the past, when people asked how to contribute, we encouraged them to check out the Chromium project. By creating our public GitHub, we are hoping to make it easier to respond to issues and collaborate with the community.

Currently there are two branches: the x11 and the master branch.

  • The x11 branch is the legacy branch for all releases running on Chromium R56 and earlier. You are welcome to build newer versions of Chromium with this branch, but there are likely to be issues.
  • The master branch is our new Freon branch that works with R57 releases of Chromium and newer. We have successfully used this to boot R59 and R60 of Chromium. Please note this branch is currently quite unstable.

Please check out Flint OS and let us know what you think. We welcome contributions, suggestions, and changes from the community.