Part 3: How I Built a cPanel Hosting Environment on Amazon AWS

In Part 2 of this series, we discussed selecting an Amazon Machine Image (AMI) and launching a new instance from it, creating and configuring that instance to serve as a dedicated name server, and configuring a DNS Cluster for use within your subnet.

Today, we will launch and configure a standard Web Server instance using cPanel 11.39 or newer. I will discuss how to join the new instance to our existing DNS Cluster and how to ensure that 1:1 NAT is configured and working properly.


Below is a quick overview of the architecture implemented, as well as the instance types used for provisioning. While I cannot link directly to specific AMIs (Amazon Machine Images), selecting your desired operating system and getting cPanel/WHM installed is a straightforward procedure.


Assumptions

  • First, I will discuss the reasons for configuring instances in certain ways as they relate to being on AWS, but this is not a lesson in web server management. Use of best practices falls to you.
  • Second, this model makes no assumption of complete configuration or security. Again, I will just be touching on the subtleties of using the AWS eco-system.

Some instructions below are borrowed from Amazon’s AWS User Guide.

This Lesson Includes

  • Creating and launching a new EC2 Instance (Web Server) within VPC
  • Applying a Security Group to an Instance
  • Configuring cPanel/WHM for a NAT Architecture on AWS
  • Joining a DNS Cluster

Creating and Launching the Web Server Instance

Amazon EC2 instances are the fundamental building blocks for your computing needs in AWS. You can think of instances as virtual servers that can run applications and services. Instances are created from an Amazon Machine Image (AMI) and an appropriate instance type. An AMI is a template that contains a software configuration, including an operating system, which defines your operating environment. You can select an AMI provided by AWS or the user community, or find one on the AWS Marketplace. You can also create and optionally share your own AMIs. A single AMI can be used to launch one or thousands of instances.

There are thousands of free (and commercial) AMIs to choose from. You can also opt for building your own from the ground up. In my case, I chose a vanilla CentOS 6 AMI and built my name servers from there.

An important concept in the AWS eco-system is the “Region”. Regions are just that: the geographical locations of the datacenters that house your services in AWS. Amazon offers numerous regions, each at different price points. I generally build out an infrastructure in a single region and then duplicate it to a separate region; I can then use AWS ELB (Elastic Load Balancing) to direct traffic to different regions or for failover. In this tutorial I will be operating in the N. Virginia (us-east-1) region. More on regions can be found in Amazon’s documentation.

While I will walk you through launching your instance, I will skip the installation step for cPanel Services merely for brevity. Let’s begin.

Choose an AMI

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/
  2. Click “Launch Instance” in the top menu.
  3. Click the “Classic Wizard” radio and click “Continue“.
  4. Choose one of the four tabs to search for your desired AMI. Keep in mind, AMIs are region specific so when launching a new AMI ensure it is in the same region as your VPC.

Instance Details

  1. Select the “Instance Type: T1 Micro“. A T1 Micro Instance is sufficient for testing a basic web server. (More on Instance Types).
  2. Select the “Launch into: EC2-VPC” radio button.
  3. Accept the default subnet since we only have one (unless more were configured, select accordingly).
  4. Click “Continue“.

  1. Kernel ID and RAM Disk ID can both be kept as “Use Default“.
  2. While an additional charge will be incurred, it may be advantageous for you to enable CloudWatch Monitoring. I chose to enable it.
  3. Important: Make sure you enable Termination Protection by checking the box labeled “Prevent against accidental termination.” This helps prevent you from deleting an instance or volume store without first disabling this protection.
  4. Also Important: Ensure “Shutdown Behavior” is set to “Stop” and not “Terminate”. When an instance is terminated, it is deleted from your VPC/EC2 account and is not recoverable.
  5. Now we want to set a Static Private IP for our instance. VPC comes built in with a DHCP server, but we really don’t want our instance IPs changing. Set an appropriate IP address for your instance. I chose “10.0.0.12” based on my subnet range. (Remember, our name servers were “10.0.0.10” and “10.0.0.11” respectively.) A CLI sketch of these launch settings follows this list.
  6. Click “Continue“.
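If you prefer scripting to the console wizard, the launch settings above can be approximated with the AWS CLI. This is a minimal sketch, assuming a current aws tool is installed and configured; the AMI, subnet, and security group IDs are placeholders to substitute with your own:

# Launch a t1.micro into the VPC subnet with a fixed private IP,
# CloudWatch monitoring, termination protection, and shutdown
# behavior set to stop rather than terminate.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t1.micro \
  --subnet-id subnet-xxxxxxxx \
  --private-ip-address 10.0.0.12 \
  --key-name vpc_keypair \
  --security-group-ids sg-xxxxxxxx \
  --monitoring Enabled=true \
  --disable-api-termination \
  --instance-initiated-shutdown-behavior stop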

Understanding AWS storage can seem overwhelming, but it is really quite simple. AWS uses two primary storage types: “EBS” and “Instance Store”. In practically all cases, you will want to use EBS. The differences are straightforward.

EBS storage is physically separate from your instance and independent of it; snapshots of EBS volumes are stored in Amazon S3. EBS volumes can be attached to and detached from instances much like plugging in a thumb drive, and taking snapshots makes backup and recovery simple. EBS is the safer option: thanks to that physical separation, the likelihood of recovering an EBS-backed volume after a failure is significantly greater than with an instance store. When you terminate (delete) an instance, unless you specify otherwise, the EBS volume associated with that instance will still be available. EBS volumes can also be resized and scaled. More on this later.

Instance Store is a storage volume type that is tied directly to an instance. Instance stores cannot be managed independently and cannot have snapshots taken. They are also not persistent: if you boot an instance, make changes to the volume (create/delete files, etc.), and then stop the instance, those changes will be gone the next time you boot. The instance essentially resets to a fresh state every time it starts. Instance stores are useful in application-specific environments where a particular instance has one job to do.

Important: When selecting an AMI, ensure that the Storage Type indicates “EBS-Backed” if that is the storage type you want.

  1. Accept the defaults of your selected AMI and click “Continue“.

Naming convention is entirely up to you, however, I recommend using a standard naming schema throughout your VPC. This makes for easier maintenance and management. I generally set the “Name” key to the hostname of the instance, and create an additional key “Type” and set it to the function of the instance, in this case VS (Virtual/Web Server).
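The same tagging can be scripted, too. A small sketch with the AWS CLI (the instance ID is a placeholder):

# Tag the instance with its hostname and its function
aws ec2 create-tags \
  --resources i-xxxxxxxx \
  --tags Key=Name,Value=vs1.example.com Key=Type,Value=VS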

Click “Continue“.

Create KeyPair

Public/private key pairs allow you to securely connect to your instance after it launches. For Windows Server instances, a Key Pair is required to set and deliver a secure encrypted password. For Linux server instances, a key pair allows you to SSH into your instance.

Select the previous key pair we created in Part 2 titled “vpc_keypair“.

Click “Continue“.

Configure Firewall

  1. Select the “VS_SG” Security Group that we created in Part 1.
  2. Click “Continue“.

Review

  1. Review and verify the Instance details.
  2. Click “Launch“.

Allocating and Associating an Elastic IP

Elastic IP addresses are static IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with your account, not a particular instance (but can be associated to an instance), and you control that address until you choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or availability zone failures by programmatically remapping your public IP addresses to any instance associated with your account. Rather than waiting on a data technician to reconfigure or replace your host, or waiting for DNS to propagate to all of your customers, Amazon EC2 enables you to engineer around problems with your instance or software by programmatically remapping your Elastic IP address to a replacement instance.

Allocating

  1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/
  2. Click “Elastic IPs“ in the left hand navigation menu.
  3. Click the “Allocate New Address” button in the header menu.
  4. Set “EIP Used In:” to “VPC”. (Elastic IPs allocated for standard EC2, outside a VPC, cannot be associated with VPC instances.) A CLI sketch follows this list.
  5. Click “Yes Allocate“.
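For those scripting their provisioning, the rough CLI equivalent is below; note the AllocationId it returns, which you will need when associating the address:

# Allocate a new Elastic IP scoped to the VPC platform
aws ec2 allocate-address --domain vpc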

Associating

  1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/
  2. Click “Elastic IPs” in the left hand navigation menu.
  3. Locate your newly allocated IP Address in the list and click the selection box (or right click) associated with the address.
  4. With the address selected, click the “Associate Address” button in the header menu.
  5. Select your new Instance from the “Instance” dropdown and the correct Private IP should be selected by default.
  6. Important: Ensure that you enable “Allow Reassociation”. This tells the VPC to reassociate this EIP with this instance in the event of a reboot or shutdown. If you do not enable this option, you will have to manually re-associate the EIP with the instance. A CLI sketch follows this list.
  7. Click “Yes, Associate“.
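The CLI sketch mentioned above looks roughly like this (instance and allocation IDs are placeholders), including the reassociation behavior from step 6:

# Map the Elastic IP to the instance's private address and
# allow the VPC to reassociate it automatically.
aws ec2 associate-address \
  --instance-id i-xxxxxxxx \
  --allocation-id eipalloc-xxxxxxxx \
  --private-ip-address 10.0.0.12 \
  --allow-reassociation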

Configuring cPanel/WHM

At this point, you have a brand new Instance with an Elastic IP associated to it. The first thing you want to do is login to your instance via SSH using your newly acquired KeyPair. As I said previously, I won’t be going over the steps for installing cPanel, although they are straightforward.
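A typical first login looks something like this, assuming your private key was saved as vpc_keypair.pem. I log in as root here; some community AMIs ship with a different default user (centos, ec2-user, etc.), in which case log in as that user and switch to root.

chmod 600 ~/.ssh/vpc_keypair.pem   # ssh refuses keys with loose permissions
ssh -i ~/.ssh/vpc_keypair.pem root@<elastic-ip>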

Pre-configured AMIs will always have a root password set, which you will need to change before you can log in to cPanel. This is a quick but necessary step to complete before continuing.

SSH into your instance as root and run:

passwd

Modify your password and continue.

Initial Setup

  1. Assuming you have installed cPanel/WHM, in a web browser navigate to:
    https://<elastic-ip>:2087

    Where <elastic-ip> is replaced by the Elastic IP associated with your new instance.

  2. You will be prompted for login credentials. The username will be ‘root’ and the password will be the one you just set.
  3. ‘Read’ and Agree to the Terms and Conditions and continue to Step 2.
  4. Enter your Contact Information.
  5. Enter the hostname of this instance. In my case, I chose “vs1.example.com“.
  6. Enter your primary and secondary resolvers. I chose to use Google’s resolvers at “8.8.8.8” and “8.8.4.4” respectively; the resulting configuration is sketched after this list.
  7. Ensure Main Network Device is set appropriately. It will most often be eth0.
  8. Save and Go To Step 3.
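For reference, the resolvers you entered in step 6 should end up in /etc/resolv.conf, looking roughly like this:

# /etc/resolv.conf (assuming Google's public resolvers)
nameserver 8.8.8.8
nameserver 8.8.4.4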

Ensuring Proper NAT Detection

Officially, cPanel’s NAT feature should only be used on fresh installs of cPanel/WHM. Automatic detection of the NAT architecture will not occur properly on an upgraded system; however, we can force 11.39+ to manually check for a NAT setup. I will first go over the expected results of a fresh install, and then I will review how you can enable NAT on an upgraded instance. Note: To the best of my knowledge, cPanel prefers that you do a fresh install when using NAT, so please proceed at your own risk.

  1. In Step 3, we won’t be adding an additional IP. You will see your current IP address in the “Current IP Addresses” block. In my experience, the internal/local IP for the instance appears here, though you may see the external IP address instead. We will verify this in the next few steps.
  2. Click “Go to Step 4”.

You should now be directed to the DNS configuration. Since we are implementing a clustering environment, we will not need to run local DNS services.

  1. Select “Disabled” in the “Nameserver Configuration” block.
  2. Configure your Primary and Secondary name servers with the hostnames of the two instances we configured in Part 2. In my case, “ns1.example.com” and “ns2.example.com”.
  3. Keep all other values at their default settings.
  4. Click “Save & Go to Step 5”.

Mail server configuration is completely up to you and should be based on your own environment’s needs.

  1. Configure your Mail settings.
  2. Click “Save & Go to Step 6”.

Depending on the type of instance (shared/dedicated), you may wish to enable or disable file system quotas.

  1. Configure your File System Quota settings.
  2. Click “Finish Setup Wizard”.

Verifying NAT

We will now go through a few steps to verify that cPanel is properly detecting your NAT and mapping it to the external/public IP address.

  1. In the left hand menu, under Server Configuration, click “Basic cPanel & WHM Setup“.
  2. In the Basic Config section, ensure the field described as “The IP address that will be used for setting up shared IP virtual hosts” is displaying your external/public IP address. If something other than your external/public IP is displayed, read below.

I’ve encountered a few scenarios where a random local IP (usually inherited from a cloned instance) is displayed in this IP field. If the IP shown IS NOT your external/public IP and IS NOT the correct internal/local IP:

  1. Enter the correct Internal/Local IP.
  2. Click “Save Changes”.

Now that our Main/Shared IP is set correctly, let’s verify the current IP mapping.

  1. In the left hand menu, under IP Functions, click “Show or Delete Current IP Addresses“.
  2. If cPanel has properly detected the NAT, you will see a “NAT Mode” heading with a box below displaying the Local IP and the Public IP that it is being mapped to. Click the “Validate” button to ensure that the mapping is functioning properly. A shell-level check is sketched below.
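You can also peek at the mapping from the shell. On my instances, NAT mode records its detected pairs in /var/cpanel/cpnat, one local-to-public pair per line; treat the exact format as an assumption for your cPanel version.

# Each line pairs an internal address with its public counterpart
cat /var/cpanel/cpnat
# 10.0.0.12 <elastic-ip>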

Forcing NAT Detection

In some cases, NAT Mode will not be automatically enabled or detected. If the steps above produced Local IPs instead of Public IPs, you will probably need to force cPanel to detect your NAT. This can be done in a few simple steps. As I said before, please follow these steps at your own risk as cPanel does not officially support an installation that has been “upgraded” to NAT.

SSH into your instance as root and run:

/scripts/build_cpnat

If your NAT was detected, the script will report the detected IP mapping.

Assuming NAT was manually forced and detected properly, repeat the steps in “Verifying NAT” above to ensure cPanel has detected and mapped your IP properly.

DNS Clustering

A DNS cluster is a group of nameservers that share records. A DNS cluster allows you to physically separate your nameservers so that if a web server loses its connection, you still have DNS functionality. This allows visitors to reach websites on your server more quickly after the web server comes back online.

Since we have already enabled our Clustering Servers (ns1.example.com & ns2.example.com) I will go through the steps required to join our server to the cluster.

  1. In a web browser, navigate to:
    https://<your-nameserver>:2087

    Where <your-nameserver> is replaced by the hostname to your first nameserver instance.

  2. You will be prompted for login credentials. Username will be ‘root’ and the password will be the password you set.
  3. In the left hand menu, under Cluster/Remote Access, click “Setup Remote Access Key“.
  4. You will be given a long string designated as “Access key for user ‘root’.” Copy this key to your clipboard or a temporary text document. Note: An access key is essentially a login credential that gives anyone with access to it complete control over cPanel/WHM. Never share this key and never store it anywhere insecure; it can always be retrieved from within WHM.
  5. Next, in a new tab, navigate to
    https://<your-webserver>:2087

    Where <your-webserver> is the new instance created to act as the web server. In my case, “vs1.example.com”.

  6. You will be prompted for login credentials. Username will be ‘root’ and the password will be the password you set.
  7. In the left hand menu, under Cluster/Remote Access, click “Configure Cluster”.
  8. In the “Remote cPanel & WHM DNS host:” field, enter the hostname of the nameserver you just copied the access key from. In my case, “ns1.example.com“.
  9. In the “Remote server username:” field, enter “root“.
  10. In the “Remote server access hash:” field, paste in the Access Key you previously copied from the nameserver.
  11. Ensure that “Setup Reverse Trust Relationship” is checked.
  12. Debug mode can remain disabled.
  13. Set “DNS Role:” to “Synchronize Changes“. This setting is specific to the server type, but generally will be set to Synchronize Changes.
  14. Click “Submit“.

The server will now attempt to establish the Trust Relationship with the cluster. If the connection succeeds, you will see the verification messages “The Trust Relationship has been established…” and “The new role for <ip> is sync”.

Click the “Return to Cluster Status” link.

Verify DNS Clustering

On the Configure Cluster page of your virtual server, in my case “vs1.example.com“, ensure that you see the established relationship with your nameserver.

Refresh the Configure Cluster page of your nameserver, in my case “ns1.example.com”, and ensure that you see the established relationship with your virtual server. On the nameserver side, you will see the DNS role of your virtual server set as “Standalone”; this is intentional and expected.

Note: In some instances, I’ve experienced situations where the virtual server indicates that it has successfully established a reverse trust relationship with the nameserver, but upon verifying the cluster on the nameserver, I either did not see the virtual server displayed in the cluster at all or I received authentication errors. The solution is to follow the steps above for creating the Access Key and adding a server to the cluster, but do it on the nameserver as well. You shouldn’t run into this issue, but if you do, post in the comments and I am happy to help sort it out.

Additional Note: Depending on how your firewall rules are set up, DNS clustering can fail if the proper ports are not open. To ensure you are opening the proper ports, have a look at Getting the Most Out of Your System’s Firewall, which details cPanel’s commonly used ports; a minimal iptables sketch follows.
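If you manage iptables by hand rather than through a tool like CSF, a sketch for the cluster-related ports might look like the following; DNS runs on port 53 and the clustering calls use WHM’s SSL port 2087. Adjust to your own policy.

# Allow DNS queries (UDP and TCP) and WHM over SSL
iptables -A INPUT -p udp --dport 53 -j ACCEPT
iptables -A INPUT -p tcp --dport 53 -j ACCEPT
iptables -A INPUT -p tcp --dport 2087 -j ACCEPT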

Conclusion

At this point you have a single virtual server, vs1.example.com, configured for NAT and with DNS Clustering enabled. We have joined the instance to one of the nameservers in our DNS Cluster.

You do, however, need to repeat the DNS Clustering steps for the secondary nameserver, presumably ns2.example.com.

You can continue configuring your server as you normally would for your own environment. cPanel/WHM’s NAT implementation is quite transparent to the user. You rarely need to take into consideration the fact that you are behind a NAT architecture; cPanel simply translates your Local IP to your Public IP wherever it is required. Seamless. The NAT team at cPanel worked very hard to ensure that everything just works.

While this is a very basic setup, the possibilities of this infrastructure within AWS are too numerous to cover and out of scope for this tutorial. I am more than happy to field questions and comments below if you have a more challenging project.


Part 2: How I Built a cPanel Hosting Environment on Amazon AWS

In Part 1 of this four part series, we discussed establishing your VPC, creating and configuring your small subnet, and worked through configuring the Security Groups for our two instance types (‘NS_SG‘ and ‘VS_SG‘).

Today, we will launch two new instances running cPanel DNSONLY into our VPC and configure them as the primary and secondary name servers for our environment. While this series is written with the assumption of using dedicated DNS instances, you could easily use these instructions on dual-use instances that serve as both web servers and name servers.


Below is a quick overview of the architecture implemented, as well as the instance types used for provisioning. While I cannot link directly to specific AMIs (Amazon Machine Images), selecting your desired operating system and getting cPanel/WHM installed is a straightforward procedure.


Assumptions

  • First, I will discuss the reasons for configuring instances in certain ways as they relate to being on AWS, but this is not a lesson in DNS basics. You will need to have a working knowledge of DNS best practices.
  • Second, this model makes no assumption of complete configuration or security. Again, I will just be touching on the subtleties of using the AWS eco-system.

Some instructions below are borrowed from Amazon’s AWS User Guide.

This Lesson Includes

  • Creating and launching a new EC2 Instance (Name Server) within VPC
  • Applying a Security Group to an Instance
  • Configuring cPanel DNSONLY for AWS
  • Creating a DNS Cluster

Creating and Launching the Name Server Instance

Amazon EC2 instances are the fundamental building blocks for your computing needs in AWS. You can think of instances as virtual servers that can run applications and services. Instances are created from an Amazon Machine Image (AMI) and an appropriate instance type. An AMI is a template that contains a software configuration, including an operating system, which defines your operating environment. You can select an AMI provided by AWS or the user community, or find one on the AWS Marketplace. You can also create and optionally share your own AMIs. A single AMI can be used to launch one or thousands of instances.

There are thousands of free (and commercial) AMIs to choose from. You can also opt for building your own from the ground up. In my case, I chose a vanilla CentOS 6 AMI and built my name servers from there.

An important concept in the AWS eco-system is the “Region”. Regions are just that: the geographical locations of the datacenters that house your services in AWS. Amazon offers numerous regions, each at different price points. I generally build out an infrastructure in a single region and then duplicate it to a separate region; I can then use AWS ELB (Elastic Load Balancing) to direct traffic to different regions or for failover. In this tutorial I will be operating in the N. Virginia (us-east-1) region. More on regions can be found in Amazon’s documentation.

While I will walk you through launching your instance, I will skip the installation step for cPanel Services merely for brevity. Let’s begin.

Choose an AMI

  1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/
  2. Click “Launch Instance” in the top menu.
  3. Click the “Classic Wizard” radio and click “Continue“.
  4. Choose one of the four tabs to search for your desired AMI. Keep in mind, AMIs are region specific so when launching a new AMI ensure it is in the same region as your VPC.

Instance Details

  1. Select the “Instance Type: T1 Micro“. A T1 Micro Instance is sufficient for a basic name server. (More on Instance Types).
  2. Select the “Launch into: EC2-VPC” radio button.
  3. Accept the default subnet since we only have one (unless more were configured, select accordingly).
  4. Click “Continue“.

  1. Kernel ID and RAM Disk ID can both be kept as “Use Default“.
  2. While an additional charge will be incurred, it may be advantageous for you to enable CloudWatch Monitoring. I chose to enable it.
  3. Important: Make sure you enable Termination Protection by checking the box labeled “Prevent against accidental termination.” This helps prevent you from deleting an instance or volume store without first disabling this protection.
  4. Also Important: Ensure “Shutdown Behavior” is set to “Stop” and not “Terminate”. When an instance is terminated, it is deleted from your VPC/EC2 account and is not recoverable.
  5. Now we want to set a Static Private IP for our instance. VPC comes built in with a DHCP server but we really don’t want our instance IPs to be changing. Set an appropriate IP address for your instance. I chose “10.0.0.10” based on my subnet range.
  6. Click “Continue“.

Understanding AWS storage can seem overwhelming, but it is really quite simple. AWS uses two primary storage types: “EBS” and “Instance Store”. In practically all cases, you will want to use EBS. The differences are straightforward.

EBS storage is physically separate from your instance and independent of it; snapshots of EBS volumes are stored in Amazon S3. EBS volumes can be attached to and detached from instances much like plugging in a thumb drive, and taking snapshots makes backup and recovery simple. EBS is the safer option: thanks to that physical separation, the likelihood of recovering an EBS-backed volume after a failure is significantly greater than with an instance store. When you terminate (delete) an instance, unless you specify otherwise, the EBS volume associated with that instance will still be available. EBS volumes can also be resized and scaled. More on this later.

Instance Store is a storage volume type that is tied directly to an instance. Instance stores cannot be managed independently and cannot have snapshots taken. They are also not persistent: if you boot an instance, make changes to the volume (create/delete files, etc.), and then stop the instance, those changes will be gone the next time you boot. The instance essentially resets to a fresh state every time it starts. Instance stores are useful in application-specific environments where a particular instance has one job to do.

Important: When selecting an AMI, ensure that the Storage Type indicates “EBS-Backed” if that is the storage type you want.

  1. Accept the defaults of your selected AMI and click “Continue“.

Naming convention is entirely up to you, however, I recommend using a standard naming schema throughout your VPC. This makes for easier maintenance and management. I generally set the “Name” key to the hostname of the instance, and create an additional key “Type” and set it to the function of the instance, in this case NS (Name Server).

Click “Continue“.

Create KeyPair

Public/private key pairs allow you to securely connect to your instance after it launches. For Windows Server instances, a Key Pair is required to set and deliver a secure encrypted password. For Linux server instances, a key pair allows you to SSH into your instance.

To create a key pair, enter a name and click “Create & Download Your Key Pair”. You will be prompted to save the private key to your computer. Note: You only need to generate a key pair once – not each time you want to deploy an Amazon EC2 instance.

Click “Continue“.

Configure Firewall

  1. Select the “NS_SG” Security Group that we created in Part 1.
  2. Click “Continue“.

Review

  1. Review and verify the Instance details.
  2. Click “Launch“.

Allocating and Associating an Elastic IP

Elastic IP addresses are static IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with your account, not a particular instance (but can be associated to an instance), and you control that address until you choose to explicitly release it. Unlike traditional static IP addresses, however, Elastic IP addresses allow you to mask instance or availability zone failures by programmatically remapping your public IP addresses to any instance associated with your account. Rather than waiting on a data technician to reconfigure or replace your host, or waiting for DNS to propagate to all of your customers, Amazon EC2 enables you to engineer around problems with your instance or software by programmatically remapping your Elastic IP address to a replacement instance.

Allocating

  1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/
  2. Click “Elastic IPs“ in the left hand navigation menu.
  3. Click the “Allocate New Address” button in the header menu.
  4. Set “EIP Used In:” to “VPC”. (Elastic IPs allocated for standard EC2, outside a VPC, cannot be associated with VPC instances.)
  5. Click “Yes Allocate“.

Associating

  1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/
  2. Click “Elastic IPs” in the left hand navigation menu.
  3. Locate your newly allocated IP Address in the list and click the selection box (or right click) associated with the address.
  4. With the address selected, click the “Associate Address” button in the header menu.
  5. Select your new Instance from the “Instance” dropdown and the correct Private IP should be selected by default.
  6. Important: Ensure that you enable “Allow Reassociation”. This tells the VPC to reassociate this EIP with this instance in the event of a reboot or shutdown. If you do not enable this option, you will have to manually re-associate the EIP with the instance.
  7. Click “Yes, Associate“.

Configuring cPanel DNSONLY

At this point, you have a brand new Instance with an Elastic IP associated to it. The first thing you want to do is login to your instance via SSH using your newly acquired KeyPair. As I said previously, I won’t be going over the steps for installing cPanel, although they are straightforward.

Pre-configured AMIs will always have a root password set, which you will need to change before you can log in to cPanel. This is a quick but necessary step to complete before continuing.

SSH into your instance as root and run:

passwd

Modify your password and continue.

Initial Setup

  1. Assuming you have installed cPanel DNSONLY, in a web browser navigate to:
    https://<elastic-ip>:2087

    Where <elastic-ip> is replaced by the Elastic IP associated with your new instance.

  2. You will be prompted for login credentials. The username will be ‘root’ and the password will be the one you just set.
  3. ‘Read’ and Agree to the Terms and Conditions and continue to Step 2.
  4. Enter your Contact Information.
  5. Enter the hostname of this instance. In my case, I chose “ns1.example.com“.
  6. Enter your primary and secondary resolvers. I chose to use Google’s resolvers at “8.8.8.8” and “8.8.4.4” respectively.
  7. Ensure Main Network Device is set appropriately. It will most often be eth0.
  8. Save and Go To Step 3.

11.36 Temporary Workaround

At the time of writing, the cPanel DNSONLY stable release is 11.36, meaning it does not yet officially support NAT; however, I can say with confidence that by the time WHM 11.40 is released, DNSONLY will be on par with NAT support.

The following instructions are unique to 11.36 and DNSONLY because it does not yet officially support NAT; consider them a temporary workaround until 11.40 arrives, at which point I will update the instructions.

  1. In Step 3, add a new IP address by entering in the Elastic IP of the instance you are working with. Subnet should remain default.
  2. Click “Add IP(s)“.
  3. Click “Finish”.

You should now be directed to the DNSONLY Dashboard. Again, because this is a non-NAT build, we need to work around it for the time being by changing the Main IP within cPanel from our Private IP to our Elastic IP.

  1. In the left hand menu, click “Basic cPanel & WHM Setup“.
  2. Locate the first field under “Basic Config” that contains what probably looks like a random 10.x.x.x IP. Replace the existing IP with your Elastic IP.
  3. Click “Save Changes“.

DNS Clustering

A DNS cluster is a group of nameservers that share records. A DNS cluster allows you to physically separate your nameservers so that if a web server loses its connection, you still have DNS functionality. This allows visitors to reach websites on your server more quickly after the web server comes back online.

  1. In the left hand menu, under Cluster/Remote Access, click “Configure Cluster“.
  2. In the Modify Cluster Status box, select “Enable DNS Clustering”.
  3. Click “Change”.
  4. Click “Return to Cluster Status”.

Conclusion

At this point you have a single nameserver, ns1.example.com, configured and with DNS Clustering enabled. This server is ready to pair/synchronize with WHM/cPanel client servers.

You do, however, need to repeat these steps for a secondary nameserver, presumably ns2.example.com.

While this is a very basic setup, the possibilities of this infrastructure within AWS are too numerous to cover and out of scope for this tutorial. I am more than happy to field questions and comments below if you have a more challenging project.


Already using Amazon Web Services? Check out the cPanel & WHM listing in the AWS Marketplace and start building your own cPanel hosting environment.

Part 1: How I Built a cPanel Hosting Environment on Amazon AWS

People argue for and against building a production hosting environment on top of cloud services such as Amazon’s AWS. I recently made the decision to migrate my entire hosting infrastructure from co-located dedicated hardware to a full implementation built entirely on top of Amazon’s Web Services.

I will be releasing a four part series detailing the tricks I’ve learned in my own migration to AWS and walking you through setting up your own full service hosting environment within the AWS eco-system, all while still leveraging the power of cPanel, WHM, and DNSONLY.

I chose to use AWS, more specifically EC2, VPC and S3, for its rapid deployment, unlimited scaling, load balancing, and global distribution abilities. Working with AWS, I started to realize just how powerful it could become.

I started this challenge with a few key questions: What are the benefits and the challenges one would face working in an environment like this? All of our servers run instances of cPanel/WHM, so what are the difficulties in setting up cPanel in an AWS environment?

Amazon’s AWS platform is built behind a NAT infrastructure, so configuring cPanel for a NAT used to be an elaborate ballet of duct-taped scripts and hooks. However, with cPanel 11.39, I’ve been able to seamlessly migrate my entire infrastructure (30+ instances) from a dedicated environment to AWS without a misstep.

The result is a solid hosting architecture using Amazon VPC (Virtual Private Cloud), Amazon EC2 (Elastic Cloud Compute) and Amazon S3 (Simple Storage Service), built with cPanel/WHM/DNSONLY that not only works on AWS, but makes deployment and provisioning of new servers unbelievably rapid and simple.


Below is a quick overview of the architecture implemented, as well as the instance types used for provisioning. While I cannot link directly to specific AMIs (Amazon Machine Images), selecting your desired operating system and getting cPanel/WHM installed is a straightforward procedure.


Assumptions

  • First, you must have a working knowledge of the command line, networking, Amazon AWS, and cPanel/WHM/DNSONLY.
  • Second, this model will run two dedicated nameservers (cPanel DNSONLY); the node servers will not run DNS themselves and will be configured in a cluster.
  • Third, I won’t be going over the registration process of AWS, you need to already have an active account.

Some instructions below are borrowed from Amazon’s AWS User Guide.

A Representation of the Basic Network Architecture

This Lesson Includes

  • Creating a new Amazon VPC Instance
  • Defining subnet scope
  • Creating and defining Security Groups

Setup the VPC, Subnet, & Internet Gateway:

  1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
  2. Click “VPC Dashboard” in the navigation pane.
  3. Locate the “Your Virtual Private Cloud” area of the dashboard and click “Get started creating a VPC“, if you have no VPC resources, or click “Start VPC Wizard“.
  4. Select the first option, “VPC with a Single Public Subnet Only”, and then click “Continue”.

  1. The confirmation page shows the CIDR ranges and settings that you’ve chosen. Since this is going to be a small network, click “Edit VPC IP CIDR Block” and change the value to “10.0.0.0/24”. This gives us 251 usable IPs (a /24 block contains 256 addresses, and AWS reserves five per subnet: the network address, the first three host addresses, and the broadcast address).
  2. Click “Create VPC” to create your VPC, subnet, Internet gateway, and route table.
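The wizard bundles several API calls into one click. Scripted with the AWS CLI, the equivalent looks roughly like this sketch, where the IDs returned by each call feed the next as placeholders:

# Create the VPC and its single public subnet, then attach an Internet gateway
aws ec2 create-vpc --cidr-block 10.0.0.0/24
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx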

Create Security Groups

Security Groups are essentially Firewall Rules that can be applied on a per-instance basis. We are going to create two primary Security Groups, one for Name Servers and one for Web Servers. Of course, your specific scenario will differ from the one represented here, so feel free to create as many Security Groups as needed.

In my use case scenario, I established a Security Group for Name Servers, Shared Web Servers, and Dedicated VPSs. Again, tailor these to meet your needs.

  1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
  2. Click “Security Groups” in the navigation pane.
  3. Click the “Create Security Group” button.
  4. Specify NS_SG as the name of the security group, and provide a description. Select the ID of your VPC from the “VPC” menu, and then click “Yes, Create“.
  5. Click the “Create Security Group” button.
  6. Specify VS_SG as the name of the security group, and provide a description. Select the ID of your VPC from the “VPC” menu, and then click “Yes, Create“.
  7. Select the “NS_SG” security group that you just created. The details pane includes a tab for information about the security group, plus tabs for working with its inbound rules and outbound rules.

On the “Inbound” tab, do the following:

  1. Select “All Traffic” from the Create a new rule list, make sure that Source is “0.0.0.0/0“, and then click “Add Rule“.
  2. Click “Apply Rule Changes” to apply these inbound rules.

On the “Outbound” tab, do the following:

  1. “All Traffic” is allowed by default; we will temporarily keep this rule.

Complete the same steps above for the “VS_SG” you created.
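As a scripted sketch of the same steps (group and VPC IDs are placeholders; protocol -1 means all protocols):

# Create the name server group and open all inbound traffic, as above
aws ec2 create-security-group --group-name NS_SG \
  --description "Name server security group" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx \
  --protocol -1 --cidr 0.0.0.0/0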

If you’ve made it this far, you’re probably halfway to a panic attack wondering why we’ve opened up all inbound and outbound ports. Each environment’s needs for port availability will obviously be unique, but for most standard cPanel/WHM installations, you can have a look at this informative article, Getting The Most Out of Your System’s Firewall, detailing ports commonly used by cPanel and its bundled services, and then choose to open or close the ports at the firewall level accordingly.

Alternately, you can keep all inbound/outbound traffic at the firewall level as pass-through (as detailed above) and handle your firewall at the instance level with a software based firewall.

cPanel supports numerous software-based firewalls that are freely available to download and install; personally, I use and highly recommend ConfigServer Security & Firewall (CSF). It’s dead simple to install (a sketch of the install follows), and I recommend running its security scan once you have it configured to ensure you’ve taken extra steps in hardening your systems.
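The install itself is a handful of commands. This reflects ConfigServer’s documented procedure at the time of writing; verify the download URL against their site before running it:

cd /usr/src
wget https://download.configserver.com/csf.tgz
tar -xzf csf.tgz
cd csf && sh install.sh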


Up Next

  • Creating and Launching Name Server Instances Into Your New VPC
  • Configuring your Name Server
  • Basic Cluster Configuration


Getting started with .NET for Linux

When you know a software developer’s preferred operating system, you can often guess what programming language(s) they use. If they use Windows, the language list includes C#, JavaScript, and TypeScript. A few legacy devs may be using Visual Basic, and the bleeding-edge coders are dabbling in F#. Even though you can use Windows to develop in just about any language, most stick with the usuals.

If they use Linux, you get a list of open source projects: Go, Python, Ruby, Rails, Grails, Node.js, Haskell, Elixir, etc. It seems that as each new language—Kotlin, anyone?—is introduced, Linux picks up a new set of developers.

So leave it to Microsoft (Microsoft?!?) to throw a wrench into this theory by making the .NET framework, rebranded as .NET Core, open source and available to run on any platform: Windows, Linux, MacOS, and even a television OS, Samsung’s Tizen. Add in Microsoft’s other .NET flavors, including Xamarin, and you can add the iOS and Android operating systems to the list. (Seriously? I can write a Visual Basic app to run on my TV? What strangeness is this?)

Given this situation, it’s about time Linux developers get comfortable with .NET Core and start experimenting, perhaps even building production applications. Pretty soon you’ll meet that person: “I use Linux … I write C# apps.” Brace yourself: .NET is coming.

How to install .NET Core on Linux

The list of Linux distributions on which you can run .NET Core includes Red Hat Enterprise Linux (RHEL), Ubuntu, Debian, Fedora, CentOS, Oracle, and SUSE.

Each distribution has its own installation instructions. For example, consider Fedora 26:

Step 1: Add the dotnet product feed.


        sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
        sudo sh -c 'echo -e "[packages-microsoft-com-prod]\nname=packages-microsoft-com-prod \nbaseurl=https://packages.microsoft.com/yumrepos/microsoft-rhel7.3-prod\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/dotnetdev.repo'

Step 2: Install the .NET Core SDK.


        sudo dnf update
        sudo dnf install libunwind libicu compat-openssl10
        sudo dnf install dotnet-sdk-2.0.0
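To confirm the SDK landed, ask it for its version; the exact output will vary with the installed SDK:


$ dotnet --version
2.0.0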

Creating the Hello World console app

Now that you have .NET Core installed, you can create the ubiquitous “Hello World” console application before learning more about .NET Core. After all, you’re a developer: You want to create and run some code now. Fair enough; this is easy. Create a directory, move into it, create the code, and run it:


mkdir helloworld && cd helloworld
dotnet new console
dotnet run

You’ll see the following output:


$ dotnet run
Hello World!

What just happened?

Let’s take what just happened and break it down. We know what the mkdir and cd did, but after that?

dotnet new console

As you no doubt have guessed, this created the “Hello World!” console app. The key things to note: the project name matches the directory name (i.e., “helloworld”); the code was built from a template (console application); and the project’s dependencies were automatically retrieved by the dotnet restore command, which pulls from nuget.org.

If you view the directory, you’ll see these files were created:


Program.cs
helloworld.csproj

Program.cs is the C# console app code. Go ahead and take a look inside (you already did … I know … because you’re a developer), and you’ll see what’s going on. It’s all very simple.
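If you’d rather peek from the terminal, cat works fine. The 2.0-era console template is only a few lines; yours should look close to this:


$ cat Program.cs
using System;

namespace helloworld
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}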

helloworld.csproj is the MSBuild-compatible project file. In this case there’s not much to it. When you create a web service or website, the project file will take on a new level of significance.

dotnet run

This command did two things: It built the code, and it ran the newly built code. Whenever you invoke dotnet run, it will check to see if the *.csproj file has been altered and will run the dotnet restore command. It will also check to see if any source code has been altered and will, behind the scenes, run the dotnet build command which—you guessed it—builds the executable. Finally, it will run the executable.

Sort of.

Where is my executable?

Oh, it’s right there. Just run which dotnet and you’ll see (on RHEL): 

/opt/rh/rh-dotnet20/root/usr/bin/dotnet

That’s your executable.

Sort of.

When you create a dotnet application, you’re creating an assembly … a library … yes, you’re creating a DLL. If you want to see what is created by the dotnet build command, take a peek at bin/Debug/netcoreapp2.0/. You’ll see helloworld.dll, some JSON configuration files, and a helloworld.pdb (debug database) file. You can look at the JSON files to get some idea as to what they do (you already did … I know … because you’re a developer).

When you run dotnet run, the process that runs is dotnet. That process, in turn, invokes your DLL file and it becomes your application.

It’s portable

Here’s where .NET Core really starts to depart from the Windows-only .NET Framework: The DLL you just created will run on any system that has .NET Core installed, whether it be Linux, Windows, or MacOS. It’s portable. In fact, it is literally called a “portable application.”

Forever alone

What if you want to distribute an application and don’t want to ask the user to install .NET Core on their machine? (Asking that is sort of rude, right?) Again, .NET Core has the answer: the standalone application.

Creating a standalone application means you can distribute the application to any system and it will run, without the need to have .NET Core installed. This means a faster and easier installation. It also means you can have multiple applications running different versions of .NET Core on the same system. It also seems like it would be useful for, say, running a microservice inside a Linux container. Hmmm…

What’s the catch?

Okay, there is a catch. For now. When you create a standalone application using the dotnet publish command, your DLL is placed into the target directory along with all of the .NET bits necessary to run your DLL. That is, you may see 50 files in the directory. This is going to change soon. An already-running-in-the-lab initiative, .NET Native, will soon be introduced with a future release of .NET Core. This will build one executable with all the bits included. It’s just like when you are compiling in the Go language, where you specify the target platform and you get one executable; .NET will do that as well.

You do need to build once for each target, which only makes sense. You simply include a runtime identifier and build the code, like this example, which builds the release version for RHEL 7.x on a 64-bit processor:

dotnet publish -c Release -r rhel.7-x64

Web services, websites, and more

So much more is included with the .NET Core templates, including support for F# and Visual Basic. To get a starting list of available templates that are built into .NET Core, use the command dotnet new --help.

Hint: .NET Core templates can be created by third parties. To get an idea of some of these third-party templates, check out these templates, then let your mind start to wander…

Like most command-line utilities, contextual help is always at hand via the --help switch. Now that you’ve been introduced to .NET Core on Linux, the help function and a good web search engine are all you need to get rolling.


How the OpenType font system works

Digital typography is something that we use every day, but few of us understand how digital fonts work. This article gives a basic, quick, dirty, oversimplified (but hopefully useful) tour of OpenType: what it is and how you can use its powers with free, libre, and open source software (FLOSS). All the fonts mentioned here are FLOSS, too.

What is OpenType?

On the most basic level, a digital font is a “container” for different glyphs plus extra information about how to use them. Each glyph is represented by a series of points and rules to connect those points. I’ll not delve into the different ways to define those “connections” or how we arrived there (the history of software development can be messy), but basically there are two kinds of rules: parabolic segments (quadratic Bézier curves) or cubic functions (cubic Bézier curves).

The TTF file format, generally known as TrueType Font, can only use quadratic Bézier curves, whereas the OTF file format, known as OpenType Font, supports both.

Here is where we need to be careful about what we are talking about: The term “OpenType” refers not only to the file format, but also to the advanced properties of a typeface as a whole (i.e., the “extra information” mentioned earlier).

In fact, in addition to the OpenType file format, there are also substitution tables that, for example, tell the software using that font to substitute two characters with the corresponding typographical ligature; that the shape of a character needs to change according to the characters that surround it (its “contextual alternate”); or that when you write in Greek, a σ at the end of a word must be substituted with a ς. This is what the term “smart fonts” means.

And, to make things more confusing, including OpenType tables on TrueType fonts is possible, such as what happens on Junicode.

A quick example

Let’s see a quick example of smart fonts in use. Here is an example of Cormorant with (top) and without (bottom) OpenType features enabled:

Each OpenType property has its own “tag” that is used to activate those “specialties.” Some of these tags are enabled by default (like liga for normal ligatures or clig for contextual ligatures), whereas others must be enabled by hand.

A partial list of OpenType tags and names can be found in Dario Taraborelli’s Accessing OpenType font features in LaTeX.

Querying fonts

Finding out the characteristics of an OpenType font is simple. All you need is the otfinfo command, which is included in the lcdf-typetools package (on my openSUSE system, it’s installed as texlive-lcdftypetools). Using it is quite simple; on the command line, issue something like:

otfinfo [option] /path/to/the/font

The option -s provides the languages supported by the font, whereas -f tells us which OpenType options are available. Font license information is displayed with the -i option.

If the path to the font contains a space, escape that space with a backslash. For example, to see what Sukhumala Regular.otf offers when installed in the folder ~/.fonts/s/, simply write in the terminal:

otfinfo -f ~/.fonts/s/Sukhumala\ Regular.otf

Using OpenType tables on LibreOffice Writer

LibreOffice version 5.3 offers good support for OpenType. It is not exactly “user-friendly,” but it’s not that difficult to understand, and it provides so much typographical power that it shouldn’t be ignored.

To simultaneously activate “stylistic sets” 1 and 11 on Vollkorn (see screenshot below), in the font name box, write:

Vollkorn:ss01&ss11

The colon starts the “tag section” on the extended font name and the ampersand allows us to use several tags.

But there is more. You can also disable any default option. For example, the Sukhumala font has some strange contextual ligatures that turn aa into ā, ii into ī, and uu into ū. To disable contextual ligatures on Sukhumala, add a dash in front of the corresponding OpenType tag clig:

Sukhumala:-clig

And that’s it. As I said before, it’s not exactly user friendly, especially considering that the font name box is rather small, but it works!

And don’t forget to use all of this within styles: Direct formatting is the enemy of good formatting. I mean, unless you are preparing a quick screenshot for a short article about typography. In that case it’s OK. But only in that case.

There’s more

One interesting OpenType tag that, sadly, does not work on LibreOffice yet is “size.” The size feature enables the automated selection of optical sizes, which is a font family that offers different designs for different point sizes. Few fonts offer this option (some GUST fonts like Latin Modern or Antykwa Półtawskiego; an interesting project in its initial stages of development called Coelacanth; or, to a lesser extent, EB Garamond), but they are all great. Right now, the only way to enjoy this property is through a more advanced layout system such as XeTeX. Using OpenType on XeTeX is a really big topic; the fontspec manual (the package that handles font selection and configuration on both XeTeX and LuaTeX) has more than 120 pages, so… not today.

And yes, version 1.5.3 of Scribus added support for OpenType (in addition to footnotes and other stuff), but that’s something I still need to explore.

How to align your team around microservices

Microservices have been a focus across the open source world for several years now. Although open source technologies such as Docker, Kubernetes, Prometheus, and Swarm make it easier than ever for organizations to adopt microservice architectures, getting your team on the same page about microservices remains a difficult challenge.

For a profession that stresses the importance of naming things well, we’ve done ourselves a disservice with microservices. The problem is that there is nothing inherently “micro” about microservices. Some can be small, but size is relative and there’s no standard measurement unit across organizations. A “small” service at one company might be 1 million lines of code but far fewer at another organization.

Some argue that microservices aren’t a new thing at all but rather a rebranding of service-oriented architecture (SOA), whereas others view microservices as an implementation of SOA, similar to how Scrum is an implementation of Agile. (For more on the ambiguity of microservice definitions, check out the upcoming book Microservices for Startups.)

How do you get your team on the same page about microservices when no precise definition exists? The most important thing when talking about microservices is to ensure that your team is grounded in a common starting point. Ambiguous definitions don’t help. It would be like trying to put Agile into practice without context for what you are trying to achieve or an understanding of precise methodologies like Scrum.

Finding common ground

Knowing the dangers of too eagerly hopping on the microservices bandwagon, a team I worked on tried not to stall on definitions and instead focused on defining the benefits we were trying to achieve with microservices adoption. Following are the three areas we focused on and lessons learned from each piece of our microservices implementation.

1. Ability to ship software faster

Our main application was a large codebase with several small teams of developers trying to build features for different purposes. This meant that every change had to try to satisfy all the different groups. For example, a database change that served only one group had to be reviewed and accepted by other groups that didn’t have as much context. This was tedious and slowed us down.

Having different groups of developers sharing the same codebase also meant that the code continually grew more complex in unplanned ways. As the codebase grew larger, no one on the team could own it and make sure all the parts were organized and fit together optimally. This made deployment a scary ordeal. A one-line change to our application required the whole codebase to be deployed in order to push out the change. Because deploying our large application was high risk, our quality assurance process grew and, as a result, we deployed less.

With a microservices architecture, we hoped to be able to divide our code up so different teams of developers could fully own parts. This would enable teams to innovate much more quickly without tedious design, review, and deployment processes. We also hoped that having smaller codebases worked on by fewer developers would make our codebases easier to develop, test, and keep organized.

2. Flexibility with technology choices

Our main application was large, built with Ruby on Rails plus a custom JavaScript framework and complex build processes. Several parts of our application hit major performance issues that were difficult to fix and brought down the rest of the application. We saw an opportunity to rewrite these parts of our application using a better approach, but our codebase was entangled, which made rewriting feel extremely big and costly.

At the same time, one of our frontend teams wanted to pull away from our custom JavaScript framework and build product features with a newer framework like React. But mixing React into our existing application and complex frontend build process seemed expensive to configure.

As time went on, our teams grew frustrated with the feeling of being trapped in a codebase that was too big and expensive to fix or replace. By adopting microservices architecture, we hoped that keeping individual services smaller would mean that the cost to replace them with a better implementation would be much easier to manage. We also hoped to be able to pick the right tool for each job rather than being stuck with a one-size-fits-all approach. We’d have the flexibility to use multiple technologies across our different applications as we saw fit. If a team wanted to use something other than Ruby for better performance or switch from our custom JavaScript framework to React, they could do so.

3. Microservices are not a free lunch

In addition to outlining the benefits we hoped to achieve, we also made sure we were being realistic about the costs and challenges associated with building and managing microservices. Developing, hosting, and managing numerous services requires substantial overhead (and orchestrating a substantial number of different open source tools). A single, monolithic codebase running on a few processes can easily translate into a couple dozen processes across a handful of services, requiring load balancers, messaging layers, and clustering for resiliency. Managing all of this requires substantial skill and tooling.

Furthermore, microservices involve distributed systems that introduce a whole host of concerns such as network latency, fault tolerance, transactions, unreliable networks, and asynchronicity.

Setting your own microservices path

Once we defined the benefits and costs of microservices, we could talk about architecture without falling into counterproductive debates about who was doing microservices right or wrong. Instead of trying to find our way using others’ descriptions or examples of microservices, we instead focused on the core problems we were trying to solve.

  • How would having more services help us ship software faster in the next six to 12 months?
  • Were there strong technical advantages to using a specific tool for a portion of our system?
  • Did we foresee wanting to replace one of the systems with a more appropriate one down the line?
  • How did we want to structure our teams around services as we hired more people?
  • Was the productivity gain from having more services worth the foreseeable costs?

In summary, here are five recommended steps for aligning your team before jumping into microservices:

  1. Learn about microservices while agreeing that there is no “right” definition.
  2. Define a common set of goals and objectives to avoid counterproductive debates.
  3. Discuss and memorialize your anticipated benefits and costs of adopting microservices.
  4. Avoid too eagerly hopping on the microservices bandwagon; be open to creative ideas and spirited debate about how best to architect your systems.
  5. Stay rooted in the benefits and costs your team identified.

Focus on making sure the team has a concretely defined set of common goals to work from. It’s more valuable to discuss and define what you’d like to achieve with microservices than it is to try to pin down what a microservice actually is.

Flint OS, an operating system for a cloud-first world

Given the power of today’s browser platform technology and web frontend performance, it’s not surprising that most things we want to do with the internet can be accomplished through a single browser window. We are stepping into an era where installable apps will become history, where all our applications and services will live in the cloud.

The problem is that most operating systems weren’t designed for an internet-first world. Flint OS (soon to be renamed FydeOS) is a secure, fast, and productive operating system that was built to fill that gap. It’s based on the open source Chromium OS project that also powers Google Chromebooks. Chromium OS is based on the Linux kernel and uses Google’s Chromium browser as its principal user interface; therefore, it primarily supports web applications.

Compared to older operating systems, Flint OS:

  • Boots up fast and never gets slow
  • Runs on full-fledged x86 laptops; on single-board computers (SBCs) like the Raspberry Pi, Asus Tinker Board, those with RK3288 and RK3399 chips; and more
  • Works with keyboard and mouse as well as touch and swipe
  • Has a simple architecture with sophisticated security to prevent viruses and malware
  • Avoids pausing work for updates due to its automated update mechanism
  • Is adding support for Android apps
  • Increases battery life for mobile devices by running applications in the cloud
  • Is familiar to users because it looks like Google Chrome

Downloading and installing Flint OS

Flint OS runs on a wide variety of hardware (Raspberry Pi, PC, Tinker Board, and VMware), and you can find information, instructions, and downloads for different versions on the Flint OS download page.

On PCs, Flint OS must be booted via a USB flash drive (8GB or larger). Make sure to back up your USB drive, since the flashing process will erase all data on it.

To flash Flint OS for PC to the USB drive, we recommend etcher, a new, open source, multi-platform (Windows, macOS, and Linux) tool for burning USB drives and SD cards. It is in beta; we use it to test our builds and absolutely love it.

Open the Flint OS .xz file in etcher; there is no need to rename or extract the image. Select your USB drive and click Flash; etcher will prompt you once the USB drive is ready.
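If you prefer the command line instead of etcher, a rough equivalent using standard Linux tools is sketched below; the image filename flintos-pc.img.xz and the device node /dev/sdX are placeholders you must replace, and dd will overwrite the target device without asking.

# Decompress the image and write it directly to the USB drive.
# WARNING: this erases everything on the target device.
xz -dc flintos-pc.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync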

To run Flint OS, first configure your computer to boot from USB media. Then plug in the USB drive, reboot, and you are ready to enjoy Flint OS on your PC.

Installing Flint OS as dual boot (beta) is an option, but configuring it requires some knowledge of a Linux environment. (We are working on a simpler GUI version, which will be available in the near future.) If setting up Flint OS as dual boot is your preference, see our dual-boot installation instructions.

Contributing to Flint OS

We’ve spent some time cleaning up Flint OS’s Raspberry Pi (RPi) build system and codebase, both based on users’ requests and so we can create a public GitHub for our Raspberry Pi images.

In the past, when people asked how to contribute, we encouraged them to check out the Chromium project. By creating our public GitHub, we are hoping to make it easier to respond to issues and collaborate with the community.

Currently there are two branches: the x11 branch and the master branch.

  • The x11 branch is the legacy branch for all releases running on Chromium R56 and earlier. You are welcome to build newer versions of Chromium with this branch, but there are likely to be issues.
  • The master branch is our new Freon branch that works with R57 releases of Chromium and newer. We have successfully used this to boot R59 and R60 of Chromium. Please note this branch is currently quite unstable.

Please check out Flint OS and let us know what you think. We welcome contributions, suggestions, and changes from the community.

How to manage Linux containers with Ansible Container

I love containers and use the technology every day. Even so, containers aren’t perfect. Over the past couple of months, however, a set of projects has emerged that addresses some of the problems I’ve experienced.

I started using containers with Docker, since this project made the technology so popular. Aside from using the container engine, I learned how to use docker-compose and started managing my projects with it. My productivity skyrocketed! One command to run my project, no matter how complex it was. I was so happy.

After some time, I started noticing issues. The most apparent were related to the process of creating container images. The Docker tool uses a custom file format as a recipe to produce container images—Dockerfiles. This format is easy to learn, and after a short time you are ready to produce container images on your own. The problems arise once you want to master best practices or have complex scenarios in mind.

Let’s take a break and travel to a different land: the world of Ansible. You know it? It’s awesome, right? You don’t? Well, it’s time to learn something new. Ansible is a project that allows you to manage your infrastructure by writing tasks and executing them inside environments of your choice. No need to install and set up any services; everything can easily run from your laptop. Many people already embrace Ansible.
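If Ansible is new to you, here is a minimal sketch of a playbook; the webservers host group and the nginx package are purely illustrative.

---
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      dnf:
        name: nginx
        state: present

Save it as site.yml and run ansible-playbook -i inventory site.yml; Ansible connects to each host in the group and applies the tasks.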

Imagine this scenario: You invested in Ansible, you wrote plenty of Ansible roles and playbooks that you use to manage your infrastructure, and you are thinking about investing in containers. What should you do? Start writing container image definitions via shell scripts and Dockerfiles? That doesn’t sound right.

Some people from the Ansible development team asked this question and realized that those same Ansible roles and playbooks that people wrote and use daily can also be used to produce container images. But not just that—they can be used to manage the complete lifecycle of containerized projects. From these ideas, the Ansible Container project was born. It utilizes existing Ansible roles that can be turned into container images and can even be used for the complete application lifecycle, from build to deploy in production.
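To give a feel for the workflow, here is a rough sketch of an Ansible Container project file, container.yml. The service name and role are hypothetical, and the exact schema varies between Ansible Container releases, so treat this as an outline rather than a copy-paste recipe.

version: "2"
settings:
  # builds run inside a "conductor" container based on this image
  conductor_base: centos:7
services:
  web:
    from: centos:7
    roles:
      - my-nginx-role
    ports:
      - "80:80"
    command: ["/usr/sbin/nginx", "-g", "daemon off;"]

Running ansible-container build then applies the listed roles on top of the base image to produce the final container image.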

Let’s talk about the problems I mentioned regarding best practices in the context of Dockerfiles. A word of warning: This is going to be very specific and technical. Here are the top three issues I have:

1. Shell scripts embedded in Dockerfiles.

When writing Dockerfiles, you can specify a script that will be interpreted via /bin/sh -c. It can be something like:

RUN dnf install -y nginx

where RUN is a Dockerfile instruction and the rest are its arguments (which are passed to the shell). But imagine a more complex scenario:

RUN set -eux; \
    \
# this "case" statement is generated via "update.sh"
    %%ARCH-CASE%%; \
    \
    url="https://golang.org/dl/go${GOLANG_VERSION}.${goRelArch}.tar.gz"; \
    wget -O go.tgz "$url"; \
    echo "${goRelSha256} *go.tgz" | sha256sum -c -; \

This one is taken from the official golang image. It doesn’t look pretty, right?

2. You can’t parse Dockerfiles easily.

Dockerfiles are a new format without a formal specification. This is tricky if you need to process Dockerfiles in your infrastructure (e.g., automate the build process a bit). The only specification is the code that is part of dockerd. The problem is that you can’t use it as a library. The easiest solution is to write a parser on your own and hope for the best. Wouldn’t it be better to use some well-known markup language, such as YAML or JSON?

3. It’s hard to control.

If you are familiar with the internals of container images, you may know that every image is composed of layers. Once the container is created, the layers are stacked onto each other (like pancakes) using union filesystem technology. The problem is that you cannot explicitly control this layering—you can’t say, “here starts a new layer.” You are forced to change your Dockerfile in a way that may hurt readability. The bigger problem is that a set of best practices has to be followed to achieve optimal results—newcomers have a really hard time here.
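As a generic illustration (not taken from any particular image): each RUN instruction produces one layer, so the usual workaround is to chain related commands into a single RUN, at the cost of readability.

# One layer: install and clean up in the same step,
# so the package cache never gets baked into the image.
RUN dnf install -y nginx \
    && dnf clean all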

Comparing Ansible language and Dockerfiles

The biggest shortcoming of Dockerfiles in comparison to Ansible is that Ansible, as a language, is much more powerful. For example, Dockerfiles have no direct concept of variables, whereas Ansible has a complete templating system (variables are just one of its features). Ansible contains a large number of modules that can be easily utilized, such as wait_for, which can be used for service readiness checks—e.g., wait until a service is ready before proceeding. With Dockerfiles, everything is a shell script, so if you need to check service readiness, you have to script it yourself (or install a separate tool). The other problem with shell scripts is that, with growing complexity, maintenance becomes a burden. Plenty of people have already figured this out and turned those shell scripts into Ansible.
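For instance, a readiness check with the wait_for module might look like this (the host and port are illustrative):

- name: Wait for the database to accept connections
  wait_for:
    host: db
    port: 5432
    timeout: 300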

If you are interested in this topic and would like to know more, please come to Open Source Summit in Prague to see my presentation on Monday, Oct. 23, at 4:20 p.m. in the Palmovka room.

Learn more in Tomas Tomecek’s talk, From Dockerfiles to Ansible Container, at Open Source Summit EU, which will be held October 23-26 in Prague.

The illustrated Open Organization is now available

In April, the Open Organization Ambassadors at Opensource.com released the second version of their Open Organization Definition, a document outlining the five key characteristics any organization must embrace if it wants to leverage the power of openness at scale.

Today, that definition is a book.

Richly illustrated and available immediately in full-color paperback and eBook formats, The Open Organization Definition makes an excellent primer on open principles and practices.

Download or purchase (completely at cost) your copies today, and share them with anyone in need of a plain-language introduction to transparency, inclusivity, adaptability, collaboration, and community.

We're giving away a Linux-ready laptop from ZaReason

For the first time ever, Opensource.com is partnering with ZaReason to give away an UltraLap 5330 laptop with Linux pre-installed!

Since 2007, ZaReason has assembled, shipped, and supported hardware specifically designed for Linux, and the UltraLap 5330 is no exception—the 3.6-lb laptop ships with the Linux distribution of your choice and boasts the following hardware specs:

  • 14″ FHD display
  • Intel i3-7100U processor
  • 4GB RAM
  • 120GB M.2 SSD

So, what are you waiting for? Enter our ZaReason Laptop Giveaway by Sunday, September 24 at 11:59 p.m. Eastern Time (3:59 a.m. UTC) for your chance to win.

Have a great idea for a future Opensource.com giveaway? Let us know about it in the comments below.