Paying it forward at Finland's Aalto Fablab

Originating at MIT, a fab lab is a technology prototyping platform where learning, experimentation, innovation, and invention are encouraged through curiosity, creativity, hands-on making, and most critically, open knowledge sharing. Each fab lab provides a common set of tools (including digital fabrication tools like laser cutters, CNC mills, and 3D printers) and processes, so you can learn how to work in a fab lab anywhere and use those skills at any of the 1,000+ fab labs across the globe. There is probably a fab lab near you.

Fab labs can be found anywhere avant-garde makers and hackers live, but they have also cropped up at libraries and other public spaces. For example, the Aalto Fablab, the first fab lab in Finland, is in the basement of Aalto University’s library, in Espoo. Solomon Embafrash, the studio master, explains, “Aalto Fablab was in the Arabia campus with the School of Arts and Design since 2011. As Aalto decided to move all the activities concentrated in one campus (Otaniemi), we decided that a dedicated maker space would complement the state-of-the-art library in the heart of Espoo.”

The library, which is now a full learning center, sports a maker space that consists of a VR hub, a visual resources center, a studio, and of course, the Fablab. With the expansion of the Helsinki metro to a new station across the street from the Aalto Fablab, everyone in the region now has easy access to it.

The Fab Lab Charter states: “Designs and processes developed in fab labs can be protected and sold however an inventor chooses, but should remain available for individuals to use and learn from.” The “protected” part does not quite meet the requirements set by the Open Source Hardware Association’s definition of open source hardware; however, for those not involved in commercialization of products, the code is available for a wide range of projects created in fab labs (like the FabFi, an open source wireless network).

That means fab labs are effectively feeding the open source ecosystem that enables digitally distributed manufacturing of a wide range of products, as many designers choose to release their designs under fully free licenses. Even the code to create a fab lab is openly shared by the U.S. non-profit Fab Foundation.

All fab labs are required to provide open access to the community; however, some, like the Aalto Fablab, take that requirement one step further. The Aalto Fablab is free to use, but if you wish to use bulk materials from its stock for your project—for example, to make a new chair—you need to pay for them. You are also expected to respect the philosophy of open knowledge-sharing by helping others, documenting your work, and sharing what you have learned. Specifically, the Aalto Fablab asks that you “pay forward” what you have learned to other users, who may be able to build upon your work and help speed development.

Embafrash adds, “There is a very old tradition of free services in Finland, like the library service and education. We used to charge users a few cents for the material cost of the 3D prints, but we found that it makes a lot of sense to keep it free, as it encourages people to our core philosophy of Fablab, which is idea sharing and documentation.”

This approach has proven successful, fostering enormous interest in the local community for making and sharing. For example, the Unseen Art project, an open source platform that allows the visually impaired to enjoy 3D printed art, started in the Aalto Fablab.

Fablab members organize local Maker Faire events and work closely with the maker community, local schools, and other organizations. “The Fablab has open days, which are very popular times that people from outside the university get access to the resources, and our students get the exposure to work with people outside the school community,” Embafrash says.

In this way, the more they share, the more their university benefits.

This article was supported by Fulbright Finland, which is currently sponsoring my research in open source scientific hardware in Finland as the Fulbright-Aalto University Distinguished Chair.

How to Download and Extract Tar Files with One Command

Tar (Tape Archive) is a popular file archiving format in Linux. It can be combined with gzip (.tar.gz) or bzip2 (.tar.bz2) for compression, and it is the most widely used command-line utility for creating compressed archives (packages, source code, databases, and much more) that can easily be transferred from one machine to another or over a network.
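
As a quick refresher on those two flavors, here is a minimal sketch; the archive and directory names are only placeholders:

$ tar -czf backup.tar.gz mydir/    # create a gzip-compressed archive
$ tar -cjf backup.tar.bz2 mydir/   # create a bzip2-compressed archive
$ tar -xzf backup.tar.gz           # extract the gzip archive
$ tar -xjf backup.tar.bz2          # extract the bzip2 archive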

In this article, we will show you how to download tar archives using two well-known command-line downloaders – wget and cURL – and extract them with a single command.

How to Download and Extract File Using Wget Command

The example below shows how to download and unpack the latest GeoLite2 Country database (used by the GeoIP Nginx module) into the current directory.

# wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz -O - | tar -xz

The wget option -O specifies the file to which the output is written; here we use -, meaning the download is written to standard output and piped to tar. The tar flag -x extracts the archive, and -z decompresses archives created by gzip.
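
The same pattern works for bzip2-compressed archives; with a hypothetical example.tar.bz2, you would simply swap -z for -j:

$ wget -c http://example.com/files/example.tar.bz2 -O - | tar -xj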

To extract the tar files to a specific directory, /etc/nginx/ in this case, use the -C flag as follows.

Note: If you are extracting files into a directory that requires root permissions, use the sudo command to run tar.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz -O - | sudo tar -xz -C /etc/nginx/

Alternatively, you can use the following command; here, the archive file is downloaded to your system before being extracted.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && tar -xzf  GeoLite2-Country.tar.gz

To extract the compressed archive file to a specific directory, use the following command.

$ sudo wget -c http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && sudo tar -xzf  GeoLite2-Country.tar.gz -C /etc/nginx/

How to Download and Extract File Using cURL Command

Considering the previous example, this is how you can use cURL to download and unpack archives in the current working directory.

$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz | tar -xz 

To extract the file to a different directory while downloading, use the following command.

$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz | sudo tar -xz  -C /etc/nginx/
OR
$ sudo curl http://geolite.maxmind.com/download/geoip/database/GeoLite2-Country.tar.gz && sudo tar -xzf GeoLite2-Country.tar.gz -C /etc/nginx/

That’s all! In this short but useful guide, we showed you how to download and extract archive files with a single command. If you have any questions, use the comment section below to reach us.

5 new OpenStack resources

As OpenStack has continued to mature and move from the first stages of adoption to use in production clouds, the focus of the OpenStack community has shifted as well, with more focus than ever on integrating OpenStack with other infrastructure projects. Today’s cloud architects and engineers need to be familiar with a wide range of projects and how they might be of use in their data center, and OpenStack is often the glue stitching the different pieces together.

Keeping up with everything you need to know can be tough. Fortunately, learning new skills is made a little easier by the large number of resources available to help you. Along with project documentation, support from your vendors and the community at large, printed books and other publications, and certification and training programs, there are many wonderful community-created resources as well.

Every month we share some of the best OpenStack-related content we come across, from guides and tutorials to deep-dives and technical notes. Have a look at what we found this month.

  • Security is always important in cloud applications, but sometimes security protocols require conformance to certain exact specifications. In this guide on how to build security hardened images with volumes, learn how to take advantage of changes introduced in the Queens release of OpenStack which allow for using volumes for your images, giving you greater flexibility when resizing filesystems.

  • Real-time systems impose certain operating constraints, including determinism and guaranteed performance levels, which have been historically difficult to find in the cloud. This guide to deploying real-time OpenStack shows you how recent feature additions in Nova can allow for real-time applications in an OpenStack environment. While focused on CentOS and DevStack, with a few modifications this tutorial could be used on other installation profiles as well.

  • The rapid pace of development with OpenStack means an entirely new release becomes available every six months. But in a production environment running mission-critical systems, upgrading at that pace can be difficult. One approach to dealing with this issue is allowing for quick upgrades across multiple releases of OpenStack at a time. TripleO fast-forward upgrades allow this possibility, and this guide will walk you through a rough demo of how it works.

  • Have you wanted to try out the recently open sourced AWX, which is the upstream of Ansible Tower, for managing Ansible projects? You’re in luck. Here is a simple guide to deploying AWX to an OpenStack RDO cloud.

  • Finally, in case you missed it, earlier this month we ran a great tutorial on getting started with Gnocchi. Gnocchi is a tool that enables indexing and storage of time-series data and is purpose-built for large-scale environments like clouds. While now cloud-agnostic, Gnocchi is commonly installed alongside OpenStack to manage logging and metrics needs.


Thanks for checking out this month’s roundup. If you’d like to learn more, take a look back at our entire collection of OpenStack guides, how-tos, and tutorials with more than three years of community-made content. Did we leave out a great guide or tutorial that you found? Let us know in the comments below, and we’ll consider putting it in our next edition.

Part 1: How I Built a cPanel Hosting Environment on Amazon AWS

People argue for and against building a production hosting environment on top of cloud services such as Amazon’s AWS. I recently made the decision to migrate my entire hosting infrastructure from co-located dedicated hardware to a full implementation built entirely on top of Amazon’s Web Services.

I will be releasing a four-part series detailing the tricks I’ve learned in my own migration to AWS and walking you through setting up your own full-service hosting environment within the AWS ecosystem, all while still leveraging the power of cPanel, WHM, and DNSONLY.

I chose to use AWS, more specifically EC2, VPC and S3, for its rapid deployment, unlimited scaling, load balancing, and global distribution abilities. Working with AWS, I started to realize just how powerful it could become.

I started this challenge with a few key questions: What are the benefits and the challenges one would face working in an environment like this? All of our servers run instances of cPanel/WHM, so what are the difficulties in setting up cPanel in an AWS environment?

Amazon’s AWS platform is built behind a NAT infrastructure, and configuring cPanel for NAT used to be an elaborate ballet of duct-taped scripts and hooks. However, with cPanel 11.39, I’ve been able to seamlessly migrate my entire infrastructure (30+ instances) from a dedicated environment to AWS without a misstep.

The result is a solid hosting architecture using Amazon VPC (Virtual Private Cloud), Amazon EC2 (Elastic Compute Cloud), and Amazon S3 (Simple Storage Service), built with cPanel/WHM/DNSONLY, that not only works on AWS but makes deployment and provisioning of new servers unbelievably rapid and simple.


Below is a quick overview of the architecture implemented as well as the instance types used for provisioning instances. While I cannot link directly to specific AMIs (Amazon Machine Images), selecting your desired operating system and getting cPanel/WHM installed is a straightforward procedure.


Assumptions

  • First, you must have a working knowledge of the command line, networking, Amazon AWS, and cPanel/WHM/DNSONLY.
  • Second, this model will run two dedicated nameservers (cPanel DNSONLY); the node servers will not run DNS and will be configured in a cluster.
  • Third, I won’t be going over the registration process for AWS; you need to already have an active account.

Some instructions below are borrowed from Amazon’s AWS User Guide.

A Representation of the Basic Network Architecture

This Lesson Includes

  • Creating a new Amazon VPC Instance
  • Defining subnet scope
  • Creating and defining Security Groups

Set up the VPC, Subnet, & Internet Gateway:

  1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
  2. Click “VPC Dashboard” in the navigation pane.
  3. Locate the “Your Virtual Private Cloud” area of the dashboard and click “Get started creating a VPC“, if you have no VPC resources, or click “Start VPC Wizard“.
  4. Select the first option, VPC with a Single Public Subnet Only, and then click Continue.

  5. The confirmation page shows the CIDR ranges and settings that you’ve chosen. Since this is going to be a small network, click “Edit VPC IP CIDR Block” and change the value to “10.0.0.0/24”. This gives us 251 usable IPs on the gateway.
  6. Click “Create VPC” to create your VPC, subnet, Internet gateway, and route table. (An equivalent AWS CLI sketch follows below.)
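
If you prefer to script this instead of clicking through the wizard, a rough AWS CLI equivalent looks like the following. This is only a sketch: it assumes the AWS CLI is installed and configured with credentials, and the vpc-xxxxxxxx/igw-xxxxxxxx IDs are placeholders you would replace with the IDs returned by each command.

$ aws ec2 create-vpc --cidr-block 10.0.0.0/24
$ aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.0.0/24
$ aws ec2 create-internet-gateway
$ aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx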

Create Security Groups

Security Groups are essentially Firewall Rules that can be applied on a per-instance basis. We are going to create two primary Security Groups, one for Name Servers and one for Web Servers. Of course, your specific scenario will differ from the one represented here, so feel free to create as many Security Groups as needed.

In my use case scenario, I established a Security Group for Name Servers, Shared Web Servers, and Dedicated VPSs. Again, tailor these to meet your needs.

  1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
  2. Click “Security Groups” in the navigation pane.
  3. Click the “Create Security Group” button.
  4. Specify NS_SG as the name of the security group, and provide a description. Select the ID of your VPC from the “VPC” menu, and then click “Yes, Create“.
  5. Click the “Create Security Group” button.
  6. Specify VS_SG as the name of the security group, and provide a description. Select the ID of your VPC from the “VPC” menu, and then click “Yes, Create“.
  7. Select the “NS_SG” security group that you just created. The details pane includes a tab for information about the security group, plus tabs for working with its inbound rules and outbound rules.

On the “Inbound” tab, do the following:

  1. Select “All Traffic” from the Create a new rule list, make sure that Source is “0.0.0.0/0“, and then click “Add Rule“.
  2. Click “Apply Rule Changes” to apply these inbound rules.

On the “Outbound” tab, do the following:

  1. “All Traffic” is allowed by default; we will temporarily keep this rule.

Complete the same steps above for the “VS_SG” you created.
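
For reference, the same Security Groups can also be created from the AWS CLI; the IDs below are placeholders, and the single port rule is only an illustration (the console steps above opened all traffic, which we revisit next):

$ aws ec2 create-security-group --group-name NS_SG --description "Name server security group" --vpc-id vpc-xxxxxxxx
$ aws ec2 create-security-group --group-name VS_SG --description "Web server security group" --vpc-id vpc-xxxxxxxx
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 53 --cidr 0.0.0.0/0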

If you’ve made it this far, you’re probably halfway to a panic attack wondering why we’ve opened up all inbound and outbound ports. Each environment’s needs for port availability will obviously be unique, but for most standard cPanel/WHM installations, you can have a look at this informative article, Getting The Most Out of Your System’s Firewall, which details the ports commonly used by cPanel and its bundled services, and then choose to open or close ports at the firewall level accordingly.

Alternatively, you can keep all inbound/outbound traffic at the firewall level as pass-through (as detailed above) and handle your firewall at the instance level with a software-based firewall.

cPanel supports numerous software-based firewalls that are freely available to download and install; personally, I use and highly recommend ConfigServer Security & Firewall. It’s dead simple to install, and I recommend running its security scan once you have it configured to ensure you’ve taken extra steps toward hardening your systems.


Up Next

  • Creating and Launching Name Server Instances Into Your New VPC
  • Configuring your Name Server
  • Basic Cluster Configuration

 

Getting started with .NET for Linux

When you know a software developer’s preferred operating system, you can often guess what programming language(s) they use. If they use Windows, the language list includes C#, JavaScript, and TypeScript. A few legacy devs may be using Visual Basic, and the bleeding-edge coders are dabbling in F#. Even though you can use Windows to develop in just about any language, most stick with the usuals.

If they use Linux, you get a list of open source projects: Go, Python, Ruby, Rails, Grails, Node.js, Haskell, Elixir, etc. It seems that as each new language—Kotlin, anyone?—is introduced, Linux picks up a new set of developers.

So leave it to Microsoft (Microsoft?!?) to throw a wrench into this theory by making the .NET framework, rebranded as .NET Core, open source and available to run on any platform. Windows, Linux, MacOS, and even a television OS: Samsung’s Tizen. Add in Microsoft’s other .NET flavors, including Xamarin, and you can add the iOS and Android operating systems to the list. (Seriously? I can write a Visual Basic app to run on my TV? What strangeness is this?)

Given this situation, it’s about time Linux developers get comfortable with .NET Core and start experimenting, perhaps even building production applications. Pretty soon you’ll meet that person: “I use Linux … I write C# apps.” Brace yourself: .NET is coming.

How to install .NET Core on Linux

The list of Linux distributions on which you can run .NET Core includes Red Hat Enterprise Linux (RHEL), Ubuntu, Debian, Fedora, CentOS, Oracle, and SUSE.

Each distribution has its own installation instructions. For example, consider Fedora 26:

Step 1: Add the dotnet product feed.


        sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
        sudo sh -c 'echo -e "[packages-microsoft-com-prod]\nname=packages-microsoft-com-prod \nbaseurl=https://packages.microsoft.com/yumrepos/microsoft-rhel7.3-prod\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/dotnetdev.repo'

Step 2: Install the .NET Core SDK.


        sudo dnf update
        sudo dnf install libunwind libicu compat-openssl10
        sudo dnf install dotnet-sdk-2.0.0
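
Before moving on, it’s worth a quick sanity check that the SDK is on your PATH; the version number printed should match the SDK you just installed:

        dotnet --version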

Creating the Hello World console app

Now that you have .NET Core installed, you can create the ubiquitous “Hello World” console application before learning more about .NET Core. After all, you’re a developer: You want to create and run some code now. Fair enough; this is easy. Create a directory, move into it, create the code, and run it:


mkdir helloworld && cd helloworld
dotnet new console
dotnet run

You’ll see the following output:


$ dotnet run
Hello World!

What just happened?

Let’s take what just happened and break it down. We know what the mkdir and cd did, but after that?

dotnet new console

As you no doubt have guessed, this created the “Hello World!” console app. The key things to note are: The project name matches the directory name (i.e., “helloworld”); the code was built using a template (console application); and the project’s dependencies were automatically retrieved by the dotnet restore command, which pulls from nuget.org.

If you view the directory, you’ll see these files were created:


Program.cs
helloworld.csproj

Program.cs is the C# console app code. Go ahead and take a look inside (you already did … I know … because you’re a developer), and you’ll see what’s going on. It’s all very simple.

The helloworld.csproj file is the MSBuild-compatible project file. In this case there’s not much to it. When you create a web service or website, the project file will take on a new level of significance.

dotnet run

This command did two things: It built the code, and it ran the newly built code. Whenever you invoke dotnet run, it will check to see if the *.csproj file has been altered and will run the dotnet restore command. It will also check to see if any source code has been altered and will, behind the scenes, run the dotnet build command which—you guessed it—builds the executable. Finally, it will run the executable.

Sort of.

Where is my executable?

Oh, it’s right there. Just run which dotnet and you’ll see (on RHEL): 

/opt/rh/rh-dotnet20/root/usr/bin/dotnet

That’s your executable.

Sort of.

When you create a dotnet application, you’re creating an assembly … a library … yes, you’re creating a DLL. If you want to see what is created by the dotnet build command, take a peek at bin/Debug/netcoreapp2.0/. You’ll see helloworld.dll, some JSON configuration files, and a helloworld.pdb (debug database) file. You can look at the JSON files to get some idea as to what they do (you already did … I know … because you’re a developer).

When you run dotnet run, the process that runs is dotnet. That process, in turn, invokes your DLL file and it becomes your application.
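
You can see this for yourself by handing the DLL to the dotnet host directly, which is essentially what dotnet run does for you after the build step:

dotnet bin/Debug/netcoreapp2.0/helloworld.dll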

It’s portable

Here’s where .NET Core really starts to depart from the Windows-only .NET Framework: The DLL you just created will run on any system that has .NET Core installed, whether it be Linux, Windows, or MacOS. It’s portable. In fact, it is literally called a “portable application.”

Forever alone

What if you want to distribute an application and don’t want to ask the user to install .NET Core on their machine? (Asking that is sort of rude, right?) Again, .NET Core has the answer: the standalone application.

Creating a standalone application means you can distribute the application to any system and it will run, without the need to have .NET Core installed. This means a faster and easier installation. It also means you can have multiple applications running different versions of .NET Core on the same system. It also seems like it would be useful for, say, running a microservice inside a Linux container. Hmmm…

What’s the catch?

Okay, there is a catch. For now. When you create a standalone application using the dotnet publish command, your DLL is placed into the target directory along with all of the .NET bits necessary to run your DLL. That is, you may see 50 files in the directory. This is going to change soon. An already-running-in-the-lab initiative, .NET Native, will soon be introduced with a future release of .NET Core. This will build one executable with all the bits included. It’s just like when you are compiling in the Go language, where you specify the target platform and you get one executable; .NET will do that as well.

You do need to build once for each target, which only makes sense. You simply include a runtime identifier and build the code, like this example, which builds the release version for RHEL 7.x on a 64-bit processor:

dotnet publish -c Release -r rhel.7-x64
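
Under default settings, the self-contained output should end up in a publish folder beneath bin, named after the configuration, target framework, and runtime identifier; listing it shows your DLL alongside the .NET runtime bits and a native launcher:

ls bin/Release/netcoreapp2.0/rhel.7-x64/publish/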

Web services, websites, and more

So much more is included with the .NET Core templates, including support for F# and Visual Basic. To get a starting list of available templates that are built into .NET Core, use the command dotnet new --help.
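
For example, the following should work with the 2.0 SDK; the mvc and F# variants are just illustrations of the built-in templates:

dotnet new --help            # list the installed templates
dotnet new mvc -o mywebapp   # scaffold an ASP.NET Core MVC site
dotnet new console -lang F#  # the same console template, but in F#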

Hint: .NET Core templates can be created by third parties. To get an idea of some of these third-party templates, check out these templates, then let your mind start to wander…

Like most command-line utilities, contextual help is always at hand by using the --help command switch. Now that you’ve been introduced to .NET Core on Linux, the help function and a good web search engine are all you need to get rolling.

How the OpenType font system works

Digital typography is something that we use every day, but few of us understand how digital fonts work. This article gives a basic, quick, dirty, oversimplified (but hopefully useful) tour of OpenType— what it is and how you can use its powers with free, libre, and open source software (FLOSS). All the fonts mentioned here are FLOSS, too.

What is OpenType?

On the most basic level, a digital font is a “container” for different glyphs plus extra information about how to use them. Each glyph is represented by a series of points and rules to connect those points. I’ll not delve into the different ways to define those “connections” or how we arrived there (the history of software development can be messy), but basically there are two kinds of rules: parabolic segments (quadratic Bézier curves) or cubic functions (cubic Bézier curves).

The TTF file format, generally known as TrueType Font, can only use quadratic Bézier curves, whereas the OTF file format, known as OpenType Font, supports both.

Here is where we need to be careful about what we are talking about: The term “OpenType” refers not only to the file format, but also to the advanced properties of a typeface as a whole (i.e., the “extra information” mentioned earlier).

In fact, in addition to the OpenType file format, there are also substitution tables that, for example, tell the software using that font to substitute two characters with the corresponding typographical ligature; that the shape of a character needs to change according to the characters that surround it (its “contextual alternate”); or that when you write in Greek, a σ at the end of a word must be substituted with a ς. This is what the term “smart fonts” means.

And, to make things more confusing, it is possible to include OpenType tables in TrueType fonts, as happens with Junicode.

A quick example

Let’s see a quick example of smart fonts in use. Here is an example of Cormorant with (top) and without (bottom) OpenType features enabled:

Each OpenType property has its own “tag” that is used to activate those “specialties.” Some of these tags are enabled by default (like liga for normal ligatures or clig for contextual ligatures), whereas others must be enabled by hand.

A partial list of OpenType tags and names can be found in Dario Taraborelli’s Accessing OpenType font features in LaTeX.

Querying fonts

Finding out the characteristics of an OpenType font is simple. All you need is the otfinfo command, which is included in the package lcdf typetools (on my openSUSE system, it’s installed as texlive-lcdftypetools). Using it is quite simple: On the command line, issue something like:

otfinfo [option] /path/to/the/font

The option -s provides the languages supported by the font, whereas -f tells us which OpenType options are available. Font license information is displayed with the -i option.

If the path to the font contains a space, escape that space with a backslash. For example, to know what Sukhumala Regular.otf offers when installed in the folder ~/.fonts/s/, simply write in the terminal:

otfinfo -f ~/.fonts/s/Sukhumala\ Regular.otf
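
The other options work the same way on the same file, for example:

otfinfo -s ~/.fonts/s/Sukhumala\ Regular.otf   # supported scripts and languages
otfinfo -i ~/.fonts/s/Sukhumala\ Regular.otf   # general information, including the license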

Using OpenType tables on LibreOffice Writer

LibreOffice version 5.3 offers good support for OpenType. It is not exactly “user-friendly,” but it’s not that difficult to understand, and it provides so much typographical power that it shouldn’t be ignored.

To simultaneously activate “stylistic sets” 1 and 11 on Vollkorn (see screenshot below), in the font name box, write:

Vollkorn:ss01&ss11

The colon starts the “tag section” on the extended font name and the ampersand allows us to use several tags.

But there is more. You can also disable any default option. For example, the Sukhumala font has some strange contextual ligatures that turn aa into ā, ii into ī, and uu into ū. To disable contextual ligatures on Sukhumala, add a dash in front of the corresponding OpenType tag clig:

Sukhumala:-clig

And that’s it. As I said before, it’s not exactly user friendly, especially considering that the font name box is rather small, but it works!

And don’t forget to use all of this within styles: Direct formatting is the enemy of good formatting. I mean, unless you are preparing a quick screenshot for a short article about typography. In that case it’s OK. But only in that case.

There’s more

One interesting OpenType tag that, sadly, does not work on LibreOffice yet is “size.” The size feature enables the automated selection of optical sizes, that is, variants within a font family designed for different point sizes. Few fonts offer this option (some GUST fonts like Latin Modern or Antykwa Półtawskiego; an interesting project in its initial stages of development called Coelacanth; or, to a lesser extent, EB Garamond), but they are all great. Right now, the only way to enjoy this property is through a more advanced layout system such as XeTeX. Using OpenType on XeTeX is a really big topic; the fontspec manual (the package that handles font selection and configuration on both XeTeX and LuaTeX) has more than 120 pages, so… not today.

And yes, version 1.5.3 of Scribus added support for OpenType (in addition to footnotes and other stuff), but that’s something I still need to explore.

How to align your team around microservices

Microservices have been a focus across the open source world for several years now. Although open source technologies such as Docker, Kubernetes, Prometheus, and Swarm make it easier than ever for organizations to adopt microservice architectures, getting your team on the same page about microservices remains a difficult challenge.

For a profession that stresses the importance of naming things well, we’ve done ourselves a disservice with microservices. The problem is that there is nothing inherently “micro” about microservices. Some can be small, but size is relative and there’s no standard measurement unit across organizations. A “small” service at one company might be 1 million lines of code, but far fewer at another organization.

Some argue that microservices aren’t a new thing at all but rather a rebranding of service-oriented architecture (SOA), whereas others view microservices as an implementation of SOA, similar to how Scrum is an implementation of Agile. (For more on the ambiguity of microservice definitions, check out the upcoming book Microservices for Startups.)

How do you get your team on the same page about microservices when no precise definition exists? The most important thing when talking about microservices is to ensure that your team is grounded in a common starting point. Ambiguous definitions don’t help. It would be like trying to put Agile into practice without context for what you are trying to achieve or an understanding of precise methodologies like Scrum.

Finding common ground

Knowing the dangers of too eagerly hopping on the microservices bandwagon, a team I worked on tried not to stall on definitions and instead focused on defining the benefits we were trying to achieve with microservices adoption. Following are the three areas we focused on and lessons learned from each piece of our microservices implementation.

1. Ability to ship software faster

Our main application was a large codebase with several small teams of developers trying to build features for different purposes. This meant that every change had to try to satisfy all the different groups. For example, a database change that served only one group had to be reviewed and accepted by other groups that didn’t have as much context. This was tedious and slowed us down.

Having different groups of developers sharing the same codebase also meant that the code continually grew more complex in undeliberate ways. As the codebase grew larger, no one on the team could own it and make sure all the parts were organized and fit together optimally. This made deployment a scary ordeal. A one-line change to our application required the whole codebase to be deployed in order to push out the change. Because deploying our large application was high risk, our quality assurance process grew and, as a result, we deployed less.

With a microservices architecture, we hoped to be able to divide our code up so different teams of developers could fully own parts. This would enable teams to innovate much more quickly without tedious design, review, and deployment processes. We also hoped that having smaller codebases worked on by fewer developers would make our codebases easier to develop, test, and keep organized.

2. Flexibility with technology choices

Our main application was large, built with Ruby on Rails with a custom JavaScript framework and complex build processes. Several parts of our application hit major performance issues that were difficult to fix and brought down the rest of the application. We saw an opportunity to rewrite these parts of our application using a better approach. But our codebase was entangled, which made rewriting feel extremely big and costly.

At the same time, one of our frontend teams wanted to pull away from our custom JavaScript framework and build product features with a newer framework like React. But mixing React into our existing application and complex frontend build process seemed expensive to configure.

As time went on, our teams grew frustrated with the feeling of being trapped in a codebase that was too big and expensive to fix or replace. By adopting microservices architecture, we hoped that keeping individual services smaller would mean that the cost to replace them with a better implementation would be much easier to manage. We also hoped to be able to pick the right tool for each job rather than being stuck with a one-size-fits-all approach. We’d have the flexibility to use multiple technologies across our different applications as we saw fit. If a team wanted to use something other than Ruby for better performance or switch from our custom JavaScript framework to React, they could do so.

3. Microservices are not a free lunch

In addition to outlining the benefits we hoped to achieve, we also made sure we were being realistic about the costs and challenges associated with building and managing microservices. Developing, hosting, and managing numerous services requires substantial overhead (and orchestrating a substantial number of different open source tools). A single, monolithic codebase running on a few processes can easily translate into a couple dozen processes across a handful of services, requiring load balancers, messaging layers, and clustering for resiliency. Managing all of this requires substantial skill and tooling.

Furthermore, microservices involve distributed systems that introduce a whole host of concerns such as network latency, fault tolerance, transactions, unreliable networks, and asynchronicity.

Setting your own microservices path

Once we defined the benefits and costs of microservices, we could talk about architecture without falling into counterproductive debates about who was doing microservices right or wrong. Instead of trying to find our way using others’ descriptions or examples of microservices, we instead focused on the core problems we were trying to solve.

  • How would having more services help us ship software faster in the next six to 12 months?
  • Were there strong technical advantages to using a specific tool for a portion of our system?
  • Did we foresee wanting to replace one of the systems with a more appropriate one down the line?
  • How did we want to structure our teams around services as we hired more people?
  • Was the productivity gain from having more services worth the foreseeable costs?

In summary, here are five recommended steps for aligning your team before jumping into microservices:

  1. Learn about microservices while agreeing that there is no “right” definition.
  2. Define a common set of goals and objectives to avoid counterproductive debates.
  3. Discuss and memorialize your anticipated benefits and costs of adopting microservices.
  4. Avoid too eagerly hopping on the microservices bandwagon; be open to creative ideas and spirited debate about how best to architect your systems.
  5. Stay rooted in the benefits and costs your team identified.

Focus on making sure the team has a concretely defined set of common goals to work from. It’s more valuable to discuss and define what you’d like to achieve with microservices than to try to pin down what a microservice actually is.

Flint OS, an operating system for a cloud-first world

Given the power of today’s browser platform technology and web frontend performance, it’s not surprising that most things we want to do with the internet can be accomplished through a single browser window. We are stepping into an era where installable apps will become history, where all our applications and services will live in the cloud.

The problem is that most operating systems weren’t designed for an internet-first world. Flint OS (soon to be renamed FydeOS) is a secure, fast, and productive operating system that was built to fill that gap. It’s based on the open source Chromium OS project that also powers Google Chromebooks. Chromium OS is based on the Linux kernel and uses Google’s Chromium browser as its principal user interface, therefore it primarily supports web applications.

Compared to older operating systems, Flint OS:

  • Boots up fast and never gets slow
  • Runs on full-fledged x86 laptops; on single-board computers (SBCs) like the Raspberry Pi, the Asus Tinker Board, and boards with RK3288 and RK3399 chips; and more
  • Works with keyboard and mouse as well as touch and swipe
  • Has a simple architecture with sophisticated security to prevent viruses and malware
  • Avoids pausing work for updates due to its automated update mechanism
  • Is adding support for Android apps
  • Increases battery life for mobile devices by running applications in the cloud
  • Is familiar to users because it looks like Google Chrome

Downloading and installing Flint OS

Flint OS runs on a wide variety of hardware (Raspberry Pi, PC, Tinker Board, and VMware), and you can find information, instructions, and downloads for different versions on the Flint OS download page.

On PCs, Flint OS must be booted via a USB flash drive (8GB or larger). Make sure to back up your USB drive, since the flashing process will erase all data on it.

To flash Flint OS for PC to the USB drive, we recommend using a new, open source, multi-platform (Windows, macOS, and Linux) tool for USB drive and SD card burning called etcher. It is in beta; we use it to test our builds and absolutely love it.

Open the Flint OS .xz file in etcher; there is no need to rename or extract the image. Select your USB drive and click Flash; etcher will prompt you once the USB drive is ready.
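
If you would rather flash from the command line, a rough equivalent is to decompress the image and write it straight to the drive. The file name and /dev/sdX device below are placeholders; double-check the device with lsblk first, because writing to the wrong one will destroy its data:

$ lsblk                                                # identify your USB drive (e.g., /dev/sdX)
$ xzcat flintos-for-pc.xz | sudo dd of=/dev/sdX bs=4M status=progress
$ sync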

To run Flint OS, first configure your computer to boot from USB media. Plug the USB drive into your PC, reboot, and you are ready to enjoy Flint OS on your PC.

Installing Flint OS as dual boot (beta) is an option, but configuring it requires some knowledge of a Linux environment. (We are working on a simpler GUI version, which will be available in the near future.) If setting up Flint OS as dual boot is your preference, see our dual-boot installation instructions.

Flint OS screenshots

Here are examples of what you can expect to see once Flint OS is up and running.

Contributing to Flint OS

We’ve spent some time cleaning up Flint OS’s Raspberry Pi (RPi) build system and codebase, both based on users’ requests and so we can create a public GitHub for our Raspberry Pi images.

In the past, when people asked how to contribute, we encouraged them to check out the Chromium project. By creating our public GitHub, we are hoping to make it easier to respond to issues and collaborate with the community.

Currently there are two branches: the x11 and the master branch.

  • The x11 branch is the legacy branch for all releases running on Chromium R56 and earlier. You are welcome to build newer versions of Chromium with this branch, but there are likely to be issues.
  • The master branch is our new Freon branch that works with R57 releases of Chromium and newer. We have successfully used this to boot R59 and R60 of Chromium. Please note this branch is currently quite unstable.

Please check out Flint OS and let us know what you think. We welcome contributions, suggestions, and changes from the community.

How to manage Linux containers with Ansible Container

I love containers and use the technology every day. Even so, containers aren’t perfect. Over the past couple of months, however, a set of projects has emerged that addresses some of the problems I’ve experienced.

I started using containers with Docker, since this project made the technology so popular. Aside from using the container engine, I learned how to use docker-compose and started managing my projects with it. My productivity skyrocketed! One command to run my project, no matter how complex it was. I was so happy.

After some time, I started noticing issues. The most apparent were related to the process of creating container images. The Docker tool uses a custom file format as a recipe to produce container images—Dockerfiles. This format is easy to learn, and after a short time you are ready to produce container images on your own. The problems arise once you want to master best practices or have complex scenarios in mind.

Let’s take a break and travel to a different land: the world of Ansible. You know it? It’s awesome, right? You don’t? Well, it’s time to learn something new. Ansible is a project that allows you to manage your infrastructure by writing tasks and executing them inside environments of your choice. No need to install and set up any services; everything can easily run from your laptop. Many people already embrace Ansible.

Imagine this scenario: You invested in Ansible, you wrote plenty of Ansible roles and playbooks that you use to manage your infrastructure, and you are thinking about investing in containers. What should you do? Start writing container image definitions via shell scripts and Dockerfiles? That doesn’t sound right.

Some people from the Ansible development team asked this question and realized that those same Ansible roles and playbooks that people wrote and use daily can also be used to produce container images. But not just that—they can be used to manage the complete lifecycle of containerized projects. From these ideas, the Ansible Container project was born. It utilizes existing Ansible roles that can be turned into container images and can even be used for the complete application lifecycle, from build to deploy in production.
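
To give a flavor of that lifecycle, here is a rough sketch of the basic ansible-container workflow, assuming the tool is installed (for example, via pip); the subcommand names reflect the project’s documentation and may vary between releases:

$ ansible-container init     # scaffold container.yml and the Ansible build files
$ ansible-container build    # run your roles/playbooks to produce container images
$ ansible-container run      # launch the containers described in container.yml locally
$ ansible-container deploy   # push images and generate artifacts for production deployment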

Let’s talk about the problems I mentioned regarding best practices in context of Dockerfiles. A word of warning: This is going to be very specific and technical. Here are the top three issues I have:

1. Shell scripts embedded in Dockerfiles.

When writing Dockerfiles, you can specify a script that will be interpreted via /bin/sh -c. It can be something like:

RUN dnf install -y nginx

where RUN is a Dockerfile instruction and the rest are its arguments (which are passed to shell). But imagine a more complex scenario:

RUN set -eux; \
    \
# this "case" statement is generated via "update.sh"
    %%ARCH-CASE%%; \
    \
    url="https://golang.org/dl/go${GOLANG_VERSION}.${goRelArch}.tar.gz"; \
    wget -O go.tgz "$url"; \
    echo "${goRelSha256} *go.tgz" | sha256sum -c -; \

This one is taken from the official golang image. It doesn’t look pretty, right?

2. You can’t parse Dockerfiles easily.

Dockerfiles are a new format without a formal specification. This is tricky if you need to process Dockerfiles in your infrastructure (e.g., automate the build process a bit). The only specification is the code that is part of dockerd. The problem is that you can’t use it as a library. The easiest solution is to write a parser on your own and hope for the best. Wouldn’t it be better to use some well-known markup language, such as YAML or JSON?

3. It’s hard to control.

If you are familiar with the internals of container images, you may know that every image is composed of layers. Once the container is created, the layers are stacked onto each other (like pancakes) using union filesystem technology. The problem is that you cannot explicitly control this layering—you can’t say, “here starts a new layer.” You are forced to change your Dockerfile in a way that may hurt readability. The bigger problem is that a set of best practices has to be followed to achieve optimal results—newcomers have a really hard time here.

Comparing Ansible language and Dockerfiles

The biggest shortcoming of Dockerfiles in comparison to Ansible is that Ansible, as a language, is much more powerful. For example, Dockerfiles have no direct concept of variables, whereas Ansible has a complete templating system (variables are just one of its features). Ansible contains a large number of modules that can be easily utilized, such as wait_for, which can be used for service readiness checks—e.g., wait until a service is ready before proceeding. With Dockerfiles, everything is a shell script. So if you need to figure out service readiness, it has to be done with shell (or installed separately). The other problem with shell scripts is that, with growing complexity, maintenance becomes a burden. Plenty of people have already figured this out and turned those shell scripts into Ansible.

If you are interested in this topic and would like to know more, please come to Open Source Summit in Prague to see my presentation on Monday, Oct. 23, at 4:20 p.m. in the Palmovka room.

Learn more in Tomas Tomecek’s talk, From Dockerfiles to Ansible Container, at Open Source Summit EU, which will be held October 23-26 in Prague.

The illustrated Open Organization is now available

In April, the Open Organization Ambassadors at Opensource.com released the second version of their Open Organization Definition, a document outlining the five key characteristics any organization must embrace if it wants to leverage the power of openness at scale.

Today, that definition is a book.

Richly illustrated and available immediately in full-color paperback and eBook formats, The Open Organization Definition makes an excellent primer on open principles and practices.

Download or purchase (completely at cost) your copies today, and share them with anyone in need of a plain-language introduction to transparency, inclusivity, adaptability, collaboration, and community.