How to align your team around microservices

Microservices have been a focus across the open source world for several years now. Although open source technologies such as Docker, Kubernetes, Prometheus, and Swarm make it easier than ever for organizations to adopt microservice architectures, getting your team on the same page about microservices remains a difficult challenge.

For a profession that stresses the importance of naming things well, we’ve done ourselves a disservice with microservices. The problem is that there is nothing inherently “micro” about microservices. Some can be small, but size is relative, and there’s no standard measurement unit across organizations. A “small” service at one company might be 1 million lines of code but far fewer at another organization.

Some argue that microservices aren’t a new thing at all, but rather a rebranding of service-oriented architecture (SOA), whereas others view microservices as an implementation of SOA, similar to how Scrum is an implementation of Agile. (For more on the ambiguity of microservice definitions, check out the upcoming book Microservices for Startups.)

How do you get your team on the same page about microservices when no precise definition exists? The most important thing when talking about microservices is to ensure that your team is grounded in a common starting point. Ambiguous definitions don’t help. It would be like trying to put Agile into practice without context for what you are trying to achieve or an understanding of precise methodologies like Scrum.

Finding common ground

Knowing the dangers of too eagerly hopping on the microservices bandwagon, a team I worked on tried not to stall on definitions and instead focused on defining the benefits we were trying to achieve with microservices adoption. Following are the three areas we focused on and lessons learned from each piece of our microservices implementation.

1. Ability to ship software faster

Our main application was a large codebase with several small teams of developers trying to build features for different purposes. This meant that every change had to try to satisfy all the different groups. For example, a database change that served only one group had to be reviewed and accepted by other groups that didn’t have as much context. This was tedious and slowed us down.

Having different groups of developers sharing the same codebase also meant that the code continually grew more complex in undeliberate ways. As the codebase grew larger, no one on the team could own it and make sure all the parts were organized and fit together optimally. This made deployment a scary ordeal. A one-line change to our application required the whole codebase to be deployed in order to push out the change. Because deploying our large application was high risk, our quality assurance process grew and, as a result, we deployed less.

With a microservices architecture, we hoped to be able to divide our code up so different teams of developers could fully own parts. This would enable teams to innovate much more quickly without tedious design, review, and deployment processes. We also hoped that having smaller codebases worked on by fewer developers would make our codebases easier to develop, test, and keep organized.

2. Flexibility with technology choices

Our main application was a large codebase built with Ruby on Rails, a custom JavaScript framework, and complex build processes. Several parts of our application hit major performance issues that were difficult to fix and brought down the rest of the application. We saw an opportunity to rewrite these parts using a better approach, but our codebase was tightly intertwined, which made a rewrite feel extremely big and costly.

At the same time, one of our frontend teams wanted to pull away from our custom JavaScript framework and build product features with a newer framework like React. But mixing React into our existing application and complex frontend build process seemed expensive to configure.

As time went on, our teams grew frustrated with the feeling of being trapped in a codebase that was too big and expensive to fix or replace. By adopting microservices architecture, we hoped that keeping individual services smaller would mean that the cost to replace them with a better implementation would be much easier to manage. We also hoped to be able to pick the right tool for each job rather than being stuck with a one-size-fits-all approach. We’d have the flexibility to use multiple technologies across our different applications as we saw fit. If a team wanted to use something other than Ruby for better performance or switch from our custom JavaScript framework to React, they could do so.

3. Microservices are not a free lunch

In addition to outlining the benefits we hoped to achieve, we also made sure we were being realistic about the costs and challenges associated with building and managing microservices. Developing, hosting, and managing numerous services requires substantial overhead (and orchestrating a substantial number of different open source tools). A single, monolithic codebase running on a few processes can easily translate into a couple dozen processes across a handful of services, requiring load balancers, messaging layers, and clustering for resiliency. Managing all of this requires substantial skill and tooling.

Furthermore, microservices involve distributed systems that introduce a whole host of concerns such as network latency, fault tolerance, transactions, unreliable networks, and asynchronicity.

Setting your own microservices path

Once we defined the benefits and costs of microservices, we could talk about architecture without falling into counterproductive debates about who was doing microservices right or wrong. Instead of trying to find our way using others’ descriptions or examples of microservices, we instead focused on the core problems we were trying to solve.

  • How would having more services help us ship software faster in the next six to 12 months?
  • Were there strong technical advantages to using a specific tool for a portion of our system?
  • Did we foresee wanting to replace one of the systems with a more appropriate one down the line?
  • How did we want to structure our teams around services as we hired more people?
  • Was the productivity gain from having more services worth the foreseeable costs?

In summary, here are five recommended steps for aligning your team before jumping into microservices:

  1. Learn about microservices while agreeing that there is no “right” definition.
  2. Define a common set of goals and objectives to avoid counterproductive debates.
  3. Discuss and memorialize your anticipated benefits and costs of adopting microservices.
  4. Avoid too eagerly hopping on the microservices bandwagon; be open to creative ideas and spirited debate about how best to architect your systems.
  5. Stay rooted in the benefits and costs your team identified.

Focus on making sure the team has a concretely defined set of common goals to work from. It’s more valuable to discuss and define what you’d like to achieve with microservices than it is to try to pin down what a microservice actually is.

Flint OS, an operating system for a cloud-first world

Given the power of today’s browser platform technology and web frontend performance, it’s not surprising that most things we want to do with the internet can be accomplished through a single browser window. We are stepping into an era where installable apps will become history, where all our applications and services will live in the cloud.

The problem is that most operating systems weren’t designed for an internet-first world. Flint OS (soon to be renamed FydeOS) is a secure, fast, and productive operating system that was built to fill that gap. It’s based on the open source Chromium OS project that also powers Google Chromebooks. Chromium OS is based on the Linux kernel and uses Google’s Chromium browser as its principal user interface, therefore it primarily supports web applications.

Compared to older operating systems, Flint OS:

  • Boots up fast and never gets slow
  • Runs on full-fledged x86 laptops; on single-board computers (SBCs) such as the Raspberry Pi, the Asus Tinker Board, and boards with RK3288 and RK3399 chips; and more
  • Works with keyboard and mouse as well as touch and swipe
  • Has a simple architecture with sophisticated security to prevent viruses and malware
  • Avoids pausing work for updates due to its automated update mechanism
  • Is adding support for Android apps
  • Increases battery life for mobile devices by running applications in the cloud
  • Is familiar to users because it looks like Google Chrome

Downloading and installing Flint OS

Flint OS runs on a wide variety of hardware (Raspberry Pi, PC, Tinker Board, and VMware), and you can find information, instructions, and downloads for different versions on the Flint OS download page.

On PCs, Flint OS must be booted via a USB flash drive (8GB or larger). Make sure to back up your USB drive, since the flashing process will erase all data on it.

To flash Flint OS for PC to the USB drive, we recommend using a new, open source, multi-platform (Windows, macOS, and Linux) tool for USB drive and SD card burning called etcher. It is in beta; we use it to test our builds and absolutely love it.

Open the Flint OS .xz file in etcher; there is no need to rename or extract the image. Select your USB drive and click Flash; etcher will prompt you once the USB drive is ready.

To run Flint OS, first configure your computer to boot from USB media. Plug the USB drive into your PC, reboot, and you are ready to enjoy Flint OS on your PC.
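If you prefer the command line, the same flash can be done with standard tools. Here is a minimal sketch; the image filename and device path are hypothetical, so substitute your own, and run it with root privileges:

```shell
# flash_image: decompress an .xz image and write it straight to a target device.
# DESTROYS everything on the target device -- verify the device with lsblk first.
flash_image() {
    local image="$1"    # path to the downloaded .xz image (hypothetical name below)
    local target="$2"   # e.g. /dev/sdX for your USB drive
    xz -dc "$image" | dd of="$target" bs=4M conv=fsync
}

# Example usage (as root): flash_image flintos-pc.xz /dev/sdX
```

This mirrors what a flashing tool like etcher does for you, minus the safety checks, so double-check the target device before running it.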

Installing Flint OS as dual boot (beta) is an option, but configuring it requires some knowledge of a Linux environment. (We are working on a simpler GUI version, which will be available in the near future.) If setting up Flint OS as dual boot is your preference, see our dual-boot installation instructions.

Flint OS screenshots

Here are examples of what you can expect to see once Flint OS is up and running.

Contributing to Flint OS

We’ve spent some time cleaning up Flint OS’s Raspberry Pi (RPi) build system and codebase, both in response to users’ requests and so we can create a public GitHub repository for our Raspberry Pi images.

In the past, when people asked how to contribute, we encouraged them to check out the Chromium project. By creating our public GitHub, we are hoping to make it easier to respond to issues and collaborate with the community.

Currently there are two branches: the x11 and the master branch.

  • The x11 branch is the legacy branch for all releases running on Chromium R56 and earlier. You are welcome to build newer versions of Chromium with this branch, but there are likely to be issues.
  • The master branch is our new Freon branch that works with R57 releases of Chromium and newer. We have successfully used this to boot R59 and R60 of Chromium. Please note this branch is currently quite unstable.

Please check out Flint OS and let us know what you think. We welcome contributions, suggestions, and changes from the community.

How to manage Linux containers with Ansible Container

I love containers and use the technology every day. Even so, containers aren’t perfect. Over the past couple of months, however, a set of projects has emerged that addresses some of the problems I’ve experienced.

I started using containers with Docker, since this project made the technology so popular. Aside from using the container engine, I learned how to use docker-compose and started managing my projects with it. My productivity skyrocketed! One command to run my project, no matter how complex it was. I was so happy.

After some time, I started noticing issues. The most apparent were related to the process of creating container images. The Docker tool uses a custom file format as a recipe to produce container images—Dockerfiles. This format is easy to learn, and after a short time you are ready to produce container images on your own. The problems arise once you want to master best practices or have complex scenarios in mind.

Let’s take a break and travel to a different land: the world of Ansible. You know it? It’s awesome, right? You don’t? Well, it’s time to learn something new. Ansible is a project that allows you to manage your infrastructure by writing tasks and executing them inside environments of your choice. No need to install and set up any services; everything can easily run from your laptop. Many people already embrace Ansible.

Imagine this scenario: You invested in Ansible, you wrote plenty of Ansible roles and playbooks that you use to manage your infrastructure, and you are thinking about investing in containers. What should you do? Start writing container image definitions via shell scripts and Dockerfiles? That doesn’t sound right.

Some people from the Ansible development team asked this question and realized that those same Ansible roles and playbooks that people wrote and use daily can also be used to produce container images. But not just that—they can be used to manage the complete lifecycle of containerized projects. From these ideas, the Ansible Container project was born. It utilizes existing Ansible roles that can be turned into container images and can even be used for the complete application lifecycle, from build to deploy in production.

Let’s talk about the problems I mentioned regarding best practices in the context of Dockerfiles. A word of warning: This is going to be very specific and technical. Here are the top three issues I have:

1. Shell scripts embedded in Dockerfiles.

When writing Dockerfiles, you can specify a script that will be interpreted via /bin/sh -c. It can be something like:

RUN dnf install -y nginx

where RUN is a Dockerfile instruction and the rest are its arguments (which are passed to shell). But imagine a more complex scenario:

RUN set -eux; \
# this "case" statement is generated via ""
    %%ARCH-CASE%%; \
    url="${GOLANG_VERSION}.${goRelArch}.tar.gz"; \
    wget -O go.tgz "$url"; \
    echo "${goRelSha256} *go.tgz" | sha256sum -c -; \
This one is taken from the official golang image. It doesn’t look pretty, right?

2. You can’t parse Dockerfiles easily.

Dockerfiles are a new format without a formal specification. This is tricky if you need to process Dockerfiles in your infrastructure (e.g., automate the build process a bit). The only specification is the code that is part of dockerd. The problem is that you can’t use it as a library. The easiest solution is to write a parser on your own and hope for the best. Wouldn’t it be better to use some well-known markup language, such as YAML or JSON?
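To see why ad hoc parsing breaks down, consider a naive line-based attempt. The Dockerfile content below is a made-up example:

```shell
# Create a small example Dockerfile with a multi-line RUN instruction.
cat > Dockerfile.example <<'EOF'
FROM fedora:latest
RUN dnf install -y nginx
RUN set -eux; \
    echo "continuation lines defeat line-based parsing"
EOF

# A naive grep finds each RUN keyword, but silently drops the continuation line:
grep '^RUN' Dockerfile.example
```

A real parser has to handle backslash continuations, JSON-form instructions, escape directives, and comments, which is exactly the logic buried inside dockerd.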

3. It’s hard to control.

If you are familiar with the internals of container images, you may know that every image is composed of layers. Once the container is created, the layers are stacked onto each other (like pancakes) using union filesystem technology. The problem is that you cannot explicitly control this layering—you can’t say, “here starts a new layer.” You are forced to change your Dockerfile in a way that may hurt readability. The bigger problem is that a set of best practices has to be followed to achieve optimal results—newcomers have a really hard time here.
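For example, every RUN instruction produces its own layer, so cleanup commands must be chained into the same instruction to actually shrink the image. This fragment is a sketch of the trade-off, not taken from any real image:

```dockerfile
# Two instructions, two layers: the dnf cache removed by the second RUN
# still exists (and takes up space) in the first layer.
RUN dnf install -y nginx
RUN dnf clean all

# One instruction, one layer: smaller image, but the chain grows harder
# to read with every command you add.
RUN dnf install -y nginx && dnf clean all
```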

Comparing Ansible language and Dockerfiles

The biggest shortcoming of Dockerfiles in comparison to Ansible is that Ansible, as a language, is much more powerful. For example, Dockerfiles have no direct concept of variables, whereas Ansible has a complete templating system (variables are just one of its features). Ansible contains a large number of modules that can be easily utilized, such as wait_for, which can be used for service readiness checks—e.g., wait until a service is ready before proceeding. With Dockerfiles, everything is a shell script. So if you need to figure out service readiness, it has to be done with shell (or installed separately). The other problem with shell scripts is that, with growing complexity, maintenance becomes a burden. Plenty of people have already figured this out and turned those shell scripts into Ansible.
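As a sketch of what that looks like on the Ansible side (the host and port here are hypothetical), a readiness check is a single declarative task rather than a hand-rolled shell loop:

```yaml
- name: Wait for the application database before continuing
  wait_for:
    host: db.example.com   # hypothetical hostname
    port: 5432
    timeout: 120
```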

If you are interested in this topic and would like to know more, please come to Open Source Summit in Prague to see my presentation on Monday, Oct. 23, at 4:20 p.m. in Palmovka room.

Learn more in Tomas Tomecek’s talk, From Dockerfiles to Ansible Container, at Open Source Summit EU, which will be held October 23-26 in Prague.

The illustrated Open Organization is now available

In April, the Open Organization Ambassadors released the second version of their Open Organization Definition, a document outlining the five key characteristics any organization must embrace if it wants to leverage the power of openness at scale.

Today, that definition is a book.

Richly illustrated and available immediately in full-color paperback and eBook formats, The Open Organization Definition makes an excellent primer on open principles and practices.

Download or purchase (completely at cost) your copies today, and share them with anyone in need of a plain-language introduction to transparency, inclusivity, adaptability, collaboration, and community.