How to manage Linux containers with Ansible Container

I love containers and use the technology every day. Even so, containers aren’t perfect. Over the past couple of months, however, a set of projects has emerged that addresses some of the problems I’ve experienced.

I started using containers with Docker, since this project made the technology so popular. Aside from using the container engine, I learned how to use docker-compose and started managing my projects with it. My productivity skyrocketed! One command to run my project, no matter how complex it was. I was so happy.

After some time, I started noticing issues. The most apparent were related to the process of creating container images. The Docker tool uses a custom file format as a recipe to produce container images—Dockerfiles. This format is easy to learn, and after a short time you are ready to produce container images on your own. The problems arise once you want to master best practices or have complex scenarios in mind.

Let’s take a break and travel to a different land: the world of Ansible. You know it? It’s awesome, right? You don’t? Well, it’s time to learn something new. Ansible is a project that allows you to manage your infrastructure by writing tasks and executing them inside environments of your choice. No need to install and set up any services; everything can easily run from your laptop. Many people already embrace Ansible.

Imagine this scenario: You invested in Ansible, you wrote plenty of Ansible roles and playbooks that you use to manage your infrastructure, and you are thinking about investing in containers. What should you do? Start writing container image definitions via shell scripts and Dockerfiles? That doesn’t sound right.

Some people from the Ansible development team asked this question and realized that those same Ansible roles and playbooks that people wrote and use daily can also be used to produce container images. But not just that—they can be used to manage the complete lifecycle of containerized projects. From these ideas, the Ansible Container project was born. It utilizes existing Ansible roles that can be turned into container images and can even be used for the complete application lifecycle, from build to deploy in production.

Let’s talk about the problems I mentioned regarding best practices in the context of Dockerfiles. A word of warning: This is going to be very specific and technical. Here are the top three issues I have:

1. Shell scripts embedded in Dockerfiles.

When writing Dockerfiles, you can specify a script that will be interpreted via /bin/sh -c. It can be something like:

RUN dnf install -y nginx

where RUN is a Dockerfile instruction and the rest are its arguments (which are passed to shell). But imagine a more complex scenario:

RUN set -eux; \
# this "case" statement is generated via ""
    %%ARCH-CASE%%; \
    url="${GOLANG_VERSION}.${goRelArch}.tar.gz"; \
    wget -O go.tgz "$url"; \
    echo "${goRelSha256} *go.tgz" | sha256sum -c -; \

This one is taken from the official golang image. It doesn’t look pretty, right?

2. You can’t parse Dockerfiles easily.

Dockerfiles are a new format without a formal specification. This is tricky if you need to process Dockerfiles in your infrastructure (e.g., automate the build process a bit). The only specification is the code that is part of dockerd. The problem is that you can’t use it as a library. The easiest solution is to write a parser on your own and hope for the best. Wouldn’t it be better to use some well-known markup language, such as YAML or JSON?
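This is one of the motivations behind Ansible Container, which describes a project in plain YAML that any standard YAML library can parse. A rough sketch of what a container.yml can look like (the service name and role here are purely illustrative):

```yaml
version: "2"
services:
  web:
    from: "fedora:26"        # base image to build on
    roles:
      - nginx-server         # an ordinary Ansible role, applied at build time
    ports:
      - "8080:8080"
    command: ["nginx", "-g", "daemon off;"]
```

Because this is just YAML, automating builds in your infrastructure means loading a well-specified format instead of writing a Dockerfile parser and hoping for the best.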

3. It’s hard to control.

If you are familiar with the internals of container images, you may know that every image is composed of layers. Once the container is created, the layers are stacked onto each other (like pancakes) using union filesystem technology. The problem is that you cannot explicitly control this layering—you can’t say, “here starts a new layer.” You are forced to change your Dockerfile in a way that may hurt readability. The bigger problem is that a set of best practices has to be followed to achieve optimal results—newcomers have a really hard time here.
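The common package-cache cleanup pattern illustrates the trade-off (package manager commands here are just examples): every instruction produces its own layer, and the only way to influence layering is to restructure the commands themselves.

```dockerfile
# Two instructions, two layers: files deleted in the second layer
# still occupy space in the first one, so the image doesn't shrink.
RUN dnf install -y nginx
RUN dnf clean all

# The usual workaround: collapse everything into one RUN instruction,
# trading readability for a single, smaller layer.
RUN dnf install -y nginx && dnf clean all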

Comparing Ansible language and Dockerfiles

The biggest shortcoming of Dockerfiles in comparison to Ansible is that Ansible, as a language, is much more powerful. For example, Dockerfiles have no direct concept of variables, whereas Ansible has a complete templating system (variables are just one of its features). Ansible contains a large number of modules that can be easily utilized, such as wait_for, which blocks until a service is ready before proceeding. With Dockerfiles, everything is a shell script, so if you need to check service readiness, it has to be done with shell (or installed separately). The other problem with shell scripts is that, as complexity grows, maintenance becomes a burden. Plenty of people have already figured this out and turned those shell scripts into Ansible.
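As a sketch of the difference, the dnf installation from the earlier example plus a readiness check look like this as Ansible tasks (the port number is illustrative):

```yaml
- name: Install nginx
  dnf:
    name: nginx
    state: present

- name: Wait until the service accepts connections
  wait_for:
    port: 80
    timeout: 60
```

Each task is declarative, idempotent, and readable—no embedded shell required.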

If you are interested in this topic and would like to know more, please come to Open Source Summit in Prague to see my presentation on Monday, Oct. 23, at 4:20 p.m. in the Palmovka room.

Learn more in Tomas Tomecek’s talk, From Dockerfiles to Ansible Container, at Open Source Summit EU, which will be held October 23-26 in Prague.

The illustrated Open Organization is now available

In April, the Open Organization Ambassadors released the second version of their Open Organization Definition, a document outlining the five key characteristics any organization must embrace if it wants to leverage the power of openness at scale.

Today, that definition is a book.

Richly illustrated and available immediately in full-color paperback and eBook formats, The Open Organization Definition makes an excellent primer on open principles and practices.

Download or purchase (completely at cost) your copies today, and share them with anyone in need of a plain-language introduction to transparency, inclusivity, adaptability, collaboration, and community.

We're giving away a Linux-ready laptop from ZaReason

For the first time ever, we’re partnering with ZaReason to give away an UltraLap 5330 laptop with Linux pre-installed!

Since 2007, ZaReason has assembled, shipped, and supported hardware specifically designed for Linux, and the UltraLap 5330 is no exception—the 3.6-lb laptop ships with the Linux distribution of your choice and boasts the following hardware specs:

  • 14″ FHD display
  • Intel i3-7100U processor
  • 4GB RAM
  • 120GB M.2 SSD

So, what are you waiting for? Enter our ZaReason Laptop Giveaway by Sunday, September 24 at 11:59 p.m. Eastern Time (3:59 a.m. UTC) for your chance to win.

Have a great idea for a future giveaway? Let us know about it in the comments below.

3 text editor alternatives to Emacs and Vim

Before you start reaching for those implements of mayhem, Emacs and Vim fans, understand that this article isn’t about putting the boot to your favorite editor. I’m a professed Emacs guy, but one who also likes Vim. A lot.

That said, I realize that Emacs and Vim aren’t for everyone. It might be that the silliness of the so-called Editor war has turned some people off. Or maybe they just want an editor that is less demanding and has a more modern sheen.

If you’re looking for an alternative to Emacs or Vim, keep reading. Here are three that might interest you.

Geany
Geany is an old favorite from the days when I computed on older hardware running lightweight Linux distributions. Geany started out as my LaTeX editor, but quickly became the app in which I did all of my text editing.

Although Geany is billed as a small and fast IDE (integrated development environment), it’s definitely not just a techie’s tool. Geany is small and it is fast, even on older hardware or a Chromebook running Linux. You can use Geany for everything from editing configuration files to maintaining a task list or journal, from writing an article or a book to doing some coding and scripting.

Plugins give Geany a bit of extra oomph. Those plugins expand the editor’s capabilities, letting you code or work with markup languages more effectively, manipulate text, and even check your spelling.

Atom
Atom is a new-ish kid in the text editing neighborhood. In the short time it’s been on the scene, though, Atom has gained a dedicated following.

What makes Atom attractive is that you can customize it. If you’re of a more technical bent, you can fiddle with the editor’s configuration. If you aren’t all that technical, Atom has a number of themes you can use to change how the editor looks.

And don’t discount Atom’s thousands of packages. They extend the editor in many different ways, enabling you to turn it into the text editing or development environment that’s right for you. Atom isn’t just for coders. It’s a very good text editor for writers, too.

Xed
Maybe Atom and Geany are a bit heavy for your tastes. Maybe you want a lighter editor, something that’s not bare bones but also doesn’t have features you’ll rarely (if ever) use. In that case, Xed might be what you’re looking for.

If Xed looks familiar, that’s because it’s a fork of the Pluma text editor for the MATE desktop environment. I’ve found that Xed is a bit faster and a bit more responsive than Pluma—your mileage may vary, though.

Although Xed isn’t as rich in features as other editors, it doesn’t do too badly. It has solid syntax highlighting, a better-than-average search and replace function, a spelling checker, and a tabbed interface for editing multiple files in a single window.

Other editors worth exploring

I’m not a KDE guy, but when I worked in that environment, KDevelop was my go-to editor for heavy-duty work. It’s a lot like Geany in that KDevelop is powerful and flexible without a lot of bulk.

Although I’ve never really felt the love, more than a couple of people I know swear by Brackets. It is powerful, and I have to admit its extensions look useful.

Billed as a “text editor for developers,” Notepadqq is an editor that’s reminiscent of Notepad++. It’s in the early stages of development, but Notepadqq does look promising.

Gedit and Kate are excellent for anyone whose text editing needs are simple. They’re definitely not bare bones—they pack enough features to do heavy text editing. Both Gedit and Kate balance that by being speedy and easy to use.

Do you have another favorite text editor that’s not Emacs or Vim? Feel free to share by leaving a comment.

Top 5: Your first programming language, running Windows apps on Linux, and more


Diversity and inclusion: Stop talking and do your homework

Open source undoubtedly has a diversity problem. In fact, tech has a diversity problem. But this isn’t news: women, people of color, parents, non-technical contributors, gay, lesbian, and transgender people, and other marginalized people and allies have shared stories of challenge for years.

At Mozilla, we believe that to influence positive change in diversity and inclusion (D&I) in our communities, and more broadly in open source, we need to learn, empathize, innovate, and take action. Open source is missing out on diverse perspectives and experiences that can drive change for a better world because we’re stuck in our ways—continually leaning on long-held assumptions about why we lose people. Counting who makes it through the gauntlet of tasks and exclusive cultural norms that leads to a first pull request can’t be enough. Neither can celebrating increased diversity on stage at technical conferences, especially when the audience remains homogeneous and abuse goes unchallenged.

This year, leading with our organizational strategy for D&I, we are investing in a D&I strategy for Mozilla’s communities informed by three months of research.

Following are early recommendations emerging from our research.

Build and sustain diverse communities

1. Provide organizational support to established identity groups

For reasons of safety, friendship, mentorship, advocacy, and empowerment, we found positive support for identity groups. Identity groups are sub-communities formed under a dimension of diversity, such as language, gender, or even a specific skillset. Such groups can act as springboards into and out of the greater community.

2. Develop inclusive community leadership models

To combat gatekeeping and the myth of meritocracy, community roles must be designed with greater accountability for health, inclusion, and especially for recognizing achievements of others as core functions.

3. Implement project-wide strategies for toxic behavior

The perceived risk of losing productivity and momentum often keeps projects from addressing toxic behavior, and that inaction itself puts community health at risk. HR industry findings reinforce our insights: although toxic individuals are often highly productive, the productivity they cost others far outweighs their perceived value. Strategies for combating this in open communities should include cross-project communication about such decisions to avoid alienating or losing contributors.

4. Integrate D&I standards and best practices into product lifecycles

Extending the notion of cross-project collaboration is the strong sense that building D&I standards into product lifecycles would benefit maintainers and community leaders, create reach, increase collaboration, and break down silos. An analogy is how web standards enable open communities to build on one another’s work across various open ecosystems.

5. Build inclusivity into events

Project and community events, although trending in positive directions by putting diversity on stage, struggle with homogeneous audiences, unclear processes for code-of-conduct reporting, and neglect of neurodiversity issues. A series of recommendations is coming based on this research, and MozFest has done a great job in the past year of building inclusiveness into programming.

Design models for accessible communication

6. Break the language barrier

Quantitative research showed only 21% of our respondents spoke English as a first language. Prioritizing offering all key communications in multiple languages, or providing transcripts that can be easily localized, is important. The intersection of language and other diversity issues raised almost impossible barriers (for example, a new mother whose first language isn’t English runs out of time translating a presentation made in English).

7. Generate diverse network capabilities

Contrary to the spirit of openness, many (if not a majority of) projects are working on similar D&I problems—with learning rarely shared between, or even within, communities and projects. New generations of community managers and leaders identify the same issues—and begin again. Later this year, we’ll propose an initiative to bring together learning, document ways to build communication, and collaborate towards innovation desperately needed to move the needle in D&I.

8. Experiment with accessible communication

In our interviews, we were surprised to learn that text-based interviews were preferred not only by those with limited bandwidth, but also those who identified as introverts, preferred anonymity, or have a non-English first language. The simple act of changing the way we talk to people can have wide-ranging impacts, so we should experiment often with different modes of communication.

9. Avoid exclusion by technical jargon

Technical jargon or lingo and overly complicated language were cited as critical challenges for getting involved in projects. Our data shows that technical confidence might be influencing that barrier, and men were nearly twice as likely to rate their technical confidence highly. These findings indicate that it’s critically important to limit jargon and to shift from technical posturing to empathy in participatory design. Rust is working on this.

Frameworks for incentive and consequence

10. Mobilize community participation guidelines

In recent conversations with other open project leaders, I’ve realized this is a pivotal moment for open projects that have adopted codes of conduct. We’re at a critical stage in making inclusive and open project governance effective and understood—making it real. Although enforcing our guidelines sometimes feels uncomfortable and even meets resistance, there are far more people who will celebrate and embrace empowerment, safety, and inclusion.

11. Standardize incentives and recognition

Although the people we interviewed want to feel valued, they also said it’s important that their accomplishments are publicly recognized in formats with real-world value. It’s worth noting that recognition in open communities tends to skew toward people most able to surface their accomplishments and technical contributions, which may exclude more reserved people.

12. Design inclusive systems that protect identity

Many systems do not adequately protect the information of people who register in community portals, and thus exclude or expose those who prefer to hide personal data for reasons of safety and privacy. The research showed a variety of non-obvious ways we ask for and store gender-identity information. D&I standards are a way forward in providing structure, predictability, and safety in systems, as well as mechanisms to track our progress.

More detailed findings on our research and path forward can be found on Mozilla’s Open Innovation Blog.

Learn more in Emma Irwin & Larissa Shapiro’s talk, “Time for Action—Innovating for D&I in Open Source Communities,” at Open Source Summit, Sept. 11-14 in Los Angeles.

GNOME at 20: Four reasons it's still my favorite GUI

The GNOME desktop turns 20 on August 15, and I’m so excited! Twenty years is a major milestone for any open source software project, especially a graphical desktop environment like GNOME that has to appeal to many different users. The 20th anniversary is definitely something to celebrate!

Why is GNOME such a big deal? For me, it’s because it represented a huge step forward in the Linux desktop. I installed my first Linux system in 1993. In the early days of Linux, the most prevalent graphical environment was TWM, the Tab Window Manager. The modern desktop didn’t exist yet.

But as Linux became more popular, we saw an explosion of different graphical environments, such as FVWM (1993) and FVWM95 (1995), and their derivatives, including Window Maker (1996), LessTif (1996), Enlightenment (1997), and Xfce (1997). Each filled a different niche. Nothing was integrated. Rather, FVWM and its clones simply managed windows. Toolkits were not standardized; each window might use a different one. As a result, early Linux graphical environments were a mishmash of various styles. Window Maker offered the most improvements, with a more uniform look and feel, but it still lacked the integration of a true desktop.

I was thrilled when the GNOME project released a true Linux desktop environment in 1999. GNOME 1 leveraged the GTK+ toolkit, the same object-oriented widget toolkit used to build the GIMP graphics program.

The first GNOME release looked very similar to Windows 98, the then-current version of Microsoft Windows, a wise decision that immediately provided a familiar graphical interface for new Linux users. GNOME 1 also offered desktop management and integration, not simply window management. Files and folders could be dropped on the desktop, providing easy access. This was a major advancement. In short order, many major Linux distributions included GNOME as the default desktop. Finally, Linux had a true desktop.

Over time, GNOME continued to evolve. In 2002, GNOME’s second major release, GNOME 2, cleaned up the user interface and tweaked the overall design. I found this quite invigorating. Instead of a single toolbar or panel at the bottom of the screen, GNOME 2 used two panels: one at the top of the screen, and one at the bottom. The top panel included the GNOME Applications menu, an Actions menu, and shortcuts to frequently used applications. The bottom panel provided icons of running programs and a representation of the other workspaces available on the system. Using the two panels provided a cleaner user interface, separating “things you can do” (top panel) and “things you are doing” (bottom panel).

I loved the GNOME 2 desktop, and it remained my favorite for years. Lots of other users felt the same, and GNOME 2 became a de facto standard for the Linux desktop. Successive versions made incremental improvements to GNOME’s user interface, but the general design concept of “things you can do” and “things you are doing” remained the same.

Despite the success and broad appeal of GNOME, the GNOME team realized that GNOME 2 had become difficult for many to use. The applications launch menu required too many clicks. Workspaces were difficult to use. Open windows were easy to lose under piles of other application windows. In 2008, the GNOME team embarked on a mission to update the GNOME interface. That effort produced GNOME 3.

GNOME 3 removed the traditional task bar in favor of an Overview mode that shows all running applications. Instead of using a launch menu, users start applications with an Activities hot button in the black bar at the top. Selecting the Activities menu brings up the Overview mode, showing both things you can do (with the favorite applications launcher to the left of the screen), and things you are doing (window representations of open applications).

Since its initial release, the GNOME 3 team has put in a lot of effort to improve it and make it easier to use. Today’s GNOME is modern yet familiar, striking that difficult balance between features and utility.

4 reasons GNOME is my favorite GUI

Here at GNOME’s 20th anniversary, I’d like to highlight four reasons why GNOME 3 is still my favorite desktop today:

1. It’s easy to get to work

GNOME 3 makes it easy to find my most frequently used applications in the favorite applications launcher. I can add my most-used applications here, so getting to work is just a click away. I can still find less frequently used applications in the Applications menu, or I can just start typing the name of the program to quickly search for the application.

2. Open windows are easy to find

Most of the time, I have two or three windows open at once, so it’s easy to use Alt+Tab to switch among them. But when I’m working on a project, I might have 10 or more windows open on my desktop. Even with a large number of open applications, it’s straightforward to find the one that I want. Move the mouse to the Activities hot corner, and the desktop switches to Overview mode with representations of all your open windows. Simply click on a window, and GNOME puts that application on top.

3. No wasted screen space

With other desktop environments, windows have a title bar with the name of the application, plus a few controls to minimize, maximize, and close the window. When all you need is a button to close the window, this is wasted screen space. GNOME 3 is designed to minimize the decorations around your windows and give you more screen space. GNOME even locates certain Action buttons in the window’s title bar, saving you even more space. It may not sound like much, but it all adds up when you have a lot of open windows.

4. The desktop of the future

Today, computers are more than a box with a monitor, keyboard, and mouse. We use smartphones and tablets alongside our desktop and laptop computers. In many cases, mobile computing (phones and tablets) displaces the traditional computer for many tasks. I think it’s clear that the mobile and desktop interfaces are merging. Before too long, we will use the same interface for both desktop and mobile. The key to making this work is a user interface that truly unifies the platforms and their unique use cases. We aren’t quite there yet, but GNOME 3 seems well positioned to fill this gap. I look forward to seeing this area develop and improve.

Testing in production: Yes, you can (and should)

I wrote a piece recently about why we are all distributed systems engineers now. To my surprise, lots of people objected to the observation that you have to test large distributed systems in production. 

It seems testing in production has gotten a bad rap—despite the fact that we all do it, all the time.

Maybe we associate it with cowboy engineering. We hear “testing in production” and assume this means no unit tests, functional tests, or continuous integration.

It’s good to try and catch things before production—we should do that too! But these things aren’t mutually exclusive. Here are some things to consider about testing in production.

1. You already do it

There are lots of things you already test in prod—because there’s no other way you can test them. Sure, you can spin up clones of various system components or entire systems, and capture real traffic to replay offline (the gold standard of systems testing). But many systems are too big, complex, and cost-prohibitive to clone.

Imagine trying to spin up a copy of Facebook for testing (with its multiple, globally distributed data centers). Imagine trying to spin up a copy of the national electrical grid. Even if you succeed, next you need the same number of clients, the same concurrency, same pipelining and usage patterns, etc. The unpredictability of user traffic makes it impossible to mock; even if you could perfectly reproduce yesterday’s traffic, you still can’t predict tomorrow’s.

It’s easy to get dragged down into bikeshedding about cloning environments and miss the real point: Only production is production, and every time you deploy there you are testing a unique combination of deploy code + software + environment. (Just ask anyone who’s ever confidently deployed to “Staging”, and then “Producktion” (sic).) 

2. So does everyone else

You can’t spin up a copy of Facebook. You can’t spin up a copy of the national power grid. Some things just aren’t amenable to cloning. And that’s fine. You simply can’t usefully mimic the qualities of size and chaos that tease out the long, thin tail of bugs or behaviors you care about.

And you shouldn’t try.

Facebook doesn’t try to spin up a copy of Facebook either. They invest in the tools that allow thousands and thousands of engineers to deploy safely to production every day and observe people interacting with the code they wrote. So does Netflix. So does everyone who is fortunate enough to outgrow the delusion that this is a tractable problem.

3. It’s probably fine

There’s a lot of value in testing… to a point. But if you can catch 80% to 90% of the bugs with 10% to 20% of the effort—and you can—the rest is more usefully poured into making your systems resilient, not preventing failure.

You should be practicing failure regularly. Ideally, everyone who has access to production knows how to do a deploy and rollback, or how to get to a known-good state fast. They should know what the system looks like when it’s operating normally, and how to debug basic problems. Knowing how to deal with failure should not be rare.

If you test in production, dealing with failure won’t be rare. I’m talking about things like, “Does this have a memory leak?” Maybe run it as a canary on five hosts overnight and see. “Does this functionality work as planned?” At some point, just ship it with a feature flag so only certain users can exercise it. Stuff like that. Practice shipping and fixing lots of small problems, instead of a few big and dramatic releases.
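The feature-flag approach can be as small as deterministic percentage bucketing. Here's a minimal sketch (the function and flag names are hypothetical, not from any particular library):

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministically place a user in a 0-99 bucket; enable the
    flag only for buckets below the rollout percentage."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct
```

Because the same user always lands in the same bucket, a 10% rollout stays stable across requests and can be widened gradually as confidence grows.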

4. You’ve got bigger problems

You’re shipping code every day and causing self-inflicted damage on the regular, and you can’t tell what it’s doing before, during, or after. It’s not the breaking stuff that’s the problem; you can break things safely. It’s the second part—not knowing what it’s doing—that’s not OK. This bigger problem can be addressed by:

  • Canarying. Automated canarying. Automated canarying in graduated levels with automatic promotion. Multiple canaries in simultaneous flight!
  • Making deploys more automated, robust, and fast (5 minutes on the upper bound is good)
  • Making rollbacks wicked fast and reliable
  • Using instrumentation, observability, and other early warning signs for staged canaries
  • Doing end-to-end health checks of key endpoints
  • Choosing good defaults, feature flags, developer tooling
  • Educating, sharing best practices, standardizing practices, making the easy/fast way the right way
  • Taking as much code and as many back-end components as possible out of the critical path
  • Limiting the blast radius of any given user or change
  • Exploring production, verifying that the expected changes are what actually happened. Knowing what normal looks like
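The canarying and health-check items above ultimately reduce to an automated comparison between the canary and the baseline fleet. A minimal sketch of such a promotion gate (the names and the tolerance factor are invented for illustration):

```python
def should_promote(canary_errors: int, canary_requests: int,
                   baseline_errors: int, baseline_requests: int,
                   tolerance: float = 1.5) -> bool:
    """Promote the canary only if its error rate is no worse than
    the baseline's error rate times a tolerance factor."""
    if canary_requests == 0:
        return False  # no traffic yet; nothing to judge
    canary_rate = canary_errors / canary_requests
    baseline_rate = baseline_errors / baseline_requests
    return canary_rate <= baseline_rate * tolerance
```

A real system would also compare latency percentiles and run the gate repeatedly at each graduated level, but the core decision is this small.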

These things are all a great use of your time, unlike staging and test environments, which are notoriously fragile and flaky and hard to keep in sync with prod.

Do those things

Release engineering is a systematically underinvested skillset at companies with more than 50 people. Your deploys are the cause of nearly all your failures because they inject chaos into your system. Having a staging copy of production is not going to do much to change that (and it adds a large category of problems colloquially known as “it looked just like production, so I just dropped that table…”).

Embrace failure. Chaos and failure are your friends. The issue is not whether you will fail, but when you will fail, and whether you will notice. The question is whether failure will annoy all of your users because the entire site is down, or only a few users until you fix it at your leisure the next morning.

Once upon a time, these were optional skills, even specialties. Not anymore. These are table stakes in your new career as a distributed systems engineer.

Lean into it. It’s probably fine.

3 new OpenStack guides

If your job involves doing development or system administration in the cloud, you know how hard it can be to keep up with the quick pace of innovation. OpenStack is just one example of a project with lots of moving parts and a ton of amazing features that operators would benefit from becoming more familiar with.

The good news is there are a lot of ways to keep up. You’ve got the official project documentation, of course, as well as the documentation and support from your distribution of choice. There are also plenty of printed books, certification and training programs, and lots of great community-created resources.

Here, we take a look at guides and tutorials published across blogs and other websites over the last month, and bring them to you in one handy blog post. Let’s jump in.

  • TripleO is one of the more popular ways to deploy OpenStack, by utilizing OpenStack’s own core functionality to help deploy the cloud. But if you work in an environment where certain security precautions are mandated, it’s important to ensure that the images you use to provision your OpenStack resources are sufficiently hardened. Learn how to create security hardened images for use with TripleO in this guide.

  • Kubernetes is another important tool for cloud operators, providing orchestration of containers and connecting them to the resources they need. But Kubernetes still needs the underlying cloud resources to deploy; here’s how to deploy Kubernetes on top of your OpenStack cloud using Ansible.

  • Finally this month, let’s look at a brand new website aptly named “Learn OpenStack.” Designed by an author documenting his own experience with OpenStack deployment, this guide looks at OpenStack and several of the tools involved in its setup and deployment, including Linux, Ansible, virtualization tools, and more. The site is a work in progress, and you can contribute corrections or enhancements through GitHub, here.

That’s it for this time around. Want more? Take a look at our complete set of OpenStack guides, howtos, and tutorials containing over three years of community-generated content you’ll love. And if you’ve found a great tutorial, guide, or how-to that we could share in our next update, be sure to let us know in the comments below.

Tips for finding partners open enough to work with you

Imagine I’m working on the front line of an open organization, and I’m committed to following principles like transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment to guide that front-line work. A huge problem comes up. My fellow front-line workers and I can’t handle it on our own, so we discuss the problem and decide that one of us has to take it to top management. I’m selected to do that.

When I do, I learn there is nothing we can do about the problem within the company. So management decides to let me present the issue to outside individuals who can help us.

In my search for the expertise required to fix the problem, I learn that no single individual has that expertise—and that we must find an outside, skilled partner (company) to help us address the issue.

All companies face this kind of problem and must form strategic business alliances from time to time. But it’s especially common for open organizations, which Jim Whitehurst (in The Open Organization) specifically defines as organizations that “engage participative communities both inside and out.” How, though, does this actually work?

Let’s take a look at how transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment affect two partner companies working together on a project.

Three stages of collaboration

Several years back, I formed an alliance between my company’s operation in China and an American company. My company is Japanese, and establishing a working relationship between American, Japanese, and Chinese partners was challenging (I’ll discuss this project in more detail later). Being successful meant I had to study various ways to form effective business alliances.

Basically, this is what I learned and put into practice in China. Developing strategic business alliances with a partner company involves three stages:

  • Stage 1 is the “Discovery” stage.
  • Stage 2 is the “Implementation” stage.
  • Stage 3 is the “Maintenance” stage.

Here’s what you can do in each stage to form lasting, effective, and open alliances with external partners.

Stage 1: Discovery
In this stage, you want to decide what you want to achieve with your proposed alliance. Simply put: What is your goal? The more precisely you can express this goal (and its sub-goals), the higher your chance of success.

Next, you want to evaluate organizations that can help you achieve those goals. What do you want them to do? What should you be responsible for (what don’t you want them to do)? How do you want them to behave toward you, especially regarding open organization principles? You should group each potential partner into one of three categories:

  • Those following these principles now
  • Those not following these principles now but who want to follow these principles and could with some support, explanation and training
  • Those that do not have the desire or character to be more open in their behavior

After evaluating candidates, you should approach your ideal partner with a proposal for how you can work together on the specific project and reach an agreement.

This stage is the most important of the three. If you can get it right, the entire project will unfold in a timely and cost-effective way. Quite often, companies do not spend enough time being open, inclusive, and collaborative to come to the best decision on what they want to achieve and what parameters are ideal for the project.

Stage 2: Implementation
In this stage, you’ll start working with your alliance business partner on the project. Before you do that, you have to get to know your partner—and you have to get them to know you and your team. Your new partner may subscribe to open organization principles in general, but in practice those principles might not guide every member of the team. You’ll therefore want to build a project team on both their side and yours, both of which adhere to the principles.

As I mentioned in a previous article, you will encounter people who will resist the project, and you’ll need to screen them out. More importantly, you must find those individuals that will be very committed to the project and have the expertise to ensure success.

When starting a new project in any organization, you’ll likely face at least three challenges:

  • Competition with ongoing business for scarce resources
  • Divided time, energy, and attention of shared staff
  • Disharmony in the partnership and building a new community

Competition with ongoing business for scarce resources

If the needs of the new joint project grow, your project leader may have to prioritize it over ongoing business (both yours and your partner’s!), and you both might have to request a higher budget. On the other hand, the leaders of the ongoing business might promote their own core operations to increase direct profits. So make a formal, documented allocation of funds for the project, along with an allocation of shared personnel’s time. Confirm a balance between short-term gains (mostly from the ongoing business) and long-term gains (mostly from the new joint project). If the use of resources for the new joint project impacts the ongoing business in any way, the project budget should cover the losses. Leaders should discuss all contingency plans before the concern arises. This is where transparency, adaptability, and collaboration become very important.

Divided time, energy and attention of shared staff

Your shared staff may consider the new joint project a distraction from their work. The shared staff from each company might be under short-term time pressure, for example. This is where front-line project commitment comes in. Shared staff might also:

  • Not consider the new joint project important
  • Have stronger loyalties and formal ties to the ongoing business operation
  • Feel the new joint project will damage the ongoing business operation (weaken brand and customer/supplier loyalties, cannibalize current business, etc.)

In these cases, you’ll need to make sure that all stakeholders understand and believe in the value of the new joint project, and that belief should be repeatedly promoted at the top, mid-management, and operational levels. All senior executives should be advocates for the new joint project when time, energy, and attention come under stress. Furthermore, the new joint project leaders must be flexible and adaptable when the ongoing business becomes overloaded, as it is the profit center of the organization that funds all projects. At the departmental level, the ongoing operation could charge the new joint project for excess work provided, and a special bonus could be given to shared staff who work over a certain amount. This is where adaptability, collaboration, accountability, and commitment become very important.

Disharmony in partnership and building a new community

Differences are important for adding value to a project, but they can cause rivalry, too. One common source of conflict is the perceived skill level of individuals. Conflict can also result if management heaps too much praise on one side (either the ongoing business or the new joint project), from differing opinions on performance assessments, or from disputes over compensation and decision authority. To avoid these types of conflict, make the division of responsibility as clear as possible. Reinforce common values for both groups. Staff the project team with more internal people (fewer outside hires) to support cooperation, as they have established relationships. Locate key staff near the dedicated team for face-to-face interaction. This is where transparency, inclusivity, collaboration, community, and commitment become exceedingly important.

Stage 3: Maintenance
After all the start-up concerns in the joint project have been addressed, and the project is showing signs of success, you should implement periodic evaluations. Is the team still behaving with a great deal of transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment? Here again, consider three answers to these questions (“yes,” “no,” “developmental”). For “yes” groups, leave everything as-is. For “no” groups, consider major personnel and structural changes. For “developmental” groups, consider training, role playing, and possibly closer supervision.

The above is just an overview of bringing open organization principles into strategic business alliance projects. Companies large and small need to form strategic alliances, so in the next part of this series I’ll present some actual case studies for analysis and review.