Diversity and inclusion: Stop talking and do your homework

Open source undoubtedly has a diversity problem. In fact, tech has a diversity problem. But this isn’t news: women, people of color, parents, non-technical contributors, gay, lesbian, transgender, and other marginalized people and allies have shared stories of challenge for years.

At Mozilla, we believe that to influence positive change in diversity and inclusion (D&I) in our communities, and more broadly in open source, we need to learn, empathize, innovate, and take action. Open source is missing out on diverse perspectives and experiences that can drive change for a better world because we’re stuck in our ways—continually leaning on long-held assumptions about why we lose people. Counting who makes it through the gauntlet of tasks and exclusive cultural norms that leads to a first pull request can’t be enough. Neither can celebrating increased diversity on stage at technical conferences, especially when the audience remains homogeneous and abuse goes unchallenged.

This year, leading with our organizational strategy for D&I, we are investing in a D&I strategy for Mozilla’s communities informed by three months of research.

Following are early recommendations emerging from our research.

Build and sustain diverse communities

1. Provide organizational support to established identity groups

For reasons of safety, friendship, mentorship, advocacy, and empowerment, we found positive support for identity groups. Identity groups are sub-communities formed under a dimension of diversity, such as language, gender, or even a specific skillset. Such groups can act as springboards into and out of the greater community.

2. Develop inclusive community leadership models

To combat gatekeeping and the myth of meritocracy, community roles must be designed with greater accountability for health, inclusion, and especially for recognizing achievements of others as core functions.

3. Implement project-wide strategies for toxic behavior

The perceived risk of losing productivity and momentum when addressing toxic behavior is shown to interfere with and endanger community health. Insights amplified by HR industry findings show that, although toxic individuals are often highly productive, their cost in lost productivity far outweighs their perceived value. Strategies for combating this in open communities should include cross-project communication about such decisions to avoid alienating or losing contributors.

4. Integrate D&I standards and best practices into product lifecycles

Extending the notion of cross-project collaboration is the strong sense that building D&I standards into product lifecycles would benefit maintainers and community leaders, create reach, increase collaboration, and break down silos. An analogy is how web standards enable open communities to build on one another’s work across various open ecosystems.

5. Build inclusivity into events

Project and community events, although trending in positive directions by putting diversity on stage, struggle with homogeneous audiences, unclear processes for code-of-conduct reporting, and neglect of neurodiversity issues. A series of recommendations is coming based on this research, and MozFest has done a great job in the past year of building inclusiveness into its programming.

Design models for accessible communication

6. Break the language barrier

Quantitative research showed that only 21% of our respondents spoke English as a first language. It is important to prioritize offering all key communications in multiple languages, or at least providing transcripts that can be easily localized. The intersection of language and other diversity issues raised almost impossible barriers (for example, a new mother whose first language isn’t English runs out of time translating a presentation made in English).

7. Generate diverse network capabilities

Contrary to the spirit of openness, many (if not a majority of) projects are working on similar D&I problems—with learning rarely shared between, or even within, communities and projects. New generations of community managers and leaders identify the same issues—and begin again. Later this year, we’ll propose an initiative to bring together learning, document ways to build communication, and collaborate towards innovation desperately needed to move the needle in D&I.

8. Experiment with accessible communication

In our interviews, we were surprised to learn that text-based interviews were preferred not only by those with limited bandwidth, but also those who identified as introverts, preferred anonymity, or have a non-English first language. The simple act of changing the way we talk to people can have wide-ranging impacts, so we should experiment often with different modes of communication.

9. Avoid exclusion by technical jargon

Technical jargon or lingo and overly complicated language were cited as critical challenges for getting involved in projects. Our data shows that technical confidence might be influencing that barrier, and men were nearly twice as likely to rate their technical confidence highly. These findings indicate that it’s critically important to limit jargon and to shift from technical posturing to empathy in participatory design. Rust is working on this.

Frameworks for incentive and consequence

10. Mobilize community participation guidelines

In recent conversations with other open project leaders, I’ve realized this is a pivotal moment for open projects that have adopted codes of conduct. We’re at a critical stage in making inclusive and open project governance effective and understood—making it real. Although enforcing our guidelines sometimes feels uncomfortable and even meets resistance, there are far more people who will celebrate and embrace empowerment, safety, and inclusion.

11. Standardize incentives and recognition

Although the people we interviewed want to feel valued, they also said it’s important that their accomplishments are publicly recognized in formats with real-world value. It’s worth noting that recognition in open communities tends to skew toward people most able to surface their accomplishments and technical contributions, which may exclude more reserved people.

12. Design inclusive systems that protect identity

Many systems do not adequately protect the information of people who register in community portals, and thus exclude or expose those who prefer to hide personal data for reasons of safety and privacy. The research showed a variety of non-obvious ways we ask for and store gender-identity information. D&I standards are a way forward in providing structure, predictability, and safety in systems, as well as mechanisms to track our progress.

More detailed findings on our research and path forward can be found on Mozilla’s Open Innovation Blog.

Learn more in Emma Irwin & Larissa Shapiro’s talk, “Time for Action—Innovating for D&I in Open Source Communities,” at Open Source Summit, Sept. 11-14 in Los Angeles.

How to Install Nginx, MariaDB and PHP (FEMP) Stack on FreeBSD

This tutorial will guide you through installing and configuring a FEMP stack on the latest FreeBSD 11.x release. FEMP is an acronym for the following collection of software: the FreeBSD 11.1 Unix-like operating system, the Nginx web server, the MariaDB relational database management system (a community fork of MySQL), and the server-side PHP programming language.

Requirements

  1. Installation of FreeBSD 11.x
  2. 10 Things to Do After FreeBSD Installation

Step 1: Install Nginx Web Server on FreeBSD

1. The first service we’ll install for our FEMP stack in FreeBSD is the web server, represented by Nginx.

Several pre-compiled Nginx packages are available in the FreeBSD 11.x Ports collection. To get a list of Nginx binaries from the Ports repositories, issue the following commands in your server terminal.

# ls /usr/ports/www/ | grep nginx
# pkg search -o nginx
Find Nginx Packages

2. In this particular configuration, we’ll install the main package version of Nginx by issuing the below command. The pkg package manager will ask whether you want to proceed with installing the nginx package. Answer yes (y) at the prompt to start the installation process.

# pkg install nginx
Install Nginx on FreeBSD

3. After the Nginx web server package has been installed on your system, execute the following commands to enable the daemon system-wide and start the service.

# sysrc nginx_enable="yes"
# service nginx start
Start and Enable Nginx on FreeBSD

4. Next, use the sockstat command to verify that Nginx’s network sockets are bound to port 80/TCP, by issuing the below command. The output of sockstat is piped through the grep utility to limit the results to lines containing the nginx string.

# sockstat -4 | grep nginx

5. Finally, open a browser on a desktop computer in your network and visit the Nginx default web page via HTTP. Type your machine’s FQDN, your domain name, or your server’s IP address in the browser’s URL field to request the Nginx web server default page. The message “Welcome to nginx!” should be displayed in your browser, as illustrated in the below screenshot.

http://yourdomain.com
http://your_server_IP
http://your_machine_FQDN
Verify Nginx on FreeBSD

6. The default webroot directory for Nginx web content is located at the absolute system path /usr/local/www/nginx/. In this location you should create, copy, or install web content files, such as .html or .php files, for your website.

To change this location, edit Nginx’s main configuration file and change the root directive to reflect your new webroot path.

# nano /usr/local/etc/nginx/nginx.conf

Here, search and update the following line to reflect your new webroot path:

root /path/to/new/webroot;
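
After saving the change, check the configuration for syntax errors and restart Nginx so the new webroot takes effect:

# nginx -t
# service nginx restart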

Step 2: Install PHP on FreeBSD

7. Unlike the Apache HTTP server, Nginx cannot natively process PHP code. Instead, Nginx passes PHP requests to a PHP interpreter, such as the php-fpm FastCGI daemon, which interprets and executes the code. The resulting output is then returned to Nginx, which assembles it into the requested HTML format and sends it on to the visitor’s web browser.

The FreeBSD 11.x Ports repositories offer multiple binary versions of the PHP programming language, such as the PHP 5.6, PHP 7.0, and PHP 7.1 releases. To display all available pre-compiled PHP versions in FreeBSD 11.x, run the below commands.

# pkg search -o php
# ls /usr/ports/lang/ | grep php

8. You can choose to install whichever version of PHP best suits the web application you run on your system. In this guide, however, we’ll install the latest PHP version.

To install the PHP 7.1 release and some important PHP modules required by diverse web applications, run the following command.

# pkg install php71 php71-mysqli php71-mcrypt php71-zlib php71-gd php71-json mod_php71 php71-mbstring php71-curl

9. After you’ve installed the PHP packages on your system, open the PHP-FPM configuration file for Nginx and adjust the user and group values to match the Nginx runtime user, which is www. First, make a backup of the file with the below command.

# cp /usr/local/etc/php-fpm.d/www.conf{,.backup}

Then, open the file and update the following lines as presented in the below sample.

user = www
group = www
Configure PHP-FPM on FreeBSD

10. Also, create a PHP configuration file for production use by issuing the below command. In this file you can make custom changes that will be applied to the PHP interpreter at runtime.

# cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini

For instance, change the date.timezone setting for the PHP interpreter to match your machine’s physical location, as shown in the below example. The PHP timezone list can be found here: http://php.net/manual/en/timezones.php.

# vi /usr/local/etc/php.ini

Add the following timezone (set the timezone for your country).

date.timezone = Europe/London

You can also adjust other PHP variables, such as the maximum size of uploaded files, which can be increased by modifying the below values:

upload_max_filesize = 10M
post_max_size = 10M

11. After you’ve made the custom settings for PHP, enable and start the PHP-FPM daemon to apply the new configuration by issuing the below commands.

# sysrc php_fpm_enable=yes
# service php-fpm start
Start and Enable PHP-FPM on FreeBSD

12. By default, the PHP-FPM daemon in FreeBSD binds to a local network socket on port 9000/TCP. To display the PHP-FPM network sockets, execute the following command.

# sockstat -4 -6| grep php-fpm

13. In order for Nginx to pass PHP scripts to the FastCGI gateway listening on the 127.0.0.1:9000 socket, open Nginx’s main configuration file and add the following block of code, as illustrated in the below sample.

# vi /usr/local/etc/nginx/nginx.conf

FastCGI code block for nginx:

location ~ \.php$ {
    root            /usr/local/www/nginx;
    fastcgi_pass    127.0.0.1:9000;
    fastcgi_index   index.php;
    fastcgi_param   SCRIPT_FILENAME $request_filename;
    include         fastcgi_params;
}
Configure FastCGI for Nginx on FreeBSD
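
As an optional hardening step (not part of the original configuration shown above), a common Nginx practice is to return a 404 for PHP files that don’t exist on disk instead of passing the request on to PHP-FPM. To do this, add the following directive at the top of the location block above:

try_files $uri =404;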

14. To view the current PHP information for your server, create an info.php file in the Nginx webroot path by issuing the following command.

# echo "<?php phpinfo(); ?>" | tee /usr/local/www/nginx/info.php

15. Then, test and restart Nginx daemon to apply the PHP FastCGI settings and visit the info.php page in a browser.

# nginx -t # Test nginx configuration file for syntax errors
# service nginx restart

Replace the IP address or domain name in the below links accordingly. The PHP info page should display information as illustrated in the below screenshot.

http://yourdomain.com/info.php
http://server_IP-or-FQDN/info.php
Check PHP Information in FreeBSD

Step 3: Install MariaDB on FreeBSD

16. The last component missing from your FEMP stack is the database. MariaDB/MySQL is one of the open source relational database management systems most often paired with the Nginx web server for deploying dynamic websites.

In fact, MariaDB/MySQL is one of the most widely used relational databases in the world. Searching through FreeBSD Ports, you can find multiple releases of MariaDB/MySQL.

In this guide, we’ll install the MariaDB database, a community fork of MySQL. To search for available versions of MariaDB, issue the following commands in the terminal.

# ls -al /usr/ports/databases/ | grep mariadb
# pkg search mariadb
Find MariaDB Packages

17. To install the latest version of the MariaDB database server, execute the following command. You should also install the PHP relational database driver module, which PHP scripts use to connect to MySQL.

# pkg install mariadb102-server php71-mysqli

18. After the database has been installed, enable the MySQL daemon and start the database service by running the following commands.

# sysrc mysql_enable="YES"
# service mysql-server start

19. Also, make sure you restart the PHP-FPM daemon in order to load the MySQL driver extension.

# service php-fpm restart

20. Next, secure the MariaDB database by launching the mysql_secure_installation script. Use the below sample output of the script to answer the questions. Basically, answer yes (y) to all of the questions to secure the database, and type a strong password for the MySQL root user.

# /usr/local/bin/mysql_secure_installation

MySQL Secure Installation Script Output

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!

21. To test the MariaDB database connection from the console, execute the below command.

# mysql -u root -p -e "show status like 'Connections'"

22. To further secure MariaDB, which by default listens for incoming network connections on the 0.0.0.0:3306/TCP socket, issue the below command to force the service to bind to the loopback interface and completely disallow remote access. Afterwards, restart the MySQL service to apply the new configuration.

# sysrc mysql_args="--bind-address=127.0.0.1"
# service mysql-server restart
Bind MariaDB to Loopback Address

Verify that the localhost binding was successfully applied by running the netstat command as shown in the below example.

# netstat -an -p tcp
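
To narrow the output to just the MariaDB socket, you can filter for the port number; the local address should read 127.0.0.1.3306 (FreeBSD’s netstat joins the address and port with a dot):

# netstat -an -p tcp | grep 3306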

That’s all! You’ve successfully installed Nginx web server, MariaDB relational database and PHP server-side programming language in FreeBSD. You can now start building dynamic web pages to serve web content to your visitors.
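
As a final end-to-end check, you can drop a small PHP script into the webroot that connects to MariaDB through the mysqli extension installed earlier. This is just a quick sketch: db-test.php is an arbitrary name, and you should substitute the root password you set during mysql_secure_installation.

# echo "<?php \$db = new mysqli('127.0.0.1', 'root', 'YOUR_ROOT_PASSWORD'); echo \$db->connect_error ? 'Connection failed' : 'Connected to MariaDB'; ?>" | tee /usr/local/www/nginx/db-test.php

Visit http://server_IP-or-FQDN/db-test.php in a browser and confirm the page reads “Connected to MariaDB”, then delete the file, as it exposes connection details.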

GNOME at 20: Four reasons it's still my favorite GUI

The GNOME desktop turns 20 on August 15, and I’m so excited! Twenty years is a major milestone for any open source software project, especially a graphical desktop environment like GNOME that has to appeal to many different users. The 20th anniversary is definitely something to celebrate!

Why is GNOME such a big deal? For me, it’s because it represented a huge step forward in the Linux desktop. I installed my first Linux system in 1993. In the early days of Linux, the most prevalent graphical environment was TWM, the Tab Window Manager. The modern desktop didn’t exist yet.

But as Linux became more popular, we saw an explosion of different graphical environments, such as FVWM (1993) and FVWM95 (1995), and their derivatives, including Window Maker (1996), LessTif (1996), Enlightenment (1997), and Xfce (1997). Each filled a different niche. Nothing was integrated. Rather, FVWM and its clones simply managed windows. Toolkits were not standardized; each window might use a different one. As a result, early Linux graphical environments were a mishmash of various styles. Window Maker offered the most improvements, with a more uniform look and feel, but it still lacked the integration of a true desktop.

I was thrilled when the GNOME project released a true Linux desktop environment in 1999. GNOME 1 leveraged the GTK+ toolkit, the same object-oriented widget toolkit used to build the GIMP graphics program.

The first GNOME release looked very similar to Windows 98, the then-current version of Microsoft Windows, a wise decision that immediately provided a familiar graphical interface for new Linux users. GNOME 1 also offered desktop management and integration, not simply window management. Files and folders could be dropped on the desktop, providing easy access. This was a major advancement. In short order, many major Linux distributions included GNOME as the default desktop. Finally, Linux had a true desktop.

Over time, GNOME continued to evolve. In 2002, GNOME’s second major release, GNOME 2, cleaned up the user interface and tweaked the overall design. I found this quite invigorating. Instead of a single toolbar or panel at the bottom of the screen, GNOME 2 used two panels: one at the top of the screen, and one at the bottom. The top panel included the GNOME Applications menu, an Actions menu, and shortcuts to frequently used applications. The bottom panel provided icons of running programs and a representation of the other workspaces available on the system. Using the two panels provided a cleaner user interface, separating “things you can do” (top panel) and “things you are doing” (bottom panel).

I loved the GNOME 2 desktop, and it remained my favorite for years. Lots of other users felt the same, and GNOME 2 became a de facto standard for the Linux desktop. Successive versions made incremental improvements to GNOME’s user interface, but the general design concept of “things you can do” and “things you are doing” remained the same.

Despite the success and broad appeal of GNOME, the GNOME team realized that GNOME 2 had become difficult for many to use. The applications launch menu required too many clicks. Workspaces were difficult to use. Open windows were easy to lose under piles of other application windows. In 2008, the GNOME team embarked on a mission to update the GNOME interface. That effort produced GNOME 3.

GNOME 3 removed the traditional task bar in favor of an Overview mode that shows all running applications. Instead of using a launch menu, users start applications with the Activities button in the black bar at the top. Selecting Activities brings up the Overview mode, showing both things you can do (with the favorite applications launcher to the left of the screen) and things you are doing (window representations of open applications).

Since its initial release, the GNOME 3 team has put in a lot of effort to improve it and make it easier to use. Today’s GNOME is modern yet familiar, striking that difficult balance between features and utility.

4 reasons GNOME is my favorite GUI

Here at GNOME’s 20th anniversary, I’d like to highlight four reasons why GNOME 3 is still my favorite desktop today:

1. It’s easy to get to work

GNOME 3 makes it easy to find my most frequently used applications in the favorite applications launcher. I can add my most-used applications here, so getting to work is just a click away. I can still find less frequently used applications in the Applications menu, or I can just start typing the name of the program to quickly search for the application.

2. Open windows are easy to find

Most of the time, I have two or three windows open at once, so it’s easy to use Alt+Tab to switch among them. But when I’m working on a project, I might have 10 or more windows open on my desktop. Even with a large number of open applications, it’s straightforward to find the one that I want. Move the mouse to the Activities hot corner, and the desktop switches to Overview mode with representations of all your open windows. Simply click on a window, and GNOME puts that application on top.

3. No wasted screen space

With other desktop environments, windows have a title bar with the name of the application, plus a few controls to minimize, maximize, and close the window. When all you need is a button to close the window, this is wasted screen space. GNOME 3 is designed to minimize the decorations around your windows and give you more screen space. GNOME even locates certain Action buttons in the window’s title bar, saving you even more space. It may not sound like much, but it all adds up when you have a lot of open windows.

4. The desktop of the future

Today, computers are more than a box with a monitor, keyboard, and mouse. We use smartphones and tablets alongside our desktop and laptop computers. In many cases, mobile computing (phones and tablets) displaces the traditional computer for many tasks. I think it’s clear that the mobile and desktop interfaces are merging. Before too long, we will use the same interface for both desktop and mobile. The key to making this work is a user interface that truly unifies the platforms and their unique use cases. We aren’t quite there yet, but GNOME 3 seems well positioned to fill this gap. I look forward to seeing this area develop and improve.

Testing in production: Yes, you can (and should)

I wrote a piece recently about why we are all distributed systems engineers now. To my surprise, lots of people objected to the observation that you have to test large distributed systems in production. 

It seems testing in production has gotten a bad rap—despite the fact that we all do it, all the time.

Maybe we associate it with cowboy engineering. We hear “testing in production” and assume this means no unit tests, functional tests, or continuous integration.

It’s good to try and catch things before production—we should do that too! But these things aren’t mutually exclusive. Here are some things to consider about testing in production.

1. You already do it

There are lots of things you already test in prod—because there’s no other way you can test them. Sure, you can spin up clones of various system components or entire systems, and capture real traffic to replay offline (the gold standard of systems testing). But many systems are too big, complex, and cost-prohibitive to clone.

Imagine trying to spin up a copy of Facebook for testing (with its multiple, globally distributed data centers). Imagine trying to spin up a copy of the national electrical grid. Even if you succeed, next you need the same number of clients, the same concurrency, same pipelining and usage patterns, etc. The unpredictability of user traffic makes it impossible to mock; even if you could perfectly reproduce yesterday’s traffic, you still can’t predict tomorrow’s.

It’s easy to get dragged down into bikeshedding about cloning environments and miss the real point: Only production is production, and every time you deploy there you are testing a unique combination of deploy code + software + environment. (Just ask anyone who’s ever confidently deployed to “Staging”, and then “Producktion” (sic).) 

2. So does everyone else

You can’t spin up a copy of Facebook. You can’t spin up a copy of the national power grid. Some things just aren’t amenable to cloning. And that’s fine. You simply can’t usefully mimic the qualities of size and chaos that tease out the long, thin tail of bugs or behaviors you care about.

And you shouldn’t try.

Facebook doesn’t try to spin up a copy of Facebook either. They invest in the tools that allow thousands and thousands of engineers to deploy safely to production every day and observe people interacting with the code they wrote. So does Netflix. So does everyone who is fortunate enough to outgrow the delusion that this is a tractable problem.

3. It’s probably fine

There’s a lot of value in testing… to a point. But if you can catch 80% to 90% of the bugs with 10% to 20% of the effort—and you can—the rest is more usefully poured into making your systems resilient, not preventing failure.

You should be practicing failure regularly. Ideally, everyone who has access to production knows how to do a deploy and rollback, or how to get to a known-good state fast. They should know what a normally operating system looks like and how to debug basic problems. Knowing how to deal with failure should not be rare.

If you test in production, dealing with failure won’t be rare. I’m talking about things like, “Does this have a memory leak?” Maybe run it as a canary on five hosts overnight and see. “Does this functionality work as planned?” At some point, just ship it with a feature flag so only certain users can exercise it. Stuff like that. Practice shipping and fixing lots of small problems, instead of a few big and dramatic releases.
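
To make the canary idea concrete, here is a rough sketch of what a scripted canary might look like. The deploy, rollback, and error_rate_exceeded commands are hypothetical stand-ins for whatever tooling your shop uses, not a real CLI:

deploy --version v1.2.3 --hosts canary-group     # ship to five hosts, not the world
sleep 28800                                      # let it soak overnight
if error_rate_exceeded --hosts canary-group --threshold 0.1; then
    rollback --hosts canary-group                # back to known-good, fast
else
    deploy --version v1.2.3 --hosts all          # promote the canary
fi

The point isn’t the specific commands; it’s that shipping, watching, and backing out become routine, automated motions.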

4. You’ve got bigger problems

You’re shipping code every day and causing self-inflicted damage on the regular, and you can’t tell what it’s doing before, during, or after. It’s not the breaking stuff that’s the problem; you can break things safely. It’s the second part—not knowing what it’s doing—that’s not OK. This bigger problem can be addressed by:

  • Canarying. Automated canarying. Automated canarying in graduated levels with automatic promotion. Multiple canaries in simultaneous flight!
  • Making deploys more automated, robust, and fast (5 minutes on the upper bound is good)
  • Making rollbacks wicked fast and reliable
  • Using instrumentation, observability, and other early warning signs for staged canaries
  • Doing end-to-end health checks of key endpoints
  • Choosing good defaults, feature flags, developer tooling
  • Educating, sharing best practices, standardizing practices, making the easy/fast way the right way
  • Taking as much code and as many back-end components as possible out of the critical path
  • Limiting the blast radius of any given user or change
  • Exploring production, verifying that the expected changes are what actually happened. Knowing what normal looks like

These things are all a great use of your time, unlike staging and test environments, which are notoriously fragile, flaky, and hard to keep in sync with prod.

Do those things

Release engineering is a systematically underinvested skillset at companies with more than 50 people. Your deploys are the cause of nearly all your failures because they inject chaos into your system. Having a staging copy of production is not going to do much to change that (and it adds a large category of problems colloquially known as “it looked just like production, so I just dropped that table…”).

Embrace failure. Chaos and failure are your friends. The issue is not if you will fail, it is when you will fail, and whether you will notice. The question is whether it will annoy all of your users because the entire site is down, or only a few until you fix it at your leisure the next morning.

Once upon a time, these were optional skills, even specialties. Not anymore. These are table stakes in your new career as a distributed systems engineer.

Lean into it. It’s probably fine.

3 new OpenStack guides

If your job involves doing development or system administration in the cloud, you know how hard it can be to keep up with the quick pace of innovation. OpenStack is just one example of a project with lots of moving parts and a ton of amazing features that operators would benefit from becoming more familiar with.

The good news is there are a lot of ways to keep up. You’ve got the official project documentation, of course, as well as the documentation and support from your distribution of choice. There are also plenty of printed books, certification and training programs, and lots of great community-created resources.

Here on Opensource.com, we look for recently published guides and tutorials from across blogs and other websites over the last month and bring them to you in one handy blog post. Let’s jump in.

  • TripleO is one of the more popular ways to deploy OpenStack, by utilizing OpenStack’s own core functionality to help deploy the cloud. But if you work in an environment where certain security precautions are mandated, it’s important to ensure that the images you use to provision your OpenStack resources are sufficiently hardened. Learn how to create security hardened images for use with TripleO in this guide.

  • Kubernetes is another important tool for cloud operators, providing orchestration of containers and connecting them to the resources they need. But Kubernetes still needs the underlying cloud resources to deploy; here’s how to deploy Kubernetes on top of your OpenStack cloud using Ansible.

  • Finally this month, let’s look at a brand new website aptly named “Learn OpenStack.” Created by an author documenting his own experience with OpenStack deployment, this guide looks at OpenStack and several of the tools involved in its setup and deployment, including Linux, Ansible, virtualization tools, and more. The guide is a work in progress, and you can contribute corrections or enhancements through GitHub.


That’s it for this time around. Want more? Take a look at our complete set of OpenStack guides, howtos, and tutorials containing over three years of community-generated content you’ll love. And if you’ve found a great tutorial, guide, or how-to that we could share in our next update, be sure to let us know in the comments below.

Tips for finding partners open enough to work with you

Imagine I’m working on the front line of an open organization, and I’m committed to following principles like transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment to guide that front-line work. A huge problem comes up. My fellow front-line workers and I can’t handle it on our own, so we discuss the problem and decide that one of us has to take it to top management. I’m selected to do that.

When I do, I learn there is nothing we can do about the problem within the company. So management decides to let me present the issue to outside individuals who can help us.

In my search for the expertise required to fix the problem, I learn that no single individual has that expertise—and that we must find an outside, skilled partner (company) to help us address the issue.

All companies face this kind of problem and must form strategic business alliances from time to time. But it’s especially common for open organizations, which Jim Whitehurst (in The Open Organization) specifically defines as organizations that “engage participative communities both inside and out.” How, though, does this actually work?

Let’s take a look at how transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment affect two partner companies working together on a project.

Three stages of collaboration

Several years back, I formed an alliance between my company’s operation in China and an American company. My company is Japanese, and establishing a working relationship among American, Japanese, and Chinese partners was challenging (I’ll discuss this project in more detail later). Being successful meant I had to study various ways to form effective business alliances.

Basically, this is what I learned and put into practice in China. Developing strategic business alliances with a partner company involves three stages:

  • Stage 1 is the “Discovery” stage.
  • Stage 2 is the “Implementation” stage.
  • Stage 3 is the “Maintenance” stage.

Here’s what you can do in each stage to form lasting, effective and open alliances with external partners.

Discovery

In this stage, you want to decide what you want to achieve with your proposed alliance. Simply put: What is your goal? The more detail with which you can express this goal (and its sub-goals), the higher your chance of success.

Next, you want to evaluate organizations that can support you in achieving those goals. What do you want them to do? What should you be responsible for (what don’t you want them to do)? How do you want them to behave toward you, especially regarding open organization principles? You should group each potential partner into three categories:

  • Those following these principles now
  • Those not following these principles now but who want to follow these principles and could with some support, explanation and training
  • Those that do not have the desire or character to be more open in their behavior

After evaluating candidates, you should approach your ideal partner with a proposal for how you can work together on the specific project and reach an agreement.

This stage is the most important of the three. If you can get it right, the entire project will unfold in a timely and cost effective way. Quite often, companies do not spend enough time being open, inclusive, and collaborative to come to the best decision on what they want to achieve and what parameters are ideal for the project.

Implementation

In this stage, you’ll start working with your alliance business partner on the project. Before you do that, you have to get to know your partner—and you have to get them to know you and your team. Your new partner may subscribe to open organization principles in general, but in practice those principles might not guide every member of the team. You’ll therefore want to build a project team on both their side and yours, both of which adhere to the principles.

As I mentioned in a previous article, you will encounter people who will resist the project, and you’ll need to screen them out. More importantly, you must find those individuals that will be very committed to the project and have the expertise to ensure success.

When starting a new project in any organization, you’ll likely face at least three challenges:

  • Competition with ongoing business for scarce resources
  • Divided time, energy, and attention of shared staff
  • Disharmony in the partnership and building a new community

Competition with ongoing business for scarce resources

If the needs of the new joint project grow, your project leader may have to prioritize your project over ongoing business (both yours and your partner’s!). You both might have to request a higher budget. On the other hand, the ongoing business leaders might promote their own ongoing, core business to increase direct profits. So make a formal, documented allocation of funds for the project and an allocation of shared personnel’s time. Confirm a balance between short-term (mostly ongoing-related) and long-term (mostly the new joint project) gains. If the use of resources for a new joint project impacts (in any way) ongoing business, the new joint project budget should cover the losses. Leaders should discuss all contingency plans in advance. This is where transparency, adaptability, and collaboration become very important.

Divided time, energy, and attention of shared staff

Your shared staff may consider the new joint project a distraction from their work. The shared staff from each company might be under short-term time pressure; they might not consider the new joint project important; they might have stronger loyalties and formal ties to the ongoing business operation; or they might feel the new joint project will damage the ongoing business (weakening the brand and customer/supplier loyalties, cannibalizing current business, etc.). This is where front-line project commitment comes in. You’ll need to make sure that all stakeholders understand and believe in the value of the new joint project, a message that should be repeatedly promoted from the top level, mid-management level, and operational level. All senior executives should be advocates for the new joint project when there is stress in time, energy, and attention. Furthermore, the new joint project leaders must be flexible and adaptable when the ongoing business becomes overloaded, as it is the profit center of the organization that funds all projects. At the departmental level, the ongoing operation could charge the new joint project for excess work provided, and a special bonus could be given to shared staff who work over a certain amount. This is where adaptability, collaboration, accountability, and commitment become very important.

Disharmony in partnership and building a new community

Differences are important for adding value to a project, but they can cause rivalry, too. One common source of conflict is the perceived skill level of individuals. Conflict can also arise if management heaps too much praise on one side (either the ongoing business or the new joint project), from differing opinions on performance assessments, or over compensation and decision authority. To avoid these types of conflict, make the division of responsibility as clear as possible. Reinforce common values for both groups. Add more internal staff (rather than outside hires) to the project team to support cooperation, as they have established relationships. Locate key staff near the dedicated team for face-to-face interaction. This is where transparency, inclusivity, collaboration, community, and commitment become exceedingly important.

Maintenance

After all the start-up concerns in the joint project have been addressed, and the project is showing signs of success, you should implement periodic evaluations. Is the team still behaving with a great deal of transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment? Here again, consider three answers to these questions (“yes,” “no,” “developmental”). For “yes” groups, leave everything as-is. For “no” groups, consider major personnel and structural changes. For “developmental” groups, consider training, role playing, and possibly closer supervision.

The above is just an overview of bringing open organization principles into strategic business alliance projects. Companies large and small need to form strategic alliances, so in the next part of this series I’ll present some actual case studies for analysis and review.

We're giving away FOUR LulzBot 3D printers

It’s that time of year again. As students and teachers head back to school, we’re celebrating by giving away four LulzBot 3D printers in our biggest giveaway ever!

One grand prize winner will receive a LulzBot TAZ 6, a top-of-the-line 3D printer that retails for US $2,500 and boasts an impressive 280x280x250mm heated print surface (nearly the size of a basketball). Three other lucky winners will each receive a LulzBot Mini, valued at US $1,250. With a print area of 152x152x158mm, it’s a great choice for beginners looking to get some 3D printing experience.

So, what are you waiting for? Enter by this Sunday, August 20 at 11:59 p.m. Eastern Time (ET) for a chance to win. Note: You don’t need to be a student or educator to enter. All professions are welcome!

If you’re a teacher or librarian, or you work in a museum or makerspace, you can integrate 3D printing into your curriculum by checking out the LulzBot education pricing program, which provides educators with discounts, helpful product bundles, extended warranties, and more.

Good luck and happy printing from all of us on the Opensource.com team!

How my two-week project turned into a full time open source startup

Over a year ago, I decided to build a software business focused on custom web application development, startups, and unique website projects. I had built a strong, talented team of people who were eager to help me start this company as their side gig. We called it Vampeo. We acquired a bunch of projects and started development while keeping our full-time day jobs.

Long-running projects

After four months of delivering some of our projects, I realized something significant: No project was ever completed. Once each project (e.g., a website) was delivered, every client asked for additional features, support, maintenance, updates, and even future projects.

These additional services introduced a new stream of recurring revenue for Vampeo. Clients would pay for servers, email addresses that we set up through G Suite, SSL renewals, website edits, etc.

Wasting my time with invoices

In November 2016, I started gathering all the invoices to email to our clients. I had a QuickBooks Online account to send invoices, but there was a much larger problem: Many of our services were offered as monthly or yearly subscriptions. For example, clients would pay Vampeo monthly for their servers and email, annually for domains and SSL, and hourly for on-demand feature development. It was extremely hard to send invoices to our customers at the end of each month or to keep track of who hadn’t paid their annual fees. I started falling behind on invoices, losing money, and losing track of the services we maintained.

A small project to automate my business

There was no easy solution to our problem. Our service offerings and billing were handled in separate applications and required lots of manual work. We needed a system with the following features:

  • Ability to automatically charge the client based on the services they have with us
  • Customer self-service portal for clients to log in to an online account, view, edit, request cancellation of their current services, and communicate with us for additional work
  • Internal inventory of our work to keep track of all our active and archived projects and provide total revenue, profit, and progress

Every commercial solution we found was too expensive without covering every use case, and every open source solution was outdated, with a very bad UI/UX. So we decided to spend our two-week New Year holiday developing a very simple web application that leverages Stripe’s API to fulfill all the above features. Boy, was I wrong about the two-week timeframe!
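
To give a flavor of the billing automation involved (an illustrative sketch, not ServiceBot’s actual code), a recurring monthly service maps naturally onto a Stripe subscription, which can be created with a single API call; the key and IDs below are placeholders:

curl https://api.stripe.com/v1/subscriptions \
  -u sk_test_YOUR_KEY: \
  -d customer=cus_CLIENT_ID \
  -d "items[0][plan]=plan_monthly_hosting"

Once a subscription exists, Stripe generates and charges an invoice every billing cycle, which is exactly the bookkeeping we had been doing by hand.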

Two weeks turned into months, and then… ServiceBot

The entire development revolved around our mindset of open sourcing our work. It required proper architecture, planning, and implementation. Our years of experience as automation architects and engineers got the best of us. We started adding more features, automating the billing using Stripe, creating a notification system, and much more. Our platform grew from a simple Node.js and Express app into one that uses Node.js, Express, React, Redux, and many more cutting-edge npm libraries.

The decision was clear: this wasn’t just a side project anymore; this was the real thing. We were a team of four developers and one graphic designer, and we spent every minute of our free time (outside our day jobs) developing this system. We called it ServiceBot, an open source gig-management system, a platform you can use to start selling and managing your gig in just minutes.

We released our v0.1 beta in May and showcased it at Collision 2017. The feedback was extremely positive; it seemed like every other service-based startup was facing similar issues with billing. After Collision, we spent the summer re-tuning our code and feature set.

It has now been eight months since we started building ServiceBot, and we are on version 0.5 beta. ServiceBot’s GitHub repository contains all of our hard work, and we want to share it and get feedback.

For this reason, we have decided to offer limited open-beta ServiceBot instances on our website. It will take just a couple of minutes to set up your ServiceBot website without any technical knowledge, installation, or lengthy configuration. All that’s needed is a Stripe account, as ServiceBot is tightly integrated with Stripe.

If you are interested in testing out our limited open-beta instances, you can sign up on our front page. 

We hope to grow ServiceBot into a complete automation system to help businesses cut costs by automating their daily operations and the lifecycle of their services.

This was originally posted on ServiceBot’s blog and is republished with permission.

Why containers are the best way to test software performance

Software performance and scalability are frequent topics when we talk about application development. A big reason for that is an application’s performance and scalability directly affect its success in the market. An application, no matter how good its user interface, won’t claim market share if its response time is sluggish.

This is why we spend so much time improving an application’s performance and scalability as its user base grows.

Where usual testing practices fail

Fortunately, we have a lot of tools to test software behavior under high-stress conditions. There are also tools to help identify the causes of performance and scalability issues, and other benchmark tools that can stress-test systems to provide a relative measure of a system’s stability under high load. However, we run into problems with performance and scale engineering when we try to use these tools to understand the performance of enterprise products. Generally, these products are not single applications; instead, they may consist of several different applications interacting with each other to provide a consistent and unified user experience.

We may not get any meaningful data about a product’s performance and scalability issues if we test only its individual components. The real numbers can be gathered only when we test the application in real-life scenarios, that is by subjecting the entire enterprise application to a real-life workload.

The question becomes: How can we achieve this real-life workload in a test scenario?

Containers to the rescue

The answer is containers. To explain how containers can help us understand a product’s performance and scalability, let’s look at Puppet, a software configuration management tool, as an example.

Puppet uses a client-server architecture, where there are one or more Puppet masters (servers), and the systems that are to be configured using Puppet run Puppet agents (clients).

To understand an application’s performance and scalability, we need to stress the Puppet masters with high load from the agents running on various systems.

To do this, we can install puppet-master on one system, then run multiple containers, each running our operating system, in which we install and run puppet-agent.

Next, we need to configure the Puppet agents to interact with the Puppet master to manage the system configuration. This stresses the server when it handles the request and stresses the client when it updates the software configuration.
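
As a rough sketch of what this looks like in practice (the image name is a hypothetical one you would build yourself, with puppet-agent baked in), a simple loop can stand up hundreds of agent containers pointed at the master:

for i in $(seq 1 200); do
    docker run -d --name agent-$i my-puppet-agent-image \
        puppet agent --server puppet-master.example.com --onetime --no-daemonize --verbose
done

Each container then checks in like a real node, exercising certificate signing, catalog compilation, and report handling on the master at whatever scale the loop allows.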

So, how did the containers help here? Couldn’t we have just simulated the load on the Puppet master through a script?

The answer is no. It might have simulated the load, but we would have gotten a highly unrealistic view of its performance.

The reason for this is quite simple: In real life, a user’s system runs a number of other processes besides puppet-agent or puppet-master. Each process consumes a certain amount of system resources and hence directly impacts Puppet’s performance by limiting the resources available to it.

This was a simple example, but the performance and scale engineering of enterprise applications can get really challenging when dealing with products that combine more than a handful of components. This is where containers shine.

Why containers and not something else?

A genuine question is: Why use containers and not virtual machines (VMs) or just bare-metal machines?

The logic behind running containers comes down to how many instances of a system we can launch from container images, as well as their cost versus the alternatives.

Although VMs provide a powerful mechanism, they also incur a lot of overhead on system resources, thereby limiting the number of systems that can be replicated on a single bare-metal server. By contrast, it is fairly easy to launch even 1,000 containers on the same system, depending on what kind of simulation you are trying to achieve, while keeping the resource overhead low.

With bare-metal servers, the performance and scale can be as realistic as needed, but a major problem is cost overhead. Will you buy 1,000 servers for performance and scale experiments?

That’s why containers, overall, provide an economical and scalable way of testing a product’s performance in a real-life scenario, while keeping resource overhead and costs in check.

Learn more in Saurabh Badhwar’s talk Testing Software Performance and Scalability Using Containers at Open Source Summit in Los Angeles.