FreeBSD 11.1 Installation Guide

FreeBSD is a free, powerful, robust, flexible, and stable open-source Unix-like operating system designed with security and speed in mind.

FreeBSD runs on a wide variety of modern CPU architectures and can power servers, desktops, and many embedded systems, most notably the Raspberry Pi SBC. As with Linux, FreeBSD offers a large collection of pre-compiled software (more than 20,000 packages) that can be easily installed from its package repositories, alongside the “Ports” collection for building software from source.

Requirements:

  1. Download FreeBSD 11.1 CD 1 ISO Image

This tutorial will guide you through installing the latest version of FreeBSD on an amd64 machine. The installation covers only the command-line version of the operating system, which makes it best suited for servers.

If you don’t require a custom installation, you can skip the installation process entirely and download and run a pre-built virtual machine image for VMware, VirtualBox, QEMU-KVM, or Hyper-V.

FreeBSD Installation Guide


1. First, get the latest FreeBSD CD 1 ISO image from the FreeBSD download page and burn it to a CD.

Place the CD in your machine's CD/DVD drive and reboot, entering the BIOS/UEFI settings or boot menu by pressing a special key (usually Esc, F2, F11, or F12) during the power-on sequence.

Instruct the BIOS/UEFI to boot from the appropriate CD/DVD drive, and the first screen of the installation process should be displayed.

Press [Enter] key to start the installation process.

FreeBSD Grub Menu

2. On the next screen select Install option and press [Enter] to continue.

FreeBSD Installer

3. Select your keyboard layout from the list and press [Enter] to move forward with the installation process.

FreeBSD Keyboard Layout

4. Next, type a descriptive hostname for your machine and press [Enter] to continue.

FreeBSD Machine Hostname

5. On the next screen, select the components you want to install by pressing the [space] key. For a production server it's recommended to choose only the lib32 compatibility libraries and the Ports tree.

Press the [enter] key after you've made your selections in order to continue.

FreeBSD Components

6. Next, choose how your hard disk will be partitioned. Select the Auto (UFS) Guided Disk Setup option and press the [enter] key to move to the next screen.

In case you have more than one disk and need a resilient file system, you should opt for the ZFS method. However, this guide covers only the UFS file system.

FreeBSD Partitioning

7. On the next screen, choose to perform the FreeBSD installation on the entire disk and press the [enter] key again to continue.

Be aware that this option is destructive and will completely wipe out all data on the disk. If the disk holds data, make a backup before continuing.

FreeBSD Installation Disk

8. Next, select your hard disk's partition layout. If your machine is UEFI-based and the installation is performed in UEFI mode (not CSM or Legacy mode), or if the disk is larger than 2TB, you must use a GPT partition table.

It's also recommended to disable the Secure Boot option in the UEFI menu if the installation is performed in UEFI mode. On older hardware, you're safe to partition the disk with the MBR scheme.

FreeBSD Partition Layout

9. On the next screen, review the automatically created partition table and navigate to Finish using the [tab] key to accept the changes.

Press [enter] to continue, and on the new pop-up screen select Commit to start the actual installation. The process can take from 10 to 30 minutes depending on your machine's resources and disk speed.

FreeBSD Partition Summary

FreeBSD Installation Changes

FreeBSD Installation Progress

FreeBSD Installation Continues

10. After the installer extracts and writes the operating system data to your machine drive, you will be prompted to specify the password for the root account.

Choose a strong password for the root account and press [enter] to continue. The password won't be echoed on the screen.

FreeBSD Root Password

11. In the next step, select the network interface you want to configure and press [enter] to set up the NIC.

FreeBSD Network Configuration

12. Choose the IPv4 protocol for your NIC, then configure the interface manually with a static IP address by declining DHCP, as illustrated in the screenshots below.

FreeBSD IPv4

FreeBSD DHCP

13. Next, enter your static IP configuration (IP address, netmask, and gateway) for this interface and press the [enter] key to continue.
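Behind the scenes, the installer writes these values to /etc/rc.conf. A minimal sketch of the resulting entries (the interface name em0 and the addresses are hypothetical examples) looks like this:

```shell
# /etc/rc.conf -- static IPv4 settings written by the installer (example values)
ifconfig_em0="inet 192.168.1.100 netmask 255.255.255.0"
defaultrouter="192.168.1.1"
```

After installation you can change these values by editing /etc/rc.conf and running `service netif restart`.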

FreeBSD IP Configuration

14. If the network equipment at your premises (switches, routers, servers, firewalls, etc.) is IPv4-only, there is no point in configuring IPv6 for this NIC. Choose No at the IPv6 prompt to continue.

FreeBSD IPv6 Disable

15. The final network configuration step is setting up the DNS resolver. Add your local domain name, if applicable, and the IP addresses of two DNS servers on your network used for resolving domain names, or use the addresses of public DNS caching servers. When you finish, press OK to save the changes and move on.
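These resolver settings are stored in /etc/resolv.conf. A sketch with hypothetical values (substitute your own domain and server addresses) would look like:

```
# /etc/resolv.conf -- example values only
search example.lan
nameserver 192.168.1.2
nameserver 8.8.8.8
```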

FreeBSD DNS Configuration

16. Next, from the time zone selector choose the physical region where your machine is located and hit OK.

FreeBSD Timezone

17. Select your country from the list and accept the abbreviation for your time zone.

FreeBSD Country Selection

18. Next, adjust the date and time settings for your machine if needed, or choose Skip if your system time is already correctly configured.

FreeBSD Time and Date Settings

FreeBSD Set Date

19. In the next step, use the [space] key to select the daemons to run system-wide: SSH, NTP, and powerd.

Select the powerd service if your machine's CPU supports adaptive power control. If FreeBSD is installed in a virtual machine, you can skip starting powerd at boot.

Also, if you won't be connecting to the machine remotely, you can skip the SSH service's automatic start-up at boot. When you finish, press OK to continue.
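The selections made here become *_enable knobs in /etc/rc.conf. A sketch of what the installer writes, assuming all three services were selected:

```shell
# /etc/rc.conf -- services enabled at boot
sshd_enable="YES"
ntpd_enable="YES"
powerd_enable="YES"
```

Any of these can be toggled later with sysrc, e.g. `sysrc sshd_enable=NO`.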

FreeBSD System Configuration

20. On the next screen, check the following options to minimally harden your system security: disable reading the kernel message buffer for unprivileged users, disable process debugging facilities for unprivileged users, clean the /tmp filesystem on startup, disable the syslogd network socket, and disable the Sendmail service if you're not planning to run a mail server.

FreeBSD System Hardening

21. Next, the installer will ask whether you would like to add a new system user. Choose yes and follow the prompts to enter the user's information. It's safe to accept the default settings by pressing the [enter] key.

You can select the Bourne shell (sh) or the improved C shell (tcsh) as the default shell for your user. When you finish, answer yes to the final question to create the user.

The prompt will then ask if you want to add another user. If not, answer no to continue to the final stage of the installation.

FreeBSD User Account

Create FreeBSD User Account

FreeBSD User Account Summary

22. Finally, a new screen presents a list of options for modifying the system configuration. If you have nothing else to change, select Exit to complete the installation, answer no when asked whether to open a shell in the new system, and hit Reboot to restart the machine.

FreeBSD Final Configuration

FreeBSD Manual Configuration

FreeBSD Installation Complete

23. Remove the CD from the machine's drive and press [enter] at the first prompt to start the system and log in to the console.

FreeBSD Login Shell

Congratulations! You've just installed the FreeBSD operating system on your machine. In the next tutorial we'll cover some initial FreeBSD configurations and how to manage the system from the command line.

How to Rename File While Downloading with Wget in Linux

Wget utility is a popular and feature-rich command-line based file downloader for Unix-like operating systems and Windows OS. It supports non-interactive downloading of files over protocols such as HTTP, HTTPS, and FTP.

It's designed to work reliably over slow or unstable network connections. Importantly, if the network is disrupted, it lets you resume a partially-downloaded file simply by re-running the same command.

Suggested Read: 5 Linux Command Line Based Tools for Downloading Files

In this short article, we will explain how to rename a file while downloading with wget command on the Linux terminal.

By default, wget downloads a file and saves it in the current directory under the original name from the URL. What if the original file name is long, like the one shown in the screenshot below?

$ wget -c https://gist.github.com/chales/11359952/archive/25f48802442b7986070036d214a2a37b8486282d.zip
Wget Download File
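wget's default output name is simply the last component of the URL's path (assuming no query string or redirect is involved). A quick shell sketch shows what it would pick for the URL above:

```shell
# Derive the default filename wget would use: everything after the last '/'
url="https://gist.github.com/chales/11359952/archive/25f48802442b7986070036d214a2a37b8486282d.zip"
name="${url##*/}"   # POSIX parameter expansion: strip the longest prefix ending in '/'
echo "$name"
```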


Taking the example above, to save the downloaded file under a different name, use the -O (--output-document) flag together with -c (--continue), which resumes a partially-downloaded file as explained at the start.

$ wget -c https://gist.github.com/chales/11359952/archive/25f48802442b7986070036d214a2a37b8486282d.zip -O db-connection-test.zip
Wget Rename Download File

Note that the -O flag effectively performs shell-style redirection: rather than simply renaming the file from the URL, it tells wget to write everything it downloads to the single file you specify. In practice, this is what happens:

$ wget -cO - https://gist.github.com/chales/11359952/archive/25f48802442b7986070036d214a2a37b8486282d.zip > db-connection-test.zip
$ ls
Wget – Rename File While Downloading

The file is written to standard output and then redirected by the shell to the specified file as shown in the screen shot above.

If you want to download videos from YouTube and other sites from the command line, you can install and use youtube-dl in Linux.

That's all for now! In this article, we showed how to rename a downloaded file with the wget command. To send us any queries or add your thoughts to this article, use the comment form below.

How to Boot into Single User Mode in CentOS/RHEL 7

Single User Mode (sometimes known as Maintenance Mode) is a mode in which Unix-like operating systems such as Linux operate where only a handful of services are started at boot, providing the basic functionality a single superuser needs to perform certain critical tasks.

It is runlevel 1 under SysV init, and runlevel1.target or rescue.target in systemd. Importantly, the services, if any, started at this runlevel/target vary by distribution. It's generally useful for maintenance or emergency repairs (since it doesn't offer any network services at all), when a computer is not capable of normal operation.

Low-level repairs include running fsck on damaged disk partitions, resetting the root password if you have lost it, and fixing a “failed to mount /etc/fstab” error – to mention only the most critical. Single user mode also helps when the system fails to boot normally.

In this tutorial, we will describe how to boot into single user mode on CentOS 7. Note that practically this will help you enter the emergency mode and access an emergency shell.

How to Boot into Single User Mode


1. First, restart your CentOS 7 machine. Once the boot process starts, wait for the GRUB boot menu to appear, as shown in the screenshot below.

CentOS 7 Grub Menu

2. Next, select the kernel entry from the GRUB menu and press the e key to edit its boot options. Use the Down arrow key to find the kernel line (it starts with “linux16”), then change the argument ro to rw init=/sysroot/bin/sh as shown in the screenshot below.
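The effect of that edit is to mount the root filesystem read-write and run a shell instead of the normal init. A small sed sketch shows the transformation (the kernel version and root device here are hypothetical; yours will differ):

```shell
# A hypothetical "linux16" line as shown in the GRUB editor:
line='linux16 /vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/cl-root ro rhgb quiet'

# Replace the read-only flag with a read-write mount that drops to a shell:
edited="$(printf '%s\n' "$line" | sed 's| ro | rw init=/sysroot/bin/sh |')"
echo "$edited"
```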

Edit Grub Boot Options

3. Once you have finished the task in the previous step, press Ctrl-X or F10 to boot into single user mode (access an emergency shell).

CentOS 7 Emergency Shell

4. Now change root into the mounted system (/sysroot) using the following command.

# chroot /sysroot/

At this point, you can perform all the necessary low-level system maintenance tasks. Once you are done, reboot the system using this command.

# reboot -f

You may also like to read the following articles.

  1. How to Hack Your Own Linux System
  2. Linux Directory Structure and Important Files Paths Explained
  3. How to Create and Run New Service Units in Systemd Using Shell Script
  4. How to Manage ‘Systemd’ Services and Units Using ‘Systemctl’ in Linux

Lastly, single user mode (maintenance mode) is not password-protected by default, so anyone with malicious intent and physical access to your computer can enter it and “destroy” your system.

Next, we will show you how to password-protect single user mode on CentOS 7. Until then, stay connected to Tecmint.com.


GNOME at 20: Four reasons it's still my favorite GUI

The GNOME desktop turns 20 on August 15, and I’m so excited! Twenty years is a major milestone for any open source software project, especially a graphical desktop environment like GNOME that has to appeal to many different users. The 20th anniversary is definitely something to celebrate!

Why is GNOME such a big deal? For me, it’s because it represented a huge step forward in the Linux desktop. I installed my first Linux system in 1993. In the early days of Linux, the most prevalent graphical environment was TWM, the Tab Window Manager. The modern desktop didn’t exist yet.

But as Linux became more popular, we saw an explosion of different graphical environments, such as FVWM (1993) and FVWM95 (1995), and their derivatives, including Window Maker (1996), LessTif (1996), Enlightenment (1997), and Xfce (1997). Each filled a different niche. Nothing was integrated. Rather, FVWM and its clones simply managed windows. Toolkits were not standardized; each window might use a different one. As a result, early Linux graphical environments were a mishmash of various styles. Window Maker offered the most improvements, with a more uniform look and feel, but it still lacked the integration of a true desktop.

I was thrilled when the GNOME project released a true Linux desktop environment in 1999. GNOME 1 leveraged the GTK+ toolkit, the same object-oriented widget toolkit used to build the GIMP graphics program.

The first GNOME release looked very similar to Windows 98, the then-current version of Microsoft Windows, a wise decision that immediately provided a familiar graphical interface for new Linux users. GNOME 1 also offered desktop management and integration, not simply window management. Files and folders could be dropped on the desktop, providing easy access. This was a major advancement. In short order, many major Linux distributions included GNOME as the default desktop. Finally, Linux had a true desktop.

Over time, GNOME continued to evolve. In 2002, GNOME’s second major release, GNOME 2, cleaned up the user interface and tweaked the overall design. I found this quite invigorating. Instead of a single toolbar or panel at the bottom of the screen, GNOME 2 used two panels: one at the top of the screen, and one at the bottom. The top panel included the GNOME Applications menu, an Actions menu, and shortcuts to frequently used applications. The bottom panel provided icons of running programs and a representation of the other workspaces available on the system. Using the two panels provided a cleaner user interface, separating “things you can do” (top panel) and “things you are doing” (bottom panel).

I loved the GNOME 2 desktop, and it remained my favorite for years. Lots of other users felt the same, and GNOME 2 became a de facto standard for the Linux desktop. Successive versions made incremental improvements to GNOME’s user interface, but the general design concept of “things you can do” and “things you are doing” remained the same.

Despite the success and broad appeal of GNOME, the GNOME team realized that GNOME 2 had become difficult for many to use. The applications launch menu required too many clicks. Workspaces were difficult to use. Open windows were easy to lose under piles of other application windows. In 2008, the GNOME team embarked on a mission to update the GNOME interface. That effort produced GNOME 3.

GNOME 3 removed the traditional task bar in favor of an Overview mode that shows all running applications. Instead of using a launch menu, users start applications with an Activities hot button in the black bar at the top. Selecting the Activities menu brings up the Overview mode, showing both things you can do (with the favorite applications launcher to the left of the screen), and things you are doing (window representations of open applications).

Since its initial release, the GNOME 3 team has put in a lot of effort to improve it and make it easier to use. Today’s GNOME is modern yet familiar, striking that difficult balance between features and utility.

4 reasons GNOME is my favorite GUI

Here at GNOME’s 20th anniversary, I’d like to highlight four reasons why GNOME 3 is still my favorite desktop today:

1. It’s easy to get to work

GNOME 3 makes it easy to find my most frequently used applications in the favorite applications launcher. I can add my most-used applications here, so getting to work is just a click away. I can still find less frequently used applications in the Applications menu, or I can just start typing the name of the program to quickly search for the application.

2. Open windows are easy to find

Most of the time, I have two or three windows open at once, so it’s easy to use Alt+Tab to switch among them. But when I’m working on a project, I might have 10 or more windows open on my desktop. Even with a large number of open applications, it’s straightforward to find the one that I want. Move the mouse to the Activities hot corner, and the desktop switches to Overview mode with representations of all your open windows. Simply click on a window, and GNOME puts that application on top.

3. No wasted screen space

With other desktop environments, windows have a title bar with the name of the application, plus a few controls to minimize, maximize, and close the window. When all you need is a button to close the window, this is wasted screen space. GNOME 3 is designed to minimize the decorations around your windows and give you more screen space. GNOME even locates certain Action buttons in the window’s title bar, saving you even more space. It may not sound like much, but it all adds up when you have a lot of open windows.

4. The desktop of the future

Today, computers are more than a box with a monitor, keyboard, and mouse. We use smartphones and tablets alongside our desktop and laptop computers. In many cases, mobile computing (phones and tablets) displaces the traditional computer for many tasks. I think it’s clear that the mobile and desktop interfaces are merging. Before too long, we will use the same interface for both desktop and mobile. The key to making this work is a user interface that truly unifies the platforms and their unique use cases. We aren’t quite there yet, but GNOME 3 seems well positioned to fill this gap. I look forward to seeing this area develop and improve.

Testing in production: Yes, you can (and should)

I wrote a piece recently about why we are all distributed systems engineers now. To my surprise, lots of people objected to the observation that you have to test large distributed systems in production. 

It seems testing in production has gotten a bad rap—despite the fact that we all do it, all the time.

Maybe we associate it with cowboy engineering. We hear “testing in production” and assume this means no unit tests, functional tests, or continuous integration.

It’s good to try and catch things before production—we should do that too! But these things aren’t mutually exclusive. Here are some things to consider about testing in production.

1. You already do it

There are lots of things you already test in prod—because there’s no other way you can test them. Sure, you can spin up clones of various system components or entire systems, and capture real traffic to replay offline (the gold standard of systems testing). But many systems are too big, complex, and cost-prohibitive to clone.

Imagine trying to spin up a copy of Facebook for testing (with its multiple, globally distributed data centers). Imagine trying to spin up a copy of the national electrical grid. Even if you succeed, next you need the same number of clients, the same concurrency, same pipelining and usage patterns, etc. The unpredictability of user traffic makes it impossible to mock; even if you could perfectly reproduce yesterday’s traffic, you still can’t predict tomorrow’s.

It’s easy to get dragged down into bikeshedding about cloning environments and miss the real point: Only production is production, and every time you deploy there you are testing a unique combination of deploy code + software + environment. (Just ask anyone who’s ever confidently deployed to “Staging”, and then “Producktion” (sic).) 

2. So does everyone else

You can’t spin up a copy of Facebook. You can’t spin up a copy of the national power grid. Some things just aren’t amenable to cloning. And that’s fine. You simply can’t usefully mimic the qualities of size and chaos that tease out the long, thin tail of bugs or behaviors you care about.

And you shouldn’t try.

Facebook doesn’t try to spin up a copy of Facebook either. They invest in the tools that allow thousands and thousands of engineers to deploy safely to production every day and observe people interacting with the code they wrote. So does Netflix. So does everyone who is fortunate enough to outgrow the delusion that this is a tractable problem.

3. It’s probably fine

There’s a lot of value in testing… to a point. But if you can catch 80% to 90% of the bugs with 10% to 20% of the effort—and you can—the rest is more usefully poured into making your systems resilient, not preventing failure.

You should be practicing failure regularly. Ideally, everyone who has access to production knows how to do a deploy and rollback, or how to get to a known-good state fast. They should know what the system looks like when it's operating normally, and how to debug basic problems. Knowing how to deal with failure should not be rare.

If you test in production, dealing with failure won’t be rare. I’m talking about things like, “Does this have a memory leak?” Maybe run it as a canary on five hosts overnight and see. “Does this functionality work as planned?” At some point, just ship it with a feature flag so only certain users can exercise it. Stuff like that. Practice shipping and fixing lots of small problems, instead of a few big and dramatic releases.
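A feature flag doesn't have to be fancy; conceptually it is just a runtime switch consulted before the new code path. Here is a minimal sketch (the flag name and code paths are invented for illustration):

```shell
#!/bin/sh
# Hypothetical feature flag: serve the new code path only when the
# NEW_SEARCH flag is explicitly "on"; everyone else gets the stable path.
NEW_SEARCH="${NEW_SEARCH:-off}"

handle_request() {
    if [ "$NEW_SEARCH" = "on" ]; then
        echo "new-search"       # path under test, exposed to a subset of users
    else
        echo "stable-search"    # known-good path for everyone else
    fi
}

handle_request
```

In a real system the flag would be read per-request from a config service and scoped to specific users or hosts, so a misbehaving path can be turned off without a deploy.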

4. You’ve got bigger problems

You’re shipping code every day and causing self-inflicted damage on the regular, and you can’t tell what it’s doing before, during, or after. It’s not the breaking stuff that’s the problem; you can break things safely. It’s the second part—not knowing what it’s doing—that’s not OK. This bigger problem can be addressed by:

  • Canarying. Automated canarying. Automated canarying in graduated levels with automatic promotion. Multiple canaries in simultaneous flight!
  • Making deploys more automated, robust, and fast (5 minutes on the upper bound is good)
  • Making rollbacks wicked fast and reliable
  • Using instrumentation, observability, and other early warning signs for staged canaries
  • Doing end-to-end health checks of key endpoints
  • Choosing good defaults, feature flags, developer tooling
  • Educating, sharing best practices, standardizing practices, making the easy/fast way the right way
  • Taking as much code and as many back-end components as possible out of the critical path
  • Limiting the blast radius of any given user or change
  • Exploring production, verifying that the expected changes are what actually happened. Knowing what normal looks like

These things are all a great use of your time. Staging and test environments, by contrast, are notoriously fragile, flaky, and hard to keep in sync with prod.

Do those things

Release engineering is a systematically underinvested skillset at companies with more than 50 people. Your deploys are the cause of nearly all your failures because they inject chaos into your system. Having a staging copy of production is not going to do much to change that (and it adds a large category of problems colloquially known as “it looked just like production, so I just dropped that table…”).

Embrace failure. Chaos and failure are your friends. The issue is not if you will fail, but when, and whether you will notice. The difference is between annoying all of your users because the entire site is down, and annoying a few users until you fix the problem at your leisure the next morning.

Once upon a time, these were optional skills, even specialties. Not anymore. These are table stakes in your new career as a distributed systems engineer.

Lean into it. It’s probably fine.

3 new OpenStack guides

If your job involves doing development or system administration in the cloud, you know how hard it can be to keep up with the quick pace of innovation. OpenStack is just one example of a project with lots of moving parts and a ton of amazing features that operators would benefit from becoming more familiar with.

The good news is there are a lot of ways to keep up. You’ve got the official project documentation, of course, as well as the documentation and support from your distribution of choice. There are also plenty of printed books, certification and training programs, and lots of great community-created resources.

Here on Opensource.com, we look for recently published guides and tutorials across blogs and other websites from the last month and bring them to you in one handy post. Let’s jump in.

  • TripleO is one of the more popular ways to deploy OpenStack, by utilizing OpenStack’s own core functionality to help deploy the cloud. But if you work in an environment where certain security precautions are mandated, it’s important to ensure that the images you use to provision your OpenStack resources are sufficiently hardened. Learn how to create security hardened images for use with TripleO in this guide.

  • Kubernetes is another important tool for cloud operators, providing orchestration of containers and connecting them to the resources they need. But Kubernetes still needs the underlying cloud resources to deploy; here’s how to deploy Kubernetes on top of your OpenStack cloud using Ansible.

  • Finally this month, let’s look at a brand new website aptly named “Learn OpenStack.” Created by an author documenting his own experience with OpenStack deployment, this guide covers OpenStack and several of the tools involved in its setup and deployment, including Linux, Ansible, virtualization tools, and more. The site is a work in progress, and you can contribute corrections or enhancements through GitHub.


That’s it for this time around. Want more? Take a look at our complete set of OpenStack guides, howtos, and tutorials containing over three years of community-generated content you’ll love. And if you’ve found a great tutorial, guide, or how-to that we could share in our next update, be sure to let us know in the comments below.

Tips for finding partners open enough to work with you

Imagine I’m working on the front line of an open organization, and I’m committed to following principles like transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment to guide that front-line work. A huge problem comes up. My fellow front-line workers and I can’t handle it on our own, so we discuss the problem and decide that one of us has to take it to top management. I’m selected to do that.

When I do, I learn there is nothing we can do about the problem within the company. So management decides to let me present the issue to outside individuals who can help us.

In my search for the expertise required to fix the problem, I learned that no single individual had that expertise, and that we needed to find an outside, skilled partner (a company) to help us address the issue.

All companies face this kind of problem and must form strategic business alliances from time to time. But it’s especially common for open organizations, which Jim Whitehurst (in The Open Organization) specifically defines as organizations that “engage participative communities both inside and out.” How, though, does this actually work?

Let’s take a look at how transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment affect two partner companies working together on a project.

Three stages of collaboration

Several years back, I formed an alliance between my company’s operation in China and an American company. My company is Japanese, and establishing a working relationship between American, Japanese, and Chinese partners was challenging (I’ll discuss this project in more detail later). Being successful meant I had to study various ways to form effective business alliances.

Basically, this is what I learned and put into practice in China. Developing a strategic business alliance with a partner company involves three stages:

  • Stage 1 is the “Discovery” stage.
  • Stage 2 is the “Implementation” stage.
  • Stage 3 is the “Maintenance” stage.

Here’s what you can do in each stage to form lasting, effective and open alliances with external partners.

Discovery

In this stage, you want to decide what you want to achieve with your proposed alliance. Simply put: What is your goal? The more detail with which you can express this goal (and its sub-goals), the higher your chance of success.

Next, you want to evaluate organizations that can help you achieve those goals. What do you want them to do? What should you be responsible for (what don’t you want them to do)? How do you want them to behave with you, especially regarding open organization principles? You should sort potential partners into three categories:

  • Those following these principles now
  • Those not following these principles now but who want to follow these principles and could with some support, explanation and training
  • Those that do not have the desire or character to be more open in their behavior

After evaluating candidates, you should approach your ideal partner with a proposal for how you can work together on the specific project and reach an agreement.

This stage is the most important of the three. If you can get it right, the entire project will unfold in a timely and cost-effective way. Quite often, companies do not spend enough time being open, inclusive, and collaborative to come to the best decision on what they want to achieve and what parameters are ideal for the project.

Implementation

In this stage, you’ll start working with your alliance business partner on the project. Before you do that, you have to get to know your partner—and you have to get them to know you and your team. Your new partner may subscribe to open organization principles in general, but in practice those principles might not guide every member of the team. You’ll therefore want to build a project team on both their side and yours, both of which adhere to the principles.

As I mentioned in a previous article, you will encounter people who will resist the project, and you’ll need to screen them out. More importantly, you must find those individuals that will be very committed to the project and have the expertise to ensure success.

When starting a new project in any organization, you’ll likely face at least three challenges:

  • Competition with ongoing business for scarce resources
  • Divided time, energy, and attention of shared staff
  • Disharmony in the partnership and building a new community

Competition with ongoing business for scarce resources

If the needs of the new joint project grow, your project leader may have to prioritize your project over ongoing business (both yours and your partner’s!). You both might have to request a higher budget. On the other hand, the ongoing business leaders might promote their own core business to increase direct profits. So make a formal, documented allocation of funds for the project and an allocation of shared personnel’s time. Confirm a balance between short-term (mostly ongoing-business) and long-term (mostly new-joint-project) gains. If the use of resources for the new joint project impacts ongoing business in any way, the new joint project budget should cover the losses. Leaders should discuss all contingency plans before such concerns arise. This is where transparency, adaptability, and collaboration become very important.

Divided time, energy and attention of shared staff

Your shared staff may consider the new joint project a distraction from their work. For example:

  • The shared staff from each company might be under short-term time pressure.
  • They might not consider the new joint project important.
  • They might have stronger loyalties and formal ties to the ongoing business operation.
  • They might feel the new joint project will damage the ongoing business operation (weaken brand and customer/supplier loyalties, cannibalize current business, etc.).

This is where front-line project commitment comes in: you’ll need to make sure that all stakeholders understand and believe in the value of the new joint project. That value should be promoted repeatedly at the top, mid-management, and operational levels. All senior executives should be advocates for the new joint project when there is stress on time, energy, and attention. Furthermore, the new joint project’s leaders must be flexible and adaptable when the ongoing business becomes overloaded, as the ongoing business is the profit center that funds all projects. At the departmental level, the ongoing operation could charge the new joint project for excess work provided, and a special bonus could be given to shared staff who work over a certain amount. This is where adaptability, collaboration, accountability, and commitment become very important.

Disharmony in partnership and building a new community

Differences are important for adding value to a project, but they can cause rivalry, too. One common source of conflict is the perceived skill level of individuals. Conflict could also result if:

  • Management heaps too much praise on one side (either the ongoing business or the new joint project).
  • Opinions differ on performance assessments.
  • Compensation is disputed.
  • Decision authority is disputed.

To avoid these types of conflict, make the division of responsibility as clear as possible. Reinforce common values for both groups. Staff the project team with more internal people (fewer outside hires) to support cooperation, as they have established relationships. Locate key staff near the dedicated team for face-to-face interaction. This is where transparency, inclusivity, collaboration, community, and commitment become exceedingly important.

Maintenance

After all the start-up concerns in the joint project have been addressed, and the project is showing signs of success, you should implement periodic evaluations. Is the team still behaving with a great deal of transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment? Here again, consider three answers to these questions (“yes,” “no,” “developmental”). For “yes” groups, leave everything as-is. For “no” groups, consider major personnel and structural changes. For “developmental” groups, consider training, role playing, and possibly closer supervision.

The above is just an overview of bringing open organization principles into strategic business alliance projects. Companies large and small need to form strategic alliances, so in the next part of this series I’ll present some actual case studies for analysis and review.

How to Find Files With SUID and SGID Permissions in Linux
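
The SUID and SGID bits cause an executable to run with the privileges of its owner or group rather than those of the invoking user, so auditing a system for them is a common security task. A minimal sketch using find(1)’s -perm test (searching from / and discarding permission errors with 2>/dev/null are illustrative choices; narrow the search path as needed):

```shell
# Files with the SUID bit set (run with the file owner's privileges):
find / -perm -4000 -type f 2>/dev/null

# Files with the SGID bit set (run with the file group's privileges):
find / -perm -2000 -type f 2>/dev/null

# Files with either bit set, listed with their full permissions:
find / -perm /6000 -type f -exec ls -l {} + 2>/dev/null
```

Note the two forms of the test: -perm -4000 matches files that have at least the SUID bit set, while -perm /6000 matches files with any of the SUID or SGID bits set.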


We're giving away FOUR LulzBot 3D printers

It’s that time of year again. As students and teachers head back to school, we’re celebrating by giving away four LulzBot 3D printers in our biggest giveaway ever!

One grand prize winner will receive a LulzBot Taz 6, a top-of-the-line 3D printer that retails for US $2,500 and boasts an impressive 280x280x250mm (nearly the size of a basketball) heated print surface. Three other lucky winners will receive a LulzBot Mini valued at US $1,250. With a print area of 152x152x158mm, it’s a great choice for beginners looking to get some 3D printing experience.

So, what are you waiting for? Enter by this Sunday, August 20 at 11:59 p.m. Eastern Time (ET) for a chance to win. Note: You don’t need to be a student or educator to enter. All professions are welcome!

If you’re a teacher or librarian, or you work in a museum or makerspace, you can integrate 3D printing into your curriculum by checking out the LulzBot education pricing program, which provides educators with discounts, helpful product bundles, extended warranties, and more.

Good luck and happy printing from all of us on the Opensource.com team!