GNOME at 20: Four reasons it's still my favorite GUI

The GNOME desktop turns 20 on August 15, and I’m so excited! Twenty years is a major milestone for any open source software project, especially a graphical desktop environment like GNOME that has to appeal to many different users. The 20th anniversary is definitely something to celebrate!

Why is GNOME such a big deal? For me, it’s because it represented a huge step forward in the Linux desktop. I installed my first Linux system in 1993. In the early days of Linux, the most prevalent graphical environment was TWM, the Tab Window Manager. The modern desktop didn’t exist yet.

But as Linux became more popular, we saw an explosion of different graphical environments, such as FVWM (1993) and FVWM95 (1995), and their derivatives, including Window Maker (1996), LessTif (1996), Enlightenment (1997), and Xfce (1997). Each filled a different niche. Nothing was integrated. Rather, FVWM and its clones simply managed windows. Toolkits were not standardized; each window might use a different one. As a result, early Linux graphical environments were a mishmash of various styles. Window Maker offered the most improvements, with a more uniform look and feel, but it still lacked the integration of a true desktop.

I was thrilled when the GNOME project released a true Linux desktop environment in 1999. GNOME 1 leveraged the GTK+ toolkit, the same object-oriented widget toolkit used to build the GIMP graphics program.

The first GNOME release looked very similar to Windows 98, the then-current version of Microsoft Windows, a wise decision that immediately provided a familiar graphical interface for new Linux users. GNOME 1 also offered desktop management and integration, not simply window management. Files and folders could be dropped on the desktop, providing easy access. This was a major advancement. In short order, many major Linux distributions included GNOME as the default desktop. Finally, Linux had a true desktop.

Over time, GNOME continued to evolve. In 2002, GNOME’s second major release, GNOME 2, cleaned up the user interface and tweaked the overall design. I found this quite invigorating. Instead of a single toolbar or panel at the bottom of the screen, GNOME 2 used two panels: one at the top of the screen, and one at the bottom. The top panel included the GNOME Applications menu, an Actions menu, and shortcuts to frequently used applications. The bottom panel provided icons of running programs and a representation of the other workspaces available on the system. Using the two panels provided a cleaner user interface, separating “things you can do” (top panel) and “things you are doing” (bottom panel).

I loved the GNOME 2 desktop, and it remained my favorite for years. Lots of other users felt the same, and GNOME 2 became a de facto standard for the Linux desktop. Successive versions made incremental improvements to GNOME’s user interface, but the general design concept of “things you can do” and “things you are doing” remained the same.

Despite the success and broad appeal of GNOME, the GNOME team realized that GNOME 2 had become difficult for many to use. The applications launch menu required too many clicks. Workspaces were difficult to use. Open windows were easy to lose under piles of other application windows. In 2008, the GNOME team embarked on a mission to update the GNOME interface. That effort produced GNOME 3.

GNOME 3 removed the traditional task bar in favor of an Overview mode that shows all running applications. Instead of using a launch menu, users start applications with an Activities hot button in the black bar at the top. Selecting the Activities menu brings up the Overview mode, showing both things you can do (with the favorite applications launcher to the left of the screen), and things you are doing (window representations of open applications).

Since its initial release, the GNOME 3 team has put in a lot of effort to improve it and make it easier to use. Today’s GNOME is modern yet familiar, striking that difficult balance between features and utility.

4 reasons GNOME is my favorite GUI

Here at GNOME’s 20th anniversary, I’d like to highlight four reasons why GNOME 3 is still my favorite desktop today:

1. It’s easy to get to work

GNOME 3 makes it easy to find my most frequently used applications in the favorite applications launcher. I can add my most-used applications there, so getting to work is just a click away. I can still find less frequently used applications in the Applications menu, or I can just start typing a program’s name to search for it.

2. Open windows are easy to find

Most of the time, I have two or three windows open at once, so it’s easy to use Alt+Tab to switch among them. But when I’m working on a project, I might have 10 or more windows open on my desktop. Even with a large number of open applications, it’s straightforward to find the one I want: I move the mouse to the Activities hot corner, and the desktop switches to Overview mode with representations of all my open windows. I simply click on a window, and GNOME puts that application on top.

3. No wasted screen space

With other desktop environments, windows have a title bar with the name of the application, plus a few controls to minimize, maximize, and close the window. When all you need is a button to close the window, this is wasted screen space. GNOME 3 is designed to minimize the decorations around your windows and give you more screen space. GNOME even locates certain Action buttons in the window’s title bar, saving you even more space. It may not sound like much, but it all adds up when you have a lot of open windows.

4. The desktop of the future

Today, computers are more than a box with a monitor, keyboard, and mouse. We use smartphones and tablets alongside our desktop and laptop computers, and mobile computing (phones and tablets) increasingly displaces the traditional computer for many tasks. I think it’s clear that the mobile and desktop interfaces are merging. Before too long, we will use the same interface for both desktop and mobile. The key to making this work is a user interface that truly unifies the platforms and their unique use cases. We aren’t quite there yet, but GNOME 3 seems well positioned to fill this gap. I look forward to seeing this area develop and improve.

Testing in production: Yes, you can (and should)

I wrote a piece recently about why we are all distributed systems engineers now. To my surprise, lots of people objected to the observation that you have to test large distributed systems in production. 

It seems testing in production has gotten a bad rap—despite the fact that we all do it, all the time.

Maybe we associate it with cowboy engineering. We hear “testing in production” and assume this means no unit tests, functional tests, or continuous integration.

It’s good to try to catch things before production—we should do that too! But these things aren’t mutually exclusive. Here are some things to consider about testing in production.

1. You already do it

There are lots of things you already test in prod—because there’s no other way you can test them. Sure, you can spin up clones of various system components or entire systems, and capture real traffic to replay offline (the gold standard of systems testing). But many systems are too big, complex, and cost-prohibitive to clone.

Imagine trying to spin up a copy of Facebook for testing (with its multiple, globally distributed data centers). Imagine trying to spin up a copy of the national electrical grid. Even if you succeed, next you need the same number of clients, the same concurrency, same pipelining and usage patterns, etc. The unpredictability of user traffic makes it impossible to mock; even if you could perfectly reproduce yesterday’s traffic, you still can’t predict tomorrow’s.

It’s easy to get dragged down into bikeshedding about cloning environments and miss the real point: Only production is production, and every time you deploy there you are testing a unique combination of deploy code + software + environment. (Just ask anyone who’s ever confidently deployed to “Staging”, and then “Producktion” (sic).) 

2. So does everyone else

You can’t spin up a copy of Facebook. You can’t spin up a copy of the national power grid. Some things just aren’t amenable to cloning. And that’s fine. You simply can’t usefully mimic the qualities of size and chaos that tease out the long, thin tail of bugs or behaviors you care about.

And you shouldn’t try.

Facebook doesn’t try to spin up a copy of Facebook either. They invest in the tools that allow thousands and thousands of engineers to deploy safely to production every day and observe people interacting with the code they wrote. So does Netflix. So does everyone who is fortunate enough to outgrow the delusion that this is a tractable problem.

3. It’s probably fine

There’s a lot of value in testing… to a point. But if you can catch 80% to 90% of the bugs with 10% to 20% of the effort—and you can—the rest is more usefully poured into making your systems resilient, not preventing failure.

You should be practicing failure regularly. Ideally, everyone who has access to production knows how to do a deploy and rollback, or how to get to a known-good state fast. They should know what the system looks like when it’s operating normally, and how to debug basic problems. Knowing how to deal with failure should not be rare.

If you test in production, dealing with failure won’t be rare. I’m talking about things like, “Does this have a memory leak?” Maybe run it as a canary on five hosts overnight and see. “Does this functionality work as planned?” At some point, just ship it with a feature flag so only certain users can exercise it. Stuff like that. Practice shipping and fixing lots of small problems, instead of a few big and dramatic releases.
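To make the feature-flag idea concrete, here’s a minimal sketch in Python. It’s illustrative only, not any particular library: the feature name, user IDs, and rollout percentage are hypothetical, and in practice you’d likely reach for an existing flagging library or service.

    import hashlib

    def flag_enabled(feature, user_id, rollout_percent):
        """Deterministically bucket a user so the same users stay in the cohort."""
        digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) % 100  # stable bucket in [0, 100)
        return bucket < rollout_percent

    # Expose the new code path to roughly 5% of users; everyone else keeps the old one.
    for user_id in ("alice", "bob", "carol", "dave"):
        path = "new" if flag_enabled("new-checkout", user_id, 5.0) else "old"
        print(user_id, "->", path, "code path")

Deriving the bucket from a hash rather than a random draw means a user who lands in the 5% cohort stays there, which keeps problems reproducible and makes the rollout easy to widen.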

4. You’ve got bigger problems

You’re shipping code every day and causing self-inflicted damage on the regular, and you can’t tell what it’s doing before, during, or after. It’s not the breaking stuff that’s the problem; you can break things safely. It’s the second part—not knowing what it’s doing—that’s not OK. This bigger problem can be addressed by:

  • Canarying. Automated canarying. Automated canarying in graduated levels with automatic promotion. Multiple canaries in simultaneous flight!
  • Making deploys more automated, robust, and fast (5 minutes on the upper bound is good)
  • Making rollbacks wicked fast and reliable
  • Using instrumentation, observability, and other early warning signs for staged canaries
  • Doing end-to-end health checks of key endpoints (see the sketch after this list)
  • Choosing good defaults, feature flags, developer tooling
  • Educating, sharing best practices, standardizing practices, making the easy/fast way the right way
  • Taking as much code and as many back-end components as possible out of the critical path
  • Limiting the blast radius of any given user or change
  • Exploring production, verifying that the expected changes are what actually happened. Knowing what normal looks like
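As a rough illustration of the end-to-end health check item above, here’s a minimal Python sketch. The endpoint URLs are hypothetical; a real check would exercise your actual critical path and feed an alerting system rather than print.

    import urllib.request

    # Hypothetical key endpoints; substitute your real critical path.
    ENDPOINTS = [
        "https://example.com/healthz",
        "https://example.com/api/v1/ping",
    ]

    def healthy(url, timeout=2.0):
        """Return True if the endpoint answers 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    for url in ENDPOINTS:
        print(url, "OK" if healthy(url) else "FAILING")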

These things are all a great use of your time, unlike staging and test environments, which are notoriously fragile, flaky, and hard to keep in sync with prod.

Do those things

Release engineering is a systematically underinvested skillset at companies with more than 50 people. Your deploys are the cause of nearly all your failures because they inject chaos into your system. Having a staging copy of production is not going to do much to change that (and it adds a large category of problems colloquially known as “it looked just like production, so I just dropped that table…”).

Embrace failure. Chaos and failure are your friends. The issue is not if you will fail, it is when you will fail, and whether you will notice. The question is whether it will annoy all of your users because the entire site is down, or only a few users until you fix it at your leisure the next morning.

Once upon a time, these were optional skills, even specialties. Not anymore. These are table stakes in your new career as a distributed systems engineer.

Lean into it. It’s probably fine.

3 new OpenStack guides

If your job involves doing development or system administration in the cloud, you know how hard it can be to keep up with the quick pace of innovation. OpenStack is just one example of a project with lots of moving parts and a ton of amazing features that operators would benefit from becoming more familiar with.

The good news is there are a lot of ways to keep up. You’ve got the official project documentation, of course, as well as the documentation and support from your distribution of choice. There are also plenty of printed books, certification and training programs, and lots of great community-created resources.

Here on Opensource.com, we look for recently published guides and tutorials across blogs and other websites from the past month and bring them to you in one handy blog post. Let’s jump in.

  • TripleO is one of the more popular ways to deploy OpenStack, by utilizing OpenStack’s own core functionality to help deploy the cloud. But if you work in an environment where certain security precautions are mandated, it’s important to ensure that the images you use to provision your OpenStack resources are sufficiently hardened. Learn how to create security hardened images for use with TripleO in this guide.

  • Kubernetes is another important tool for cloud operators, providing orchestration of containers and connecting them to the resources they need. But Kubernetes still needs the underlying cloud resources to deploy; here’s how to deploy Kubernetes on top of your OpenStack cloud using Ansible.

  • Finally this month, let’s look at a brand new website aptly named “Learn OpenStack.” Designed by an author documenting his own experience with OpenStack deployment, this guide looks at OpenStack and several of the tools involved in its setup and deployment, including Linux, Ansible, virtualization tools, and more. The site is a work in progress, and you can contribute corrections or enhancements through GitHub.


That’s it for this time around. Want more? Take a look at our complete set of OpenStack guides, howtos, and tutorials containing over three years of community-generated content you’ll love. And if you’ve found a great tutorial, guide, or how-to that we could share in our next update, be sure to let us know in the comments below.

Tips for finding partners open enough to work with you

Imagine I’m working on the front line of an open organization, and I’m committed to following principles like transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment to guide that front-line work. A huge problem comes up. My fellow front-line workers and I can’t handle it on our own, so we discuss the problem and decide that one of us has to take it to top management. I’m selected to do that.

When I do, I learn there is nothing we can do about the problem within the company, so management decides to let me present the issue to outside individuals who can help us.

In my search for the expertise required to fix the problem, I learn that no single individual has that expertise, and that we must find a skilled outside partner (a company) to help us address the issue.

All companies face this kind of problem and must form strategic business alliances from time to time. But it’s especially common for open organizations, which Jim Whitehurst (in The Open Organization) specifically defines as organizations that “engage participative communities both inside and out.” How, though, does this actually work?

Let’s take a look at how transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment affect two partner companies working together on a project.

Three stages of collaboration

Several years back, I formed an alliance between my company’s operation in China and an American company. My company is Japanese, and establishing a working relationship among American, Japanese, and Chinese partners was challenging (I’ll discuss this project in more detail later). Being successful meant I had to study various ways to form effective business alliances.

Basically, this is what I learned and put into practice in China. Developing strategic business alliances with a partner company involves three stages:

  • Stage 1 is the “Discovery” stage.
  • Stage 2 is the “Implementation” stage.
  • Stage 3 is the “Maintenance” stage.

Here’s what you can do in each stage to form lasting, effective, and open alliances with external partners.

Discovery

In this stage, you want to decide what you want to achieve with your proposed alliance. Simply put: What is your goal? The more precisely you can express this goal (and its sub-goals), the higher your chance of success.

Next, you want to evaluate organizations that can help you achieve those goals. What do you want them to do? What should you be responsible for (what don’t you want them to do)? How do you want them to behave with you, especially regarding open organization principles? Sort each potential partner into one of three categories:

  • Those following these principles now
  • Those not following these principles now but who want to follow these principles and could with some support, explanation and training
  • Those that do not have the desire or character to be more open in their behavior

After evaluating candidates, you should approach your ideal partner with a proposal for how you can work together on the specific project and reach an agreement.

This stage is the most important of the three. If you can get it right, the entire project will unfold in a timely and cost-effective way. Quite often, companies do not spend enough time being open, inclusive, and collaborative to come to the best decision on what they want to achieve and what parameters are ideal for the project.

Implementation

In this stage, you’ll start working with your alliance business partner on the project. Before you do that, you have to get to know your partner—and you have to get them to know you and your team. Your new partner may subscribe to open organization principles in general, but in practice those principles might not guide every member of the team. You’ll therefore want to build a project team on both their side and yours, each of which adheres to the principles.

As I mentioned in a previous article, you will encounter people who will resist the project, and you’ll need to screen them out. More importantly, you must find the individuals who are deeply committed to the project and have the expertise to ensure success.

When starting a new project in any organization, you’ll likely face at least three challenges:

  • Competition with ongoing business for scarce resources
  • Divided time, energy, and attention of shared staff
  • Disharmony in the partnership and building a new community

Competition with ongoing business for scarce resources

If the needs of the new joint project grow, your project leader may have to prioritize it over ongoing business (both yours and your partner’s!), and you both might have to request a higher budget. On the other hand, the leaders of the ongoing business might promote their own core business to increase direct profits. So make a formal, documented allocation of funds for the project, along with an allocation of shared personnel’s time. Confirm a balance between short-term gains (mostly from the ongoing business) and long-term gains (mostly from the new joint project). If the new joint project’s use of resources impacts the ongoing business in any way, the project’s budget should cover the losses. Leaders should discuss all contingency plans in advance. This is where transparency, adaptability, and collaboration become very important.

Divided time, energy, and attention of shared staff

Your shared staff may consider the new joint project a distraction from their work. This is where front-line project commitment comes in. For example:

  • The shared staff from each company might be under short-term time pressure.
  • They might not consider the new joint project important.
  • They might have stronger loyalties and formal ties to the ongoing business operation.
  • They might feel the new joint project will damage the ongoing business operation (weaken brand and customer/supplier loyalties, cannibalize current business, etc.).

In this case, you’ll need to make sure that all stakeholders understand and believe in the value of the new joint project, and that message should be promoted repeatedly at the top, mid-management, and operational levels. All senior executives should be advocates for the new joint project when time, energy, and attention come under stress. Furthermore, the new joint project’s leaders must be flexible and adaptable when the ongoing business becomes overloaded, as that business is the profit center that funds all projects. At the departmental level, the ongoing operation could charge the new joint project for excess work provided, and shared staff who work beyond a certain amount could receive a special bonus. This is where adaptability, collaboration, accountability, and commitment become very important.

Disharmony in partnership and building a new community

Differences are important for adding value to a project, but they can cause rivalry, too. Common sources of conflict include:

  • The perceived skill levels of individuals
  • Management heaping too much praise on one side (either the ongoing business or the new joint project)
  • Differing opinions on performance assessments
  • Compensation
  • Decision authority

To avoid these types of conflict, make the division of responsibility as clear as possible and reinforce common values for both groups. Add more internal staff (rather than outside hires) to the project team to support cooperation, as they have established relationships, and locate key staff near the dedicated team for face-to-face interaction. This is where transparency, inclusivity, collaboration, community, and commitment become exceedingly important.

Maintenance

After all the start-up concerns in the joint project have been addressed, and the project is showing signs of success, you should implement periodic evaluations. Is the team still behaving with a great deal of transparency, inclusivity, adaptability, collaboration, community, accountability, and commitment? Here again, consider three answers to this question (“yes,” “no,” “developmental”). For “yes” groups, leave everything as-is. For “no” groups, consider major personnel and structural changes. For “developmental” groups, consider training, role playing, and possibly closer supervision.

The above is just an overview of bringing open organization principles into strategic business alliance projects. Companies large and small need to form strategic alliances, so in the next part of this series, I’ll present some actual case studies for analysis and review.

We're giving away FOUR LulzBot 3D printers

It’s that time of year again. As students and teachers head back to school, we’re celebrating by giving away four LulzBot 3D printers in our biggest giveaway ever!

One grand prize winner will receive a LulzBot Taz 6, a top-of-the-line 3D printer that retails for US $2,500 and boasts an impressive 280x280x250mm build volume (nearly the size of a basketball) with a heated print surface. Three other lucky winners will receive a LulzBot Mini valued at US $1,250. With a build volume of 152x152x158mm, it’s a great choice for beginners looking to get some 3D printing experience.

So, what are you waiting for? Enter by this Sunday, August 20 at 11:59 p.m. Eastern Time (ET) for a chance to win. Note: You don’t need to be a student or educator to enter. All professions are welcome!

If you’re a teacher or librarian, or you work in a museum or makerspace, you can integrate 3D printing into your curriculum by checking out the LulzBot education pricing program, which provides educators with discounts, helpful product bundles, extended warranties, and more.

Good luck and happy printing from all of us on the Opensource.com team!

How my two-week project turned into a full-time open source startup

Over a year ago, I decided to build a software business that focused on custom web application development, startups, and unique website projects. I had built a very strong and talented team of people who were eager to help me start this company as their side gig. We called it Vampeo. We acquired a bunch of projects and started development while keeping our full-time day jobs.

Long-running projects

After four months of delivering some of our projects, I realized something significant: No project was ever truly completed. Once a project (e.g., a website) was delivered, every client asked for additional features, support, maintenance, updates, and even future projects.

These additional services introduced a new stream of recurring revenue for Vampeo. Clients would pay for servers, email addresses that we set up through G Suite, SSL renewals, website edits, etc.

Wasting my time with invoices

In November 2016, I started gathering all the invoices to email to our clients. I had a QuickBooks Online account for sending invoices, but there was a much larger problem: Many of our services were offered as monthly or yearly subscriptions. For example, clients would pay Vampeo monthly for their servers and email, annually for domains and SSL, and hourly, on demand, for feature development. It was extremely hard to send invoices to our customers at the end of each month or to keep track of who hadn’t paid their annual fees. I started falling behind on invoices, losing money, and losing track of the services we maintained.

A small project to automate my business

There was no easy solution to our problem. Our service offerings and billing were handled in separate applications and required lots of manual work. We needed a system with the following features:

  • Ability to automatically charge the client based on the services they have with us
  • Customer self-service portal for clients to log in to an online account, view, edit, request cancellation of their current services, and communicate with us for additional work
  • Internal inventory of our work to keep track of all our active and archived projects and provide total revenue, profit, and progress

Every commercial solution we found was too expensive and didn’t cover every use case, and every open source solution was outdated, with a very bad UI/UX. So we decided to spend our two-week New Year holiday developing a very simple web platform that leverages Stripe’s API to fulfill all of the features above. Boy, was I wrong about the two-week timeframe!

Two weeks turned into months, and then… ServiceBot

The entire development revolved around our mindset of open sourcing our work. It required proper architecture, planning, and implementation. Our years of experience as automation architects and engineers got the best of us. We started adding more features, automating the billing using Stripe, creating a notification system, and much more. Our platform grew from a simple Node.js and Express app into one that uses Node.js, Express, React, Redux, and many more cutting-edge npm libraries.
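For a sense of what “automating the billing using Stripe” looks like at its core, here’s a minimal sketch using the official stripe Python package. The API key, customer ID, and price ID are placeholders; this only illustrates the kind of recurring-billing call such a platform automates, not ServiceBot’s actual code.

    import stripe

    stripe.api_key = "sk_test_..."  # your Stripe secret test key

    # Subscribe an existing customer to a recurring price defined in Stripe.
    subscription = stripe.Subscription.create(
        customer="cus_XXXXXXXX",             # placeholder customer ID
        items=[{"price": "price_monthly"}],  # placeholder recurring price
    )
    print(subscription.id, subscription.status)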

The decision was clear: This wasn’t just a side project anymore; this was the real thing. We were a team of four developers and one graphic designer, and we spent every minute of our free time (outside of our day jobs) developing this system. We called it ServiceBot, an open source gig management system, a platform you can use to start selling and managing your gig in just minutes.

We released our v0.1 beta in May and showcased it at Collision 2017. The feedback was extremely positive; it seemed like every other service-based startup was facing similar issues with billing. After Collision, we spent the summer re-tuning our code and feature set.

It has now been eight months since we started building ServiceBot, and we are on version 0.5 beta. ServiceBot’s GitHub repository contains all of our hard work, and we want to share it and get feedback.

For this reason, we have decided to offer limited open-beta ServiceBot instances on our website. It will take just a couple of minutes to set up your ServiceBot website without any technical knowledge, installation, or lengthy configuration. All that’s needed is a Stripe account, as ServiceBot is tightly integrated with Stripe.

If you are interested in testing out our limited open-beta instances, you can sign up on our front page. 

We hope to grow ServiceBot into a complete automation system to help businesses cut costs by automating their daily operations and the lifecycle of their services.

This was originally posted on ServiceBot’s blog and is republished with permission.

Why containers are the best way to test software performance

Software performance and scalability are frequent topics when we talk about application development. A big reason for that is an application’s performance and scalability directly affect its success in the market. An application, no matter how good its user interface, won’t claim market share if its response time is sluggish.

This is why we spend so much time improving an application’s performance and scalability as its user base grows.

Where usual testing practices fail

Fortunately, we have a lot of tools to test software behavior under high-stress conditions. There are also tools that help identify the causes of performance and scalability issues, and benchmark tools that stress-test systems to provide a relative measure of stability under high load. However, we run into problems with performance and scale engineering when we try to use these tools to understand the performance of enterprise products. These products are generally not single applications; they may consist of several different applications interacting with each other to provide a consistent and unified user experience.

We may not get any meaningful data about a product’s performance and scalability issues if we test only its individual components. The real numbers can be gathered only when we test the application in real-life scenarios, that is, by subjecting the entire enterprise application to a real-life workload.

The question becomes: How can we achieve this real-life workload in a test scenario?

Containers to the rescue

The answer is containers. To explain how containers can help us understand a product’s performance and scalability, let’s look at Puppet, a software configuration management tool, as an example.

Puppet uses a client-server architecture, where there are one or more Puppet masters (servers), and the systems that are to be configured using Puppet run Puppet agents (clients).

To understand Puppet’s performance and scalability, we need to stress the Puppet masters with high load from the agents running on various systems.

To do this, we can install puppet-master on one system, then run multiple containers, each running the target operating system, in which we install and run puppet-agent.

Next, we need to configure the Puppet agents to interact with the Puppet master to manage the system configuration. This stresses the server when it handles the request and stresses the client when it updates the software configuration.
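As a rough sketch of that setup, the snippet below uses the Docker SDK for Python (the docker package) to launch a fleet of agent containers. The image name and master hostname are hypothetical, and it assumes an image with puppet-agent already installed.

    import docker

    client = docker.from_env()

    # Launch 100 containers, each performing a one-shot Puppet agent run
    # against the master (image name and hostname are placeholders).
    agents = [
        client.containers.run(
            "puppet-agent-demo",
            command="puppet agent --test --server puppet-master.example.com",
            detach=True,
            name=f"puppet-agent-{i}",
        )
        for i in range(100)
    ]
    print(f"Started {len(agents)} agent containers")

Scaling the load up or down is then just a matter of changing the loop count, which is exactly the knob you want for scale experiments.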

So, how did the containers help here? Couldn’t we have just simulated the load on the Puppet master through a script?

The answer is no. It might have simulated the load, but we would have gotten a highly unrealistic view of its performance.

The reason for this is quite simple: In real life, a user’s system runs a number of other processes besides puppet-agent or puppet-master. Each process consumes a certain amount of system resources and hence directly impacts Puppet’s performance by limiting the resources Puppet can use.

This was a simple example, but the performance and scale engineering of enterprise applications can get really challenging when dealing with products that combine more than a handful of components. This is where containers shine.

Why containers and not something else?

A genuine question is: Why use containers and not virtual machines (VMs) or just bare-metal machines?

The logic behind running containers comes down to how many instances of a system we can launch, and what they cost compared with the alternatives.

Although VMs provide a powerful mechanism, they also incur a lot of overhead on system resources, thereby limiting the number of systems that can be replicated on a single bare-metal server. By contrast, it is fairly easy to launch even 1,000 containers on the same system, depending on what kind of simulation you are trying to achieve, while keeping the resource overhead low.

With bare-metal servers, the performance and scale can be as realistic as needed, but a major problem is cost overhead. Will you buy 1,000 servers for performance and scale experiments?

That’s why containers overall provide an economical and scalable way of testing a product’s performance in real-life scenarios while keeping resource overhead and costs in check.

Learn more in Saurabh Badhwar’s talk Testing Software Performance and Scalability Using Containers at Open Source Summit in Los Angeles.

How to avoid leaving money on the table with salary negotiation

Although any sort of negotiation can be stressful, negotiating compensation for a new job—especially when you have the opportunity to get paid to work on open source software—can be especially intimidating. Because of this, many people, particularly women and minorities, choose not to negotiate at all. Unfortunately, this choice may come with a $500,000 penalty. That’s how much money the average person loses throughout their lifetime by choosing not to negotiate their wages.

Talking about the importance of wage negotiation in America is impossible without talking about the wage gap for women and minorities. A few years ago, the big buzz was about “79 cents to the dollar” that women were paid in comparison to men. Data show that the U.S. pay gap has improved marginally, and women are now on average receiving 80 cents to the male dollar. This number varies by location, and ground is being lost in some places. The disparity is even worse for women of color and other marginalized groups. We don’t even have statistics for the difficulties experienced by transgender and gender-nonconforming people, who often face some of the most severe barriers in the workplace.

Makes you look at your paycheck a little differently, doesn’t it?

Don’t let it get you down, though. Although there is a lot that must be done at the corporate and social policy levels, you can help improve your own situation by choosing to negotiate. Making that choice isn’t always easy when you’re fresh out of school or new to the industry and only have open source contributions to showcase your skills. But, once you decide to negotiate—and learn how to do it well—a lot can change. For example, last year I made the choice to negotiate my salary and increased my monthly take-home pay by more than 50%. It wasn’t easy. There was a risk it could backfire, but with a little courage and elbow grease, the result was certainly worth the effort.

Like the idea, but not sure where to start? Try negotiating on small things that don’t matter. Start frequenting yard sales and flea markets. Negotiate when you buy something, just to practice your skills. This will help boost your confidence and get you used to the process. Focus on what you can gain, not what you can lose. Recognize that the process is a bit of a game, and you can have fun in the interaction. Remember: If you don’t ask, you don’t get.

I like to think of negotiation as a two-phase process: Phase One happens before the offer, sometimes even before the interview, whereas Phase Two occurs when you sit down with HR, the hiring manager, or the recruiter and hash out the details.

Phase One

Start by looking at your own finances. Figure out your monthly and yearly budget. Decide what you need to earn to scrape by and what you need to be comfortable—whatever comfortable looks like for you. Don’t forget to include the cost of saving for emergencies and retirement. Once you have this information, start looking at pay-scale data for the position you are considering, both nationally and locally. Looking at both is important to get a baseline for what you can expect the company to offer, which may be different locally from the national average.

Now, put those numbers aside for a moment, and write a list of how wonderful you are. No, really—write a list of all your qualifications, professional accomplishments, and open source contributions. You don’t have to show it to anyone, but you should keep it close at hand. Now that you have all this in front of you, take a walk or whatever you do to relax, and decide, in your own mind, how much your knowledge and expertise are worth.

Then think about what matters to you besides your direct monetary compensation. How much time off would you like? What would you like your work hours to be? Do you prefer to work in an office or remotely? What kind of sign-on bonus do you expect? Do you want to go back to school for an advanced degree? Would you like your employer to pay for it and allow flexibility in your schedule so you can attend classes?

There are a ton of fringe benefits to employment, and often we forget that many are negotiable. Once you know what you want, decide where you’re willing to bend; for example, you might be willing to accept a little less money to have extra holiday days or to work remotely. Once you know where you’re flexible, create a salary range. The low end is the rate that you absolutely will not go below, and the high end is what you prefer. Now make a table: In the first column, write down regular intervals within that range. In the second column, do a little quick and dirty math to add 10% to each number. If you think they’ll offer between US$50,000 and $55,000, your table may look something like this:

Offer       Offer + 10%
$50,000     $55,000
$51,000     $56,100
$52,000     $57,200
$53,000     $58,300
$54,000     $59,400
$55,000     $60,500
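
If you’d rather script that quick and dirty math than type it out, a tiny Python sketch produces the same table (swap in your own range):

    # Print each candidate offer alongside offer + 10%.
    for offer in range(50_000, 56_000, 1_000):
        print(f"${offer:,}  ->  ${offer * 1.10:,.0f}")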

Now that you’ve done your research and prep work, you’re ready to negotiate.

Phase Two

This is the day. The company you’ve been interviewing with for that job you’d love (or desperately need) has extended a job offer. Whether it comes by letter, phone call, email, or while sitting in a cold office across from a steely eyed negotiator, the result is the same: It’s time to step up to the plate and take your swing. Throughout the negotiations, make sure to stay polite, enthusiastic, and firm, use cooperative language, and if you’re offered an insulting number, don’t be afraid to walk away.

The hardest part comes first, but with practice it will become second nature. Don’t tell the recruiter your previous salary or salary expectations when they ask, and they will. Instead, give them a friendly smile and say something like, “I’m far more interested in designing widgets here at ACME Enterprises than I am in the compensation package.”

This gentle pushback does two things: First, it tells the other person that you know the game, and second, it keeps you from anchoring the negotiations against your previous pay. This should work most of the time. If the recruiter asks a second time, simply say, “I will consider any reasonable offer.” This is again putting the ball back in their court while not losing any ground. In rare cases, the company may push back and ask a third time. Don’t sweat it. Say something like, “You’re in a much better position to know how much I’m worth to your company than I am.” It’s hard to argue with that logic!

Once you get an offer, even if the company mails or emails it to you, try to negotiate in person or on the phone. I prefer to do it by phone, because I can be somewhere I feel comfortable and have notes in front of me, and I don’t have to police my facial expressions and body language. Even if the offer is more than you dreamed, repeat the number and stop talking. Jack Chapman, a career coach and author, calls this “the Flinch.” Because people are uncomfortable with silence, the person you are negotiating with is likely to try to fill the lull in the conversation, often with a better offer. Look at the table you made in Phase One, and counter their offer with one 10% higher.

A little haggling between the numbers will probably follow. The person you’re negotiating with may need to speak to someone higher up and come back to you later in the day—this is all part of the process. Once you have a number that you both are happy with, clinch the deal and use that as a baseline to negotiate your fringe benefits. Maybe you’re willing to give up that 10% you negotiated to get extra holiday days, or a company car, or whatever is important to you. Your last step is to ask for a compensation review in six months. This gives you half a year to show them how great you are, then you can ask for more money in your glowing pay review.

Negotiating a job offer can feel a little overwhelming, but if you practice, do your research, and remain calm, enthusiastic, and firm, you’ll end up with both a more rewarding experience and a more satisfying pay stub.

What are your experiences with negotiation? Do you have a tip or trick that never fails? Tell me about it in the comments below.

Top 5: 13 years of OpenStreetMap, Linux-powered guitar amps, and more

In this week’s top 5, we take a look at maps, robots, and more!

This week’s top articles

5. 6 hardware projects for upgrading your home

When you make your house a little smarter, you’re going to want to use open hardware. Editor Alex Sanchez shares some projects that you can do yourself around your own house.

4. 7 open source Twitter bots to follow

Do you like Twitter, but wish it weren’t so full of humans? Editor Jason Baker shares some bots you might want to follow or use as inspiration to make your own.

3. How to make a low-cost guitar amp with Linux

Are you having trouble getting your music to go to 11? Seth Kenlon shows you how to turn your computer into a rock and roll machine.

2. Make your own Twitter bot with Python and Raspberry Pi

Community moderator Ben Nuttall shares how you can use a Raspberry Pi and the twython library to write your own Twitter bot. Now you can remind your friends to take out their trash cans every week.

1. 13 amazing maps to celebrate 13 years of OpenStreetMap

Editor Jason Baker shows how much this community mapping project has accomplished over the years. And the best part is that no one will yell at you to roll the maps.

How to create a blog with AsciiDoc

I work daily with content management tools and support documentation writers whose preferred markup language is AsciiDoc. It has a simple syntax, but enough features to keep even a hardcore documentation nerd happy. AsciiDoc allows you to write documentation in a more natural way and mark it up cleanly for presentation on the web or as a PDF. This got me thinking, “Wouldn’t it be handy to be able to maintain a website purely with AsciiDoc?”

After some googling and chatting with colleagues, I found Hugo, a publishing platform that can transform articles written in Markdown or AsciiDoc into usable content for the web. It is a feature-rich platform with a powerful language for templates and theming, and it’s a lot of fun to work with.

One big advantage to me is that Hugo doesn’t require a database to support a blog site with plenty of functionality. The pages are rendered in HTML, so sites are blazingly fast and very easy to maintain. It even comes with its own server, so I can test my site while I work on it. As long as your server can deliver HTML, you’re good to go.

Because there is no database and no need for a language such as PHP, the risk of SQL injection is reduced, making Hugo especially handy for creating secure sites. It also makes a website faster than one on a traditional platform, and combining it with a content-delivery network (CDN) makes it faster still.

It supports tasks that normally are driven by a blogging platform; for example, it can automatically populate an RSS feed when a new article is added. Everyone on your team can run a copy of the site locally, so they can work on their articles in a draft state and refrain from publishing them until they’re ready. If you combine it with a Git branching strategy, multiple authors can work on blogs and articles, then merge them back into your main branch when they’re ready to publish. Other interactive elements, such as comments, can be added with Disqus.

A different kind of development platform

When I develop a blog, I start with the idea of “content first” and try not to get tied up with the platform. This is a fine idea, but, in reality, I constantly tweak the site. Then I tweak it a little more, then a weekend is gone and I haven’t written any content; I’ve spent all my time playing with the theme or working on back-end services.

Using the Hugo platform with the AsciiDoc markup language and Asciidoctor, a digital Swiss Army knife for AsciiDoc, helps me focus on content and structure rather than presentation. Hugo has a decent template system, so I can do a lot more with a lot less code. AsciiDoc helps me write documents with a nice structure, and Hugo uses Asciidoctor to convert the documents into other formats, such as PDF or Linux man pages, as I write them. Because I can preview them locally as HTML, I can identify places where my content needs work. By running Hugo in a console, I can see issues with my document whenever I save it, so I can fix them and move on. This is different from my usual routine:

“My blog post is done! And now to send my masterpiece to the world! …? Wait a minute, why is all my text an H1? I hate myself.”
                     —Me, at 3 a.m. on very little sleep and too much coffee

Documentation workflow

I normally write my first drafts in plain English. I use a new branch in Git for each article, which keeps things nice and simple until I am ready to publish. Once I give my article a couple of edits to make sure everything flows well, I add AsciiDoc markup so Hugo can format the article as clean HTML. When the article is ready to publish, I merge it back into my master branch.

Often I work on documents that include standardized text or content (e.g., information about licensing, support, or company descriptions). I use an include statement for that boilerplate content and set it up in my template or define content types to add it, depending on what I’m working on. This method makes standard, repetitive content more modular and easier to maintain.

You can also define metadata that your theme can use to organize content, e.g., tagging articles, grouping content, defining a page as a “solution” or a “FAQ,” etc. This is especially handy with AsciiDoc, as a document’s header will have a standard metadata section, which may be different between an article and a blog post. I can define the metadata within Hugo, and it does the work for me when I create a new piece of content.
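To make the includes and metadata concrete, here’s a minimal sketch of a Hugo content file written in AsciiDoc, with TOML front matter carrying the metadata. The title, tags, and boilerplate path are all hypothetical:

    +++
    title = "My first post"
    date = 2017-08-15
    tags = ["asciidoc", "hugo"]
    +++

    == Introduction

    The article body is ordinary AsciiDoc.

    // Pull standardized boilerplate in from a shared file (hypothetical path).
    include::partials/license-boilerplate.adoc[]

Hugo reads the front matter to organize the content, then hands the body to Asciidoctor for rendering.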

My preferred editor, Vim, has syntax files available for AsciiDoc. If you are looking for a more visual approach to working with content, I recommend the Atom editor with the AsciiDoc Preview plugin. It provides a real-time preview of your page, making it easy to check your document. Atom was created by GitHub and has built-in support for working with Git, so it’s straightforward to work on documents across different branches.

Overall, I am very happy with Hugo and AsciiDoc. My process is more content-focused. I have a great workflow with Git, and site performance is noticeably better versus a traditional PHP/MySQL content management system.

Getting started

If you are interested in getting started with Hugo and AsciiDoc, my demo on GitHub provides content, a theme, and notes on how to get up and running. The README also contains step-by-step instructions on downloading and configuring Hugo and writing with AsciiDoc, as well as links to resources to help you get started.

Have you used AsciiDoc and Hugo? Please post links to your projects in the comments.