Why containers are the best way to test software performance

Software performance and scalability are frequent topics in application development, and for good reason: an application’s performance and scalability directly affect its success in the market. An application, no matter how good its user interface, won’t claim market share if its response time is sluggish.

This is why we spend so much time improving an application’s performance and scalability as its user base grows.

Where usual testing practices fail

Fortunately, we have a lot of tools to test software behavior under high-stress conditions. There are also tools to help identify the causes of performance and scalability issues, and benchmark tools that can stress-test systems to provide a relative measure of stability under high load. However, we run into problems with performance and scale engineering when we try to use these tools to understand the performance of enterprise products. Generally, these products are not single applications; instead, they may consist of several different applications interacting with each other to provide a consistent and unified user experience.

We may not get any meaningful data about a product’s performance and scalability issues if we test only its individual components. The real numbers can be gathered only when we test the application in real-life scenarios, that is, by subjecting the entire enterprise application to a real-life workload.

The question becomes: How can we achieve this real-life workload in a test scenario?

Containers to the rescue

The answer is containers. To explain how containers can help us understand a product’s performance and scalability, let’s look at Puppet, a software configuration management tool, as an example.

Puppet uses a client-server architecture, where there are one or more Puppet masters (servers), and the systems that are to be configured using Puppet run Puppet agents (clients).

To understand an application’s performance and scalability, we need to stress the Puppet masters with high load from the agents running on various systems.

To do this, we can install puppet-master on one system, then run multiple containers, each running our target operating system with puppet-agent installed and running.

Next, we need to configure the Puppet agents to interact with the Puppet master to manage the system configuration. This stresses the server when it handles the request and stresses the client when it updates the software configuration.
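As a rough sketch, the agent fleet could be launched with a loop like the following. The container runtime (Docker), the image name "centos-puppet-agent", and the master hostname are all illustrative assumptions, not details from a real deployment:

```shell
#!/bin/sh
# Print the commands that would start N Puppet agent containers; drop the
# "echo" to actually launch them. The image name and master hostname below
# are hypothetical placeholders.
N=100
for i in $(seq 1 "$N"); do
    echo docker run -d --name "puppet-agent-$i" centos-puppet-agent \
        puppet agent --server puppet-master.example.com --onetime --no-daemonize
done
```

Each container then checks in with the master independently, so the master sees N concurrent agents rather than one scripted client.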

So, how did the containers help here? Couldn’t we have just simulated the load on the Puppet master through a script?

The answer is no. It might have simulated the load, but we would have gotten a highly unrealistic view of its performance.

The reason for this is quite simple. In real life, a user’s system runs a number of other processes besides puppet-agent or puppet-master, and each process consumes a certain amount of system resources, directly impacting Puppet’s performance by limiting the resources it can use.

This was a simple example, but the performance and scale engineering of enterprise applications can get really challenging when dealing with products that combine more than a handful of components. This is where containers shine.

Why containers and not something else?

A genuine question is: Why use containers and not virtual machines (VMs) or just bare-metal machines?

The logic behind running containers comes down to how many instances of a system we can launch, and what they cost compared with the alternatives.

Although VMs provide a powerful mechanism, they also incur a lot of overhead on system resources, thereby limiting the number of systems that can be replicated on a single bare-metal server. By contrast, it is fairly easy to launch even 1,000 containers on the same system, depending on what kind of simulation you are trying to achieve, while keeping the resource overhead low.

With bare-metal servers, the performance and scale can be as realistic as needed, but a major problem is cost overhead. Will you buy 1,000 servers for performance and scale experiments?

That’s why containers overall provide an economical and scalable way of testing a product’s performance in real-life scenarios while keeping resource overhead and costs in check.

Learn more in Saurabh Badhwar’s talk Testing Software Performance and Scalability Using Containers at Open Source Summit in Los Angeles.

How to avoid leaving money on the table with salary negotiation

Although any sort of negotiation can be stressful, negotiating compensation for a new job—especially when you have the opportunity to get paid to work on open source software—can be especially intimidating. Because of this, many people, particularly women and minorities, choose not to negotiate at all. Unfortunately, this choice may come with a $500,000 penalty. That’s how much money the average person loses throughout their lifetime by choosing not to negotiate their wages.

Talking about the importance of wage negotiation in America is impossible without talking about the wage gap for women and minorities. A few years ago, the big buzz was about “79 cents to the dollar” that women were paid in comparison to men. Data show that the U.S. pay gap has improved marginally, and women are now on average receiving 80 cents to the male dollar. This number varies by location, and ground is being lost in some places. The disparity is even worse for women of color and other marginalized groups. We don’t even have statistics for the difficulties experienced by transgender and gender-nonconforming people, who often face some of the most severe barriers in the workplace.

Makes you look at your paycheck a little differently, doesn’t it?

Don’t let it get you down, though. Although there is a lot that must be done at the corporate and social policy levels, you can help improve your own situation by choosing to negotiate. Making that choice isn’t always easy when you’re fresh out of school or new to the industry and only have open source contributions to showcase your skills. But, once you decide to negotiate—and learn how to do it well—a lot can change. For example, last year I made the choice to negotiate my salary and increased my monthly take-home pay by more than 50%. It wasn’t easy. There was a risk it could backfire, but with a little courage and elbow grease, the result was certainly worth the effort.

Like the idea, but not sure where to start? Try negotiating on small things that don’t matter. Start frequenting yard sales and flea markets. Negotiate when you buy something, just to practice your skills. This will help boost your confidence and get you used to the process. Focus on what you can gain, not what you can lose. Recognize that the process is a bit of a game, and you can have fun in the interaction. Remember: If you don’t ask, you don’t get.

I like to think of negotiation as a two-phase process: Phase One happens before the offer, sometimes even before the interview, whereas Phase Two occurs when you sit down with HR, the hiring manager, or the recruiter and hash out the details.

Phase One

Start by looking at your own finances. Figure out your monthly and yearly budget. Decide what you need to earn to scrape by and what you need to be comfortable—whatever comfortable looks like for you. Don’t forget to include the cost of saving for emergencies and retirement. Once you have this information, start looking at pay-scale data for the position you are considering, both nationally and locally. Looking at both is important to get a baseline for what you can expect the company to offer, which may be different locally from the national average.

Now, put those numbers aside for a moment, and write a list of how wonderful you are. No, really—write a list of all your qualifications, professional accomplishments, and open source contributions. You don’t have to show it to anyone, but you should keep it close at hand. Now that you have all this in front of you, take a walk or whatever you do to relax, and decide, in your own mind, how much your knowledge and expertise are worth.

Then think about what matters to you besides your direct monetary compensation. How much time off would you like? What would you like your work hours to be? Do you prefer to work in an office or remotely? What kind of sign-on bonus do you expect? Do you want to go back to school for an advanced degree? Would you like your employer to pay for it and allow flexibility in your schedule so you can attend classes?

There are a ton of fringe benefits to employment, and often we forget that many are negotiable. Once you know what you want, decide where you’re willing to bend; for example, you might be willing to accept a little less money to have extra holiday days or to work remotely. Once you know where you’re flexible, create a salary range. The low end is the rate that you absolutely will not go below, and the high end is what you prefer. Now make a table and in the first column write down regular intervals within that range. In the second column, do a little quick and dirty math to add 10% to each number. If you think they’ll offer between US$ 50,000 and 55,000, your table may look something like this:

Offer Offer +10%
$ 50,000 $ 55,000
$ 51,000 $ 56,100
$ 52,000 $ 57,200
$ 53,000 $ 58,300
$ 54,000 $ 59,400
$ 55,000 $ 60,500
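If you'd rather not do the quick and dirty math by hand, the same table can be generated in a couple of lines of shell; the 50,000–55,000 range is just the example from the text:

```shell
# Print each expected offer alongside that offer plus 10%.
printf 'Offer\tOffer +10%%\n'
for offer in 50000 51000 52000 53000 54000 55000; do
    awk -v o="$offer" 'BEGIN { printf "$ %.0f\t$ %.0f\n", o, o * 1.10 }'
done
```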

Now that you’ve done your research and prep work, you’re ready to negotiate.

Phase Two

This is the day. The company you’ve been interviewing with for that job you’d love (or desperately need) has extended a job offer. Whether it comes by letter, phone call, email, or while sitting in a cold office across from a steely eyed negotiator, the result is the same: It’s time to step up to the plate and take your swing. Throughout the negotiations, make sure to stay polite, enthusiastic, and firm, use cooperative language, and if you’re offered an insulting number, don’t be afraid to walk away.

The hardest part comes first, but with practice it will become second nature. Don’t tell the recruiter your previous salary or salary expectations when they ask, and they will. Instead, give them a friendly smile and say something like, “I’m far more interested in designing widgets here at ACME Enterprises than I am in the compensation package.”

This gentle pushback does two things: First, it tells the other person that you know the game, and second, it keeps you from anchoring the negotiations against your previous pay. This should work most of the time. If the recruiter asks a second time, simply say, “I will consider any reasonable offer.” This is again putting the ball back in their court while not losing any ground. In rare cases, the company may push back and ask a third time. Don’t sweat it. Say something like, “You’re in a much better position to know how much I’m worth to your company than I am.” It’s hard to argue with that logic!

Once you get an offer, even if the company mails or emails it to you, try to negotiate in person or on the phone. I prefer to do it by phone, because I can be somewhere I feel comfortable and have notes in front of me, and I don’t have to police my facial expressions and body language. Even if the offer is more than you dreamed, repeat the number and stop talking. Jack Chapman, a career coach and author, calls this “the Flinch.” Because people are uncomfortable with silence, the person you are negotiating with is likely to try to fill the lull in the conversation, often with a better offer. Look at the table you made in Phase One, and counter their offer with one 10% higher.

A little haggling between the numbers will probably follow. The person you’re negotiating with may need to speak to someone higher up and come back to you later in the day—this is all part of the process. Once you have a number that you both are happy with, clinch the deal and use that as a baseline to negotiate your fringe benefits. Maybe you’re willing to give up that 10% you negotiated to get extra holiday days, or a company car, or whatever is important to you. Your last step is to ask for a compensation review in six months. This gives you half a year to show them how great you are, then you can ask for more money in your glowing pay review.

Negotiating a job offer can feel a little overwhelming, but if you practice, do your research, and remain calm, enthusiastic, and firm, you’ll end up with both a more rewarding experience and a more satisfying pay stub.

What are your experiences with negotiation? Do you have a tip or trick that never fails? Tell me about it in the comments below.

Top 5: 13 years of OpenStreetMap, Linux-powered guitar amps, and more

In this week’s top 5, we take a look at maps, robots, and more!

This week’s top articles

5. 6 hardware projects for upgrading your home

When you make your house a little smarter, you’re going to want to use open hardware. Editor Alex Sanchez shares some projects that you can do yourself on your old house.

4. 7 open source Twitter bots to follow

Do you like Twitter, but wish it wasn’t so full of humans? Editor Jason Baker shares some bots you might want to follow or use as inspiration to make your own.

3. How to make a low-cost guitar amp with Linux

Are you having trouble getting your music to go to 11? Seth Kenlon shows you how to turn your computer into a rock and roll machine.

2. Make your own Twitter bot with Python and Raspberry Pi

Community moderator Ben Nuttall shares how you can use a Raspberry Pi and the twython library to write your own Twitter bot. Now you can remind your friends to take out their trash cans every week.

1. 13 amazing maps to celebrate 13 years of OpenStreetMap

Editor Jason Baker shares how this community mapping project has done a lot over the years. And the best part is that no one will yell at you to roll the maps.

How to create a blog with AsciiDoc

I work daily with content management tools and support documentation writers whose preferred markup language is AsciiDoc. It has a simple syntax, but enough features to keep even a hardcore documentation nerd happy. AsciiDoc allows you to write documentation in a more natural way and mark it up cleanly for presentation on the web or as a PDF. This got me thinking, “Wouldn’t it be handy to be able to maintain a website purely with AsciiDoc?”

After some googling and chatting with colleagues, I found Hugo, a publishing platform that can transform articles written in Markdown or AsciiDoc into usable content for the web. It is a feature-rich platform, with a powerful language for templating and theming, and it’s a lot of fun to work with.

One big advantage to me is that Hugo doesn’t require a database to support a blog site with plenty of functionality. The pages are rendered in HTML, so sites are blazingly fast and very easy to maintain. It even comes with its own server, so I can test my site while I work on it. As long as your server can deliver HTML, you’re good to go.

Having no database, and no need for a language such as PHP, reduces the risk of SQL injection, making Hugo especially handy for creating secure sites. It also makes a website faster than one on a traditional platform, and combining it with a content-delivery network (CDN) produces a very fast website.

It supports tasks that normally are driven by a blogging platform; for example, it can automatically populate an RSS feed when a new article is added. Everyone on your team can run a copy of the site locally, so they can work on their articles in a draft state and refrain from publishing them until they’re ready. If you combine it with a Git branching strategy, multiple authors can work on blogs and articles, then merge them back into your main branch when ready to publish. Other interactive elements, such as comments, can be added with Disqus.

A different kind of development platform

When I develop a blog, I start with the idea of “content first” and try not to get tied up with the platform. This is a fine idea, but, in reality, I constantly tweak the site. Then I tweak it a little more, then a weekend is gone, and I haven’t written any content. I spent all my time playing with the theme or working on back-end services.

Using the Hugo platform with the AsciiDoc markup language and AsciiDoctor, a digital Swiss Army knife for AsciiDoc, helps me focus on content and structure rather than presentation. Hugo has a decent template system, so I can do a lot more with a lot less code. AsciiDoc helps me write documents with a nice structure, and Hugo uses AsciiDoctor to convert the documents into other formats, such as PDF or Linux man pages, as I write them. Because I can preview them locally as HTML, I can identify places my content needs work. By running Hugo in a console, I can see issues with my document whenever I save it, so I can fix them and move on. This is different from my usual routine:

“My blog post is done! And now to send my masterpiece to the world! …? Wait a minute, why is all my text a H1? I hate myself.”
                     —Me, at 3 a.m. on very little sleep and too much coffee

Documentation workflow

I normally write my first drafts in plain English. I use a new branch in Git for each article, which keeps things nice and simple until I am ready to publish. Once I give my article a couple of edits to make sure everything flows well, I add AsciiDoc markup so Hugo can format the article as clean HTML. When the article is ready to publish, I merge it back into my master branch.
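The branch-per-article routine above can be run end to end in a throwaway repository. Everything here (the branch name, the file name, the commit messages) is made up for illustration:

```shell
#!/bin/sh
# Demo: draft an article on its own branch, then merge into master to publish.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "author@example.com"
git config user.name "Author"
echo "site skeleton" > README
git add README
git commit -q -m "Initial site"
git branch -M master                  # ensure the publish branch is named master
git checkout -q -b new-article        # one branch per article
mkdir -p content/post
echo "= My New Article" > content/post/new-article.adoc
git add content/post/new-article.adoc
git commit -q -m "Draft: new article"
git checkout -q master                # back to the publish branch
git merge -q new-article              # publish the finished draft
git log --oneline
```

The merge is a fast-forward here, so publishing adds no extra commit; with several authors you would merge each article branch the same way.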

Often I work on documents that include standardized text or content (e.g., information about licensing, support, or company descriptions). I use an include statement for that boilerplate content and set it up in my template or define content types to add it, depending on what I’m working on. This method makes standard, repetitive content more modular and easier to maintain.
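For instance, a shared licensing blurb kept in its own file can be pulled into any document with AsciiDoc's include directive; the file name here is a hypothetical example:

```asciidoc
// The licensing boilerplate lives in one file and is reused everywhere.
include::shared/licensing.adoc[]
```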

You can also define metadata that your theme can use to organize content, e.g., tagging articles, grouping content, defining a page as a “solution” or a “FAQ,” etc. This is especially handy with AsciiDoc, as a document’s header will have a standard metadata section, which may be different between an article and a blog post. I can define the metadata within Hugo, and it does the work for me when I create a new piece of content.

My preferred editor, Vim, has syntax files available for AsciiDoc. If you are looking for a more visual approach to working with content, I recommend the Atom editor with the AsciiDoc Preview plugin. It provides a real-time preview of your page, making it easy to check your document. Atom was created by GitHub and has built-in support for working with Git, so it’s straightforward to work on documents across different branches.

Overall, I am very happy with Hugo and AsciiDoc. My process is more content-focused. I have a great workflow with Git, and site performance is noticeably better versus a traditional PHP/MySQL content management system.

Getting started

If you are interested in getting started with Hugo and AsciiDoc, my demo on GitHub provides content, a theme, and notes on how to get up and running. The README also contains step-by-step instructions on downloading and configuring Hugo and writing with AsciiDoc, as well as links to resources to help you get started.
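If you just want to see Hugo running locally before digging into the demo, the basic commands look something like this. The site name is a placeholder, AsciiDoc content requires asciidoctor on your PATH, and newer Hugo releases spell some of these subcommands slightly differently, so check the docs for your version:

```shell
# Scaffold a new site skeleton and add a first piece of AsciiDoc content.
hugo new site myblog && cd myblog
hugo new posts/first-post.adoc   # needs asciidoctor installed for .adoc
# hugo server -D                 # then preview drafts at http://localhost:1313
```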

Have you used AsciiDoc and Hugo? Please post links to your projects in the comments.

Install St George IPG on CentOS with cPanel

yum install swig gcc gcc-c++ autoconf automake sed php-devel
mkdir -p /opt/stgeorgeipg/ && cd /opt/stgeorgeipg/
wget --user-agent="Mozilla/5.0 (Windows NT 5.2; rv:2.0.1) Gecko/20100101 Firefox/4.0.1" https://www.ipg.stgeorge.com.au/downloads/StGeorgeLinuxAPI-3.3.tar.gz
tar -xvzf StGeorgeLinuxAPI-3.3.tar.gz
cd webpaySWIG-3.3

Once downloaded and extracted, edit the makefilePhp5 as follows:

PHP_EXTENSIONS = /usr/local/lib/php/extensions/no-debug-non-zts-20100525
PHP_INCLUDE_DIR = /home/cpeasyapache/src/php-5.4.35/

Note: the paths may vary with different PHP versions.

make -f makefilePhp5

If you are running on 64-bit, you may need to symlink the installed SSL libraries to the older sonames the module expects, as follows. You can also check what each .so* file requires by using ldd.

ln -s /usr/lib64/libssl.so.1.0.1e /usr/lib64/libssl.so.6
ln -s /usr/lib64/libcrypto.so.1.0.1e /usr/lib64/libcrypto.so.6
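After building, you can verify which shared libraries the module actually resolves with ldd. The extension path below is the one from the makefile above and may differ on your system:

```shell
# Any dependency reported as "not found" still needs a package or a symlink.
ldd /usr/local/lib/php/extensions/no-debug-non-zts-20100525/webpay_php.so
```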

Finally, edit your php.ini (find it via `whereis php`) and add the following line:

extension = webpay_php.so

Restart Apache

service httpd restart

Notes:

Always double-check the permissions and ownership!

UPGRADING

Ensure that you re-run make against makefilePhp5 with the new easyapache PHP version’s include path, for example:

Includes = /home/cpeasyapache/src/php-5.4.25

10 Useful “IP” Commands to Configure Network Interfaces

In this post, we review how to assign a static IP address, a static route, a default gateway, and so on, and how to assign an IP address on demand using the ip command. The ifconfig command is deprecated and has been replaced by the ip command in Linux; however, ifconfig still works and is available for most Linux distributions.

How do I Configure a Static IP Address (IPv4)

To configure a static IP address, you need to edit the network configuration file that assigns the address to the system. You must be the superuser (switch with the su command) from a terminal or command prompt.

For RHEL/CentOS/Fedora

Open the network configuration file for the interface (eth0 or eth1) in your favorite editor. For example, to assign an IP address to the eth0 interface:

[root@tecmint ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
Sample configuration:
DEVICE="eth0"
BOOTPROTO=static
ONBOOT=yes
TYPE="Ethernet"
IPADDR=192.168.50.2
NAME="System eth0"
HWADDR=00:0C:29:28:FD:4C
GATEWAY=192.168.50.1

For Ubuntu/Debian/Linux Mint

Assign a static IP address to the eth0 interface by editing the configuration file /etc/network/interfaces to make permanent changes, as shown below.

auto eth0
iface eth0 inet static
address 192.168.50.2
netmask 255.255.255.0
gateway 192.168.50.1

Next, restart the network service with the following command.

# /etc/init.d/networking restart
$ sudo /etc/init.d/networking restart

1. How to Assign an IP Address to a Specific Interface

The following command assigns an IP address to a specific interface (eth1) on the fly.

# ip addr add 192.168.50.5 dev eth1
$ sudo ip addr add 192.168.50.5 dev eth1

Note: Unfortunately all these settings will be lost after a system restart.

2. How to Check an IP Address

To get in-depth information about your network interfaces, such as IP address and MAC address, use the following command.

# ip addr show
$ sudo ip addr show
Sample Output
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:28:fd:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.2/24 brd 192.168.50.255 scope global eth0
    inet6 fe80::20c:29ff:fe28:fd4c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:28:fd:56 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.5/24 scope global eth1
    inet6 fe80::20c:29ff:fe28:fd56/64 scope link
       valid_lft forever preferred_lft forever

3. How to Remove an IP Address

The following command will remove an assigned IP address from the given interface (eth1).

# ip addr del 192.168.50.5/24 dev eth1
$ sudo ip addr del 192.168.50.5/24 dev eth1

4. How to Enable Network Interface

The “up” flag with an interface name (eth1) enables a network interface. For example, the following command activates the eth1 network interface.

# ip link set eth1 up
$ sudo ip link set eth1 up

5. How to Disable Network Interface

The “down” flag with an interface name (eth1) disables a network interface. For example, the following command deactivates the eth1 network interface.

# ip link set eth1 down
$ sudo ip link set eth1 down

6. How do I Check the Routing Table?

Type the following command to check the routing table information of the system.

# ip route show
$ sudo ip route show
Sample Output
10.10.20.0/24 via 192.168.50.100 dev eth0
192.168.160.0/24 dev eth1  proto kernel  scope link  src 192.168.160.130  metric 1
192.168.50.0/24 dev eth0  proto kernel  scope link  src 192.168.50.2
169.254.0.0/16 dev eth0  scope link  metric 1002
default via 192.168.50.1 dev eth0  proto static

7. How do I Add a Static Route

Why would you need to add static (manual) routes? Because the traffic for certain destinations should not pass through the default gateway; a static route sends that traffic along the best path to its destination.

# ip route add 10.10.20.0/24 via 192.168.50.100 dev eth0
$ sudo ip route add 10.10.20.0/24 via 192.168.50.100 dev eth0

8. How to Remove a Static Route

To remove an assigned static route, simply type the following command.

# ip route del 10.10.20.0/24
$ sudo ip route del 10.10.20.0/24

9. How do I Add Persistent Static Routes

All the above routes will be lost after a system restart. To add a permanent static route, edit the file /etc/sysconfig/network-scripts/route-eth0 (here we are storing the static route for eth0), add the following line, then save and exit. By default, the route-eth0 file does not exist and needs to be created.

For RHEL/CentOS/Fedora

# vi /etc/sysconfig/network-scripts/route-eth0
10.10.20.0/24 via 192.168.50.100 dev eth0

For Ubuntu/Debian/Linux Mint

Open the file /etc/network/interfaces and add the persistent static routes at the end. IP addresses may differ in your environment.

$ sudo vi /etc/network/interfaces
auto eth0
iface eth0 inet static
address 192.168.50.2
netmask 255.255.255.0
gateway 192.168.50.100
#########{Static Route}###########
up ip route add 10.10.20.0/24 via 192.168.50.100 dev eth0

Next, restart the network service with the following command.

# /etc/init.d/network restart
$ sudo /etc/init.d/network restart

10. How do I Add a Default Gateway

A default gateway can be specified globally or in an interface-specific configuration file, which matters when more than one NIC is present in the system. You can add a default gateway on the fly with the following command.

# ip route add default via 192.168.50.100
$ sudo ip route add default via 192.168.50.100

Learning Linux bash scripting for beginners

Bash (Bourne-Again Shell) is a shell and command language interpreter for Linux and Unix-like systems. It is the default shell on many operating systems, including Linux and Apple OS X.

If you have always used a graphical user interface like KDE, GNOME, MS-Windows, or Apple OS X, you are likely to find the bash shell confusing at first. But spend some time at the bash prompt, and it will be difficult for you to go back.


Here is a list of tutorials and helpful resources to help you learn bash scripting and the bash shell itself.

1. BASH Programming – Introduction HOW-TO : This tutorial intends to help you start programming basic-to-intermediate shell scripts. It does not intend to be an advanced document.

2. Advanced Bash-Scripting Guide : An in-depth exploration of the art of shell scripting. A must read to master bash shell scripting for all Unix users.

3. Learn Bash In Y Minutes : A quick tour of bash programming language.

4. BASH Frequently Asked Questions : Greg’s Wiki includes answers to many bash programming problems in Q & A format.

5. Linux Shell Scripting Tutorial : A beginner’s bash shell scripting handbook for new Linux users, sysadmins, and students studying Linux/Unix or computer science.

6. Bash Hackers Wiki : This wiki provides human-readable documentation and information for bash, including tons of examples.

7. Google’s Shell Style Guide : A thorough, general-purpose guide to bash programming from Google.

8. bash — Standard Shell : A thorough understanding of bash programming for Gentoo developers, by the Gentoo project.

9. Bash By Examples Part I, II, and III : Fundamental bash programming, where you will learn how to program in bash by example.

10. Bash Guide for Beginners : This is a practical guide which, while not always being too serious, tries to give real-life instead of theoretical examples.

Have a favorite online bash tutorial or new books? Let’s hear about it in the comments below.

15 essential commands to check hardware information on Linux

1. lscpu

The lscpu command reports information about the CPU and processing units. It is simple and focused, with only a handful of options.

2. lshw – List Hardware

A general-purpose utility that reports detailed or brief information about many different hardware units, such as CPU, memory, disk, USB controllers, and network adapters. lshw extracts the information from various files under /proc.

3. hwinfo – Hardware Information

hwinfo is another general-purpose hardware probing utility that can report detailed or brief information about many different hardware components, often more than what lshw can report.

4. lspci – List PCI

The lspci command lists all the PCI buses and details about the devices connected to them.
The VGA adapter, graphics card, network adapter, USB ports, SATA controllers, etc., all fall under this category.

5. lsscsi – List scsi devices

Lists the SCSI/SATA devices, such as hard drives and optical drives.

6. lsusb – List usb buses and device details

This command shows the USB controllers and details about the devices connected to them. By default, brief information is printed. Use the verbose option “-v” to print detailed information about each USB port.

7. Inxi

inxi is a 10,000-line mega bash script that fetches hardware details from many different sources and commands on the system, and generates a beautiful-looking report that non-technical users can read easily.

8. lsblk – List block devices

Lists information about all block devices: the hard drive partitions and other storage devices, such as optical drives and flash drives.

9. df – disk space of file systems

Reports various partitions, their mount points and the used and available space on each.

10. Pydf – Python df

An improved version of df, written in Python, that displays colored output that looks better than df’s.

11. fdisk

fdisk is a utility for modifying partitions on hard drives, and it can be used to list partition information as well.

12. mount

The mount command is used to mount/unmount file systems and to view the file systems that are currently mounted.

13. free – Check RAM

Check the amounts of used, free, and total RAM on the system with the free command.
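For example, a quick check in megabytes (the exact column layout varies between procps versions):

```shell
free -m   # used, free, and total RAM and swap, in megabytes
```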

15. /proc files

Many of the virtual files in the /proc directory contain information about hardware and configurations. Here are some of them:

CPU/Memory information

# cpu information
$ cat /proc/cpuinfo

# memory information
$ cat /proc/meminfo

Linux/kernel information

$ cat /proc/version
Linux version 3.11.0-12-generic (buildd@allspice) (gcc version 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu7) ) #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013

Rsync with a non-standard ssh port

While doing some work on migrating accounts to a new server, I needed to use rsync over ssh. The ssh daemon on the remote server runs on a non-standard port, and all the port-related options to rsync only change settings if you’re running the rsync daemon.

After some searching, the man page of rsync offered a solution:

rsync -avz -e "ssh -p $portNumber" /localpath user@remoteip:/remotepath

Rsync (Remote Sync): 10 Practical Examples of Rsync Command in Linux

Rsync (Remote Sync) is one of the most commonly used commands for copying and synchronizing files and directories, both remotely and locally, on Linux/Unix systems. With rsync you can copy and synchronize your data across directories, disks, and networks, perform data backups, and mirror between two Linux machines.

This article explains 10 basic and advanced uses of the rsync command for transferring your files remotely and locally on Linux-based machines. You don’t need to be the root user to run the rsync command.

Some advantages and features of the rsync command:
  1. It efficiently copies and syncs files to or from a remote system.
  2. It supports copying links, devices, owners, groups, and permissions.
  3. It’s faster than scp (Secure Copy) because rsync uses a remote-update protocol, which allows it to transfer just the differences between two sets of files. The first time, it copies the whole content of a file or directory from source to destination, but from then on, it copies only the changed blocks and bytes to the destination.
  4. Rsync consumes less bandwidth because it compresses data while sending and decompresses it while receiving.
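The delta-transfer behavior in point 3 is easy to see with a purely local sync; the directory names here are made up for the demo:

```shell
# Create a small source tree and sync it; rerun after editing a file and
# rsync will send only the changes, not the whole tree.
mkdir -p /tmp/rsync-demo/src /tmp/rsync-demo/dst
echo "first draft" > /tmp/rsync-demo/src/notes.txt
rsync -avz /tmp/rsync-demo/src/ /tmp/rsync-demo/dst/
```

The trailing slash on src/ copies the directory's contents rather than the directory itself.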

Continue reading Rsync (Remote Sync): 10 Practical Examples of Rsync Command in Linux