whowatch – Monitor Linux Users and Processes in Real Time

whowatch is a simple, easy-to-use interactive, who-like command line program for monitoring processes and users on a Linux system. It shows who is logged on to your system and what they are doing, much like the w command, but updated in real time.

It shows the total number of users on the system and the number of users per connection type (local, telnet, ssh and others). whowatch also shows system uptime and displays information such as each user’s login name, tty, host, and processes, as well as the connection type.

In addition, you can select a particular user and view their process tree. In process tree mode, you can send SIGINT and SIGKILL signals to a selected process.

In this brief article, we will explain how to install and use whowatch on Linux systems to monitor users and processes in real time.

How to Install whowatch in Linux


The whowatch program can be easily installed from the default repositories using your distribution’s package manager, as shown.

$ sudo apt install whowatch     [On Ubuntu/Debian]
$ sudo yum install whowatch     [On CentOS/RHEL]
$ sudo dnf install whowatch     [On Fedora 22+]

Once installed, simply type whowatch on the command line and you will see the following screen.

$ whowatch
Monitor Logged in Users


To view a particular user’s details, simply highlight the user (use the Up and Down arrow keys to navigate), then press the d key to list the user’s information as shown in this screenshot.

Check User Information in Linux


To view a user’s process tree, press Enter after highlighting that particular user.

Monitor User Process


To view the process tree of all users, press t.

Monitor Linux User Processes


You can also view Linux system information by pressing the s key.

Check Linux System Information


For more information, see the whowatch man page as shown.

$ man whowatch
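whowatch’s live view complements the classic non-interactive tools it is modeled on. For a one-shot snapshot of much the same information, you can fall back on these standard commands (found in typical default installs):

```shell
# One-shot, non-interactive equivalents of what whowatch displays:
who -H     # logged-in users with column headers (name, tty, login time, host)
w          # logged-in users plus what each is running, and load averages
uptime     # the same uptime summary whowatch shows in its header
```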

You will also find these related articles useful:

  1. How to Monitor Linux Commands Executed by System Users in Real-time
  2. How to Monitor User Activity with psacct or acct Tools

That’s all! whowatch is a simple, easy-to-use interactive command line utility for monitoring processes and users on a Linux system. In this brief guide, we have explained how to install and use whowatch. Use the feedback form below to ask any questions or share your thoughts about this utility.

Why Sinclair’s Bid to Buy the Tribune Company Might Die

Sinclair Broadcasting’s proposed $3.9 billion takeover of Tribune Media, which would expand the conservative media company’s footprint to nearly three-fourths of American households, suddenly appears in trouble.

Today, Federal Communications Commission chair Ajit Pai effectively came out against the acquisition by proposing to refer it to a hearing with a judge. In theory, the deal could still go ahead if the judge finds no problems with the acquisition or if the decision is appealed. But mergers referred to judges, such as AT&T’s 2011 bid for T-Mobile, have a tendency to die before their hearings, due in large part to the unpredictable timelines for hearings and decisions.

“Based on a thorough review of the record, I have serious concerns about the Sinclair-Tribune transaction,” Pai said in a statement. “When the FCC confronts disputed issues like these, the Communications Act does not allow it to approve a transaction. Instead, the law requires the FCC to designate the transaction for a hearing in order to get to the bottom of those disputed issues.”

The move follows the launch of an investigation by the FCC’s inspector general, its internal watchdog, over several decisions by Pai that were widely seen as benefitting Sinclair. Neither the FCC nor Sinclair responded to a request for comment.

Sinclair announced its intention to acquire Tribune Media in May 2017, after the FCC began loosening its media ownership rules. Sinclair owns 173 TV stations and Tribune owns 42. The two companies estimate that they could reach 73 percent of US households if they were allowed to merge. Apart from concerns that Pai acted improperly, critics of the deal worried that it would concentrate too much power over local television news in the hands of a single company.

The acquisition is particularly controversial because of Sinclair’s ties to the Trump administration. The company routinely requires its stations, which include local affiliates of all four major broadcast networks, to air conservative commentary by former Trump administration official Boris Epshteyn, among other right-leaning “must-run” segments. In late 2016, Politico reported that President Donald Trump’s son-in-law, Jared Kushner, claimed to have struck a deal with Sinclair for favorable news coverage. More recently, the company attracted attention when Deadspin released a video montage of local news anchors across the country reading from the same prepared script accusing other news outlets of perpetuating fake news.

The Sinclair-Tribune deal also drew opposition from other conservative media companies like Newsmax, which worried that increased consolidation would decrease competition. Newsmax joined other media companies and organizations, including Dish Network and Public Knowledge, in asking the FCC to call for more time to consider the merger. “This current transaction overturns more than three decades of bipartisan consensus and rule-making, as well as Congressional intent, while raising serious competitive concerns,” Newsmax wrote in the filing.

The FCC changed some of its media ownership rules last year to allow broadcast companies to reach a larger percentage of households nationwide, and proposed to increase the number of stations a single company can own in one market. Critics believed the moves were made specifically to benefit Sinclair and clear the way for its acquisition of Tribune Media. Sinclair would still have had to sell some of its stations to stay within the law, and the Department of Justice also reportedly called for the company to sell some stations. In his announcement Monday, Pai alluded to the possibility that Sinclair tried to skirt that requirement. “The evidence we’ve received suggests that certain station divestitures that have been proposed to the FCC would allow Sinclair to control those stations in practice, even if not in name, in violation of the law,” he said.

“When Sinclair has been forced to sell stations during previous mergers, it has routinely sold them to family and friends and then signed agreements to control the programming on those stations,” said Karl Frisch, executive director of Allied Progress, an advocacy group that opposed the merger. “The FCC is right to call out this scheme.”

Jessica Rosenworcel, the FCC’s sole Democratic commissioner, said in a statement that she voted in favor of Pai’s proposal. Bloomberg reported that Commissioner Brendan Carr did as well, securing the necessary majority to pass the order.


More Great WIRED Stories

Strikes, Boycotts, and Outages Mar Amazon Prime Day

Prime Day, which began Monday, is one of Amazon’s biggest promotions of the year, when the retailer offers deals to subscribers of its Prime service. This year, some Amazon workers in Europe are striking during Prime Day, hoping to draw attention to working conditions like proposed cuts in wages and health benefits. In solidarity, some consumers have been boycotting the company and its many subsidiaries, like Twitch and Whole Foods.

Nearly 1,800 workers went on strike on Monday in Spain, where the planned protest was first conceived as a way to fight pay cuts and restrictions on time off. But workers in Poland, Germany, Italy, France, and England are also reportedly joining the call for a transnational strike around Prime Day. The unions representing warehouse workers involved in the strike are Comisiones Obreras in Spain and Verdi services union in Germany.

Prime Day is a bit of a misnomer, as the promotion lasts for 36 hours. German workers are expected to walk out Tuesday. In a press release on its website, Verdi wrote that Amazon employees have been struggling for years with health problems from monotonous work and severe physical and mental stress. “Amazon has neglected this responsibility for years and denied its people the right to set rules in a collective agreement,” wrote spokesperson Stefanie Nutzberger.

To top it off, portions of Amazon’s website were not responding in the early hours of the promotion Monday. At 7 pm ET, downdetector.com, a website that tracks outages, was reporting problems in many parts of the US.

In a statement to WIRED, the company said, “Amazon is a fair and responsible employer and as such we are committed to dialogue, which is an inseparable part of our culture. We are committed to ensuring a fair cooperation with all our employees, including positive working conditions and a caring and inclusive environment.” The company said it has provided “a safe and positive workplace with competitive pay and benefits from day one.”

In response to a question about the outages, Amazon said, “Some customers are having difficulty shopping, and we’re working to resolve this issue quickly. Many are shopping successfully—in the first hour of Prime Day in the US, customers have ordered more items compared to the first hour last year.”

Bloomberg Intelligence estimated that Amazon would generate roughly $3 billion in sales during Prime Day. Not all of that will be Amazon revenue, as some sales will be by outside merchants who sell through Amazon. In 2017, subscriptions for Prime and other services like e-books and digital video accounted for $9.7 billion in revenue, or about 5 percent of Amazon’s $178 billion in annual revenue.

The consumer boycott began on July 10, organized around the hashtag #amazonstrike on Twitter. On Monday, Game Workers Unite International, a grassroots group attempting to unionize the gaming industry, said it would boycott Twitch, the popular gaming platform that Amazon acquired in 2014, for the day in solidarity.

Amazon’s European employees have used strikes as a bargaining tool for better working conditions in the past around holidays.

This spring, reports surfaced about Amazon workers in the US who rely on food stamps and Amazon fulfillment center workers in the UK who are forced to forgo bathroom breaks and pee in bottles. On social media, Amazon’s critics lambasted the company’s lack of investment in its workers after CEO Jeff Bezos said in an interview that the best way to spend his considerable fortune was on his rocket company Blue Origin. “The only way that I can see to deploy this much financial resource is by converting my Amazon winnings into space travel,” Bezos said.



Juul’s Lobbying Could Send Its Public Image Up in Smoke

Over the past year, Juul, the vaping sensation that dominates 70 percent of the US e-cigarette market, has tried to cultivate the image of a decent corporate citizen that wants to play by the rules. The company is known for its legions of obsessive young users who have embraced Juul’s discreet, flash-drive-shaped e-cigarettes and pleasing nicotine pods in flavors like fruit medley and mango. When parents, school administrators, public health advocates, and regulators raised concerns, Juul insisted it only wants to help adult smokers stop smoking.

After an April inquiry from the Food and Drug Administration about its marketing, Juul pledged to spend $30 million over the next three years on youth smoking prevention and to support Tobacco 21, a national campaign aimed at raising the minimum age for tobacco and nicotine sales in the US. In June, the company vowed to stop using models in its social media ads and to work with social media companies to remove offending posts and accounts.

With other moves, however, Juul has left even allies wondering about its views. The company has declined to take a position on a controversial bill to ease FDA review of some new e-cigarette products. In May, Juul stayed out of an expensive battle over flavored tobacco in its hometown, San Francisco. Then, in June, Juul sent an email blast to consumers asking them to oppose proposed federal and local regulations to ban vapor flavors because they would make it difficult for adult smokers to switch away from cigarettes. “If flavors have been important to your switching journey, please let the FDA know,” the message said, directing consumers to a site where they could post a comment for the FDA.

Most recently, Juul last week tweeted from its corporate account that “it’s not an e-cigarette.” That confused Alex Clark, executive director of Consumer Advocates for Smoke-free Alternatives Association, an advocacy group that supports e-cigarettes and tobacco alternatives. “It has a different design, a slightly different formulation, but it’s still a vapor product,” Clark says of Juul.

Greg Conley, president of the American Vaping Association, thinks Juul’s wavering is driven by politics. “They were a little too willing to sell out the 18, 19, and 20 year old smokers by agreeing to support Tobacco 21, but they were in a unique situation with the hundreds of stories coming out” about teens using Juul, Conley says. But he thinks Juul will ultimately join the rest of the industry in opposing government regulation. “There’s a difference between supporting a policy and spending massive amounts of money” to push it, he says. Juul says it has written letters in support of local efforts to impose age requirements.

Public health advocates are also skeptical of Juul’s position on younger users. In January, Kimberlee Homer Vagadori, project director of California Youth Advocacy Network, was alarmed to learn that Juul had offered to partner with K-12 schools for a youth smoking prevention program that included focus groups with younger users. The tobacco industry began sponsoring youth prevention efforts in the 1980s as a way to forestall legislation, but studies found that they did more harm than good. “It’s just a marketing ploy,” Vagadori says.

Juul says it no longer reaches out to schools. “We are aware of the criticism and scrutiny from some regarding our efforts,” says spokesperson Victoria Davis. “But we cannot be more clear—we want to work with policymakers, lawmakers, FDA regulators, educators, and parents on youth education and prevention. We want to be part of the solution in keeping minors away from Juul.”

Juul’s moves come at a crucial time for the company, founded by two Stanford alums who set out to apply their product design skills to build a cleaner, cooler vaporizer that would fit in your pocket. In December, Juul spun out of its parent company, Pax Labs, and is in the process of raising more than $1.2 billion at a $15 billion valuation, up from an estimated $350 million in 2015.

The vaping industry itself is at an inflection point. After years with no federal oversight, companies now have until 2022 to complete a costly, complicated review process with the FDA. In the meantime, states are playing catch-up with Juul’s sudden ubiquity in high school bathrooms, cracking down on e-cigarettes with taxes, indoor bans, flavor bans, and more.

‘It’s more indication that Juul is behaving like tobacco companies always have.’

Vince Willmore, the Campaign for Tobacco Free Kids

As Juul’s financial ambitions inflate, the company finds itself on the same side of regulatory battles as the tobacco industry it once hoped to disrupt. This leaves Juul shuffling, awkwardly, between its public image and private sales pitch. Now the company has to convince regulators that it wants to push away the teens that made it famous, just as it’s trying to convince investors that a highly regulated piece of hardware can scale as quickly as lines of code.

Juul declined to make executives available, but Davis, the spokesperson, says, “We are investing in our Washington, DC office because we want to support bipartisan policies to help adult smokers in their switching journey.”

For now, e-cigarettes and the vaping industry are puffing away in regulatory limbo. The FDA did not start regulating tobacco products until 2009, after Congress passed the Family Smoking Prevention and Tobacco Control Act. But it wasn’t until 2016 that the FDA officially extended its authority over newer products, like e-cigarettes. A morass of exceptions, extensions, and lawsuits followed. As it stands, newly regulated products like e-cigarettes must retroactively go through a complicated FDA review process before 2022, without knowing exactly what the standards around vaping will be.

That’s why the controversial bill on which Juul declined to take a position is key. The measure, known as the Cole-Bishop amendment, would exempt e-cigarette companies from the FDA review if they can show that a new product is “substantially equivalent” to an existing product. It was included in a House agricultural appropriations bill in May, and awaits consideration by the Senate.

Records show that Juul has spent $240,000 to lobby Congress on vaping and e-cigarette regulation since last year, including FDA rules and the appropriations bills. But when WIRED asked Juul about its position on the Cole-Bishop amendment, the company did not give a straight answer. “Juul Labs is looking to work with the FDA and Congress on establishing scientifically-valid and appropriate regulations of [electronic nicotine delivery systems or ENDS] products,” the company said in a statement.

That worries some anti-smoking advocates. “Juul’s comments are troubling. If they are serious about being part of the solution, they would support effective FDA regulation and they would be opposing efforts to weaken FDA authority and it’s troubling that they’re not opposing it,” says Vince Willmore, vice president of communications with the Campaign for Tobacco Free Kids. “It’s more indication that Juul is behaving like tobacco companies always have.”

Against this backdrop, Juul is quickly ramping up its Washington presence. In the past six months, Juul hired two well-connected former Department of Health and Human Services officials, one from George W. Bush’s administration and one from Barack Obama’s, for its new DC office. Tevi Troy, the head of the Washington office, co-authored an op-ed in 2015 about Obamacare with FDA Commissioner Scott Gottlieb.

The company is also making moves at the state level. Juul has spent $62,000 to lobby about “harm reduction” in California, it has registered a lobbyist in New York, and partnered with Iowa Attorney General Tom Miller, who made his name as a crusader against big tobacco, for its $30 million youth prevention effort.

In a recent interview with Politico, Troy, the head of Juul’s Washington office, said the company has a window this year “to establish a regulatory regime and a legislative atmosphere and a thought leader atmosphere, a kind of public health consciousness, to set the market correctly.”

“I agree with you that using a Juul is worse than doing nothing,” he said in the same interview. “Nobody who is not smoking should take up this product. Nobody who’s a kid should take up this product. I even have a little text replacement on my iPhone. I just type in a couple letters and it pops up: ‘For Adults Smokers only.’”



Deprecated Linux Networking Commands and Their Replacements

In our previous article, we covered some useful command line networking utilities for sysadmins for network management, troubleshooting and debugging on Linux. We mentioned some networking commands that are still included and supported in many Linux distributions but are, in reality, deprecated or obsolete, and should therefore be phased out in favor of more modern replacements.

Although these networking tools/utilities are still available in the official repositories of mainstream Linux distributions, they no longer come pre-installed by default.

This is evident in Enterprise Linux distributions: a number of popular networking commands no longer work out of the box on RHEL/CentOS 7, although they do on RHEL/CentOS 6. Recent Debian and Ubuntu releases don’t include them either.

In this article, we will share deprecated Linux networking commands and their replacements. These commands include ifconfig, netstat, arp, iwconfig, iptunnel, nameif, as well as route.


All the listed programs, with the exception of iwconfig, are found in the net-tools package, which has not been under active maintenance for many years.

Importantly, you should keep in mind that unmaintained software is dangerous: it poses a serious security risk to your Linux system. The modern replacement for net-tools is iproute2, a collection of utilities for controlling TCP/IP networking in Linux.

The following table summarizes the deprecated commands and their replacements; take note of them.

Linux Deprecated Commands    Linux Replacement Commands
arp                          ip n (ip neighbor)
ifconfig                     ip a (ip addr), ip link, ip -s (ip -stats)
iptunnel                     ip tunnel
iwconfig                     iw
nameif                       ip link, ifrename
netstat                      ss, ip route (for netstat -r), ip -s link (for netstat -i), ip maddr (for netstat -g)
route                        ip r (ip route)
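As a quick reference, the most common day-to-day invocations map over as shown below. This is a sketch of typical usage; the exact ss and ip flags you need will depend on what you were doing with the old tools, and eth0 is a placeholder interface name:

```shell
# ifconfig           → show all interfaces and their addresses
ip addr show
# ifconfig eth0 up   → bring an interface up (requires root):
# ip link set eth0 up
# route -n           → print the kernel routing table
ip route show
# netstat -tuln      → list listening TCP/UDP sockets, numeric
ss -tuln
# arp -a             → show the neighbor (ARP) cache
ip neigh show
```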

You will find more details about some of the replacements in the following guides.

  1. ifconfig vs ip: What’s the Difference and Comparing Network Configuration
  2. 10 Useful “IP” Commands to Configure Network Interfaces

Reference: Doug Vitale Tech Blog post.
Net-tools Project Home: https://sourceforge.net/projects/net-tools/
iproute2 Description Page: https://wiki.linuxfoundation.org/networking/iproute2

All in all, it’s good to keep these changes in mind, as most of these obsolete tools will eventually be removed entirely. Old habits die hard, but you have to move on. Besides, installing and using unmaintained packages on your Linux system is an insecure and dangerous practice.

Are you still stuck to using these old/deprecated commands? How are you coping with the replacements? Share your thoughts with us via the feedback form below.

Alacritty – The Fastest Terminal Emulator for Linux

Alacritty is a free, open source, fast, cross-platform terminal emulator that uses the GPU (Graphics Processing Unit) for rendering, implementing certain optimizations that are not available in many other Linux terminal emulators.

Alacritty is focused on two goals: simplicity and performance. The performance goal means it should be faster than any other terminal emulator available. The simplicity goal means it doesn’t support features such as tabs or splits, which can easily be provided by a terminal multiplexer such as tmux.

Prerequisites

Alacritty requires the most recent stable Rust compiler to build it.

Install Required Dependency Packages

1. First, install the Rust programming language using the rustup installer script and follow the on-screen instructions.

$ curl https://sh.rustup.rs -sSf | sh


2. Next, you need to install a few additional libraries to build Alacritty on your Linux distributions, as shown.

--------- On Ubuntu/Debian ---------
# apt-get install cmake libfreetype6-dev libfontconfig1-dev xclip
--------- On CentOS/RHEL ---------
# yum install cmake freetype-devel fontconfig-devel xclip
# yum group install "Development Tools"
--------- On Fedora ---------
# dnf install cmake freetype-devel fontconfig-devel xclip
--------- On Arch Linux ---------
# pacman -S cmake freetype2 fontconfig pkg-config make xclip
--------- On openSUSE ---------
# zypper install cmake freetype-devel fontconfig-devel xclip 

Installing Alacritty Terminal Emulator in Linux

3. Once you have installed all the required packages, clone the Alacritty source code repository and compile it using the following commands.

$ cd Downloads
$ git clone https://github.com/jwilm/alacritty.git
$ cd alacritty
$ cargo build --release

4. Once the compilation process is complete, the binary will be available at ./target/release/alacritty. Copy the binary to a directory in your PATH and, on a desktop system, add the application to your system menus, as follows.

# cp target/release/alacritty /usr/local/bin
# cp Alacritty.desktop ~/.local/share/applications

5. Next, install the manual page using the following command.

# gzip -c alacritty.man | sudo tee /usr/local/share/man/man1/alacritty.1.gz > /dev/null

6. To add shell completion settings to your Linux shell, do the following.

--------- On Bash Shell ---------
# cp alacritty-completions.bash ~/.alacritty
# echo "source ~/.alacritty" >> ~/.bashrc
--------- On ZSH Shell ---------
# cp alacritty-completions.zsh /usr/share/zsh/functions/Completion/X/_alacritty
--------- On FISH Shell ---------
# cp alacritty-completions.fish /usr/share/fish/vendor_completions.d/alacritty.fish

7. Finally, find Alacritty in your system menu and launch it. When run for the first time, a configuration file will be created at $HOME/.config/alacritty/alacritty.yml, which you can edit to configure the terminal.
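As a minimal example, a few commonly tweaked settings in alacritty.yml look like this. This is a sketch based on the classic YAML schema of that era (newer releases have since migrated to alacritty.toml); check the comments in your generated file for the keys your version supports:

```yaml
# ~/.config/alacritty/alacritty.yml -- a few commonly tweaked options
font:
  size: 11.0          # point size of the terminal font
window:
  dimensions:
    columns: 120      # initial window width, in columns
    lines: 30         # initial window height, in lines
```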

Alacritty Terminal Emulator


For more information and configuration options, visit the Alacritty GitHub repository.

Alacritty is a cross-platform, fast, GPU-accelerated terminal emulator focused on speed and performance. Although it is ready for daily use, many features, such as scrollback, are yet to be added. Share your thoughts about it via the feedback form below.

Amazon Kinesis Video Streams Adds Support For HLS Output Streams

Today I’m excited to announce and demonstrate the new HTTP Live Streams (HLS) output feature for Amazon Kinesis Video Streams (KVS). If you’re not already familiar with KVS, Jeff covered the release for AWS re:Invent in 2017. In short, Amazon Kinesis Video Streams is a service for securely capturing, processing, and storing video for analytics and machine learning – from one device or millions. Customers are using Kinesis Video with machine learning algorithms to power everything from home automation and smart cities to industrial automation and security.

After iterating on customer feedback, we’ve launched a number of features in the past few months including a plugin for GStreamer, the popular open source multimedia framework, and docker containers which make it easy to start streaming video to Kinesis. We could talk about each of those features at length, but today is all about the new HLS output feature! Fair warning, there are a few pictures of my incredibly messy office in this post.

HLS output is a convenient new feature that allows customers to create HLS endpoints for their Kinesis Video Streams, convenient for building custom UIs and tools that can playback live and on-demand video. The HLS-based playback capability is fully managed, so you don’t have to build any infrastructure to transmux the incoming media. You simply create a new streaming session, up to 5 (for now), with the new GetHLSStreamingSessionURL API and you’re off to the races. The great thing about HLS is that it’s already an industry standard and really easy to leverage in existing web-players like JW Player, hls.js, VideoJS, Google’s Shaka Player, or even rendering natively in mobile apps with Android’s Exoplayer and iOS’s AV Foundation. Let’s take a quick look at the API, feel free to skip to the walk-through below as well.

Kinesis Video HLS Output API

The documentation covers this in more detail than we can go over in this blog post, but I’ll cover the broad components.

  1. Get an endpoint with the GetDataEndpoint API
  2. Use that endpoint to get an HLS streaming URL with the GetHLSStreamingSessionURL API
  3. Render the content in the HLS URL with whatever tools you want!

This is pretty easy in a Jupyter notebook with a quick bit of Python and boto3.

import boto3

STREAM_NAME = "RandallDeepLens"
kvs = boto3.client("kinesisvideo")

# Grab the endpoint from GetDataEndpoint
endpoint = kvs.get_data_endpoint(
    APIName="GET_HLS_STREAMING_SESSION_URL",
    StreamName=STREAM_NAME,
)["DataEndpoint"]

# Grab the HLS stream URL from the endpoint
kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = kvam.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="LIVE",
)["HLSStreamingSessionURL"]

You can even visualize everything right away in Safari which can render HLS streams natively.

from IPython.display import HTML
HTML(data='<video src="{0}" autoplay="autoplay" controls="controls" width="300" height="400"></video>'.format(url)) 

We can also stream directly from an AWS DeepLens with just a bit of code:

import DeepLens_Kinesis_Video as dkv
import time
aws_access_key = "super_fake"
aws_secret_key = "even_more_fake"
region = "us-east-1"
stream_name ="RandallDeepLens"
retention = 1 #in minutes.
wait_time_sec = 60*300 #The number of seconds to stream the data
# will create the stream if it does not already exist
producer = dkv.createProducer(aws_access_key, aws_secret_key, "", region)
my_stream = producer.createStream(stream_name, retention)
my_stream.start()
time.sleep(wait_time_sec)
my_stream.stop()

How to use Kinesis Video Streams HLS Output Streams

We definitely need a Kinesis Video Stream, which we can create easily in the Kinesis Video Streams Console.

Now, we need to get some content into the stream. We have a few options here. Perhaps the easiest is the docker container. I decided to take the more adventurous route and compile the GStreamer plugin locally on my mac, following the scripts on github. Be warned, compiling this plugin takes a while and can cause your computer to transform into a space heater.

With our freshly compiled GStreamer binaries like gst-launch-1.0 and the kvssink plugin we can stream directly from my macbook’s webcam, or any other GStreamer source, into Kinesis Video Streams. I just use the kvssink output plugin and my data will wind up in the video stream. There are a few parameters to configure around this, so pay attention.

Here’s an example command that I ran to stream my macbook’s webcam to Kinesis Video Streams:

gst-launch-1.0 autovideosrc ! videoconvert \
! video/x-raw,format=I420,width=640,height=480,framerate=30/1 \
! vtenc_h264_hw allow-frame-reordering=FALSE realtime=TRUE max-keyframe-interval=45 bitrate=500 \
! h264parse \
! video/x-h264,stream-format=avc,alignment=au,width=640,height=480,framerate=30/1 \
! kvssink stream-name="BlogStream" storage-size=1024 aws-region=us-west-2 log-config=kvslog

Now that we’re streaming some data into Kinesis, I can use the getting started sample static website to test my HLS stream with a few different video players. I just fill in my AWS credentials and ask it to start playing. The GetHLSStreamingSessionURL API supports a number of parameters so you can play both on-demand segments and live streams from various timestamps.
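For on-demand playback of recorded segments, the same API takes a fragment selector with a timestamp range. Here’s a sketch of building that request (stdlib only; the boto3 call itself is shown commented because it needs AWS credentials and an existing stream, and STREAM_NAME is the stream from the walk-through):

```python
from datetime import datetime, timedelta

STREAM_NAME = "RandallDeepLens"  # stream name from the walk-through

def build_on_demand_request(stream_name, start, end, expires=3600):
    """Build kwargs for GetHLSStreamingSessionURL for a recorded clip."""
    return {
        "StreamName": stream_name,
        "PlaybackMode": "ON_DEMAND",  # vs. "LIVE" in the earlier example
        "HLSFragmentSelector": {
            "FragmentSelectorType": "SERVER_TIMESTAMP",
            "TimestampRange": {"StartTimestamp": start, "EndTimestamp": end},
        },
        "Expires": expires,  # how long the returned URL stays valid, in seconds
    }

# Request the last 10 minutes of recorded video.
end = datetime.utcnow()
params = build_on_demand_request(STREAM_NAME, end - timedelta(minutes=10), end)
# kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
# url = kvam.get_hls_streaming_session_url(**params)["HLSStreamingSessionURL"]
```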

Additional Info

Data Consumed from Kinesis Video Streams using HLS is charged $0.0119 per GB in US East (N. Virginia) and US West (Oregon) and pricing for other regions is available on the service pricing page. This feature is available now, in all regions where Kinesis Video Streams is available.

The Kinesis Video team told me they’re working hard on getting more integration with the AWS Media services, like MediaLive, which will make it easier to serve Kinesis Video Stream content to larger audiences.

As always, let us know what you think on Twitter or in the comments. I’ve had a ton of fun playing around with this feature over the past few days and I’m excited to see customers build some new tools with it!

– Randall

Microsoft Calls For Federal Regulation of Facial Recognition

Over the past year, Silicon Valley has been grappling with the way it handles our data, our elections, and our speech. Now it’s got a new concern: our faces. In just the past few weeks, critics assailed Amazon for selling facial recognition technology to local police departments, and Facebook for how it gained consent from Europeans to identify people in their photos.

Microsoft has endured its own share of criticism lately around the ethical uses of its technology, as employees protested a contract under which US Immigration and Customs Enforcement uses Microsoft’s cloud-computing service. Microsoft says that contract did not involve facial recognition. When it comes to facial analysis, a Microsoft service used by other companies has been shown to be far more accurate for white men than for women or people of color.

In an effort to help society keep pace with the rampaging development of the technology, Microsoft President Brad Smith today is publishing a blog post calling for government regulation of facial recognition. Smith doesn’t identify specific rules; rather, he suggests, among other things, that the government create a “bipartisan and expert commission” to study the issue and make recommendations.

Smith poses a series of questions such a commission should consider, including potential restrictions on law-enforcement or national-security uses of the technology; standards to prevent racial profiling; requirements that people be notified when the technology is being used, particularly in public spaces; and legal protections for people who may be misidentified. But he doesn’t detail Microsoft’s view of the answers to those questions.

“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms,” Smith writes. “Facial recognition will require the public and private sectors alike to step up – and to act.”

Like many technologies, facial recognition can be useful, or harmful. Internet users tap services from Google, Facebook, and others to identify people in photos. Apple allows users to unlock the iPhone X with their faces. Microsoft offers a similar service through Windows Hello to unlock personal computers. Uber uses Microsoft’s facial-recognition technology to confirm the identity of drivers using its app. Facial analysis can be a form of identification in offices, airports, and hotels.

But there are few rules governing use of the technology, either by police or private companies. In the blog post, Smith raises the specter of a government database of attendees at a political rally, or stores monitoring every item you browse, even those you don’t buy. Given the political gridlock in Washington, an expert commission may be a convenient way for Microsoft to appear to be responsible with little risk that the government will actually restrict its, or any other company’s, use of facial-recognition technology. But Smith says such commissions have been used widely—28 times in the past decade—with some success; he points to the 9/11 commission and subsequent changes to the nation’s security agencies.

Outside the US, facial recognition technology is used extensively in China, often by the government, and with few constraints. Suspected criminals have been identified in crowds using the technology, which is widely deployed in public places.

Beyond government regulation, Smith says Microsoft and other tech companies should take more responsibility for their use of the technology. That includes efforts to act transparently, reduce bias, and deploy the technology slowly and cautiously. “If we move too fast with facial recognition, we may find that people’s fundamental rights are being broken,” he writes. Smith says Microsoft is working to reduce the racial disparities in its facial-analysis software.

Concern about the ethical uses of technology is not new. But the increasing power of artificial intelligence to scan faces, drive cars, and predict crime, among other things, has given rise to research institutes, industry groups, and philanthropic programs. Microsoft in 2016 created an internal advisory committee, cosponsored by Smith, on its use of artificial intelligence more broadly. In the post, Smith says the company has turned down customer requests to deploy its technology “where we’ve concluded there are greater human rights risks.” Microsoft declined to discuss specifics of any work it has turned down.

Microsoft’s approach wins praise from Eileen Donahoe, an adjunct professor at Stanford’s Center for Democracy, Development, and the Rule of Law. “Microsoft is way ahead of the curve in thinking seriously about the ethical implications of the technology they’re developing and the human rights implications of the technology they’re developing,” she says. Donahoe says she expects the post to spark conversations at other technology companies.

Some critics have suggested that tech companies halt research on artificial intelligence, including facial recognition. But Donahoe says that’s not realistic, because others will develop the technology. “I would rather have those actors engaging with their employees, their consumers and the US government in trying to think about the possible uses of the technology, as well as the risks that come from the use of the technology,” she says.

Michael Posner, director of the NYU Stern Center for Business and Human Rights, says he welcomes Microsoft’s statement. But Posner cautions that governments themselves sometimes misuse facial-recognition technologies, and urges companies to ensure that “those who develop these technology systems are as diverse as the populations they serve.” He also urges companies to develop “clear industry standards and metrics” for use of the technology.



Why Congress Needs to Revive Its Tech Support Team

Congress is finally turning its attention to Silicon Valley. And it’s not hard to understand why: Technology impinges upon every part of our civic sphere. We’ve got police using AI to determine which neighborhoods to patrol, Facebook filtering the news, and automation eroding the job market. Smart policy could help society adapt.

But to tackle these issues, congressfolk will first have to understand them. It’s cringe-inducing to have senators like Orrin Hatch seem unaware that Facebook makes money from ads. Our legislators need help. They need a gang of smart, informed nerds in their corner.

Which means it’s time to reboot the Office of Technology Assessment.

You’ve likely never heard of it, but the OTA truly rocked. It was Capitol Hill’s original brain trust on tech. Congress established the office in 1972, the year of Pong, when it realized the application of technology was becoming “extensive, pervasive, and critical.” The OTA was staffed with several hundred nonpartisan propellerheads who studied emerging science and tech. Every year they’d write numerous clear, detailed reports—What happens if Detroit gets hit with an atom bomb? What’ll be the impact of automation?—and they were on call to help any congressperson.

It worked admirably. Its reports helped save money and lives: The OTA found that expanding Medicaid to all pregnant women in poverty would lower the cost of treatment for low birth weight babies by as much as $30,000 per birth. It pointed out the huge upsides of paying for rural broadband, and of preparing for climate change. With a budget of only $20 million a year, the little agency had an outsize impact.

Alas, the OTA was doomed by the very clarity of its insight. It concluded that Reagan’s “Star Wars” missile defense wouldn’t work—which annoyed some Republicans. In 1995, when Newt Gingrich embarked on his mission of reducing government spending, the low-profile agency got the chop, at precisely the wrong time: Congress defunded its tech adviser just as life was about to be utterly transfigured by the internet, mobile phones, social networking, and AI. Nice work, guys!


Today, Washingtonians of different stripes are calling for a reboot. “When you drag Mark Zuckerberg in, and you want to ask the really hard questions, this would put you in a better position,” says Zach Graves, a senior fellow at the free-market think tank R Street. Democratic Federal Communications Commissioner Jessica Rosenworcel wants the OTA back too, given the whipsaw pace of new tech arrivals.

Technically, it’d be easy to restart the OTA. Congress didn’t abolish it, but merely took away its funding. This spring, US representative Bill Foster (D-Illinois) introduced a resolution to reopen the spigot.

That would still need votes though. You’d need agreement that expert consensus on scientific facts is important—and, alas, I’m not sure it’s there. Anti-science thinking is running amok in the political sphere. Some of it’s from liberals (hello, Hollywood antivaxxers!), but the lion’s share resides in right-wing orthodoxy, which is too often hostile to the idea of scientific evidence, especially if it suggests we should stop burning fossil fuels. In a saner age, the OTA would be a no-brainer. Now I’m not so sure.

Still, Foster is hopeful. In the old days, the OTA had some Republican champions, and it still could today, he tells me. “They understand the economic importance of having high-quality technical advice.”

My fingers are crossed. In 1985, OTA researchers observed: “America has become an information society.” It would be nice if we could also be an informed one.


This article appears in the July issue.



A Linux Sysadmin’s Guide to Network Management, Troubleshooting and Debugging

A system administrator’s routine tasks include configuring, maintaining, troubleshooting, and managing servers and networks within data centers. There are numerous tools and utilities in Linux designed for these administrative purposes.

In this article, we will review some of the most used command-line tools and utilities for network management in Linux, under different categories. We will explain some common usage examples, which will make network management much easier in Linux.

This list is equally useful to full-time network engineers.

Network Configuration, Troubleshooting and Debugging Tools

1. ifconfig Command

ifconfig is a command line interface tool for network interface configuration, which is also used to initialize interfaces at system boot time. Once a server is up and running, it can be used to assign an IP address to an interface and to enable or disable the interface on demand.


It is also used to view the status, IP address, hardware/MAC address, and MTU (Maximum Transmission Unit) size of the currently active interfaces. ifconfig is thus useful for debugging or performing system tuning.

Here is an example to display status of all active network interfaces.

$ ifconfig
enp1s0    Link encap:Ethernet  HWaddr 28:d2:44:eb:bd:98
          inet addr:192.168.0.103  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::8f0c:7825:8057:5eec/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:169854 errors:0 dropped:0 overruns:0 frame:0
          TX packets:125995 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:174146270 (174.1 MB)  TX bytes:21062129 (21.0 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:15793 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15793 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:2898946 (2.8 MB)  TX bytes:2898946 (2.8 MB)

To list all interfaces which are currently available, whether up or down, use the -a flag.

$ ifconfig -a 

To assign an IP address to an interface, use the following command.

$ sudo ifconfig eth0 192.168.56.5 netmask 255.255.255.0

To activate a network interface, type the following (note that the interface name comes before the up/down keyword).

$ sudo ifconfig eth0 up

To deactivate or shut down a network interface, type.

$ sudo ifconfig eth0 down

Note: Although ifconfig is a great tool, it is now obsolete (deprecated); its replacement is the ip command, which is explained below.

2. IP Command

The ip command is another useful command line utility for displaying and manipulating routing, network devices, and interfaces. It is a replacement for ifconfig and many other networking commands. (Read our article “What’s Difference Between ifconfig and ip Command” to learn more about it.)

The following command will show the IP address and other information about a network interface.

$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 28:d2:44:eb:bd:98 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.103/24 brd 192.168.0.255 scope global dynamic enp1s0
       valid_lft 5772sec preferred_lft 5772sec
    inet6 fe80::8f0c:7825:8057:5eec/64 scope link
       valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 38:b1:db:7c:78:c7 brd ff:ff:ff:ff:ff:ff
...

To temporarily assign an IP address to a specific network interface (eth0), type.

$ sudo ip addr add 192.168.56.1 dev eth0

To remove an assigned IP address from a network interface (eth0), type.

$ sudo ip addr del 192.168.56.15/24 dev eth0

To show the current neighbour table in the kernel, type.

$ ip neigh
192.168.0.1 dev enp1s0 lladdr 10:fe:ed:3d:f3:82 REACHABLE

3. ifup, ifdown, and ifquery Commands

The ifup command activates a network interface, making it available to transfer and receive data.

$ sudo ifup eth0

The ifdown command disables a network interface, keeping it in a state where it cannot transfer or receive data.

$ sudo ifdown eth0

The ifquery command is used to parse the network interface configuration, enabling you to receive answers to queries about how it is currently configured.

$ sudo ifquery eth0

4. Ethtool Command

ethtool is a command line utility for querying and modifying network interface controller parameters and device drivers. The example below shows the usage of ethtool and a command to view the parameters for the network interface.

$ sudo ethtool enp0s3
Settings for enp0s3:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
MDI-X: off (auto)
Supports Wake-on: umbg
Wake-on: d
Current message level: 0x00000007 (7)
drv probe link
Link detected: yes

5. Ping Command

ping (Packet INternet Groper) is a utility normally used for testing connectivity between two systems on a network (Local Area Network (LAN) or Wide Area Network (WAN)). It uses ICMP (Internet Control Message Protocol) to communicate with nodes on a network.

To test connectivity to another node, simply provide its IP or host name, for example.

$ ping 192.168.0.103
PING 192.168.0.103 (192.168.0.103) 56(84) bytes of data.
64 bytes from 192.168.0.103: icmp_seq=1 ttl=64 time=0.191 ms
64 bytes from 192.168.0.103: icmp_seq=2 ttl=64 time=0.156 ms
64 bytes from 192.168.0.103: icmp_seq=3 ttl=64 time=0.179 ms
64 bytes from 192.168.0.103: icmp_seq=4 ttl=64 time=0.182 ms
64 bytes from 192.168.0.103: icmp_seq=5 ttl=64 time=0.207 ms
64 bytes from 192.168.0.103: icmp_seq=6 ttl=64 time=0.157 ms
^C
--- 192.168.0.103 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5099ms
rtt min/avg/max/mdev = 0.156/0.178/0.207/0.023 ms

You can also tell ping to exit after a specified number of ECHO_REQUEST packets, using the -c flag as shown.

$ ping -c 4 192.168.0.103
PING 192.168.0.103 (192.168.0.103) 56(84) bytes of data.
64 bytes from 192.168.0.103: icmp_seq=1 ttl=64 time=1.09 ms
64 bytes from 192.168.0.103: icmp_seq=2 ttl=64 time=0.157 ms
64 bytes from 192.168.0.103: icmp_seq=3 ttl=64 time=0.163 ms
64 bytes from 192.168.0.103: icmp_seq=4 ttl=64 time=0.190 ms
--- 192.168.0.103 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3029ms
rtt min/avg/max/mdev = 0.157/0.402/1.098/0.402 ms
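Because ping’s summary line has a fixed min/avg/max/mdev layout, a single metric is easy to pull out in a script. A minimal sketch, parsing a sample summary line like the one shown above with awk (in a real script you would capture the line from a live `ping -c 4 host` instead):

```shell
# Parse ping's summary line to extract the average round-trip time.
# The sample line mirrors the output above; in practice you would get it with:
#   ping -c 4 host | tail -1
ping_summary='rtt min/avg/max/mdev = 0.156/0.178/0.207/0.023 ms'

# Splitting on '/', the average RTT is the 5th field.
avg_rtt=$(printf '%s\n' "$ping_summary" | awk -F'/' '{print $5}')
echo "average RTT: ${avg_rtt} ms"
```

The same filter works unchanged on any modern Linux ping, since the summary format is stable.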

6. Traceroute Command

Traceroute is a command line utility for tracing the full path from your local system to another network system. It prints the number of hops (router IPs) in the path packets travel to reach the end server. It is an easy-to-use network troubleshooting utility to turn to after the ping command.

In this example, we are tracing the route packets take from the local system to one of Google’s servers with IP address 216.58.204.46.

$ traceroute 216.58.204.46
traceroute to 216.58.204.46 (216.58.204.46), 30 hops max, 60 byte packets
1 gateway (192.168.0.1) 0.487 ms 0.277 ms 0.269 ms
2 5.5.5.215 (5.5.5.215) 1.846 ms 1.631 ms 1.553 ms
3 * * *
4 72.14.194.226 (72.14.194.226) 3.762 ms 3.683 ms 3.577 ms
5 108.170.248.179 (108.170.248.179) 4.666 ms 108.170.248.162 (108.170.248.162) 4.869 ms 108.170.248.194 (108.170.248.194) 4.245 ms
6 72.14.235.133 (72.14.235.133) 72.443 ms 209.85.241.175 (209.85.241.175) 62.738 ms 72.14.235.133 (72.14.235.133) 65.809 ms
7 66.249.94.140 (66.249.94.140) 128.726 ms 127.506 ms 209.85.248.5 (209.85.248.5) 127.330 ms
8 74.125.251.181 (74.125.251.181) 127.219 ms 108.170.236.124 (108.170.236.124) 212.544 ms 74.125.251.181 (74.125.251.181) 127.249 ms
9 216.239.49.134 (216.239.49.134) 236.906 ms 209.85.242.80 (209.85.242.80) 254.810 ms 254.735 ms
10 209.85.251.138 (209.85.251.138) 252.002 ms 216.239.43.227 (216.239.43.227) 251.975 ms 209.85.242.80 (209.85.242.80) 236.343 ms
11 216.239.43.227 (216.239.43.227) 251.452 ms 72.14.234.8 (72.14.234.8) 279.650 ms 277.492 ms
12 209.85.250.9 (209.85.250.9) 274.521 ms 274.450 ms 209.85.253.249 (209.85.253.249) 270.558 ms
13 209.85.250.9 (209.85.250.9) 269.147 ms 209.85.254.244 (209.85.254.244) 347.046 ms 209.85.250.9 (209.85.250.9) 285.265 ms
14 64.233.175.112 (64.233.175.112) 344.852 ms 216.239.57.236 (216.239.57.236) 343.786 ms 64.233.175.112 (64.233.175.112) 345.273 ms
15 108.170.246.129 (108.170.246.129) 345.054 ms 345.342 ms 64.233.175.112 (64.233.175.112) 343.706 ms
16 108.170.238.119 (108.170.238.119) 345.610 ms 108.170.246.161 (108.170.246.161) 344.726 ms 108.170.238.117 (108.170.238.117) 345.536 ms
17 lhr25s12-in-f46.1e100.net (216.58.204.46) 345.382 ms 345.031 ms 344.884 ms

7. MTR Network Diagnostic Tool

MTR is a modern command-line network diagnostic tool that combines the functionality of ping and traceroute into a single diagnostic tool. Its output is updated in real-time, by default until you exit the program by pressing q.

The easiest way of running mtr is to provide it a host name or IP address as an argument, as follows.

$ mtr google.com
OR
$ mtr 216.58.223.78
Sample Output
tecmint.com (0.0.0.0) Thu Jul 12 08:58:27 2018
First TTL: 1
Host Loss% Snt Last Avg Best Wrst StDev
1. 192.168.0.1 0.0% 41 0.5 0.6 0.4 1.7 0.2
2. 5.5.5.215 0.0% 40 1.9 1.5 0.8 7.3 1.0
3. 209.snat-111-91-120.hns.net.in 23.1% 40 1.9 2.7 1.7 10.5 1.6
4. 72.14.194.226 0.0% 40 89.1 5.2 2.2 89.1 13.7
5. 108.170.248.193 0.0% 40 3.0 4.1 2.4 52.4 7.8
6. 108.170.237.43 0.0% 40 2.9 5.3 2.5 94.1 14.4
7. bom07s10-in-f174.1e100.net 0.0% 40 2.6 6.7 2.3 79.7 16.

You can limit the number of pings to a specific value and exit mtr after those pings, using the -c flag as shown.

$ mtr -c 4 google.com

8. Route Command

route is a command line utility for displaying or manipulating the IP routing table of a Linux system. It is mainly used to configure static routes to specific hosts or networks via an interface.

You can view the kernel IP routing table by typing.

$ route
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 enp0s3
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0

There are numerous commands you can use to configure routing. Here are some useful ones:

Add a default gateway to the routing table.

$ sudo route add default gw <gateway-ip>

Add a network route to the routing table.

$ sudo route add -net <network ip/cidr> gw <gateway ip> <interface>

Delete a specific route entry from the routing table.

$ sudo route del -net <network ip/cidr>

9. Nmcli Command

Nmcli is an easy-to-use, scriptable command-line tool to report network status, manage network connections, and control the NetworkManager.

To view all your network devices, type.

$ nmcli dev status
DEVICE  TYPE      STATE      CONNECTION
virbr0  bridge    connected  virbr0
enp0s3  ethernet  connected  Wired connection 1

To check network connections on your system, type.

$ nmcli con show
Wired connection 1  bc3638ff-205a-3bbb-8845-5a4b0f7eef91  802-3-ethernet  enp0s3
virbr0              00f5d53e-fd51-41d3-b069-bdfd2dde062b  bridge          virbr0

To see only the active connections, add the -a flag.

$ nmcli con show -a

Network Scanning and Performance Analysis Tools

10. Netstat Command

netstat is a command line tool that displays useful information such as network connections, routing tables, interface statistics, and much more, concerning the Linux networking subsystem. It is useful for network troubleshooting and performance analysis.

It is also a fundamental network service debugging tool used to check which programs are listening on which ports. For instance, the following command will show all TCP ports in listening mode and what programs are listening on them.

$ sudo netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:587             0.0.0.0:*               LISTEN      1257/master
tcp        0      0 127.0.0.1:5003          0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:110             0.0.0.0:*               LISTEN      1015/dovecot
tcp        0      0 0.0.0.0:143             0.0.0.0:*               LISTEN      1015/dovecot
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:465             0.0.0.0:*               LISTEN      1257/master
tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN      1404/pdns_server
tcp        0      0 0.0.0.0:21              0.0.0.0:*               LISTEN      1064/pure-ftpd (SER
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      972/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      975/cupsd
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      1257/master
tcp        0      0 0.0.0.0:8090            0.0.0.0:*               LISTEN      636/lscpd (lscpd -
tcp        0      0 0.0.0.0:993             0.0.0.0:*               LISTEN      1015/dovecot
tcp        0      0 0.0.0.0:995             0.0.0.0:*               LISTEN      1015/dovecot
tcp6       0      0 :::3306                 :::*                    LISTEN      1053/mysqld
tcp6       0      0 :::3307                 :::*                    LISTEN      1211/mysqld
tcp6       0      0 :::587                  :::*                    LISTEN      1257/master
tcp6       0      0 :::110                  :::*                    LISTEN      1015/dovecot
tcp6       0      0 :::143                  :::*                    LISTEN      1015/dovecot
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
tcp6       0      0 :::80                   :::*                    LISTEN      990/httpd
tcp6       0      0 :::465                  :::*                    LISTEN      1257/master
tcp6       0      0 :::53                   :::*                    LISTEN      1404/pdns_server
tcp6       0      0 :::21                   :::*                    LISTEN      1064/pure-ftpd (SER
tcp6       0      0 :::22                   :::*                    LISTEN      972/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      975/cupsd
tcp6       0      0 :::25                   :::*                    LISTEN      1257/master
tcp6       0      0 :::993                  :::*                    LISTEN      1015/dovecot
tcp6       0      0 :::995                  :::*                    LISTEN      1015/dovecot
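In scripts, that listener list is commonly filtered for one port to confirm a daemon is up. A minimal sketch, grepping a sample sshd line like the one in the output above (hypothetical; a live check would pipe `sudo netstat -tnlp` into the same grep):

```shell
# Check whether something is listening on TCP port 22.
# In a live check the input would come from: sudo netstat -tnlp
sample='tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      972/sshd'

if printf '%s\n' "$sample" | grep -q ':22 .*LISTEN'; then
    echo "port 22: listening"
else
    echo "port 22: not listening"
fi
```

The trailing space in `:22 ` keeps the pattern from also matching ports such as 2222.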

To view the kernel routing table, use the -r flag (which is equivalent to running the route command above).

$ netstat -r
Destination Gateway Genmask Flags MSS Window irtt Iface
default gateway 0.0.0.0 UG 0 0 0 enp0s3
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s3
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0

Note: Although netstat is a great tool, it is now obsolete (deprecated); its replacement is the ss command, which is explained below.

11. ss Command

ss (socket statistics) is a powerful command line utility to investigate sockets. It dumps socket statistics and displays information similar to netstat. In addition, it shows more TCP and state information compared to other similar utilities.

The following example shows how to list all TCP ports (sockets) that are open on a server.

$ ss -ta
State      Recv-Q Send-Q Local Address:Port               Peer Address:Port
LISTEN     0      100    *:submission                     *:*
LISTEN     0      128    127.0.0.1:fmpro-internal         *:*
LISTEN     0      100    *:pop3                           *:*
LISTEN     0      100    *:imap                           *:*
LISTEN     0      128    *:sunrpc                         *:*
LISTEN     0      100    *:urd                            *:*
LISTEN     0      128    *:domain                         *:*
LISTEN     0      9      *:ftp                            *:*
LISTEN     0      128    *:ssh                            *:*
LISTEN     0      128    127.0.0.1:ipp                    *:*
LISTEN     0      100    *:smtp                           *:*
LISTEN     0      128    *:8090                           *:*
LISTEN     0      100    *:imaps                          *:*
LISTEN     0      100    *:pop3s                          *:*
ESTAB      0      0      192.168.0.104:ssh                192.168.0.103:36398
ESTAB      0      0      127.0.0.1:34642                  127.0.0.1:opsession-prxy
ESTAB      0      0      127.0.0.1:34638                  127.0.0.1:opsession-prxy
ESTAB      0      0      127.0.0.1:34644                  127.0.0.1:opsession-prxy
ESTAB      0      0      127.0.0.1:34640                  127.0.0.1:opsession-prxy
LISTEN     0      80     :::mysql                         :::*
...

To display all active TCP connections together with their timers, run the following command.

$ ss -to

12. NC Command

NC (NetCat), also referred to as the “Network Swiss Army knife”, is a powerful utility used for almost any task related to TCP, UDP, or UNIX-domain sockets. It is used to open TCP connections, listen on arbitrary TCP and UDP ports, perform port scanning, and more.

You can also use it as a simple TCP proxy, for network daemon testing, to check if remote ports are reachable, and much more. Furthermore, you can employ nc together with the pv command to transfer files between two computers.

The following example shows how to scan a list of ports.

$ nc -zv server2.tecmint.lan 21 22 80 443 3000

You can also specify a range of ports as shown.

$ nc -zv server2.tecmint.lan 20-90

The following example shows how to use nc to open a TCP connection to port 5000 on server2.tecmint.lan, using port 3000 as the source port, with a timeout of 10 seconds.

$ nc -p 3000 -w 10 server2.tecmint.lan 5000 
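On a minimal system where nc is not installed, bash’s built-in /dev/tcp pseudo-device can perform a similar reachability check. A small sketch under that assumption (the host and port here are only examples; the function name is ours):

```shell
# Reachability check without nc, using bash's /dev/tcp redirection.
# Opening the pseudo-device succeeds only if a TCP connection can be made.
check_port() {
    local host=$1 port=$2
    if timeout 2 bash -c "echo > /dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "${host}:${port} open"
    else
        echo "${host}:${port} closed"
    fi
}

check_port 127.0.0.1 1   # port 1 is almost never in use, so this normally reports closed
```

Note that /dev/tcp is a bash feature, not a real device file, so the check must run under bash rather than plain sh.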

13. Nmap Command

Nmap (Network Mapper) is a powerful and extremely versatile tool for Linux system/network administrators. It is used to gather information about a single host or to explore an entire network. Nmap is also used to perform security scans and network audits, to find open ports on remote hosts, and much more.

You can scan a host using its host name or IP address, for instance.

$ nmap google.com 
Starting Nmap 6.40 ( http://nmap.org ) at 2018-07-12 09:23 BST
Nmap scan report for google.com (172.217.166.78)
Host is up (0.0036s latency).
rDNS record for 172.217.166.78: bom05s15-in-f14.1e100.net
Not shown: 998 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
Nmap done: 1 IP address (1 host up) scanned in 4.92 seconds

Alternatively, use an IP address as shown.

$ nmap 192.168.0.103
Starting Nmap 6.40 ( http://nmap.org ) at 2018-07-12 09:24 BST
Nmap scan report for 192.168.0.103
Host is up (0.000051s latency).
Not shown: 994 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
902/tcp open iss-realsecure
4242/tcp open vrml-multi-use
5900/tcp open vnc
8080/tcp open http-proxy
MAC Address: 28:D2:44:EB:BD:98 (Lcfc(hefei) Electronics Technology Co.)
Nmap done: 1 IP address (1 host up) scanned in 0.13 seconds

Read our following useful articles on the nmap command.

  1. How to Use Nmap Script Engine (NSE) Scripts in Linux
  2. A Practical Guide to Nmap (Network Security Scanner) in Kali Linux
  3. Find Out All Live Hosts IP Addresses Connected on Network in Linux

DNS Lookup Utilities

14. host Command

The host command is a simple utility for carrying out DNS lookups; it translates host names to IP addresses and vice versa.

$ host google.com
google.com has address 172.217.166.78
google.com mail is handled by 20 alt1.aspmx.l.google.com.
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 40 alt3.aspmx.l.google.com.
google.com mail is handled by 50 alt4.aspmx.l.google.com.
google.com mail is handled by 10 aspmx.l.google.com.
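Because host’s output is line-oriented, the address field is easy to extract for use in scripts. A minimal sketch, run against a sample line copied from the output above (a live query would pipe `host google.com` into the same awk filter):

```shell
# Pull just the IPv4 address out of host's output, handy in scripts.
# Live form: host google.com | awk '/has address/ {print $4}'
sample='google.com has address 172.217.166.78'
ip_addr=$(printf '%s\n' "$sample" | awk '/has address/ {print $4}')
echo "$ip_addr"
```

If a name resolves to several addresses, the filter prints one per matching line.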

15. dig Command

dig (domain information groper) is another simple DNS lookup utility, used to query DNS-related information such as A records, CNAME records, MX records, and so on. For example:

$ dig google.com
; <<>> DiG 9.9.4-RedHat-9.9.4-51.el7 <<>> google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23083
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 13, ADDITIONAL: 14
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com. IN A
;; ANSWER SECTION:
google.com. 72 IN A 172.217.166.78
;; AUTHORITY SECTION:
com. 13482 IN NS c.gtld-servers.net.
com. 13482 IN NS d.gtld-servers.net.
com. 13482 IN NS e.gtld-servers.net.
com. 13482 IN NS f.gtld-servers.net.
com. 13482 IN NS g.gtld-servers.net.
com. 13482 IN NS h.gtld-servers.net.
com. 13482 IN NS i.gtld-servers.net.
com. 13482 IN NS j.gtld-servers.net.
com. 13482 IN NS k.gtld-servers.net.
com. 13482 IN NS l.gtld-servers.net.
com. 13482 IN NS m.gtld-servers.net.
com. 13482 IN NS a.gtld-servers.net.
com. 13482 IN NS b.gtld-servers.net.
;; ADDITIONAL SECTION:
a.gtld-servers.net. 81883 IN A 192.5.6.30
b.gtld-servers.net. 3999 IN A 192.33.14.30
c.gtld-servers.net. 14876 IN A 192.26.92.30
d.gtld-servers.net. 85172 IN A 192.31.80.30
e.gtld-servers.net. 95861 IN A 192.12.94.30
f.gtld-servers.net. 78471 IN A 192.35.51.30
g.gtld-servers.net. 5217 IN A 192.42.93.30
h.gtld-servers.net. 111531 IN A 192.54.112.30
i.gtld-servers.net. 93017 IN A 192.43.172.30
j.gtld-servers.net. 93542 IN A 192.48.79.30
k.gtld-servers.net. 107218 IN A 192.52.178.30
l.gtld-servers.net. 6280 IN A 192.41.162.30
m.gtld-servers.net. 2689 IN A 192.55.83.30
;; Query time: 4 msec
;; SERVER: 192.168.0.1#53(192.168.0.1)
;; WHEN: Thu Jul 12 09:30:57 BST 2018
;; MSG SIZE rcvd: 487

16. NSLookup Command

Nslookup is also a popular command line utility to query DNS servers both interactively and non-interactively. It is used to query DNS resource records (RR). You can find out the “A” record (IP address) of a domain as shown.

$ nslookup google.com
Server: 192.168.0.1
Address: 192.168.0.1#53
Non-authoritative answer:
Name: google.com
Address: 172.217.166.78

You can also perform a reverse domain lookup as shown.

$ nslookup 216.58.208.174
Server: 192.168.0.1
Address: 192.168.0.1#53
Non-authoritative answer:
174.208.58.216.in-addr.arpa name = lhr25s09-in-f14.1e100.net.
174.208.58.216.in-addr.arpa name = lhr25s09-in-f174.1e100.net.
Authoritative answers can be found from:
in-addr.arpa nameserver = e.in-addr-servers.arpa.
in-addr.arpa nameserver = f.in-addr-servers.arpa.
in-addr.arpa nameserver = a.in-addr-servers.arpa.
in-addr.arpa nameserver = b.in-addr-servers.arpa.
in-addr.arpa nameserver = c.in-addr-servers.arpa.
in-addr.arpa nameserver = d.in-addr-servers.arpa.
a.in-addr-servers.arpa internet address = 199.180.182.53
b.in-addr-servers.arpa internet address = 199.253.183.183
c.in-addr-servers.arpa internet address = 196.216.169.10
d.in-addr-servers.arpa internet address = 200.10.60.53
e.in-addr-servers.arpa internet address = 203.119.86.101
f.in-addr-servers.arpa internet address = 193.0.9.1

Linux Network Packet Analyzers

17. Tcpdump Command

Tcpdump is a very powerful and widely used command-line network sniffer. It is used to capture and analyze TCP/IP packets transmitted or received over a network on a specific interface.

To capture packets from a given interface, specify it using the -i option.

$ tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
09:35:40.287439 IP tecmint.com.ssh > 192.168.0.103.36398: Flags [P.], seq 4152360356:4152360552, ack 306922699, win 270, options [nop,nop,TS val 2211778668 ecr 2019055], length 196
09:35:40.287655 IP 192.168.0.103.36398 > tecmint.com.ssh: Flags [.], ack 196, win 5202, options [nop,nop,TS val 2019058 ecr 2211778668], length 0
09:35:40.288269 IP tecmint.com.54899 > gateway.domain: 43760+ PTR? 103.0.168.192.in-addr.arpa. (44)
09:35:40.333763 IP gateway.domain > tecmint.com.54899: 43760 NXDomain* 0/1/0 (94)
09:35:40.335311 IP tecmint.com.52036 > gateway.domain: 44289+ PTR? 1.0.168.192.in-addr.arpa. (42)

To capture a specific number of packets, use the -c option to specify the desired number.

$ tcpdump -c 5 -i eth1

You can also capture and save packets to a file for later analysis; use the -w flag to specify the output file.

$ tcpdump -w captured.pacs -i eth1

18. Wireshark Utility

Wireshark is a popular, powerful, versatile and easy to use tool for capturing and analyzing packets in a packet-switched network, in real-time.

You can also save data it has captured to a file for later inspection. It is used by system administrators and network engineers to monitor and inspect the packets for security and troubleshooting purposes.

Read our article “10 Tips On How to Use Wireshark to Analyze Network Packets” to learn more about Wireshark.

19. Bmon Tool

bmon is a powerful, command line based network monitoring and debugging utility for Unix-like systems. It captures networking-related statistics and displays them visually in a human-friendly format. It is a reliable and effective real-time bandwidth monitor and rate estimator.

Read our article “bmon – A Powerful Network Bandwidth Monitoring and Debugging Tool” to learn more about bmon.

Linux Firewall Management Tools

20. Iptables Firewall

iptables is a command line tool for configuring, maintaining, and inspecting the tables of IP packet filtering and NAT rules in the Linux kernel. It is used to set up and manage the Linux firewall (Netfilter). It allows you to list existing packet filter rules; add, delete, or modify rules; and list per-rule packet and byte counters.
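To illustrate, a few common iptables invocations (a sketch only; the port number is illustrative and the commands require root privileges):

```shell
# List all rules in the filter table with packet/byte counters,
# without resolving host names:
sudo iptables -L -n -v

# Append a rule to the INPUT chain accepting incoming SSH (TCP port 22):
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Delete that same rule again:
sudo iptables -D INPUT -p tcp --dport 22 -j ACCEPT
```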

You can learn how to use Iptables for various purposes from our simple yet comprehensive guides.

  1. Basic Guide on IPTables (Linux Firewall) Tips / Commands
  2. 25 Useful IPtable Firewall Rules Every Linux Administrator Should Know
  3. How To Setup an Iptables Firewall to Enable Remote Access to Services
  4. How to Block Ping ICMP Requests to Linux Systems

21. Firewalld

Firewalld is a powerful and dynamic daemon to manage the Linux firewall (Netfilter), just like iptables. It uses “network zones” instead of the INPUT, OUTPUT and FORWARD chains in iptables. On current Linux distributions such as RHEL/CentOS 7 and Fedora 21+, iptables is actively being replaced by firewalld.

To get started with firewalld, consult these guides listed below:

  1. Useful ‘FirewallD’ Rules to Configure and Manage Firewall in Linux
  2. How to Configure ‘FirewallD’ in RHEL/CentOS 7 and Fedora 21
  3. How to Start/Stop and Enable/Disable FirewallD and Iptables Firewall in Linux
  4. Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux/Windows
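As a quick sketch of the zone-based model, here are a few common firewall-cmd invocations (the zone and service names are illustrative; the commands require root and a running firewalld daemon):

```shell
# Check whether firewalld is running:
sudo firewall-cmd --state

# Show the default zone and everything configured in it:
sudo firewall-cmd --list-all

# Permanently allow the HTTP service in the public zone, then reload:
sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --reload
```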

Important: Iptables is still supported and can be installed with the YUM package manager. However, you can’t use Firewalld and iptables at the same time on the same server – you must choose one.

22. UFW (Uncomplicated Firewall)

UFW is a well known and default firewall configuration tool on Debian and Ubuntu Linux distributions. It is used to enable/disable the system firewall, add/delete/modify/reset packet filtering rules and much more.

To check UFW firewall status, type.

$ sudo ufw status

If UFW firewall is not active, you can activate or enable it using the following command.

$ sudo ufw enable

To disable UFW firewall, use the following command.

$ sudo ufw disable 
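Beyond enabling and disabling the firewall, UFW manages individual rules with the allow, deny, and delete subcommands. A brief sketch (the port and service names are illustrative):

```shell
# Allow incoming SSH connections (by service name or port number):
sudo ufw allow ssh
sudo ufw allow 22/tcp

# Deny incoming traffic on TCP port 23 (telnet):
sudo ufw deny 23/tcp

# Remove a rule that is no longer needed:
sudo ufw delete allow 22/tcp
```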

Read our article “How to Setup UFW Firewall on Ubuntu and Debian” to learn more about UFW.

If you want to find more information about a particular program, you can consult its man pages as shown.

$ man program_name

That’s all for now! In this comprehensive guide, we reviewed some of the most commonly used command-line tools and utilities for network management in Linux, grouped into different categories. They are valuable for system administrators and full-time network administrators/engineers alike.

You can share your thoughts about this guide via the comment form below. If we have missed any frequently used and important Linux networking tools/utilities, or any useful related information, let us know as well.