Alacritty – The Fastest Terminal Emulator for Linux

Alacritty is a free, open-source, fast, cross-platform terminal emulator that uses the GPU (Graphics Processing Unit) for rendering, implementing certain optimizations that are not available in many other terminal emulators on Linux.

Alacritty is focused on two goals: simplicity and performance. The performance goal means it should be faster than any other terminal emulator available. The simplicity goal means it doesn’t support features such as tabs or splits, which can easily be provided by a terminal multiplexer such as tmux.

Prerequisites

Alacritty requires the most recent stable Rust compiler to build it.

Install Required Dependency Packages

1. First, install the Rust programming language using the rustup installer script and follow the on-screen instructions.

$ curl https://sh.rustup.rs -sSf | sh
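
After the installer finishes, load Cargo’s environment into your current shell and confirm that the Rust toolchain is available (these paths assume the default rustup installation location).

$ source $HOME/.cargo/env
$ rustc --version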


2. Next, you need to install a few additional libraries to build Alacritty on your Linux distributions, as shown.

--------- On Ubuntu/Debian ---------
# apt-get install cmake libfreetype6-dev libfontconfig1-dev xclip
--------- On CentOS/RHEL ---------
# yum install cmake freetype-devel fontconfig-devel xclip
# yum group install "Development Tools"
--------- On Fedora ---------
# dnf install cmake freetype-devel fontconfig-devel xclip
--------- On Arch Linux ---------
# pacman -S cmake freetype2 fontconfig pkg-config make xclip
--------- On openSUSE ---------
# zypper install cmake freetype-devel fontconfig-devel xclip 

Installing Alacritty Terminal Emulator in Linux

3. Once you have installed all the required packages, clone the Alacritty source code repository and compile it using the following commands.

$ cd Downloads
$ git clone https://github.com/jwilm/alacritty.git
$ cd alacritty
$ cargo build --release

4. Once the compilation process is complete, the binary will be available at ./target/release/alacritty. Copy the binary to a directory in your PATH, and on a desktop system you can add the application to your system menus, as follows.

$ sudo cp target/release/alacritty /usr/local/bin
$ cp Alacritty.desktop ~/.local/share/applications

5. Next, install the manual page using the following commands.

$ sudo mkdir -p /usr/local/share/man/man1
$ gzip -c alacritty.man | sudo tee /usr/local/share/man/man1/alacritty.1.gz > /dev/null

6. To add shell completion settings to your Linux shell, do the following.

--------- On Bash Shell ---------
$ cp alacritty-completions.bash ~/.alacritty
$ echo "source ~/.alacritty" >> ~/.bashrc
--------- On ZSH Shell ---------
$ sudo cp alacritty-completions.zsh /usr/share/zsh/functions/Completion/X/_alacritty
--------- On FISH Shell ---------
$ sudo cp alacritty-completions.fish /usr/share/fish/vendor_completions.d/alacritty.fish

7. Finally, find Alacritty in your system menu and click on it to launch it. When run for the first time, a configuration file will be created at $HOME/.config/alacritty/alacritty.yml, which you can edit to configure it.

Alacritty Terminal Emulator

For more information and configuration options, go to the Alacritty GitHub repository.

Alacritty is a fast, cross-platform, GPU-accelerated terminal emulator focused on simplicity and performance. Although it is ready for daily use, some features, such as scrollback, are yet to be added. Share your thoughts about it via the feedback form below.

Amazon Kinesis Video Streams Adds Support For HLS Output Streams

Today I’m excited to announce and demonstrate the new HTTP Live Streaming (HLS) output feature for Amazon Kinesis Video Streams (KVS). If you’re not already familiar with KVS, Jeff covered the release for AWS re:Invent in 2017. In short, Amazon Kinesis Video Streams is a service for securely capturing, processing, and storing video for analytics and machine learning – from one device or millions. Customers are using Kinesis Video with machine learning algorithms to power everything from home automation and smart cities to industrial automation and security.

After iterating on customer feedback, we’ve launched a number of features in the past few months including a plugin for GStreamer, the popular open source multimedia framework, and docker containers which make it easy to start streaming video to Kinesis. We could talk about each of those features at length, but today is all about the new HLS output feature! Fair warning, there are a few pictures of my incredibly messy office in this post.

HLS output is a convenient new feature that allows customers to create HLS endpoints for their Kinesis Video Streams, which is handy for building custom UIs and tools that can play back live and on-demand video. The HLS-based playback capability is fully managed, so you don’t have to build any infrastructure to transmux the incoming media. You simply create a new streaming session, up to 5 (for now), with the new GetHLSStreamingSessionURL API and you’re off to the races. The great thing about HLS is that it’s already an industry standard and really easy to leverage in existing web players like JW Player, hls.js, VideoJS, Google’s Shaka Player, or even to render natively in mobile apps with Android’s ExoPlayer and iOS’s AV Foundation. Let’s take a quick look at the API; feel free to skip to the walk-through below as well.

Kinesis Video HLS Output API

The documentation covers this in more detail than we can go over in this blog post, but I’ll cover the broad components.

  1. Get an endpoint with the GetDataEndpoint API
  2. Use that endpoint to get an HLS streaming URL with the GetHLSStreamingSessionURL API
  3. Render the content in the HLS URL with whatever tools you want!

This is pretty easy in a Jupyter notebook with a quick bit of Python and boto3.

import boto3

STREAM_NAME = "RandallDeepLens"
kvs = boto3.client("kinesisvideo")

# Grab the endpoint from GetDataEndpoint
endpoint = kvs.get_data_endpoint(
    APIName="GET_HLS_STREAMING_SESSION_URL",
    StreamName=STREAM_NAME
)['DataEndpoint']

# Grab the HLS Stream URL from the endpoint
kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = kvam.get_hls_streaming_session_url(
    StreamName=STREAM_NAME,
    PlaybackMode="LIVE"
)['HLSStreamingSessionURL']

You can even visualize everything right away in Safari, which can render HLS streams natively.

from IPython.display import HTML
HTML(data='<video src="{0}" autoplay="autoplay" controls="controls" width="300" height="400"></video>'.format(url)) 

We can also stream directly from an AWS DeepLens with just a bit of code:

import DeepLens_Kinesis_Video as dkv
import time

aws_access_key = "super_fake"
aws_secret_key = "even_more_fake"
region = "us-east-1"
stream_name = "RandallDeepLens"
retention = 1  # in minutes
wait_time_sec = 60 * 300  # the number of seconds to stream the data

# Will create the stream if it does not already exist
producer = dkv.createProducer(aws_access_key, aws_secret_key, "", region)
my_stream = producer.createStream(stream_name, retention)
my_stream.start()
time.sleep(wait_time_sec)
my_stream.stop()

How to use Kinesis Video Streams HLS Output Streams

We definitely need a Kinesis Video Stream, which we can create easily in the Kinesis Video Streams Console.

Now, we need to get some content into the stream. We have a few options here. Perhaps the easiest is the Docker container. I decided to take the more adventurous route and compile the GStreamer plugin locally on my Mac, following the scripts on GitHub. Be warned, compiling this plugin takes a while and can cause your computer to transform into a space heater.

With our freshly compiled GStreamer binaries like gst-launch-1.0 and the kvssink plugin, we can stream directly from my MacBook’s webcam, or any other GStreamer source, into Kinesis Video Streams. I just use the kvssink output plugin and my data will wind up in the video stream. There are a few parameters to configure here, so pay attention.

Here’s an example command that I ran to stream my MacBook’s webcam to Kinesis Video Streams:

gst-launch-1.0 autovideosrc ! videoconvert \
! video/x-raw,format=I420,width=640,height=480,framerate=30/1 \
! vtenc_h264_hw allow-frame-reordering=FALSE realtime=TRUE max-keyframe-interval=45 bitrate=500 \
! h264parse \
! video/x-h264,stream-format=avc,alignment=au,width=640,height=480,framerate=30/1 \
! kvssink stream-name="BlogStream" storage-size=1024 aws-region=us-west-2 log-config=kvslog

Now that we’re streaming some data into Kinesis, I can use the getting started sample static website to test my HLS stream with a few different video players. I just fill in my AWS credentials and ask it to start playing. The GetHLSStreamingSessionURL API supports a number of parameters so you can play both on-demand segments and live streams from various timestamps.
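
As a rough sketch (the parameter names follow the boto3 kinesis-video-archived-media client, and the stream name and timestamps are illustrative), requesting an on-demand session for a specific time range might look like this:

import boto3
from datetime import datetime

# Assumes `endpoint` was obtained from GetDataEndpoint as shown earlier
kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = kvam.get_hls_streaming_session_url(
    StreamName="RandallDeepLens",
    PlaybackMode="ON_DEMAND",
    HLSFragmentSelector={
        "FragmentSelectorType": "PRODUCER_TIMESTAMP",
        "TimestampRange": {
            "StartTimestamp": datetime(2018, 7, 12, 9, 0, 0),
            "EndTimestamp": datetime(2018, 7, 12, 9, 5, 0),
        },
    },
    Expires=3600,  # the session URL stays valid for up to an hour
)['HLSStreamingSessionURL']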

Additional Info

Data consumed from Kinesis Video Streams using HLS is charged at $0.0119 per GB in US East (N. Virginia) and US West (Oregon); pricing for other regions is available on the service pricing page. This feature is available now, in all regions where Kinesis Video Streams is available.

The Kinesis Video team told me they’re working hard on getting more integration with the AWS Media services, like MediaLive, which will make it easier to serve Kinesis Video Stream content to larger audiences.

As always, let us know what you think on Twitter or in the comments. I’ve had a ton of fun playing around with this feature over the past few days and I’m excited to see customers build some new tools with it!

– Randall

Microsoft Calls For Federal Regulation of Facial Recognition

Over the past year, Silicon Valley has been grappling with the way it handles our data, our elections, and our speech. Now it’s got a new concern: our faces. In just the past few weeks, critics assailed Amazon for selling facial recognition technology to local police departments, and Facebook for how it gained consent from Europeans to identify people in their photos.

Microsoft has endured its own share of criticism lately around the ethical uses of its technology, as employees protested a contract under which US Immigration and Customs Enforcement uses Microsoft’s cloud-computing service. Microsoft says that contract did not involve facial recognition. When it comes to facial analysis, a Microsoft service used by other companies has been shown to be far more accurate for white men than for women or people of color.

In an effort to help society keep pace with the rampaging development of the technology, Microsoft President Brad Smith today is publishing a blog post calling for government regulation of facial recognition. Smith doesn’t identify specific rules; rather, he suggests, among other things, that the government create a “bipartisan and expert commission” to study the issue and make recommendations.

Smith poses a series of questions such a commission should consider, including potential restrictions on law-enforcement or national-security uses of the technology; standards to prevent racial profiling; requirements that people be notified when the technology is being used, particularly in public spaces; and legal protections for people who may be misidentified. But he doesn’t detail Microsoft’s view of the answers to those questions.

“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms,” Smith writes. “Facial recognition will require the public and private sectors alike to step up – and to act.”

Like many technologies, facial recognition can be useful, or harmful. Internet users tap services from Google, Facebook, and others to identify people in photos. Apple allows users to unlock the iPhone X with their faces. Microsoft offers a similar service through Windows Hello to unlock personal computers. Uber uses Microsoft’s facial-recognition technology to confirm the identity of drivers using its app. Facial analysis can be a form of identification in offices, airports, and hotels.

But there are few rules governing use of the technology, either by police or private companies. In the blog post, Smith raises the specter of a government database of attendees at a political rally, or stores monitoring every item you browse, even those you don’t buy. Given the political gridlock in Washington, an expert commission may be a convenient way for Microsoft to appear to be responsible with little risk that the government will actually restrict its, or any other company’s, use of facial-recognition technology. But Smith says such commissions have been used widely—28 times in the past decade—with some success; he points to the 9/11 commission and subsequent changes to the nation’s security agencies.

Outside the US, facial recognition technology is used extensively in China, often by the government, and with few constraints. Suspected criminals have been identified in crowds using the technology, which is widely deployed in public places.

Beyond government regulation, Smith says Microsoft and other tech companies should take more responsibility for their use of the technology. That includes efforts to act transparently, reduce bias, and deploy the technology slowly and cautiously. “If we move too fast with facial recognition, we may find that people’s fundamental rights are being broken,” he writes. Smith says Microsoft is working to reduce the racial disparities in its facial-analysis software.

Concern about the ethical uses of technology is not new. But the increasing power of artificial intelligence to scan faces, drive cars, and predict crime, among other things, has given birth to research institutes, industry groups, and philanthropic programs. Microsoft in 2016 created an internal advisory committee, cosponsored by Smith, on its use of artificial intelligence more broadly. In the post, Smith says the company has turned down customer requests to deploy its technology “where we’ve concluded there are greater human rights risks.” Microsoft declined to discuss specifics of any work it has turned down.

Microsoft’s approach wins praise from Eileen Donahoe, an adjunct professor at Stanford’s Center for Democracy, Development, and the Rule of Law. “Microsoft is way ahead of the curve in thinking seriously about the ethical implications of the technology they’re developing and the human rights implications of the technology they’re developing,” she says. Donahoe says she expects the post to spark conversations at other technology companies.

Some critics have suggested that tech companies halt research on artificial intelligence, including facial recognition. But Donahoe says that’s not realistic, because others will develop the technology. “I would rather have those actors engaging with their employees, their consumers and the US government in trying to think about the possible uses of the technology, as well as the risks that come from the use of the technology,” she says.

Michael Posner, director of the NYU Stern Center for Business and Human Rights, says he welcomes Microsoft’s statement. But Posner cautions that governments themselves sometimes misuse facial-recognition technologies, and urges companies to ensure that “those who develop these technology systems are as diverse as the populations they serve.” He also urges companies to develop “clear industry standards and metrics” for use of the technology.



Why Congress Needs to Revive Its Tech Support Team

Congress is finally turning its attention to Silicon Valley. And it’s not hard to understand why: Technology impinges upon every part of our civic sphere. We’ve got police using AI to determine which neighborhoods to patrol, Facebook filtering the news, and automation eroding the job market. Smart policy could help society adapt.

But to tackle these issues, congressfolk will first have to understand them. It’s cringe-inducing to have senators like Orrin Hatch seem unaware that Facebook makes money from ads. Our legislators need help. They need a gang of smart, informed nerds in their corner.

Which means it’s time to reboot the Office of Technology Assessment.

You’ve likely never heard of it, but the OTA truly rocked. It was Capitol Hill’s original brain trust on tech. Congress established the office in 1972, the year of Pong, when it realized the application of technology was becoming “extensive, pervasive, and critical.” The OTA was staffed with several hundred nonpartisan propellerheads who studied emerging science and tech. Every year they’d write numerous clear, detailed reports—What happens if Detroit gets hit with an atom bomb? What’ll be the impact of automation?—and they were on call to help any congressperson.

It worked admirably. Its reports helped save money and lives: The OTA found that expanding Medicaid to all pregnant women in poverty would lower the cost of treatment for low birth weight babies by as much as $30,000 per birth. It pointed out the huge upsides of paying for rural broadband, and of preparing for climate change. With a budget of only $20 million a year, the little agency had an outsize impact.

Alas, the OTA was doomed by the very clarity of its insight. It concluded that Reagan’s “Star Wars” missile defense wouldn’t work—which annoyed some Republicans. In 1995, when Newt Gingrich embarked on his mission of reducing government spending, the low-profile agency got the chop, at precisely the wrong time: Congress defunded its tech adviser just as life was about to be utterly transfigured by the internet, mobile phones, social networking, and AI. Nice work, guys!


Today, Washingtonians of different stripes are calling for a reboot. “When you drag Mark Zuckerberg in, and you want to ask the really hard questions, this would put you in a better position,” says Zach Graves, a senior fellow at the free-market think tank R Street. Democratic Federal Communications Commissioner Jessica Rosenworcel wants the OTA back too, given the whipsaw pace of new tech arrivals.

Technically, it’d be easy to restart the OTA. Congress didn’t abolish it, but merely took away its funding. This spring, US representative Bill Foster (D-Illinois) introduced a resolution to reopen the spigot.

That would still need votes though. You’d need agreement that expert consensus on scientific facts is important—and, alas, I’m not sure it’s there. Anti-science thinking is running amok in the political sphere. Some of it’s from liberals (hello, Hollywood antivaxxers!), but the lion’s share resides in right-wing orthodoxy, which is too often hostile to the idea of scientific evidence, especially if it suggests we should stop burning fossil fuels. In a saner age, the OTA would be a no-brainer. Now I’m not so sure.

Still, Foster is hopeful. In the old days, the OTA had some Republican champions, and it still could today, he tells me. “They understand the economic importance of having high-quality technical advice.”

My fingers are crossed. In 1985, OTA researchers observed: “America has become an information society.” It would be nice if we could also be an informed one.


This article appears in the July issue.



A Linux Sysadmin’s Guide to Network Management, Troubleshooting and Debugging

A system administrator’s routine tasks include configuring, maintaining, troubleshooting, and managing servers and networks within data centers. There are numerous tools and utilities in Linux designed for these administrative purposes.

In this article, we will review some of the most used command-line tools and utilities for network management in Linux, under different categories. We will explain some common usage examples, which will make network management much easier in Linux.

This list is equally useful to full-time network engineers.

Network Configuration, Troubleshooting and Debugging Tools

1. ifconfig Command

ifconfig is a command-line tool for network interface configuration, which is also used to initialize interfaces at system boot time. Once a server is up and running, it can be used to assign an IP address to an interface and to enable or disable the interface on demand.


It is also used to view the status, IP address, hardware/MAC address, and MTU (Maximum Transmission Unit) size of the currently active interfaces. ifconfig is thus useful for debugging or performing system tuning.

Here is an example to display status of all active network interfaces.

$ ifconfig
enp1s0    Link encap:Ethernet  HWaddr 28:d2:44:eb:bd:98
          inet addr:192.168.0.103  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::8f0c:7825:8057:5eec/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:169854 errors:0 dropped:0 overruns:0 frame:0
          TX packets:125995 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:174146270 (174.1 MB)  TX bytes:21062129 (21.0 MB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:15793 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15793 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:2898946 (2.8 MB)  TX bytes:2898946 (2.8 MB)

To list all interfaces which are currently available, whether up or down, use the -a flag.

$ ifconfig -a 

To assign an IP address to an interface, use the following command.

$ sudo ifconfig eth0 192.168.56.5 netmask 255.255.255.0

To activate a network interface, type.

$ sudo ifconfig eth0 up

To deactivate or shut down a network interface, type.

$ sudo ifconfig eth0 down

Note: Although ifconfig is a great tool, it is now obsolete (deprecated); its replacement is the ip command, which is explained below.

2. IP Command

The ip command is another useful command-line utility for displaying and manipulating routing, network devices, and interfaces. It is a replacement for ifconfig and many other networking commands. (Read our article “What’s Difference Between ifconfig and ip Command” to learn more about it.)

The following command will show the IP address and other information about a network interface.

$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 28:d2:44:eb:bd:98 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.103/24 brd 192.168.0.255 scope global dynamic enp1s0
valid_lft 5772sec preferred_lft 5772sec
inet6 fe80::8f0c:7825:8057:5eec/64 scope link valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 38:b1:db:7c:78:c7 brd ff:ff:ff:ff:ff:ff
...

To temporarily assign an IP address to a specific network interface (eth0), type.

$ sudo ip addr add 192.168.56.1 dev eth0

To remove an assigned IP address from a network interface (eth0), type.

$ sudo ip addr del 192.168.56.15/24 dev eth0

To show the current neighbour table in the kernel, type.

$ ip neigh
192.168.0.1 dev enp1s0 lladdr 10:fe:ed:3d:f3:82 REACHABLE

3. ifup, ifdown, and ifquery Commands

The ifup command activates a network interface, making it available to transfer and receive data.

$ sudo ifup eth0

The ifdown command disables a network interface, keeping it in a state where it cannot transfer or receive data.

$ sudo ifdown eth0

The ifquery command is used to parse the network interface configuration, enabling you to query how an interface is currently configured.

$ sudo ifquery eth0

4. Ethtool Command

ethtool is a command-line utility for querying and modifying network interface controller parameters and device drivers. The example below shows how to use ethtool to view the parameters of a network interface.

$ sudo ethtool enp0s3
Settings for enp0s3:
	Supported ports: [ TP ]
	Supported link modes:   10baseT/Half 10baseT/Full
	                        100baseT/Half 100baseT/Full
	                        1000baseT/Full
	Supported pause frame use: No
	Supports auto-negotiation: Yes
	Advertised link modes:  10baseT/Half 10baseT/Full
	                        100baseT/Half 100baseT/Full
	                        1000baseT/Full
	Advertised pause frame use: No
	Advertised auto-negotiation: Yes
	Speed: 1000Mb/s
	Duplex: Full
	Port: Twisted Pair
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: on
	MDI-X: off (auto)
	Supports Wake-on: umbg
	Wake-on: d
	Current message level: 0x00000007 (7)
			       drv probe link
	Link detected: yes
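
ethtool can also change interface parameters with the -s option; for example, to force an interface to 100Mb/s full duplex with auto-negotiation disabled (an illustrative setting; whether it takes effect depends on the NIC and driver).

$ sudo ethtool -s enp0s3 speed 100 duplex full autoneg off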

5. Ping Command

ping (Packet INternet Groper) is a utility normally used for testing connectivity between two systems on a network (Local Area Network (LAN) or Wide Area Network (WAN)). It uses ICMP (Internet Control Message Protocol) to communicate with nodes on a network.

To test connectivity to another node, simply provide its IP or host name, for example.

$ ping 192.168.0.103
PING 192.168.0.103 (192.168.0.103) 56(84) bytes of data.
64 bytes from 192.168.0.103: icmp_seq=1 ttl=64 time=0.191 ms
64 bytes from 192.168.0.103: icmp_seq=2 ttl=64 time=0.156 ms
64 bytes from 192.168.0.103: icmp_seq=3 ttl=64 time=0.179 ms
64 bytes from 192.168.0.103: icmp_seq=4 ttl=64 time=0.182 ms
64 bytes from 192.168.0.103: icmp_seq=5 ttl=64 time=0.207 ms
64 bytes from 192.168.0.103: icmp_seq=6 ttl=64 time=0.157 ms
^C
--- 192.168.0.103 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5099ms
rtt min/avg/max/mdev = 0.156/0.178/0.207/0.023 ms

You can also tell ping to exit after a specified number of ECHO_REQUEST packets, using the -c flag as shown.

$ ping -c 4 192.168.0.103
PING 192.168.0.103 (192.168.0.103) 56(84) bytes of data.
64 bytes from 192.168.0.103: icmp_seq=1 ttl=64 time=1.09 ms
64 bytes from 192.168.0.103: icmp_seq=2 ttl=64 time=0.157 ms
64 bytes from 192.168.0.103: icmp_seq=3 ttl=64 time=0.163 ms
64 bytes from 192.168.0.103: icmp_seq=4 ttl=64 time=0.190 ms
--- 192.168.0.103 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3029ms
rtt min/avg/max/mdev = 0.157/0.402/1.098/0.402 ms

6. Traceroute Command

Traceroute is a command-line utility for tracing the full path from your local system to another network system. It prints the number of hops (router IPs) in the path taken to reach the end server. After ping, it is an easy-to-use network troubleshooting utility.

In this example, we are tracing the route packets take from the local system to one of Google’s servers with IP address 216.58.204.46.

$ traceroute 216.58.204.46
traceroute to 216.58.204.46 (216.58.204.46), 30 hops max, 60 byte packets
1 gateway (192.168.0.1) 0.487 ms 0.277 ms 0.269 ms
2 5.5.5.215 (5.5.5.215) 1.846 ms 1.631 ms 1.553 ms
3 * * *
4 72.14.194.226 (72.14.194.226) 3.762 ms 3.683 ms 3.577 ms
5 108.170.248.179 (108.170.248.179) 4.666 ms 108.170.248.162 (108.170.248.162) 4.869 ms 108.170.248.194 (108.170.248.194) 4.245 ms
6 72.14.235.133 (72.14.235.133) 72.443 ms 209.85.241.175 (209.85.241.175) 62.738 ms 72.14.235.133 (72.14.235.133) 65.809 ms
7 66.249.94.140 (66.249.94.140) 128.726 ms 127.506 ms 209.85.248.5 (209.85.248.5) 127.330 ms
8 74.125.251.181 (74.125.251.181) 127.219 ms 108.170.236.124 (108.170.236.124) 212.544 ms 74.125.251.181 (74.125.251.181) 127.249 ms
9 216.239.49.134 (216.239.49.134) 236.906 ms 209.85.242.80 (209.85.242.80) 254.810 ms 254.735 ms
10 209.85.251.138 (209.85.251.138) 252.002 ms 216.239.43.227 (216.239.43.227) 251.975 ms 209.85.242.80 (209.85.242.80) 236.343 ms
11 216.239.43.227 (216.239.43.227) 251.452 ms 72.14.234.8 (72.14.234.8) 279.650 ms 277.492 ms
12 209.85.250.9 (209.85.250.9) 274.521 ms 274.450 ms 209.85.253.249 (209.85.253.249) 270.558 ms
13 209.85.250.9 (209.85.250.9) 269.147 ms 209.85.254.244 (209.85.254.244) 347.046 ms 209.85.250.9 (209.85.250.9) 285.265 ms
14 64.233.175.112 (64.233.175.112) 344.852 ms 216.239.57.236 (216.239.57.236) 343.786 ms 64.233.175.112 (64.233.175.112) 345.273 ms
15 108.170.246.129 (108.170.246.129) 345.054 ms 345.342 ms 64.233.175.112 (64.233.175.112) 343.706 ms
16 108.170.238.119 (108.170.238.119) 345.610 ms 108.170.246.161 (108.170.246.161) 344.726 ms 108.170.238.117 (108.170.238.117) 345.536 ms
17 lhr25s12-in-f46.1e100.net (216.58.204.46) 345.382 ms 345.031 ms 344.884 ms

7. MTR Network Diagnostic Tool

MTR is a modern command-line network diagnostic tool that combines the functionality of ping and traceroute into a single diagnostic tool. Its output is updated in real-time, by default until you exit the program by pressing q.

The easiest way of running mtr is to provide it a host name or IP address as an argument, as follows.

$ mtr google.com
OR
$ mtr 216.58.223.78
Sample Output
tecmint.com (0.0.0.0) Thu Jul 12 08:58:27 2018
First TTL: 1
Host Loss% Snt Last Avg Best Wrst StDev
1. 192.168.0.1 0.0% 41 0.5 0.6 0.4 1.7 0.2
2. 5.5.5.215 0.0% 40 1.9 1.5 0.8 7.3 1.0
3. 209.snat-111-91-120.hns.net.in 23.1% 40 1.9 2.7 1.7 10.5 1.6
4. 72.14.194.226 0.0% 40 89.1 5.2 2.2 89.1 13.7
5. 108.170.248.193 0.0% 40 3.0 4.1 2.4 52.4 7.8
6. 108.170.237.43 0.0% 40 2.9 5.3 2.5 94.1 14.4
7. bom07s10-in-f174.1e100.net 0.0% 40 2.6 6.7 2.3 79.7 16.

You can limit the number of pings to a specific value and exit mtr after those pings, using the -c flag as shown.

$ mtr -c 4 google.com

8. Route Command

route is a command line utility for displaying or manipulating the IP routing table of a Linux system. It is mainly used to configure static routes to specific hosts or networks via an interface.

You can view the kernel IP routing table by typing.

$ route
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 enp0s3
192.168.0.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0

There are numerous commands you can use to configure routing. Here are some useful ones:

Add a default gateway to the routing table.

$ sudo route add default gw <gateway-ip>

Add a network route to the routing table.

$ sudo route add -net <network ip/cidr> gw <gateway ip> <interface>

Delete a specific route entry from the routing table.

$ sudo route del -net <network ip/cidr>
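
For example, to send traffic destined for the 192.168.56.0/24 network through the gateway 192.168.0.1 on interface enp0s3 (illustrative addresses; substitute your own network, gateway, and interface).

$ sudo route add -net 192.168.56.0/24 gw 192.168.0.1 dev enp0s3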

9. Nmcli Command

Nmcli is an easy-to-use, scriptable command-line tool to report network status, manage network connections, and control the NetworkManager.

To view all your network devices, type.

$ nmcli dev status
DEVICE   TYPE      STATE      CONNECTION
virbr0   bridge    connected  virbr0
enp0s3   ethernet  connected  Wired connection 1

To check network connections on your system, type.

$ nmcli con show
Wired connection 1  bc3638ff-205a-3bbb-8845-5a4b0f7eef91  802-3-ethernet  enp0s3
virbr0              00f5d53e-fd51-41d3-b069-bdfd2dde062b  bridge          virbr0

To see only the active connections, add the -a flag.

$ nmcli con show -a
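
You can also bring a connection down or up by its name, taken from the output above (depending on your polkit configuration, this may require sudo).

$ nmcli con down "Wired connection 1"
$ nmcli con up "Wired connection 1"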

Network Scanning and Performance Analysis Tools

10. Netstat Command

netstat is a command line tool that displays useful information such as network connections, routing tables, interface statistics, and much more, concerning the Linux networking subsystem. It is useful for network troubleshooting and performance analysis.

It is also a fundamental network service debugging tool used to check which programs are listening on what ports. For instance, the following command will show all TCP ports in listening mode and what programs are listening on them.

$ sudo netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:587             0.0.0.0:*               LISTEN      1257/master
tcp        0      0 127.0.0.1:5003          0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:110             0.0.0.0:*               LISTEN      1015/dovecot
tcp        0      0 0.0.0.0:143             0.0.0.0:*               LISTEN      1015/dovecot
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd
tcp        0      0 0.0.0.0:465             0.0.0.0:*               LISTEN      1257/master
tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN      1404/pdns_server
tcp        0      0 0.0.0.0:21              0.0.0.0:*               LISTEN      1064/pure-ftpd (SER
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      972/sshd
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      975/cupsd
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      1257/master
tcp        0      0 0.0.0.0:8090            0.0.0.0:*               LISTEN      636/lscpd (lscpd -
tcp        0      0 0.0.0.0:993             0.0.0.0:*               LISTEN      1015/dovecot
tcp        0      0 0.0.0.0:995             0.0.0.0:*               LISTEN      1015/dovecot
tcp6       0      0 :::3306                 :::*                    LISTEN      1053/mysqld
tcp6       0      0 :::3307                 :::*                    LISTEN      1211/mysqld
tcp6       0      0 :::587                  :::*                    LISTEN      1257/master
tcp6       0      0 :::110                  :::*                    LISTEN      1015/dovecot
tcp6       0      0 :::143                  :::*                    LISTEN      1015/dovecot
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd
tcp6       0      0 :::80                   :::*                    LISTEN      990/httpd
tcp6       0      0 :::465                  :::*                    LISTEN      1257/master
tcp6       0      0 :::53                   :::*                    LISTEN      1404/pdns_server
tcp6       0      0 :::21                   :::*                    LISTEN      1064/pure-ftpd (SER
tcp6       0      0 :::22                   :::*                    LISTEN      972/sshd
tcp6       0      0 ::1:631                 :::*                    LISTEN      975/cupsd
tcp6       0      0 :::25                   :::*                    LISTEN      1257/master
tcp6       0      0 :::993                  :::*                    LISTEN      1015/dovecot
tcp6       0      0 :::995                  :::*                    LISTEN      1015/dovecot

To view the kernel routing table, use the -r flag (which is equivalent to running the route command above).

$ netstat -r
Destination Gateway Genmask Flags MSS Window irtt Iface
default gateway 0.0.0.0 UG 0 0 0 enp0s3
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s3
192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0

Note: Although netstat is a great tool, it is now obsolete (deprecated); its replacement is the ss command, which is explained below.

11. ss Command

ss (socket statistics) is a powerful command line utility to investigate sockets. It dumps socket statistics and displays information similar to netstat. In addition, it shows more TCP and state information compared to other similar utilities.

The following example shows how to list all TCP ports (sockets) that are open on a server.

$ ss -ta
State   Recv-Q  Send-Q  Local Address:Port          Peer Address:Port
LISTEN  0       100     *:submission                *:*
LISTEN  0       128     127.0.0.1:fmpro-internal    *:*
LISTEN  0       100     *:pop3                      *:*
LISTEN  0       100     *:imap                      *:*
LISTEN  0       128     *:sunrpc                    *:*
LISTEN  0       100     *:urd                       *:*
LISTEN  0       128     *:domain                    *:*
LISTEN  0       9       *:ftp                       *:*
LISTEN  0       128     *:ssh                       *:*
LISTEN  0       128     127.0.0.1:ipp               *:*
LISTEN  0       100     *:smtp                      *:*
LISTEN  0       128     *:8090                      *:*
LISTEN  0       100     *:imaps                     *:*
LISTEN  0       100     *:pop3s                     *:*
ESTAB   0       0       192.168.0.104:ssh           192.168.0.103:36398
ESTAB   0       0       127.0.0.1:34642             127.0.0.1:opsession-prxy
ESTAB   0       0       127.0.0.1:34638             127.0.0.1:opsession-prxy
ESTAB   0       0       127.0.0.1:34644             127.0.0.1:opsession-prxy
ESTAB   0       0       127.0.0.1:34640             127.0.0.1:opsession-prxy
LISTEN  0       80      :::mysql                    :::*
...

To display all active TCP connections together with their timers, run the following command.

$ ss -to

12. NC Command

NC (NetCat), also referred to as the “Network Swiss Army knife”, is a powerful utility used for almost any task related to TCP, UDP, or UNIX-domain sockets. It is used to open TCP connections, listen on arbitrary TCP and UDP ports, perform port scanning, and more.

You can also use it as a simple TCP proxy, for network daemon testing, to check if remote ports are reachable, and much more. Furthermore, you can employ nc together with the pv command to transfer files between two computers.
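
For example, to copy a file to another machine with a progress bar, you can start a listener on the receiving host and pipe the file through pv on the sending host (illustrative host name, file, and port; the listen syntax varies slightly between netcat variants).

--------- On the receiving host ---------
$ nc -l -p 3000 > backup.tar
--------- On the sending host ---------
$ pv backup.tar | nc server2.tecmint.lan 3000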

The following example will show how to scan a list of ports.

$ nc -zv server2.tecmint.lan 21 22 80 443 3000

You can also specify a range of ports as shown.

$ nc -zv server2.tecmint.lan 20-90

The following example shows how to use nc to open a TCP connection to port 5000 on server2.tecmint.lan, using port 3000 as the source port, with a timeout of 10 seconds.

$ nc -p 3000 -w 10 server2.tecmint.lan 5000 

13. Nmap Command

Nmap (Network Mapper) is a powerful and extremely versatile tool for Linux system/network administrators. It is used to gather information about a single host or to explore an entire network. Nmap is also used to perform security scans and network audits, to find open ports on remote hosts, and much more.

You can scan a host using its host name or IP address, for instance.

$ nmap google.com 
Starting Nmap 6.40 ( http://nmap.org ) at 2018-07-12 09:23 BST
Nmap scan report for google.com (172.217.166.78)
Host is up (0.0036s latency).
rDNS record for 172.217.166.78: bom05s15-in-f14.1e100.net
Not shown: 998 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
Nmap done: 1 IP address (1 host up) scanned in 4.92 seconds

Alternatively, use an IP address as shown.

$ nmap 192.168.0.103
Starting Nmap 6.40 ( http://nmap.org ) at 2018-07-12 09:24 BST
Nmap scan report for 192.168.0.103
Host is up (0.000051s latency).
Not shown: 994 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
902/tcp open iss-realsecure
4242/tcp open vrml-multi-use
5900/tcp open vnc
8080/tcp open http-proxy
MAC Address: 28:D2:44:EB:BD:98 (Lcfc(hefei) Electronics Technology Co.)
Nmap done: 1 IP address (1 host up) scanned in 0.13 seconds

Read our following useful articles on nmap command.

  1. How to Use Nmap Script Engine (NSE) Scripts in Linux
  2. A Practical Guide to Nmap (Network Security Scanner) in Kali Linux
  3. Find Out All Live Hosts IP Addresses Connected on Network in Linux

DNS Lookup Utilities

14. host Command

The host command is a simple utility for carrying out DNS lookups; it translates host names to IP addresses and vice versa.

$ host google.com
google.com has address 172.217.166.78
google.com mail is handled by 20 alt1.aspmx.l.google.com.
google.com mail is handled by 30 alt2.aspmx.l.google.com.
google.com mail is handled by 40 alt3.aspmx.l.google.com.
google.com mail is handled by 50 alt4.aspmx.l.google.com.
google.com mail is handled by 10 aspmx.l.google.com.

15. dig Command

dig (domain information groper) is another simple DNS lookup utility, used to query DNS-related information such as A, CNAME, and MX records. For example:

$ dig google.com
; <<>> DiG 9.9.4-RedHat-9.9.4-51.el7 <<>> google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23083
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 13, ADDITIONAL: 14
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com. IN A
;; ANSWER SECTION:
google.com. 72 IN A 172.217.166.78
;; AUTHORITY SECTION:
com. 13482 IN NS c.gtld-servers.net.
com. 13482 IN NS d.gtld-servers.net.
com. 13482 IN NS e.gtld-servers.net.
com. 13482 IN NS f.gtld-servers.net.
com. 13482 IN NS g.gtld-servers.net.
com. 13482 IN NS h.gtld-servers.net.
com. 13482 IN NS i.gtld-servers.net.
com. 13482 IN NS j.gtld-servers.net.
com. 13482 IN NS k.gtld-servers.net.
com. 13482 IN NS l.gtld-servers.net.
com. 13482 IN NS m.gtld-servers.net.
com. 13482 IN NS a.gtld-servers.net.
com. 13482 IN NS b.gtld-servers.net.
;; ADDITIONAL SECTION:
a.gtld-servers.net. 81883 IN A 192.5.6.30
b.gtld-servers.net. 3999 IN A 192.33.14.30
c.gtld-servers.net. 14876 IN A 192.26.92.30
d.gtld-servers.net. 85172 IN A 192.31.80.30
e.gtld-servers.net. 95861 IN A 192.12.94.30
f.gtld-servers.net. 78471 IN A 192.35.51.30
g.gtld-servers.net. 5217 IN A 192.42.93.30
h.gtld-servers.net. 111531 IN A 192.54.112.30
i.gtld-servers.net. 93017 IN A 192.43.172.30
j.gtld-servers.net. 93542 IN A 192.48.79.30
k.gtld-servers.net. 107218 IN A 192.52.178.30
l.gtld-servers.net. 6280 IN A 192.41.162.30
m.gtld-servers.net. 2689 IN A 192.55.83.30
;; Query time: 4 msec
;; SERVER: 192.168.0.1#53(192.168.0.1)
;; WHEN: Thu Jul 12 09:30:57 BST 2018
;; MSG SIZE rcvd: 487
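
To query a specific record type, pass it after the domain name; adding +short limits the output to the answers only.

$ dig google.com MX +short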

16. NSLookup Command

Nslookup is also a popular command-line utility to query DNS servers both interactively and non-interactively. It is used to query DNS resource records (RRs). You can find out the “A” record (IP address) of a domain as shown.

$ nslookup google.com
Server: 192.168.0.1
Address: 192.168.0.1#53
Non-authoritative answer:
Name: google.com
Address: 172.217.166.78

You can also perform a reverse domain lookup as shown.

$ nslookup 216.58.208.174
Server: 192.168.0.1
Address: 192.168.0.1#53
Non-authoritative answer:
174.208.58.216.in-addr.arpa name = lhr25s09-in-f14.1e100.net.
174.208.58.216.in-addr.arpa name = lhr25s09-in-f174.1e100.net.
Authoritative answers can be found from:
in-addr.arpa nameserver = e.in-addr-servers.arpa.
in-addr.arpa nameserver = f.in-addr-servers.arpa.
in-addr.arpa nameserver = a.in-addr-servers.arpa.
in-addr.arpa nameserver = b.in-addr-servers.arpa.
in-addr.arpa nameserver = c.in-addr-servers.arpa.
in-addr.arpa nameserver = d.in-addr-servers.arpa.
a.in-addr-servers.arpa internet address = 199.180.182.53
b.in-addr-servers.arpa internet address = 199.253.183.183
c.in-addr-servers.arpa internet address = 196.216.169.10
d.in-addr-servers.arpa internet address = 200.10.60.53
e.in-addr-servers.arpa internet address = 203.119.86.101
f.in-addr-servers.arpa internet address = 193.0.9.1

Linux Network Packet Analyzers

17. Tcpdump Command

Tcpdump is a very powerful and widely used command-line network sniffer. It is used to capture and analyze TCP/IP packets transmitted or received over a network on a specific interface.

To capture packets from a given interface, specify it using the -i option.

$ tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp0s3, link-type EN10MB (Ethernet), capture size 262144 bytes
09:35:40.287439 IP tecmint.com.ssh > 192.168.0.103.36398: Flags [P.], seq 4152360356:4152360552, ack 306922699, win 270, options [nop,nop,TS val 2211778668 ecr 2019055], length 196
09:35:40.287655 IP 192.168.0.103.36398 > tecmint.com.ssh: Flags [.], ack 196, win 5202, options [nop,nop,TS val 2019058 ecr 2211778668], length 0
09:35:40.288269 IP tecmint.com.54899 > gateway.domain: 43760+ PTR? 103.0.168.192.in-addr.arpa. (44)
09:35:40.333763 IP gateway.domain > tecmint.com.54899: 43760 NXDomain* 0/1/0 (94)
09:35:40.335311 IP tecmint.com.52036 > gateway.domain: 44289+ PTR? 1.0.168.192.in-addr.arpa. (42)

To capture a specific number of packets, use the -c option to enter the desired number.

$ tcpdump -c 5 -i eth1

You can also capture and save packets to a file for later analysis; use the -w flag to specify the output file.

$ tcpdump -w captured.pacs -i eth1
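
You can later read back a saved capture file for analysis using the -r flag.

$ tcpdump -r captured.pacs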

18. Wireshark Utility

Wireshark is a popular, powerful, versatile and easy to use tool for capturing and analyzing packets in a packet-switched network, in real-time.

You can also save data it has captured to a file for later inspection. It is used by system administrators and network engineers to monitor and inspect the packets for security and troubleshooting purposes.

Read our article “10 Tips On How to Use Wireshark to Analyze Network Packets” to learn more about Wireshark.

19. Bmon Tool

bmon is a powerful, command-line based network monitoring and debugging utility for Unix-like systems; it captures networking-related statistics and prints them visually in a human-friendly format. It is a reliable and effective real-time bandwidth monitor and rate estimator.

Read our article “bmon – A Powerful Network Bandwidth Monitoring and Debugging Tool” to learn more about bmon.

Linux Firewall Management Tools

20. Iptables Firewall

iptables is a command-line tool for configuring, maintaining, and inspecting the kernel’s IP packet filtering and NAT rule tables. It is used to set up and manage the Linux firewall (Netfilter). It allows you to list existing packet filter rules, to add, delete, or modify rules, and to list per-rule counters.
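
For example, you can list the current rules with packet counters, allow incoming SSH traffic, and then delete a rule by its number, as shown below (illustrative rules; adapt the chain, port, and rule number to your environment).

$ sudo iptables -L -n -v --line-numbers
$ sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
$ sudo iptables -D INPUT 2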

You can learn how to use Iptables for various purposes from our simple yet comprehensive guides.

  1. Basic Guide on IPTables (Linux Firewall) Tips / Commands
  2. 25 Useful IPtable Firewall Rules Every Linux Administrator Should Know
  3. How To Setup an Iptables Firewall to Enable Remote Access to Services
  4. How to Block Ping ICMP Requests to Linux Systems

21. Firewalld

Firewalld is a powerful and dynamic daemon to manage the Linux firewall (Netfilter), just like iptables. It uses “network zones” instead of the INPUT, OUTPUT and FORWARD chains in iptables. On current Linux distributions such as RHEL/CentOS 7 and Fedora 21+, iptables is actively being replaced by firewalld.
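
For example, you can check whether the daemon is running, list the configuration of the default zone, and permanently open the HTTP service (illustrative commands; adjust the zone and service to your needs).

$ sudo firewall-cmd --state
$ sudo firewall-cmd --list-all
$ sudo firewall-cmd --permanent --add-service=http
$ sudo firewall-cmd --reload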

To get started with firewalld, consult these guides listed below:

  1. Useful ‘FirewallD’ Rules to Configure and Manage Firewall in Linux
  2. How to Configure ‘FirewallD’ in RHEL/CentOS 7 and Fedora 21
  3. How to Start/Stop and Enable/Disable FirewallD and Iptables Firewall in Linux
  4. Setting Up Samba and Configure FirewallD and SELinux to Allow File Sharing on Linux/Windows

Important: Iptables is still supported and can be installed with the YUM package manager. However, you can’t use Firewalld and iptables at the same time on the same server – you must choose one.

22. UFW (Uncomplicated Firewall)

UFW is a well-known and the default firewall configuration tool on Debian and Ubuntu Linux distributions. It is used to enable/disable the system firewall, add/delete/modify/reset packet filtering rules, and much more.

To check UFW firewall status, type.

$ sudo ufw status

If UFW firewall is not active, you can activate or enable it using the following command.

$ sudo ufw enable

To disable UFW firewall, use the following command.

$ sudo ufw disable 
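
Once UFW is enabled, you can add or remove packet filtering rules; for example, to allow incoming SSH traffic on port 22 and later remove that rule again (an illustrative rule).

$ sudo ufw allow 22/tcp
$ sudo ufw delete allow 22/tcp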

Read our article “How to Setup UFW Firewall on Ubuntu and Debian” to learn more about UFW.

If you want to find more information about a particular program, you can consult its man pages as shown.

$ man program_name

That’s all for now! In this comprehensive guide, we reviewed some of the most used command-line tools and utilities for network management in Linux, under different categories. It is aimed at system administrators and is equally useful to full-time network administrators/engineers.

You can share your thoughts about this guide via the comment form below. If we have missed any frequently used and important Linux networking tools/utilities or any useful related information, also let us know.

New – Lifecycle Management for Amazon EBS Snapshots

It is always interesting to zoom in on the history of a single AWS service or feature and watch how it has evolved over time in response to customer feedback. For example, Amazon Elastic Block Store (EBS) launched a decade ago and has been gaining more features and functionality ever since. Here are a few of the most significant announcements:

  • August 2008 – We launched EBS in production form, with support for volumes of up to 1 TB and snapshots to S3.
  • September 2010 – We gave you the ability to Tag EBS Volumes.
  • August 2012 – We launched Provisioned IOPS for EBS volumes, allowing you to dial in the level of performance that you need.
  • June 2014 – We gave you the ability to create SSD-backed EBS volumes.
  • March 2015 – We gave you the ability to create EBS volumes of up to 16 TB and 20,000 IOPS.
  • April 2016 – We gave you New cold storage and throughput options.
  • June 2016 – We gave you the power to create Cross-account copies of encrypted EBS snapshots.
  • February 2017 – We launched Elastic Volumes, allowing you to adjust the size, performance, and volume type of an active, mounted EBS volume.
  • December 2017 – We gave you the ability to create SSD-backed volumes that deliver up to 32,000 IOPS.
  • May 2017 – We launched Cost allocation for EBS snapshots so that you can assign costs to projects, departments, or other entities.
  • April 2018 – We gave you the ability to Tag EBS snapshots on creation and to Use resource-level permissions to implement stronger security policies.
  • May 2018 – We announced that encrypted EBS snapshots are now stored incrementally, resulting in a performance improvement and cost savings.

Several of the items that I chose to highlight above make EBS snapshots more useful and more flexible. As you may already know, it is easy to create snapshots. Each snapshot is a point-in-time copy of the blocks that have changed since the previous snapshot, with automatic management to ensure that only the data unique to a snapshot is removed when it is deleted. This incremental model reduces your costs and minimizes the time needed to create a snapshot.

Because snapshots are so easy to create and use, our customers create a lot of them, and make great use of tags to categorize, organize, and manage them. Going back to my list, you can see that we have added multiple tagging features over the years.

Lifecycle Management – The Amazon Data Lifecycle Manager
We want to make it even easier for you to create, use, and benefit from EBS snapshots! Today we are launching Amazon Data Lifecycle Manager to automate the creation, retention, and deletion of Amazon EBS volume snapshots. Instead of creating snapshots manually and deleting them in the same way (or building a tool to do it for you), you simply create a policy, indicating (via tags) which volumes are to be snapshotted, set a retention model, fill in a few other details, and let Data Lifecycle Manager do the rest. Data Lifecycle Manager is powered by tags, so you should start by setting up a clear and comprehensive tagging model for your organization (refer to the links above to learn more).

It turns out that many of our customers have invested in tools to automate the creation of snapshots, but have skimped on the retention and deletion. Sooner or later they receive a surprisingly large AWS bill and find that their scripts are not working as expected. Data Lifecycle Manager should help them save money and rest assured that their snapshots are being managed as expected.

Creating and Using a Lifecycle Policy
Data Lifecycle Manager uses lifecycle policies to figure out when to run, which volumes to snapshot, and how long to keep the snapshots around. You can create the policies in the AWS Management Console, from the AWS Command Line Interface (CLI) or via the Data Lifecycle Manager APIs; I’ll use the Console today. Here are my EBS volumes, all suitably tagged with a department:

I access the Lifecycle Manager from the Elastic Block Store section of the menu:

Then I click Create Snapshot Lifecycle Policy to proceed:

Then I create my first policy:

I use tags to specify the volumes that the policy applies to. If I specify multiple tags, then the policy applies to volumes that have any of the tags:

I can create snapshots at 12 or 24 hour intervals, and I can specify the desired snapshot time. Snapshot creation will start no more than an hour after this time, with completion based on the size of the volume and the degree of change since the last snapshot.

I can use the built-in default IAM role or I can create one of my own. If I use my own role, I need to enable the EC2 snapshot operations and all of the DLM (Data Lifecycle Manager) operations; read the docs to learn more.

Newly created snapshots will be tagged with the aws:dlm:lifecycle-policy-id and  aws:dlm:lifecycle-schedule-name automatically; I can also specify up to 50 additional key/value pairs for each policy:

I can see all of my policies at a glance:

I took a short break and came back to find that the first set of snapshots had been created, as expected (I configured the console to show the two tags created on the snapshots):

Things to Know
Here are a couple of things to keep in mind when you start to use Data Lifecycle Manager to automate your snapshot management:

Data Consistency – Snapshots will contain the data from all completed I/O operations, also known as crash consistent.

Pricing – You can create and use Data Lifecycle Manager policies at no charge; you pay the usual storage charges for the EBS snapshots that it creates.

Availability – Data Lifecycle Manager is available in the US East (N. Virginia), US West (Oregon), and EU (Ireland) Regions.

Tags and Policies – If a volume has more than one tag and the tags match multiple policies, each policy will create a separate snapshot and both policies will govern the retention. No two policies can specify the same key/value pair for a tag.

Programmatic Access – You can create and manage policies programmatically! Take a look at the CreateLifecyclePolicy, GetLifecyclePolicies, and UpdateLifeCyclePolicy functions to get started. You can also write an AWS Lambda function in response to the createSnapshot event.
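
As a minimal sketch with boto3 (assuming the dlm client; the role ARN, tag, and schedule values are illustrative), creating a policy might look like this:

import boto3

dlm = boto3.client("dlm")

# Snapshot all volumes tagged Department=Engineering every 24 hours at 09:00 UTC,
# keeping the five most recent snapshots per volume
response = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily snapshots for Engineering volumes",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Department", "Value": "Engineering"}],
        "Schedules": [{
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["09:00"]},
            "RetainRule": {"Count": 5},
            "CopyTags": True,
        }],
    },
)
print(response["PolicyId"])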

Error Handling – Data Lifecycle Manager generates a “DLM Policy State Change” event if a policy enters the error state.

In the Works – As you might have guessed from the name, we plan to add support for additional AWS data sources over time. We also plan to support policies that will let you do weekly and monthly snapshots, and also expect to give you additional scheduling flexibility.

— Jeff;

Elon Musk’s Flint Water Plan Misses the Point

Wednesday afternoon, on the heels of his belated effort to rescue a youth soccer team from a Thai cave with a tiny submarine, Elon Musk promised to fix another seemingly intractable problem. “Please consider this a commitment that I will fund fixing the water in any house in Flint that has water contamination above FDA levels,” Musk wrote in a tweet. “No kidding.”

You can nitpick pieces of this—the EPA, not the FDA, determines how many parts per billion of lead is safe in drinking water—or dismiss it as just another manifestation of Musk’s itinerant savior complex. But know that Flint, at least, welcomes Musk’s help. Just maybe not the version that’s on offer.

Which, in fairness, continues to evolve. Musk went on to invite residents to tweet their water quality test results to him—no takers yet, it seems—and said he would send someone over to install a water filter. When a reporter suggested that many Flint houses have safe water already, Musk pivoted to organizing “a weekend in Flint to add filters” to the remaining houses that lack them.

‘There are many people in Flint, I think it’s safe to say, who are never going to trust tap water again.’

Benjamin Pauli, Kettering University

Flint does need help, but filters are one thing it already has plenty of; the city distributes those and water testing kits, for free, at City Hall, and will continue to until Flint’s remaining 14,000 damaged lead and galvanized water service pipes have been fully replaced. And even then, slapping a filter on a kitchen faucet doesn’t address the deep-seated problems still felt by the Flint community four years after its crisis began.

“We had a lot of things damaged as a result of the corrosive water,” says Flint Mayor Karen Weaver, who offered in a tweet Wednesday to talk through her city’s “specific needs” with Musk. “This is about reestablishing trust, and rebuilding trust. While filters have been helpful, we still need access to bottled water. People need to see all new pipes going in. That’s how you’re going to reestablish trust. And we know that’s what the residents deserve.”

Musk took her up on it, suggesting he’d call on Friday. Weaver says her office and Musk’s are still sorting out schedules, but preliminary conversations have been promising.

Filtering Down

It’s worth spending more time talking about those filters, not because they demonstrate Musk’s lack of familiarity with Flint’s current situation, but because they underscore the city’s deeper challenges.

First, it’s important to note that Flint’s drinking water has met federal standards for contaminants for at least a year. “From every objective measure that is out there, Flint’s water is like any other US city with old lead pipes,” says Siddhartha Roy, who works on the Virginia Tech research team that helped shed light on the Flint water crisis and has tracked it ever since. Water from old lead pipes still isn’t ideal, obviously, and makes filters a necessity. But even then, Flint residents remain understandably wary.

“There are many people in Flint, I think it’s safe to say, who are never going to trust tap water again under any circumstances,” says Benjamin Pauli, a social scientist at Flint’s Kettering University, who has been involved in clean water activism efforts. “It’s true that the filters solve a lead problem at point of use, but there are lots of other issues with the filters.”

Not all residents know how to install and maintain them, for one. A March survey of 2,000 residents by Flint News showed that 15 percent of respondents didn’t have a filter, while over a third weren’t confident in their ability to change the filter at the appropriate time.

And then there’s what Roy calls the “big trust gap” that makes Flint activists and residents suspicious of even working filters. That’s because they effectively get lead out of the water at a specific tap, but don’t clear away bacteria. For a city that suffered a deadly spike in Legionnaires’ disease in 2016, which has been linked to corrosive water from the Flint River, that causes understandable unease. But Roy notes that the current bacteria found in Flint’s filters has not been shown to be harmful. And anyone who does have concerns can follow a few simple steps to minimize bacterial buildup.

“We do have concerns about filter use, and maintenance, and education around the filters. Everybody is not comfortable with that. Seniors are especially not comfortable with the filters,” says Weaver, who notes that the city does have Community Outreach and Resident Education that visits homes to help remediate any filter issues that arise.

Which again should sound familiar to anyone who read Musk’s tweets. What he proposes to accomplish in a barnstorming weekend has been an available resource for years. Better, then, to focus on what Flint really needs.

Bottle It Up

In April, the state of Michigan stopped providing free bottled water to Flint. For a city that still doesn’t trust its taps, the impact can’t be overstated.

“The bottled water is necessary as a short-term intervention for a long-term, structural water system problem,” says Pastor Monica Villarreal, who has helped organize community-based efforts to provide clean water resources in Flint. “The water crisis is going to affect this city from generation to generation. And when you look at it from that perspective, two, three, maybe even four years of bottled water is not much.”

Community aid stations that were once open daily to distribute bottled water now operate just three times a week. And in the absence of state support, Flint increasingly has to rely on private donors; Weaver says the Detroit Police Department recently brought in a fresh supply.

So if Elon Musk—or anyone else—wants to help Flint, start with bottled water, which residents will continue to depend on until every last lead and galvanized line gets replaced. “Bottled water is really the life and death issue,” Villarreal says.


And if you want to think bigger, plenty of options remain. “One issue that residents have been raising from very early on is that corrosive water from the river didn’t just damage service lines and water mains, it also damaged the plumbing within people’s homes,” says Kettering’s Pauli. “And not just pipes but fixtures, and also appliances that use water. That would include washing machines, and dishwashers, and hot water heaters.”

Scale it up again, to billionaire proportions. “We want to look at the bigger infrastructure issues in the city as well,” Weaver says. “It’s about reestablishing trust. You have to be confident in the water again.” One way to accomplish that? Get more contractors on the ground replacing service lines; get a three-year replacement plan finished by the end of 2018. And then, Weaver says, look at investment in the community. Instead of—or in addition to—giving people water, how can you help get them back to work?

Those are the types of questions Elon Musk can expect on his call with the mayor. But no matter what comes of it, even expressing interest in the first place has accomplished something invaluable: Reminding people that Flint still exists, and still needs help.

“We’re glad to have the attention. That was one of the fears of the residents, that attention would go away, and we have not been made whole,” Weaver says. “We want everybody watching, because what happened to Flint should never happen to any place again.”



It Just Got Easier for the FCC to Ignore Your Complaints

It may soon be harder to get the Federal Communications Commission to listen to your complaints about billing, privacy, or other issues with telecommunications carriers like AT&T and Verizon.

Today, the agency approved changes to its complaint system that critics say will undermine the agency’s ability to review and act on the complaints it receives.

On Wednesday, The Washington Post reported that the controversial changes had been dropped from the proposal, but the commission voted 3–1 along party lines to approve the proposal with the changes intact.

“I believe we should be doing everything within our power to make it easier for consumers to file complaints and seek redress,” Jessica Rosenworcel, the FCC’s lone Democratic commissioner, said during today’s meeting. “This decision utterly fails that test.”

The FCC has two complaint systems. Formal complaints cost $225 to file and work a bit like a court proceeding. The informal complaint system is free. According to the FCC website, the agency doesn’t work to resolve individual informal complaints, but reviews them for trends or patterns that can lead to investigations or actions against carriers.

The changes approved today mostly deal with formal complaints about utility poles. But they include small changes to the informal complaint system that critics say will have an outsized impact on how the agency handles complaints.

At issue is the removal of the words “review and disposition” from the informal complaint rules. The term “disposition” means “resolution.”

In a letter on Tuesday, two Democrats in the House of Representatives argued that under the revised rule, FCC staffers would simply forward consumer complaints to the targeted company and advise consumers to file a formal complaint, for $225, if they’re not satisfied with the company’s response.

An FCC spokesman told WIRED Wednesday the change to the informal complaint process was only intended to clarify that the agency doesn’t act on individual complaints.

But critics worry that by removing the reference to review and disposition, FCC staff will no longer have the authority to review and act on informal complaints.

“Now the FCC can ignore informal complaints completely if it wants to,” says Gigi Sohn, a former FCC lawyer who is now a fellow at the Georgetown Law Institute for Technology Law and Policy. “This FCC’s contempt for the public it is legally mandated to serve is remarkable.”



Judd Legum’s ‘Popular Information’ Is a Politics Newsletter for Everyone

One of the few things people agree on in 2018 is that the news industry is broken. The old business models don’t work. Meanwhile, audiences feel overwhelmed and underserved: According to a recent Pew Research Center survey, seven in 10 Americans say they are exhausted by the news. The consensus stops with the diagnosis, though; when it comes to prescribing a treatment, everyone has different ideas.

To Judd Legum, editor in chief and founder of left-leaning political news website ThinkProgress, the two biggest problems are ads and social media. Digital ads aren’t sustainable as a business model for online publications, and they create incentives for clickbait and other poor-quality journalism. Social media is a firehose of information and leaves readers and outlets alike at the whim of algorithms. This is especially worrisome to Legum right now, given the upcoming midterm elections and the need for voters to be informed on the issues.

“People need to make more intentional choices and to regain power over what news they read,” says Legum. “There’s something fundamentally broken about news delivery as a process. The power is too concentrated. I’ve felt more and more strongly that I wanted to start something new that could circumvent the system.”

Today, Legum is joining a small but growing group of journalists and readers who think one way to fix this is through a good old-fashioned email newsletter. And he is going all in. After 13 years at the helm of ThinkProgress–a site that garners around 10 million unique visitors a month–he’s leaving the 40-person newsroom he runs to launch a paid political newsletter called “Popular Information”, which he will write himself. Starting July 23, Legum will publish “Popular Information” four days a week. He says it will be a mix of deep reporting and analysis, focused on national issues with a progressive lens.

The benefits to both journalist and reader of a direct-to-inbox newsletter are clear: there’s no middleman between reader and writer, no algorithm deciding what you see and what you don’t. And it’s a relationship built on trust—something that the media needs to rebuild with Americans after years of declining public opinion. Readers explicitly opt in to receive newsletters, with the expectation that they will deliver something of value. “It’s intimate to come into somebody’s inbox every day. Email is a more intimate medium than just publishing on the web,” says Jay Rosen, professor of journalism at NYU’s Arthur L Carter Journalism Institute.


That’s part of what is so appealing to Legum, who came up as a blogger in the early aughts, when loyal readers visited and often commented on their favorite blogs every day. Once social media rendered that behavior obsolete, Twitter became the place for writers and readers to have a direct relationship, but that introduces a host of new problems.

“Twitter is very ephemeral,” Legum says, adding that most of what he tweets is in reaction to something immediate. “What I’m trying to do with the newsletter is provide some perspective and organization for people who might have a real job during the day. This is for people who are feeling overwhelmed.”

And he’s hoping a good number of his readers will pay for that curation. “Popular Information” will be free for everyone for the first six to eight weeks in order to gain an audience; after that, the Monday edition will be free, and the other three days accessible only to paying members. Luckily, the overhead will be low. Legum will work out of his small apartment in Washington, DC, and has enough money saved to live off for a while as he builds up his subscriber base.

“There’s a hustle to it,” says Legum. If he succeeds, he might expand “Popular Information” to have a staff larger than one. Even if he does, there are downsides to the paid model: the cost of entry makes information inaccessible to some.

“All the idealism of journalism is that you can equip the public with information so that it knows what’s going on in its world. So there is an element of all subscription products that is in a sense anti-public,” Rosen says.

It’s a tension that besets any paywall, and it’s something Legum has considered. The name of his newsletter comes from a line James Madison wrote in a letter in 1822: “A popular Government, without popular information, or the means of acquiring it, is but a Prologue to a Farce or a Tragedy; or, perhaps both.” To make “Popular Information” as accessible as possible, Legum plans to keep the subscription cost low. Though he hasn’t decided exactly how much yet, it’ll be less than $10 a month.

Newsletters have long been a way for media outlets to directly reach their audiences, for free or for a price. So why doesn’t Legum just launch “Popular Information” as part of ThinkProgress?

“ThinkProgress is a full-time job. We produce about 25 pieces per day and have three dozen staffers. So, in my view, I don’t have time to do this and my current job. I need to be able to focus my attention on this so I can do it right,” he says. ThinkProgress managing editor Tara Culp-Ressler will take over his duties until a new editor in chief can be found. She told WIRED in a statement: “The ThinkProgress team is grateful for the newsroom that Judd built. Obviously we’re sad to see him go, but we’re excited to watch his next chapter unfold.”

Legum’s also ready for something new, and sees a dearth of low-cost, high-quality newsletters focused on politics for a general audience, even as newsletter-first publications have taken off.

In recent years some have gained massive audiences, like Gwyneth Paltrow’s Goop, which has morphed into a lifestyle brand, or The Skimm, which aggregates news from across the web and this year raised $12 million from the likes of Google. The model Legum plans to follow most closely comes from tech analyst Ben Thompson, whose daily newsletter Stratechery costs $10 a month or $100 a year, and is required reading for many people interested in tech.

But the biggest political newsletters right now come from news organizations like Axios and Politico. Legum notes that Axios’ morning and evening newsletters are sponsored by Wall Street firms like Goldman Sachs and Bank of America. Politico’s Pro subscription, which includes much more than newsletters, is so expensive that even with only 20,000 subscribers it accounted for half the company’s revenue in 2017; at the time, a five-person subscription started at $8,000 a year. Its free newsletter, Playbook, grows out of that insider perspective, in Legum’s opinion, treating politics like a spectator sport for elites rather than something that affects people’s lives. He hopes that by offering something without corporate money, paid for instead in small amounts by individual stakeholders who want to read what he has to say, “Popular Information” will act as a guide to the politics that matter.

The other thing Legum’s counting on to pull this off is a streamlined back-end. Whereas Thompson, who launched Stratechery in 2014, had to cobble together the means to produce his newsletter himself, Legum has Substack, a startup founded last year by Hamish McKenzie, a former journalist, and Chris Best and Jairaj Sethi, both formerly of messaging app Kik. Early on, they consulted with Thompson and other newsletter producers and recall hearing over and over that half of their time was spent renewing subscriptions and managing the newsletter.

Substack deals with all of that, taking payments, distributing the newsletter to people’s inboxes, renewing subscriptions, and making sure everything works technically. In exchange, it takes a 10 percent cut for newsletters that charge subscribers (for everyone else, publishing is free while the service is in beta).

“We were thinking about how bad incentives for online advertising have sort of broken the media,” says Best, Substack’s CEO. “They incentivize clickbait and cheap outrage in a way that’s dissatisfying for everybody. We’re caught in this bad equilibrium where everybody has to write clickbait stuff.”

Substack graduated from Y Combinator last winter and has raised $2 million in funding. Earlier this week, Best and McKenzie told Nieman Lab that across its hundreds of existing newsletters it has hit 11,000 paid subscribers, who are paying an average of just under $80 a year. And approximately 40 newsletter creators are making what the pair described as “meaningful money”, though “meaningful” can mean different things to different people.

“I don’t really have any expectations on money except I’m going to put my full effort into this and see what I can make of it,” Legum says. “Whether I succeed or not I think depends on whether it ends up being good.”

One challenge facing Legum, and any other newsletter creator, is that at some point people will hit a limit on how many newsletters they want to receive and are willing to pay for. “Popular Information” will be competing for your money with all other paid publications, like newspapers and websites such as WIRED, which has a paywall. For now, Legum hopes he’s getting into the political newsletter game at a time when people are hungry for in-depth information, and interested in receiving it from someone who doesn’t have a corporate sponsor. He also has the benefit of a loyal readership at ThinkProgress who he hopes will sign up, credibility from working in and covering politics for 15 years, and 280,000 Twitter followers.

As Rosen notes, the first hurdle to a business model like this is to get anyone to sign up. Having an already established audience certainly helps. So far Substack’s biggest hits are written by well-known writers such as Rolling Stone’s Matt Taibbi and Slate’s Daniel Ortberg. Taibbi has teamed up with an anonymous drug-dealing friend to write a fictional work in newsletter installments, and Ortberg writes a quirky humor newsletter called the “Shatner Chatner”. “Popular Information” will be the first political dispatch for the company.

And though Legum will be a bit busy in the weeks and months to come, he promises to keep tweeting.



AWS Storage Gateway Recap – SMB Support, RefreshCache Event, and More

To borrow my own words, the AWS Storage Gateway is a service that includes a multi-protocol storage appliance that fits in between your existing application and the AWS Cloud. Your applications see the gateway as a file system, a local disk volume, or a Virtual Tape Library, depending on how it was configured.

Today I would like to share a few recent updates to the File Gateway configuration of the Storage Gateway, and also show you how they come together to enable some new processing models. First, the most recent updates:

SMB Support – The File Gateway already supports access from clients that speak NFS (versions 3 and 4.1 are supported). Last month we added support for the Server Message Block (SMB) protocol. This allows Windows applications that communicate using v2 or v3 of SMB to store files as objects in S3 through the gateway, enabling hybrid cloud use cases such as backup, content distribution, and processing of machine learning and big data workloads. You can control access to the gateway using your existing on-premises Active Directory (AD) domain or a cloud-based domain hosted in AWS Directory Service, or you can use authenticated guest access. To learn more about this update, read AWS Storage Gateway Adds SMB Support to Store and Access Objects in Amazon S3 Buckets.
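
If you provision gateways programmatically, the new share type is also exposed through the Storage Gateway API. Here is a minimal boto3 sketch that creates an SMB file share authenticated against Active Directory; the gateway ARN, IAM role, and bucket ARN are placeholders for illustration, not real resources.

import boto3
import uuid

sgw = boto3.client("storagegateway")

# Hypothetical ARNs for illustration; substitute your own gateway, role, and bucket.
# The gateway must already be joined to your AD domain (see JoinDomain) for
# ActiveDirectory authentication to succeed.
response = sgw.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::123456789012:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::example-branch-data",
    Authentication="ActiveDirectory",  # or "GuestAccess"
)
print(response["FileShareARN"])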

Cross-Account Permissions – Some of our customers run their gateways in one AWS account and configure them to upload to an S3 bucket owned by another account. This allows them to track departmental storage and retrieval costs using chargeback and showback models. In order to simplify this important use case, you can configure the gateway to provide the bucket owner with full permissions. This avoids a pain point which could arise if the bucket owner was unable to see the objects. To learn how to set this up, read Using a File Share for Cross-Account Access.
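
Under the hood this is an object ACL setting on the file share. As a rough sketch, with a hypothetical file share ARN, you could apply it to an existing NFS share like this:

import boto3

sgw = boto3.client("storagegateway")

# Objects uploaded after this change carry the bucket-owner-full-control ACL,
# so the account that owns the destination bucket can see and manage them.
sgw.update_nfs_file_share(
    FileShareARN="arn:aws:storagegateway:us-east-1:123456789012:share/share-EXAMPLE",
    ObjectACL="bucket-owner-full-control",
)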

Requester Pays – Bucket owners are responsible for storage costs. Owners pay for data transfer costs by default, but also have the option to have the requester pay. To support this use case, the File Gateway now supports S3’s Requester Pays Buckets. Data collectors and aggregators can use this feature to share data with research organizations such as universities and labs without incurring the costs of access themselves. File Gateway provides file based access to the S3 objects, caches recently accessed data locally, helping requesters reduce latency and costs. To learn more, read about Creating an NFS File Share and Creating an SMB File Share.
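
Requester Pays is likewise a per-share flag. Assuming the same hypothetical share ARN, a sketch to turn it on for an SMB share might look like this; the underlying S3 bucket must have Requester Pays enabled as well.

import boto3

sgw = boto3.client("storagegateway")

# With RequesterPays enabled, access charges for reads through this share
# accrue to the requester rather than the bucket owner.
sgw.update_smb_file_share(
    FileShareARN="arn:aws:storagegateway:us-east-1:123456789012:share/share-EXAMPLE",
    RequesterPays=True,
)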

File Upload Notification – The gateway caches files locally, and uploads them to a designated S3 bucket in the background. Late last year we gave you the ability to request notification (in the form of a CloudWatch Event) when new files have been uploaded. You can use this to initiate cloud-based processing or to implement advanced logging. To learn more, read Getting File Upload Notification and study the NotifyWhenUploaded function.
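
Requesting the notification is a single API call; the gateway later emits a CloudWatch Event carrying the returned notification ID once every file written to the share before the call has landed in S3. A minimal sketch, assuming a hypothetical share ARN:

import boto3

sgw = boto3.client("storagegateway")

# Ask the gateway to emit a CloudWatch Event when all files written to the
# share up to this point have been uploaded to the S3 bucket.
resp = sgw.notify_when_uploaded(
    FileShareARN="arn:aws:storagegateway:us-east-1:123456789012:share/share-EXAMPLE"
)
print("Watch for notification ID:", resp["NotificationId"])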

Cache Refresh Event – You have long had the ability to use the RefreshCache function to make sure that the gateway is aware of objects that have been added, removed, or replaced in the bucket. The new Storage Gateway Cache Refresh Event lets you know that the cache is now in sync with S3, and can be used as a signal to initiate local processing. To learn more, read Getting Refresh Cache Notification.
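
The refresh itself remains one API call; completion now arrives asynchronously as the new event. A sketch, again with a hypothetical share ARN:

import boto3

sgw = boto3.client("storagegateway")

# Kick off a cache refresh so the gateway notices objects written directly to
# the bucket; the new Refresh Cache Event signals when the cache is in sync.
resp = sgw.refresh_cache(
    FileShareARN="arn:aws:storagegateway:us-east-1:123456789012:share/share-EXAMPLE"
)
print("Refresh requested for:", resp["FileShareARN"])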

Hybrid Processing Using File Gateway
You can use the File Upload Notification and Cache Refresh Event to automate some of your routine hybrid processing tasks!

Let’s say that you run a geographically distributed office or retail business, with locations all over the world. Raw data (metrics, cash register receipts, or time sheets) is collected at each location, and then uploaded to S3 using a File Gateway hosted at each location. As the data arrives, you use the File Upload Notifications to process each S3 object, perhaps using an AWS Lambda function that invokes Amazon Athena to run a stock set of queries against each one. The data arrives over the course of a couple of hours, and results accumulate in another bucket. At the end of the reporting period, the intermediate results are processed, custom reports are generated for each branch location, and then stored in another bucket (this bucket, as it turns out, is also associated with a gateway, and each gateway will have cached copies of the prior versions of the reports). After you generate your reports, you can refresh each of the gateway caches, wait for the corresponding notifications, and then send an email to the branch managers to tell them that their new report is available.
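
To make the middle of that pipeline concrete, here is a rough sketch of a Lambda handler that could sit behind the upload notification and run a stock Athena query as each batch of data arrives. The event fields, database, table, and bucket names are assumptions chosen for illustration, not part of the Storage Gateway API.

import boto3

athena = boto3.client("athena")

# Hypothetical names for illustration.
DATABASE = "branch_metrics"
RESULTS_BUCKET = "s3://example-branch-results/athena/"

def handler(event, context):
    # The CloudWatch Event delivered for a file upload notification carries
    # gateway details in event["detail"]; which fields you use depends on how
    # you structured the rule and your bucket layout (assumed here).
    detail = event.get("detail", {})
    print("Upload notification received:", detail)

    # Run a stock query over the newly arrived data; results accumulate in the
    # intermediate results bucket for end-of-period report generation.
    query = "SELECT location_id, SUM(amount) AS total FROM receipts GROUP BY location_id"
    resp = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": RESULTS_BUCKET},
    )
    return {"queryExecutionId": resp["QueryExecutionId"]}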

Here’s a video (and presentation) with more information about this processing model:

Now Available
All of the features listed above are available now and you can start using them today in all regions where Storage Gateway is available.

— Jeff;