DeepLens Challenge #1 Starts Today – Use Machine Learning to Drive Inclusion

Are you ready to develop and show off your machine learning skills in a way that has a positive impact on the world? If so, get your hands on an AWS DeepLens video camera and join the AWS DeepLens Challenge!

About the Challenge
Working together with our friends at Intel, we are launching the first in a series of eight themed challenges today, all centered around improving the world in some way. Each challenge will run for two weeks and is designed to help you get some hands-on experience with machine learning.

We will announce a fresh challenge every two weeks on the AWS Machine Learning Blog. Each challenge will have a real-world theme, a technical focus, a sample project, and a subject matter expert. You have 12 days to invent and implement a DeepLens project that resonates with the theme, and to submit a short, compelling video (four minutes or less) to represent and summarize your work.

We’re looking for cool submissions that resonate with the theme and that make great use of DeepLens. We will watch all of the videos and then share the most intriguing ones.

Challenge #1 – Inclusivity Challenge
The first challenge was inspired by the Special Olympics, which took place in Seattle last week. We invite you to use your DeepLens to create a project that drives inclusion, overcomes barriers, and strengthens the bonds between people of all abilities. You could gauge the physical accessibility of buildings, provide audio guidance using Polly for people with impaired sight, or create educational projects for children with learning disabilities. Any project that supports this theme is welcome.

For each project that meets the entry criteria we will make a donation of $249 (the retail price of an AWS DeepLens) to the Northwest Center, a non-profit organization based in Seattle. This organization works to advance equal opportunities for children and adults of all abilities and we are happy to be able to help them to further their mission. Your work will directly benefit this very worthwhile goal!

As an example of what we are looking for, ASLens is a project created by Chris Coombs of Melbourne, Australia. It recognizes and understands American Sign Language (ASL) and plays the audio for each letter. Chris used Amazon SageMaker and Polly to implement ASLens (you can watch the video, learn more and read the code).

To learn more, visit the DeepLens Challenge page. Entries for the first challenge are due by midnight (PT) on July 22nd and I can’t wait to see what you come up with!

— Jeff;

PS – The DeepLens Resources page is your gateway to tutorial videos, documentation, blog posts, and other helpful information.

Brett Kavanaugh on the Supreme Court Could Be Trouble for Tech

President Donald Trump has chosen Washington DC Circuit Court Judge Brett Kavanaugh to fill Justice Anthony Kennedy’s seat on the Supreme Court. The decision, which Trump announced Monday night, is likely to face opposition not only from Democrats in Congress but also from leaders within the tech industry who oppose Kavanaugh’s perspective on issues related to privacy and net neutrality.

A former clerk for Justice Kennedy, the 53-year-old judge also once worked under independent counsel Kenneth Starr, whose investigation led to the impeachment of President Bill Clinton. Later, Kavanaugh served as White House staff secretary under President George W. Bush. As predicted, he is a solidly conservative pick, whose nomination to the DC Circuit Appeals Court was put on hold for three years over concerns he was too partisan. But President Trump denied the inherently political nature of his pick. “What matters is not a judge’s political views,” he said, “but whether they can set aside those views to do what the law and the constitution require.”

Left-leaning groups including Planned Parenthood and the Democratic National Committee rushed to scrutinize Kavanaugh’s record of opposition to the Affordable Care Act and abortion rights, including a recent case in which Kavanaugh opposed an undocumented teenager’s request for an abortion while she was in detention. But it’s Judge Kavanaugh’s less discussed decisions that will likely rankle the tech industry.

In May of 2017, Kavanaugh argued that net neutrality violates internet service providers’ First Amendment rights in a dissent to a DC Circuit Court decision regarding the Federal Communications Commission’s 2015 order upholding net neutrality. The dissent hinges on a case from the 1990s called Turner Broadcasting v. FCC, which established that cable companies were protected by the First Amendment, just as newspaper publishers and pamphleteers were. “Just like cable operators, Internet service providers deliver content to consumers. Internet service providers may not necessarily generate much content of their own, but they may decide what content they will transmit, just as cable operators decide what content they will transmit,” Kavanaugh wrote. “Deciding whether and how to transmit ESPN and deciding whether and how to transmit ESPN.com are not meaningfully different for First Amendment purposes.”


Kavanaugh argued that just because internet service providers don’t currently make editorial decisions about what does and doesn’t flow over their pipes doesn’t mean they don’t have the right to. “That would be akin to arguing that people lose the right to vote if they sit out a few elections,” he wrote. “Or citizens lose the right to protest if they have not protested before.”

According to Gigi Sohn, who served as counselor to former FCC chairman Tom Wheeler and is now a distinguished fellow at Georgetown Law Institute for Technology Law & Policy, this perspective represents the “fringe of First Amendment jurisprudence.”

“For 85 years, the First Amendment rights of network operators like ISPs, broadcasters, and cable operators have always been balanced with the rights of the public,” Sohn says. “Kavanaugh’s ascension to the bench could start the mainstreaming of a legal theory that would all but eviscerate the public’s rights with regard to networks that use public rights of way, and by law are required to serve the public.”

The FCC has already killed net neutrality for the time being, reversing Obama-era rules that would have prevented internet service providers from speeding up or slowing down service however they chose. But lawsuits both in support of net neutrality and in opposition to it are already making their way through the courts. If the Supreme Court took them up, Kavanaugh’s opposition to regulating internet service providers could close the book on net neutrality protections for a generation.

Despite his consistently conservative pedigree, Kavanaugh’s nomination could also run afoul of the libertarian wing of the Republican Party, which has opposed government surveillance programs. In September of 2010, he dissented from the DC court’s decision not to revisit a ruling that found that police violated a suspect’s Fourth Amendment rights by using a GPS device to track his car without a warrant. Kavanaugh argued that the decision ignored precedent laid out in a 1983 case called United States v. Knotts. That case found that the government did not violate a man’s Fourth Amendment rights by using a radio transmitter to track his movements because “[a] person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”

Kavanaugh argued that the same should go for GPS trackers. “The reasonable expectation of privacy as to a person’s movements on the highway is, as concluded in Knotts, zero,” Kavanaugh wrote. The Supreme Court later upheld the DC Circuit’s ruling. In his opinion, Justice Antonin Scalia wrote that the government had violated the suspect’s Fourth Amendment rights because the police “physically occupied private property for the purpose of obtaining information.”

Kavanaugh also later defended the National Security Agency’s bulk collection of phone records in a concurring opinion in November of 2015, writing that “the Government’s metadata collection program is entirely consistent with the Fourth Amendment.” The opinion offered a broad interpretation of the state’s right to search and seizure. “The Fourth Amendment allows governmental searches and seizures without individualized suspicion when the Government demonstrates a sufficient ‘special need’ – that is, a need beyond the normal need for law enforcement – that outweighs the intrusion on individual liberty,” Kavanaugh wrote. “Examples include drug testing of students, roadblocks to detect drunk drivers, border checkpoints, and security screening at airports.”

Recently, the Supreme Court has appeared more eager to protect Americans’ digital property from unreasonable search, not just their physical property. In a 5-4 decision in Carpenter v. United States last month, the court ruled that warrantless search and seizure of cell-site records does violate the Fourth Amendment. And yet, Chief Justice John Roberts was careful to point out that the court’s opinion “does not consider other collection techniques involving foreign affairs or national security.”

At a time when the Trump administration has taken extreme measures to crack down on both illegal and legal immigration in the name of national security, the question of where Kavanaugh would draw the line on government surveillance warrants closer inspection during what is sure to be a knockdown, drag-out fight over his confirmation this fall.



How Microsoft’s AI Could Help Prevent Natural Disasters

On May 27, a deluge dumped more than 6 inches of rain in less than three hours on Ellicott City, Maryland, killing one person and transforming Main Street into what looked like Class V river rapids, with cars tossed about like rubber ducks. The National Weather Service put the probability of such a storm at once in 1,000 years. Yet, “it’s the second time it’s happened in the last three years,” says Jeff Allenby, director of conservation technology for Chesapeake Conservancy, an environmental group.

Floods are nothing new in Ellicott City, located where two tributaries join the Patapsco River. But Allenby says the floods are getting worse, as development covers what used to be the “natural sponge of a forest” with paved surfaces, rooftops, and lawns. Just days before the May 27 flood, the US Department of Homeland Security selected Ellicott City—on the basis of its 2016 flood—for a pilot program to deliver better flood warnings to residents via automated sensors.

Recently, Allenby developed another tool to help predict, plan, and prepare for future floods: a first-of-its-kind, high-resolution map showing what’s on the ground—buildings, pavement, trees, lawns—across 100,000 square miles from upstate New York to southern Virginia that drain into Chesapeake Bay. The map, generated from aerial imagery with the help of artificial intelligence, shows objects as small as 3 feet square, roughly 1,000 times more precise than the maps that flood planners previously used. To understand the difference, imagine trying to identify an Uber driver on a crowded city street using a map that can only display objects the size of a Walmart.

Creating the map consumed a year and cost $3.5 million, with help from Microsoft and the University of Vermont. Allenby’s team pored over aerial imagery, road maps, and zoning charts to establish rules, classify objects, and scrub errors. “As soon as we finished the first data set,” Allenby says, “everyone started asking ‘when are you going to do it again?’” to keep the map fresh.

Enter AI. Microsoft helped Allenby’s team train its AI for Earth algorithms to identify objects on its own. Even with a robust data set, training the algorithms wasn’t easy. The effort required regular “pixel peeping”—manually zooming in on objects to verify and amend the automated results. With each pass, the algorithm improved its ability to recognize waterways, trees, fields, roads, and buildings. As relevant new data become available, Chesapeake Conservancy plans to use its AI to refresh the map more frequently and easily than the initial labor-intensive multi-million dollar effort.

Now, Microsoft is making the tool available more widely. For $42, anyone can run 200 million aerial images through Microsoft’s AI for Earth platform and generate a high-resolution land-cover map of the entire US in 10 minutes. The results won’t be as precise in other parts of the country where the algorithm has not been trained on local conditions—a redwood tree or saguaro cactus looks nothing like a willow oak.

A map of land use around Ellicott City, Maryland, built with the help of artificial intelligence (left) offers far more detail than its predecessor (right). (Image: Chesapeake Conservancy)

To a society obsessed with location and mapping services—where the physical world unfolds in the digital every day—the accomplishment may not seem groundbreaking. Until recently, though, neither the high-resolution data nor the AI smarts existed to make such maps cost-effective for environmental purposes, especially for nonprofit conservation organizations. With Microsoft’s offer, AI on a planetary scale is about to become a commodity.

Detailed, up-to-date information is paramount when it comes to designing stormwater management systems, Allenby says. “Looking at these systems with the power of AI can start to show when a watershed” is more likely to flood, he says. The Center for Watershed Protection, a nonprofit based in Ellicott City, reported in a 2001 study that when 10 percent of natural land gets developed, stream health declines and it begins to lose its ability to manage runoff. At 20 percent, runoff doubles, compared with undeveloped land. Allenby notes that paved surfaces and rooftops in Ellicott City reached 19 percent in recent years.

Allenby says the more detailed map will enable planners to keep up with land-use changes and plan drainage systems that can accommodate more water. Eventually, the map will offer “live dashboards” and automated alerts to serve as a warning system when new development threatens to overwhelm stormwater management capacity. The Urban Forestry Administration in Washington, DC, has used the new map to determine where to plant trees by searching the district for areas without tree cover where standing water accumulates. Earlier this year, Chesapeake Conservancy began working with conservation groups in Iowa and Arizona to develop training sets for the algorithms specific to those landscapes.

The combination of high-resolution imaging and sensor technologies, AI, and cloud computing is giving conservationists deeper insight into the health of the planet. The result is a near-real-time readout of Earth’s vital signs, firing off alerts and alarms whenever the ailing patient takes a turn for the worse.

Others are applying these techniques around the world. Global Forest Watch (GFW), a conservation organization established by World Resources Institute, began offering monthly and weekly deforestation alerts in 2016, powered by AI algorithms developed by Orbital Insight. The algorithms analyze satellite imagery as it’s refreshed to detect “patterns that may indicate impending deforestation,” according to the organization’s website. Using GFW’s mobile app, Forest Watcher, volunteers and forest rangers take to the trees to verify the automated alerts in places like the Leuser Ecosystem in Indonesia, which calls itself “the last place on Earth where orangutans, rhinos, elephants and tigers are found together in the wild.”

The new conservation formula is also spilling into the oceans. On June 4, Paul Allen Philanthropies revealed a partnership with the Carnegie Institution for Science, the University of Queensland, the Hawaii Institute of Marine Biology, and the private satellite company Planet to map all of the world’s coral reefs by 2020. As Andrew Zolli, a Planet vice president, explains: For the first time in history, “new tools are up to the [planetary] level of the problem.”

By the end of 2017, Planet had deployed nearly 200 satellites, forming a necklace around the globe that images the entire Earth every day down to 3-meter resolution. That’s trillions of pixels raining down daily, which could never be transformed into useful maps without AI algorithms trained to interpret them. The partnership leverages the Carnegie Institution’s computer-vision tools and the University of Queensland’s data on local conditions, including coral, algae, sand, and rocks.

“Today, we have no idea of the geography, rate, and frequency of global bleaching events,” explains Greg Asner, a scientist at Carnegie’s Department of Global Ecology. Based on what is known, scientists project that more than 90 percent of the world’s reefs, which sustain 25 percent of marine life, will be extinct by 2050. Lauren Kickham, impact director for Paul Allen Philanthropies, expects the partnership will bring the world’s coral crisis into clear view and enable scientists to track the reefs’ health on a daily basis.

In a separate coral reef project, also being conducted with Planet and the Carnegie Institution, The Nature Conservancy is leveraging Carnegie’s computer vision AI to develop a high-resolution map of the shallow waters of the Caribbean basin. “By learning how these systems live and how they adapt, maybe not our generation, but maybe the next will be able to bring them back,” says Luis Solorzano, The Nature Conservancy’s Caribbean Coral Reef project lead.

Mapping services are hardly new to conservation. Geographic Information Systems have been a staple in the conservation toolkit for years, providing interactive maps to facilitate environmental monitoring, regulatory enforcement, and preservation planning. But, mapping services are only as good as the underlying data, which can be expensive to acquire and maintain. As a result, many conservationists resort to what’s freely available, like the 30-meter-resolution images supplied by the United States Geological Survey.

Ellicott City and the Chesapeake watershed demonstrate the challenges of responding to a changing climate and the impacts of human activity. Since the 1950s, the bay’s oyster reefs have declined by more than 80 percent. Biologists discovered one of the planet’s first marine dead zones in Chesapeake Bay in the 1970s. Blue crab populations plunged in the 1990s. The sea level has risen more than a foot since 1895, and, according to a 2017 National Oceanic and Atmospheric Administration (NOAA) report, may rise as much as 6 feet by the end of this century.

Allenby joined the Chesapeake Conservancy in 2012 when technology companies provided a grant to explore the ways in which technology could help inform conservation. Allenby sought ways to deploy technology to help land managers, like those in Ellicott City, improve upon the dated 30-meter-resolution images that FEMA also uses for flood planning and preparation.

In 2015, Allenby connected with the University of Vermont—nationally recognized experts in generating county-level high-resolution land-cover maps—seeking a partner on a bigger project. They secured funding from a consortium of state and local governments, and nonprofit groups in 2016. The year-long effort involved integrating data from such disparate sources as aerial imagery, road maps, and zoning charts. As the data set came together, a Conservancy board member introduced Allenby to Microsoft, which was eager to demonstrate how its AI and cloud computing could be leveraged to support conservation.

“It’s been the frustration of my life to see what we’re capable of, yet how far behind we are in understanding basic information about the health of our planet,” says Lucas Joppa, Microsoft’s chief environmental scientist, who oversees AI for Earth. “And to see that those individuals on the front line solving society’s problems, like environmental sustainability, are often in organizations with the least resources to take advantage of the technologies that are being put out there.”

The ultimate question, however, is whether the diagnoses offered by these AI-powered land-cover maps will arrive in time to help cure the problems caused by man.



How to Install MongoDB on Ubuntu 18.04

MongoDB is an open-source, modern document database management system designed for high-performance data persistence, high availability, and automatic scaling, built on NoSQL technology. In MongoDB, a record is a document, which is a data structure consisting of field and value pairs (MongoDB documents are comparable to JSON objects).
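For example, a single record describing a user might be stored as the following document (a purely illustrative example; the field names and values are arbitrary):

{ "name": "Aaron", "age": 30, "skills": [ "linux", "mongodb" ] }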

Because it provides high performance and great scalability, it is widely used for building modern applications that require powerful, mission-critical, and high-availability databases.

In this article, we will explain how to install MongoDB, manage its service, and set up basic authentication on Ubuntu 18.04.

Important: You should note that the developers of MongoDB only offer packages for 64-bit LTS (long-term support) Ubuntu releases such as 14.04 LTS (trusty), 16.04 LTS (xenial), and so on.


Read Also: How to Install MongoDB on Ubuntu 16.04/14.04 LTS

Step 1: Installing MongoDB on Ubuntu 18.04

1. Ubuntu’s official software package repositories come with a stable version of MongoDB, which can be easily installed using the APT package manager.

First, update the system software package cache to get the latest version of the repository listings.

$ sudo apt update

2. Next, install the MongoDB package, which includes several other packages such as mongo-tools, mongodb-clients, mongodb-server and mongodb-server-core.

$ sudo apt install mongodb
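Once the installation completes, you can confirm the installed server version as shown (the exact version depends on what the Ubuntu repositories currently ship).

$ mongod --version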

3. Once you have successfully installed it, the MongoDB service starts automatically via systemd and the process listens on port 27017. You can verify its status using the systemctl command as shown.

$ sudo systemctl status mongodb
Check MongoDB Status

Step 2: Managing the MongoDB Service

4. The MongoDB installation comes with a systemd service, which can be easily managed via standard systemctl commands as shown.

To stop the running MongoDB service, run the following command.

$ sudo systemctl stop mongodb 

To start the MongoDB service, type the following command.

$ sudo systemctl start mongodb

To restart the MongoDB service, type the following command.

$ sudo systemctl restart mongodb 

To disable the MongoDB service from starting automatically at boot, type the following command.

$ sudo systemctl disable mongodb 

To re-enable the MongoDB service to start automatically at boot, type the following command.

$ sudo systemctl enable mongodb 
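You can also check whether the service is currently configured to start at boot using the standard is-enabled subcommand.

$ systemctl is-enabled mongodb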

Step 3: Enable Remote MongoDB Access on Firewall

5. By default, MongoDB runs on port 27017. To allow access from everywhere, you can run the following command.

$ sudo ufw allow 27017

But enabling access to MongoDB from everywhere gives unrestricted access to the database data. So, it is better to give access only to a specific IP address on MongoDB’s default port, using the following commands.

$ sudo ufw allow from your_server_IP/32 to any port 27017
$ sudo ufw status

6. By default, port 27017 listens on the local address 127.0.0.1 only. To allow remote MongoDB connections, you need to add your server IP address to the /etc/mongodb.conf configuration file as shown.

bind_ip = 127.0.0.1,your_server_ip
#port = 27017

Save the file, exit the editor, and restart MongoDB.

$ sudo systemctl restart mongodb
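To confirm that the mongodb process is now listening on the addresses you configured, you can inspect the listening TCP sockets with the ss utility as shown.

$ sudo ss -tlnp | grep 27017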

Step 4: Create MongoDB Database Root User and Password

7. By default, MongoDB comes with user authentication disabled; it is therefore started without access control. To launch the mongo shell, run the following command.

$ mongo 

8. Once you have connected to the mongo shell, you can list all available databases with the following command.

> show dbs
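On a fresh installation, you should only see the built-in databases. Optionally, you can create a test database and insert a sample document to confirm that the server is working (testdb and the document fields below are arbitrary examples).

> use testdb
> db.users.insert({ name: "Aaron", age: 30 })
> db.users.find()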

9. Next, enable access control on your MongoDB deployment to enforce authentication, requiring users to identify themselves every time they connect to the database server.

MongoDB uses the Salted Challenge Response Authentication Mechanism (SCRAM) by default. Using SCRAM, MongoDB verifies the supplied user credentials against the user’s name, password, and authentication database (the database in which the user was created, which, together with the user’s name, serves to identify the user).

You need to create a user administrator (analogous to the root user under MySQL/MariaDB) in the admin database. This user can administer users and roles, for example: create users, grant or revoke roles from users, and create or modify custom roles.

First switch to the admin database, then create the root user using the following commands (replace your_strong_password with a strong password of your own).

> use admin
> db.createUser({user:"root", pwd:"your_strong_password", roles:[{role:"root", db:"admin"}]})
Create MongoDB Root User

Now exit the mongo shell to enable authentication as explained next.
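To leave the shell, run the quit() helper (or press CTRL + C).

> quit()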

10. The MongoDB instance was started without the --auth command line option. You need to enable user authentication by editing the /lib/systemd/system/mongodb.service unit file. First, open the file for editing like so.

$ sudo vim /lib/systemd/system/mongodb.service 

Under the [Service] config section, find the parameter ExecStart.

ExecStart=/usr/bin/mongod --unixSocketPrefix=${SOCKETPATH} --config ${CONF} $DAEMON_OPTS

Change it to the following:

ExecStart=/usr/bin/mongod --auth --unixSocketPrefix=${SOCKETPATH} --config ${CONF} $DAEMON_OPTS
Enable Authentication in MongoDB

Save and close the file.
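Note: Depending on the MongoDB package version, you may alternatively be able to enable authentication by uncommenting or adding the following option in the /etc/mongodb.conf configuration file instead of editing the unit file (check your config file to confirm it uses this older option format).

auth = true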

11. After making changes to the unit file, run ‘systemctl daemon-reload‘ to reload units, then restart the MongoDB service and check its status as follows.

$ sudo systemctl daemon-reload
$ sudo systemctl restart mongodb
$ sudo systemctl status mongodb
Verify MongoDB Authentication

12. Now when you try to connect to MongoDB, you must authenticate yourself as a MongoDB user. For example:

$ mongo -u "root" -p --authenticationDatabase "admin"
Connect to MongoDB as Root User

Note: It is not recommended to enter your password directly on the command line (for example, -p your_password), because it will be stored in the shell history file and can be viewed later on by an attacker. The bare -p flag used above makes the mongo shell prompt you for the password instead.
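Alternatively, you can connect without credentials first and then authenticate from within the shell using db.auth(), as in the following minimal sketch (it assumes the root user created earlier; replace the password with your own).

$ mongo
> use admin
> db.auth("root", "your_strong_password")  // returns 1 on success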

That’s all! MongoDB is an open-source, modern NoSQL database management system that provides high performance, high availability, and automatic scaling.

In this article, we have explained how to install and get started with MongoDB on Ubuntu 18.04. If you have any queries, use the comment form below to reach us.

YouTube Debuts Plan to Promote and Fund ‘Authoritative’ News

Following a year in which YouTube has repeatedly promoted conspiracy-theory videos during breaking news events like the shootings in Parkland, Florida, and Las Vegas, the company announced on Monday a slew of new features it hopes will make news on the platform more reliable and less susceptible to manipulation. The company is also investing $25 million in grants to news organizations looking to expand their video operations, as part of a larger, $300 million program sponsored by YouTube’s sister company, Google.

According to YouTube executives, the goal is to identify authoritative news sources, bring those videos to the top of users’ feeds, and support quality journalism with tools and funding that will help news organizations more effectively reach their audiences. The challenge is deciding what constitutes authority when the public seems more divided than ever on which news sources to trust—or whether to trust the traditional news industry at all.


Among the many changes YouTube announced Monday are substantive tweaks to the tools it uses to recommend news-related videos. In the coming weeks, YouTube will start to display an information panel above videos about developing stories, which will include a link to an article that Google News deems to be most relevant and authoritative on the subject. The move is meant to help prevent hastily recorded hoax videos from rising to the top of YouTube’s recommendations. And yet, Google News hardly has a spotless record when it comes to promoting authoritative content. Following the 2016 election, the tool surfaced a WordPress blog falsely claiming Donald Trump won the popular vote as one of the top results for the term “final election results.”

YouTube is also expanding a feature, currently available in 17 countries, that shows up on the homepage during breaking news events. This section of the homepage will only surface videos from sources YouTube considers authoritative. The same goes for the videos that YouTube recommends viewers watch next.

These changes attempt to address the problem of misinformation online without adding more human moderators. With some 450 hours of video going up on YouTube every minute, “human curation isn’t really a viable solution,” Neal Mohan, YouTube’s chief product officer, told reporters Monday.

Traditionally, YouTube’s algorithm has prioritized a user’s personal viewing history, as well as the context of the video that user is currently watching, when deciding what videos to surface next. That can be problematic because, as researchers have found, once you watch one conspiracy-theory video claiming that the student survivors of the Parkland shooting are crisis actors, YouTube may recommend you watch even more. With this change, the company is trying to interrupt that downward spiral. It’s important to note, though, that YouTube is applying that standard only to breaking news and developing stories. For all other videos that users find on YouTube, the recommendation engine will work the old-fashioned way, which, YouTube executives acknowledge, may well turn up content that people find objectionable.

“There are going to be counter points of view, and there’s going to be [videos] where people who have a conspiratorial opinion are going to express them,” Mohan says. “What I think we can do is, instead of telling users what to think, give them as much information as possible, so that they can make those decisions themselves.”

To that end, YouTube is also beginning to implement its previously announced partnerships with Wikipedia and Encyclopedia Britannica, which it will use to fact-check more evergreen conspiracy theories about, say, the moon landing or the Bermuda Triangle. Those videos will now feature an information panel with context from either Encyclopedia Britannica or Wikipedia. For the moment, though, these panels are being applied only to a small subset of videos that, Mohan says, “tend to be accompanied by misinformation,” meaning they’re hardly a cure-all for the vast quantities of new and less predictable misinformation being uploaded to YouTube every day.

Eradicating that content isn’t the goal for YouTube, anyway. After all, merely spreading falsehoods isn’t against the platform’s policies, unless those falsehoods are considered to be hate speech or harassment. That’s one reason why known propagandists like Alex Jones of Infowars have managed to build wildly successful channels on the back of conspiracy theories that carefully adhere to YouTube’s terms. As it walks the fine line between openness, profitability, and living up to its responsibility to the public, YouTube is less focused on getting rid of the hoaxers than it is on trying to elevate journalism it considers valuable.

That’s one reason it’s giving $25 million in grants to newsrooms that are investing in online video capabilities. That’s a small amount for the multibillion-dollar company, but YouTube’s executives say it could grow in time. The funding is part of the so-called Google News Initiative, a three-year, $300 million fund aimed at strengthening and lifting up quality journalism, which Google announced in March. The hope is that this funding can help news organizations build more robust video operations to compete with the amateurs who might like to mislead their audiences. YouTube has also formed a working group of newsrooms that will help the company develop new products for journalists. “We’re doing this because, while we see the news industry changing, the importance of news is not,” says Robert Kyncl, YouTube’s chief business officer.

Still, questions remain about how this experiment will play out in practice. Identifying which news outlets are authoritative is hard enough in the United States, where people can subsist on completely different media diets according to their politics. Among the news organizations that YouTube highlighted in the announcement as authoritative were CNN and Fox News; the former is routinely rejected by President Trump as “fake news,” while the latter is among the least trusted news sources among Democratic voters. This bifurcation of the media poses a challenge for all tech platforms, not just YouTube, that resist taking a stand on what constitutes truth. In attempting to satisfy people all across the political spectrum—and do it on a global scale—they risk landing themselves smack in the center of the same ideological battles they helped foment.



How an App Could Give Some Gig Workers a Safety Net

The gig economy has a problem. Freelancing is increasingly common, but it’s still difficult and costly to access benefits without a 9-to-5 job. For the lowest-paid workers, it can be close to impossible.

In the past few years, many have seized on the idea of “portable benefits”: insurance and paid time off not bound to a single employer. In 2015, dozens of academics, entrepreneurs, and CEOs—including the cofounders and CEOs of Lyft, Handy, and Instacart—signed a manifesto calling for such a system. Last year, Senator Mark Warner (D-Virginia) introduced legislation that would offer grants to states, cities, and community groups to create pilot programs of portable benefits. In February, representatives in Washington state reintroduced a bill to create a state portable benefits system; soon after, Uber CEO Dara Khosrowshahi cosigned a public letter affirming the need for such a system. But for all the talk, there’s been little action.

That’s beginning to change. Since March, Fair Care Labs—the innovation arm of the National Domestic Workers Alliance, which organizes and advocates for domestic workers—has been quietly testing a portable benefits tool, developed with the help of a grant from Google.org. Domestic workers have long grappled with many of the issues plaguing the gig economy today. The Fair Labor Standards Act of 1938, which created the right to a minimum wage and overtime pay, initially excluded domestic workers; in the 1970s, the law expanded to include some domestic workers, but it still excludes babysitters and companions to the elderly. Domestic workers, like all independent contractors, cannot unionize or bargain collectively.

Over the past several years, states including California, New York, and Massachusetts have enacted laws granting domestic workers rights to overtime pay and paid time off; however, those laws are challenging to enforce, and there are still few federal protections. According to a 2017 study conducted in part by the NDWA, 23 percent of domestic workers are paid below their state’s minimum wage, and 70 percent earn less than $13 per hour. Like workers in the gig economy, most domestic workers are paid by multiple employers, none of whom is incentivized to offer benefits. In other words, the workforce is the perfect proving ground for a portable benefits system that could have broader applications.

Fair Care Labs’ tool, dubbed Alia, is initially designed for use by house cleaners, who typically work for a number of clients. Alia pools voluntary contributions from those clients, who each contribute at least $5 per cleaning; each cleaner can then use her pool of funds to redeem various benefits. Fair Care Labs has partnered with insurance company Colonial Life to offer life insurance, disability insurance, and accident and critical illness insurance. Workers can also redeem paid time off, at $120 per day.

In developing Alia, project lead Sam Witherbee spoke with dozens of cleaners, some who worked independently and others who worked through platforms such as Handy and Homejoy (before it shut down in 2015). They shared their stories about living without basic benefits like paid time off. He also spoke with people who hire cleaners—and learned that for the most part, they wanted to do the right thing. They just didn’t know where to begin. “If you make it easy, they’ll jump on it,” says Palak Shah, the founding director of Fair Care Labs and social innovations director at NDWA.

Workers using Alia describe the relief of having some sort of safety net, if small. Instead of continuing to work when they’re sick or delaying medical care, even workers without savings can take time off and see a doctor. “I wanted to have a backup plan, if something ever happened to me,” says Olivia Mejia, who has worked as a cleaner for 10 years and supports three children. With Alia in place, Mejia says, she was able to attend her daughter’s high school graduation this spring, which conflicted with her work schedule. In the past, she would have had to weigh the costs of missing a milestone event or missing a day’s pay.

Beyond domestic workers, such a tool could be used by any worker who receives income from multiple sources and does not have a primary employer that offers benefits. Indeed, even some clients of cleaners have found themselves eyeing the tool with their own affairs in mind. “I belong to a kind of professional class where I can afford to charge enough money to pay for” benefits, says Gretchen Hildebran, a freelance documentary filmmaker who contributes to a domestic worker’s benefits fund through Alia. “But it is very precarious, and it’s actually a huge amount of work to constantly be figuring them out for myself from month to month. To have something that was more stable and long-term would be amazing. I feel like it should be standard practice.”

Alia does not solve all the challenges faced by nontraditional workers. It notably does not offer health insurance, beyond critical illness insurance; the NDWA hopes to add a health-insurance option, but it’s proved a difficult nut to crack, and there hasn’t been huge demand for it from workers so far. And then there’s the fact that clients don’t have to contribute to the system. “A mandatory system would be better,” says Libby Reder, a fellow for the Aspen Institute’s Future of Work Initiative. She says requiring contributions would create “a lot more certainty and sustainability.”

A federal law of that sort may be a long way off, given the current Republican-dominated Congress. The bill reintroduced in Washington state this winter would mandate employer contributions to a portable benefits system, but it is stuck in committee. Similarly, Warner’s attempt to fund pilot programs for portable benefits has been stalled since last year, though it recently won two bipartisan cosponsors.

A tool like Alia could be significant for freelancers beyond just those working as home cleaners—“basically anyone working in different arrangements from the traditional 9-to-5 single employer,” Shah says. A freelance filmmaker like Hildebran could sign up clients to contribute an extra amount per project; an Uber or Lyft driver could theoretically sign up passengers mid-ride. Alia’s mere existence “makes it more difficult for people to say, ‘Ah, well, we just can’t figure out how to do it,’” says Elaine Waxman, a senior fellow in the Income and Benefits Policy Center at the Urban Institute.



CBM – Shows Network Bandwidth in Ubuntu

CBM (Color Bandwidth Meter) is a simple tool that shows the current network traffic on all connected devices, in color, on Ubuntu Linux. It is used to monitor network bandwidth and displays the network interface, bytes received, bytes transmitted, and total bytes.

Read Also: iftop – A Real Time Linux Network Bandwidth Monitoring Tool

In this article, we will show you how to install and use the cbm network bandwidth monitoring tool on Ubuntu and its derivatives such as Linux Mint.

How to Install CBM Network Monitoring Tool in Ubuntu

The cbm network bandwidth monitoring tool is available to install from the default Ubuntu repositories using the APT package manager as shown.

$ sudo apt install cbm


Once you have installed cbm, you can start the program using the following command.

$ cbm 
Ubuntu Network Bandwidth Monitoring

While cbm is running, you can control its behavior with the following keys:

  • Up/Down – arrow keys to select an interface to show details about.
  • b – switch between bits per second and bytes per second.
  • + – increase the update delay by 100ms.
  • - – decrease the update delay by 100ms.
  • q – exit from the program.

If you are having any network connection issues, check out MTR – a network diagnostic tool for Linux. It combines the functionality of commonly used traceroute and ping programs into a single diagnostics tool.
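For example, assuming mtr is installed, you can print a one-off report-mode trace to a host of your choice (google.com below is just a placeholder).

$ mtr -rw google.com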

However, to monitor multiple hosts on a network, you need robust network monitoring tools such as the ones listed below:

    1. How to Install Nagios 4 in Ubuntu
    2. LibreNMS – A Fully Featured Network Monitoring Tool for Linux
    3. Monitorix – A Lightweight System and Network Monitoring Tool for Linux
    4. Install Cacti (Network Monitoring) on RHEL/CentOS 7.x/6.x/5.x and Fedora 24-12
    5. Install Munin (Network Monitoring) in RHEL, CentOS and Fedora

That’s it. In this article, we have explained how to install and use the cbm network bandwidth monitoring tool on Ubuntu and its derivatives such as Linux Mint. Share your thoughts about cbm via the comment form below.

Smarty Template Engine – {literal} – Built-in Functions

{literal}

{literal} tags allow a block of data to be taken literally. This is typically used around Javascript or stylesheet blocks where {curly braces} would interfere with the template delimiter syntax. Anything within {literal}{/literal} tags is not interpreted, but displayed as-is. If you need template tags embedded in a {literal} block, consider using {ldelim}{rdelim} to escape the individual delimiters instead.
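For instance, instead of wrapping an entire block in {literal}, a single pair of braces can be produced with {ldelim} and {rdelim}, as in this brief illustrative snippet ({$greeting} is an assumed template variable):

<script type="text/javascript">
function hello() {ldelim} alert('{$greeting}'); {rdelim}
</script>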

Example 7.25. {literal} tags

{literal}
<script type="text/javascript">
<!--
  function isblank(field) {
    if (field.value == '')
      { return false; }
    else
      {
      document.loginform.submit();
      return true;
    }
  }
// -->
</script>
{/literal}

Example 7.26. Javascript function example

<script language="JavaScript" type="text/javascript">
{literal}
function myJsFunction(name, ip){
   alert("The server name\n" + name + "\n" + ip);
}
{/literal}
</script>
<a href="javascript:myJsFunction('{$smarty.server.SERVER_NAME}','{$smarty.server.SERVER_ADDR}')">Click here for the Server Info</a>
 

Example 7.27. Some css style in a template

{* included this style .. as an experiment *}
<style type="text/css">
{literal}
/* this is an interesting idea for this section */
.madIdea{
    border: 3px outset #ffffff;
    margin: 2px 3px 4px 5px;
    background-color: #001122;
}
{/literal}
</style>
<div class="madIdea">With smarty you can embed CSS in the template</div>

Immigration Fight Shows Silicon Valley Must Stop Feigning Neutrality

Last month, the Trump administration announced that it would halt its policy of separating young asylum-seekers from their parents. For those Americans angered by their government’s cruel treatment of children as young as a few months old, this was a hard-fought victory. It came only after relentless lobbying of Congress; after the defection and shocking testimony of Department of Homeland Security contractors; after a torrent of heartbreaking images and videos and the work of a legion of activists, who shut down ICE facilities and even chased senior Trump officials from restaurants.

WIRED OPINION

ABOUT

Emerson T. Brooking (@etbrooking) is a Washington, D.C.-based writer. Peter Warren Singer (@peterwsinger) is a strategist at New America. They are the authors of LikeWar: The Weaponization of Social Media, to be published in October 2018.

The sinew that bound these efforts together was social media. More specifically, it was Twitter. Although only about one in five Americans use the fast-moving, foul-mouthed platform, it has become the cornerstone of modern US politics. It is where journalists gather facts and where the president puts his brain. It is where stories gather viral momentum before breaking out into the mainstream. Increasingly, it is also a battlefield, where competing armies of activists battle it out in “like wars,” seeking to define a contentious issue one hashtag at a time.

But Twitter also has administrators: a small group of real and fallible human beings. And this is where the trouble starts. In their efforts to disrupt the world, the masters of Silicon Valley are finding it harder and harder to stand apart from the politics of it.

Two incidents of Twitter policy-making stand out amid the fierce online lobbying effort against forcible family separation. The first came when software developer Sam Lavigne created a database of 1,500 ICE agents, drawn from publicly available data on LinkedIn, as well as a Twitter bot to push their personal information out to the world. Lavigne’s project was quickly banned for “doxing”—the sharing of an unwilling party’s personal information.

The second incident came when journalists at the left-leaning Splinter news organization acquired and published the cell phone number of Stephen Miller, a senior White House advisor and gleeful foe of immigration. The journalistic outlet’s Twitter account was promptly deactivated by administrators, effectively put in “Twitter jail.” As other Twitter users shared or retweeted the number, their accounts were also deactivated.

Soon enough, user accounts were being deactivated for simply sharing a link to the Splinter story—the kind of escalation typically used to block the spread of terrorist propaganda. Eventually, users were deactivated for merely noting the deactivation of other users. In an ironic twist, alt-right activists—many previously banned from Twitter for their embrace of violent white nationalism—returned to the platform long enough to help hunt down and report the offending users.

Neither of these events meant much for the millions-strong struggle to end the Trump administration’s internment of children. But to those of us who study Silicon Valley’s growing role in politics, they signal a great deal. They mark the most prominent occasions that Twitter—a service born from the progressive, free-speech ideals of early internet culture—has used its power to stymie activists on the left. That it comes during protests against 21st-century internment camps makes it all the more striking.

Although the founders of Twitter and all such services claim to administer their platforms as impartial observers, this was never really true. This small club of Silicon Valley titans has rapidly accumulated so much political power that any decision they make about the content that transits their platforms—even the absence of a decision—has a clear social impact. History would have taken a different course if Facebook had not hesitated to police viral falsehoods and Russian disinformation offensives until after the 2016 election, or if YouTube had not taken years to seriously study how its algorithms steered users toward terrorist content.

And when Twitter leaps to vigorously safeguard the privacy of government agents and high-level administration officials—the exact kind of protection it has been slow or unwilling to extend to journalists under similar threat—that decision also carries weight. It joins a pattern in which Twitter has prostrated itself to placate far-right media personalities, or looked past its own rules to justify playing host to the toxic tirades of the 45th president. Through these choices, a platform built to empower the crowd is increasingly becoming a sanctuary for the powerful.

Over the past five years, events have forced the traditionally apolitical titans of Silicon Valley to reckon again and again with their burgeoning political responsibilities. First was the terrorist use of their platforms, which saw carefree engineers sitting down to awkward meetings with senior US diplomats and military leaders as they discussed the particulars of beheading videos. Next was the election of Donald Trump amid an internet-empowered Russian disinformation operation, which showed that Silicon Valley platforms could be effectively weaponized against the nation of their birth. Third was the deadly 2017 white-nationalist rally at Charlottesville, fomented by social media, which shifted how the companies saw hate speech virtually overnight.

Right now, a fourth such revolution is brewing. From the outside, it is being driven by left-leaning activists who are horrified by the increasingly cruel policies of the Trump administration and who are using technology to fight back. From within, it is being driven by tech employees protesting their companies’ business with arms of the US government whose practices they abhor. And in the middle stand the administrators of Twitter and other platforms, who would like to do nothing so much as buckle down and weather the storm.

If the recent history of Silicon Valley and the Trump administration are any guide, it won’t work. Already, Wikipedia editors are debating whether the military holding facilities for families of asylum-seekers can better be described as “internment” or “concentration” camps. Soon enough, there will come a moment when the stakes are ratcheted even higher—when one too many immigrants die fleeing the US border patrol or tragedy strikes one of America’s new 100-degree tent-city internment camps—and the social media giants see themselves swept up in the protests and facing a moment of profound moral clarity. They will either aid the activists, taking a direct hand in political protests, or they will double down on their role as “neutral” platforms. Each course of action will represent a clear choice. Each will favor one side over the other.

On June 19, as anger over US-administered internment camps reached a fever pitch, Jack Dorsey, cofounder and CEO of Twitter, tapped out a simple question to his 4.2 million followers. “What are the highest impact ways to help?” he asked.

But Dorsey and his peers already know the answer. The real question is whether they are willing to accept the consequences. They hold the reins of the most influential communications systems on Earth. Through actions ranging from something as small as featuring fundraising links on their users’ homepages to something as large as a fundamental shift in their algorithms, they tilt the balance of our politics every day.

American government is in a sorry state. It will get worse. It is time for these “neutral” social media platforms, never particularly neutral to begin with, to cast aside their excuses and consider the greater good in how they govern their own digital empires.

WIRED Opinion publishes pieces written by outside contributors and represents a wide range of viewpoints. Read more opinions here.


The Court Case that Enabled Today’s Toxic Internet

There once was a legendary troll, and from its hideout beneath an overpass of the information superhighway, it prodded into existence the internet we know, love, and increasingly loathe.

That troll, Ken ZZ03, struck in 1995. But to make sense of the profound aftereffects—and why Big Tech is finally reckoning with this part of its history—you have to look back even further.

In 1990, an online newsletter called Rumorville accused a competitor, Skuttlebutt, of being a “scam.” Skuttlebutt sued the online service provider that hosted Rumorville, CompuServe, for publishing false, damaging statements. A judge ruled that CompuServe was not responsible for content that it simply distributed.

A few years later, in the forums of another service provider—remember Prodigy?—an anonymous user called the firm Stratton Oakmont “a cult of brokers who either lie for a living or get fired.” Unlike CompuServe, Prodigy had tried to monitor its message boards. For that reason, when Stratton Oakmont sued, the court held that Prodigy was responsible.

The Feds needed an official policy. Tech lobbyists, who considered the Prodigy decision unreasonably restrictive, pushed lawmakers to adopt the CompuServe standard. They succeeded, and then some: Section 230 of the Communications Decency Act, passed in 1996, states that platforms are not liable for the content they host—even when, like Good Samaritans, they try to intervene. Ken ZZ03 would be its first test.

Days after the 1995 Oklahoma City bombing, Ken ZZ03 posted ads on an AOL message board for T-shirts celebrating the tragedy (“Visit Oklahoma … It’s a BLAST!!!”). To order, the ads said, call Kenneth Zeran, whose phone number was included.

Zeran was a Seattle-based TV producer and artist, and he had nothing to do with the ads. (Ken ZZ03’s motives and identity remain unknown.) Yet tons of people called to berate and threaten him, to the point that police were notified. Zeran asked AOL to take down the messages. AOL demurred. Zeran sued in 1996; a decision was reached in 1997. The judge, invoking Section 230, sided with AOL.

Ask many web scholars and they’ll tell you that Section 230 in general, and the Zeran case in particular, created the modern internet. CompuServe, Prodigy, and AOL became Google, Facebook, and Twitter, companies that have for years relied on Section 230 as a legal shield against claims of publishing abusive content.

Yet the law never could have anticipated the unchecked growth of Big Tech.

In the mid-’90s, AOL was just a bunch of guys “in an office park behind a Cadillac dealership” in suburban Virginia, said its then-lead attorney, Randall Boe, in a recent interview. “We had no idea what was to come.”

CompuServe’s attorney, Robert Hamilton, believes his winning argument was wildly misunderstood by the authors of Section 230, who gave platforms absolute immunity. “It was only a matter of time,” Hamilton says, before Congress would have to make amendments.

In March, Congress passed the first reform of Section 230 in 22 years, saying platforms can be found liable, but only if their users are participating in sex trafficking. Senator Ron Wyden of Oregon, who coauthored Section 230, didn’t support that particular bill but argued nonetheless that tech companies have failed to honor the spirit of the law. “In years of hiding behind their shields … too many companies have become bloated and uninterested in the larger good,” he said. Indeed, under Section 230, it’s fine for tech companies to act like Good Samaritans—they simply forget to.

As for Kenneth Zeran, he doesn’t think about the AOL case much these days. But, he says, “I always felt that I was correct—and that history would show that I was right.”


Michael Fitzgerald is a writer and editor based in New York.

This article appears in the July issue.

