Uber and Lyft’s Never-Ending Quest to Crush Price Comparison Apps

For nearly as long as there have been ride-sharing services like Uber and Lyft, there have been apps that help riders compare fares and travel times. These aggregator apps allow riders to survey all the services in an area and check prices and wait times—an efficient version of what many do already. There are always fresh versions of these apps popping up. The newest one, Bellhop, officially launched in New York this week.

Bellhop allows prospective riders to compare 17 services offered by four companies—Uber, Lyft, Juno, and Curb—in New York, with plans to add more services and expand to more cities soon. “There are too many ride-sharing apps and you don’t have transparency to make decisions,” CEO and cofounder Payam Safa told me—and he’s right. Pull Bellhop up on a Tuesday morning in July, and it will tell you that the cheapest way to get to the New York Public Library from my home on the Upper West Side is Lyft’s carpool product. The fastest is Juno, as there’s a car just one minute away.

Figuring that out on my own would take minutes of toggling back and forth between ride-share apps (and likely drumming up my fare in the meantime). With Bellhop, those calculations took less than a minute.

Bellhop is just the most recent service to try to turn this problem into a business opportunity. Whipster, which was started by a Florida IT consultant and aggregates bikeshares and public transportation options as well as ride shares, launched officially last February. The oldest and most established is the Boston-based team behind RideGuru, which began as a taxi fare finder in 2006, three years before Uber launched. Add to that a list of abandoned attempts, ghost apps, and failed startups that includes PriceRide, Ride Fair, Ridescout, Urbanhail, and Corral Rides, among others. (Corral Rides switched strategies, relaunching as a carpool app called Hitch that sold to Lyft in 2014.)


Several of the startups that publish these apps, including both Bellhop and RideGuru, attempt to make money by striking deals with the ride-sharing companies to promote their services in exchange for affiliate fees, the same way that hotels pay Kayak or Expedia when a prospective traveler books through the platform. Most of the apps reach their estimates through algorithms that factor in published rates, distance, and time traveled. Many also rely on the programming tools, or APIs, that ride-sharing apps like Uber and Lyft make available to developers. But accurately predicting prices has become trickier in the past year, according to RideGuru founder and CEO Ippei Takahashi, as Uber has rolled out a new upfront pricing structure. The companies also offer promotions and discounts for which aggregators can’t account. (For example: Uber has given me a 50% discount on my first 10 rides this week, so Bellhop’s estimates for the service are wrong.)
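As a rough sketch of how such an estimate might be computed, consider a simple rate-card calculator. The function and every rate below are illustrative assumptions, not any service's actual pricing, which also layers in upfront fares and promotions that aggregators struggle to model:

```python
# Hypothetical rate-card fare estimator, loosely in the spirit of what
# aggregator apps compute. All numbers are illustrative placeholders.

def estimate_fare(base, per_mile, per_minute, miles, minutes,
                  booking_fee=0.0, surge=1.0):
    """Estimate a fare from published rates, distance, and travel time."""
    fare = (base + per_mile * miles + per_minute * minutes) * surge + booking_fee
    return round(fare, 2)

# A 3-mile, 15-minute trip on a sample rate card:
print(estimate_fare(base=2.55, per_mile=1.75, per_minute=0.35,
                    miles=3.0, minutes=15))  # 13.05
```

An estimate like this drifts from reality as soon as a service applies upfront pricing or a rider-specific promotion, which is exactly the inaccuracy Takahashi describes.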

None of these apps has successfully transformed into the Kayak of ride sharing, but their continued emergence points to one of the ride-share industry’s most significant challenges. Ride-sharing companies aim to compete on brand and service, offering better experiences and increasingly thorough transportation options, including bikes, scooters, and even rental cars. But to build their networks, they have competed on fares, offering riders ever cheaper prices in an effort to get them hooked. This price-cutting cycle has conditioned riders to look for the cheapest ride options.

As private companies funded by loads of venture capital, Uber and Lyft can still afford to slash prices. But that era is coming to a close. As both companies gear up for initial public offerings in the next year, each must focus on becoming profitable. At the same time, they’re under increasing pressure to pay their drivers better. New York City regulators are considering establishing pay rules for drivers of Uber and other ride-hailing apps that would significantly increase their wages. “As [Uber] shifts toward an IPO, they have to charge riders more and/or pay drivers less in order to become profitable,” says Harry Campbell, author of the ride-share blog The Rideshare Guy, and an adviser to Bellhop.

So, it’s no surprise that the large ride-sharing companies don’t much care for these apps, which encourage the price-checking cycle the larger services wish to eradicate. For the most part, Uber and Lyft appear to ignore them. But as individual apps have become popular, ride-share companies can threaten to withhold access to their developer tools. An Uber spokesperson pointed me to the company’s developer terms of service, which forbid using its APIs for price comparison.

Uber used this argument when it threatened to shut down Ride Fair in the summer of 2017, demanding that Ride Fair remove Uber from its comparison app. A year earlier, Uber similarly threatened to restrict its tools for a group of Harvard Business School entrepreneurs after they launched Urbanhail; they cried foul, arguing that Uber’s stance was anticompetitive.

While Uber never officially followed through on these threats, neither group elected to continue developing their apps. When I reached Phillip Wall, one of the two developers behind Ride Fair, he said he hadn’t paid the annual $100 fee to make it available through Apple, but you can find it in the Android store. “Part of me is happy knowing there’s a few thousand people who get some use out of it,” he wrote in an email.

But smaller ride-hailing apps embrace aggregators as an opportunity to spread the word about their services. And industry leaders in the United States might be more willing to embrace them in international markets, where they still have smaller footprints and need to figure out how to expand. That’s why Campbell has signed on to Bellhop as an adviser. “There’s a huge incentive for competitors to partner directly with an app like this, especially internationally,” he says. It will get their service in front of riders’ eyes.

Ultimately, the reason these apps often don’t succeed has more to do with riders than the ride-sharing companies. One New York rider, Daniel Greenberg, downloaded Bellhop in the spring, before it had officially launched. “I’m a sucker for trying everything in the space,” he messaged me. He liked it, but he very quickly stopped using it. “Every time, Lyft was cheaper.” For now, Greenberg’s committed to Lyft—at least until the next app comes along, pointing him in the direction of even better deals.

Scout_Realtime – Monitor Server and Process Metrics in Linux

In the past, we’ve covered lots of command-line-based tools for monitoring Linux performance, such as top, htop, atop, glances and more, and a number of web-based tools such as cockpit, pydash, and linux-dash, to mention but a few. You can also run glances in web server mode to monitor remote servers. All that aside, we have discovered yet another simple server monitoring tool that we would like to share with you, called Scout_Realtime.

Scout_Realtime is a simple, easy-to-use web-based tool for monitoring Linux server metrics in real time, in a top-like fashion. It shows you smooth-flowing charts of metrics gathered from the CPU, memory, disk, network, and processes (top 10), in real time.

Real Time Linux Server Process Monitoring

In this article, we will show you how to install the scout_realtime monitoring tool on Linux systems to monitor a remote server.

Installing Scout_Realtime Monitoring Tool in Linux

1. To install scout_realtime on your Linux server, you must have Ruby 1.9.3+ installed. Install RubyGems using the following command.

$ sudo apt-get install rubygems [On Debian/Ubuntu]
$ sudo yum -y install rubygems-devel [On RHEL/CentOS]
$ sudo dnf -y install rubygems-devel [On Fedora 22+]
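Since scout_realtime needs Ruby 1.9.3 or newer, it can be worth sanity-checking the installed version first. Here is a small sketch using sort -V; the version_ok helper is our own illustration, not part of any package:

```shell
# Succeeds if the first version is >= the second (GNU sort -V does the ordering).
version_ok() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# For a real check, substitute: current=$(ruby -e 'print RUBY_VERSION')
current="2.5.1"
if version_ok "$current" "1.9.3"; then
    echo "Ruby $current is new enough for scout_realtime"
else
    echo "Ruby $current is too old; install Ruby 1.9.3 or newer first"
fi
```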

2. Once you have installed Ruby on your Linux system, you can install the scout_realtime package using the following command.

$ sudo gem install scout_realtime

3. After successfully installing the scout_realtime package, start the scout_realtime daemon, which will collect server metrics in real time, as shown.

$ scout_realtime
Start Scout Realtime on Server

4. The scout_realtime daemon is now running on the Linux server that you want to monitor remotely, listening on port 5555. If you are running a firewall, you need to open port 5555 to allow requests to reach scout_realtime.

---------- On Debian/Ubuntu ----------
$ sudo ufw allow 5555
$ sudo ufw reload

---------- On RHEL/CentOS 6.x ----------
$ sudo iptables -A INPUT -p tcp --dport 5555 -j ACCEPT
$ sudo service iptables restart

---------- On RHEL/CentOS 7.x ----------
$ sudo firewall-cmd --permanent --add-port=5555/tcp
$ sudo firewall-cmd --reload

5. Now, from any other machine, open a web browser and use the URL below to access scout_realtime and monitor your remote Linux server’s performance.

http://localhost:5555
OR
http://your-server-ip:5555

ScoutRealtime Linux Server Process Monitoring

6. By default, scout_realtime writes its logs to .scout/scout_realtime.log in your home directory, which you can view using the cat command.

$ cat .scout/scout_realtime.log

7. To stop the scout_realtime daemon, run the following command.

$ scout_realtime stop

8. To uninstall scout_realtime from the system, run the following command.

$ gem uninstall scout_realtime

For more information, check out the scout_realtime GitHub repository.

It’s that simple! Scout_realtime is a simple yet useful tool for monitoring Linux server metrics in real time, in a top-like fashion. You can ask any questions or give us your feedback about this article in the comments.

Ex-Apple Employee Accused of Stealing Self-Driving-Car Tech

Federal prosecutors have charged a former Apple employee with stealing trade secrets related to Apple’s autonomous vehicle program.

Xiaolang Zhang allegedly worked on Apple’s secretive self-driving-car project. Zhang left Apple in April saying he was going to work for a Chinese electric vehicle company called Xpeng Motors. He is accused of copying more than 40 GB of Apple intellectual property to his wife’s laptop before leaving the company, according to court documents. The documents do not accuse Xpeng Motors of wrongdoing.

In a statement, Xpeng said it was “stunned and outraged” by the charges against Zhang, who had joined the company in May. Xpeng said it conducted an investigation, advised by the law firm Morrison and Foerster, and “very quickly thereafter, terminated Xiaolang’s employment for cause.” Apple did not respond to a request for comment.

Apple has been reported to be developing self-driving-vehicle technology for several years, though the company has been tight-lipped about its plans and ambitions. Bloomberg reported in January that the company had registered 27 self-driving test vehicles with California’s Department of Motor Vehicles. CEO Tim Cook acknowledged last year that the company was developing autonomous-car technology; that followed reports that Apple had given up on plans to build its own car.

Zhang’s arrest comes amid growing tension between the US and China, largely around the treatment of intellectual property. China requires foreign tech firms that want to do business in the country to partner with domestic companies and share or license their intellectual property with those partners. Many companies believe that Chinese companies use this process, called technology transfer, to steal their trade secrets. The Trump administration’s tariffs are in part a response to this practice, but China has thus far refused to end it.

The Zhang case highlights other ways that Chinese companies could still get their hands on US companies’ IP.

According to the court documents, Zhang was hired by Apple to work on the autonomous vehicle project in 2015. In April, Zhang took paternity leave and traveled to China. On April 30, he informed Apple that he was resigning from the company and planned to return to China to be closer to his family and work for Xpeng Motors.

During an internal investigation, Apple discovered that Zhang had downloaded “copious” pages of information, including Apple IP from company databases, in the days before his resignation and, against company policy, had taken Apple property during his paternity leave.

Zhang admitted to Apple’s security staff that he had taken home a Linux server belonging to Apple and had transferred data to his wife’s laptop. He said he wanted to review the data for his own education, hoping it would help him secure another job within Apple. He returned the server and brought the laptop into Apple for examination.

Zhang was arrested July 7 at Mineta San Jose International Airport. He had booked a flight to Hangzhou, China.

According to its website, Xpeng Motors was founded in 2014 and is headquartered in Guangzhou, China. It received an investment from Chinese ecommerce giant Alibaba last year, according to Tech in Asia, as well as Huawei, Xiaomi, and Foxconn, according to PanDaily. The company announced a car with “self-parking” and other autonomous car features at the Consumer Electronics Show in January.

UPDATE, July 10, 11PM: This article was updated to include a statement from Xpeng.

UK Regulators May Fine Facebook Over Cambridge Analytica

The Information Commissioner’s Office in the UK has announced its intention to fine Facebook more than $600,000 for its “lack of transparency and security issues” related to third party data harvesting. The ICO is also taking steps toward bringing criminal action against SCL Elections, the now-defunct parent company of the political consulting firm Cambridge Analytica, which harvested the data of millions of Americans without their knowledge before the 2016 election.

The announcements come as part of the ICO’s sweeping investigation into data privacy violations, which began in March following a wave of news reports about Cambridge Analytica’s misdeeds. The ICO went public with its initial findings on Tuesday, but noted that the investigation is still ongoing. As part of the probe, the ICO’s team of 40 investigators seized the servers of Cambridge Analytica and have undertaken a transatlantic search to determine how data was used both in the Brexit referendum campaign and the United States presidential election. The initial report includes a slew of regulatory actions the ICO plans to take against a variety of key players, from Facebook and Cambridge Analytica to major data brokers, political campaigns, and the academic institutions that develop data targeting methodology.

UK Information Commissioner Elizabeth Denham


“New technologies that use data analytics to micro-target people give campaign groups the ability to connect with individual voters. But this cannot be at the expense of transparency, fairness and compliance with the law,” Information Commissioner Elizabeth Denham wrote in a statement.

According to the report, the ICO believes Facebook may have violated the UK’s Data Protection Act, which gives UK residents control over their data and requires companies to receive explicit consent from users before collecting that data. Facebook now has until later this month to respond to the ICO’s notice of intent to fine the company, after which point the ICO will decide whether to go forward with the fine. Of course, a fine of less than $1 million isn’t much of a punishment for a company like Facebook, which is valued at more than $584 billion.

“We will consider carefully any representations Facebook may wish to make before finalising our views,” the ICO wrote in a summary of the report.

In a statement, Facebook’s chief privacy officer Erin Egan acknowledged that Facebook “should have done more to investigate claims about Cambridge Analytica and take action in 2015.”

“We have been working closely with the ICO in their investigation of Cambridge Analytica, just as we have with authorities in the US and other countries,” Egan wrote, adding that the company will respond to the ICO soon.

The Commissioner’s office has also set its sights on Cambridge Analytica and its parent company, SCL Elections, which are undergoing insolvency proceedings in the UK and bankruptcy proceedings in the US. In May, the ICO ordered SCL Elections to hand over all the data it had collected on an American academic named David Carroll. In January of 2017, Carroll requested his data from Cambridge Analytica under the UK data protection law. The response he received included predictions about his political beliefs but few details about the data powering those predictions. In March of this year, a day before the Cambridge Analytica story made front-page headlines around the world, Carroll filed a legal claim against the company. Months later, the ICO followed with its enforcement action, but SCL Elections never complied. Now the ICO says it is “taking steps with a view to bringing a criminal prosecution against SCL Elections.”

Carroll’s lawyer, Ravi Naik, says the decision is “expected.” “The enforcement notice was clear on its terms and we expected nothing less considering SCL’s failure to comply,” he wrote. “The report also vindicates David’s case, confirming that his action has been pivotal to their findings. We continue to push for disclosure and are confident that we will get answers to questions the world wants resolved.”

Naik says he is currently exploring claims against Facebook “on behalf of a class of individuals.”

‘New technologies that use data analytics to micro-target people give campaign groups the ability to connect with individual voters. But this cannot be at the expense of transparency, fairness and compliance with the law.’

Information Commissioner Elizabeth Denham

While the investigation may have kicked off with an inquiry into Cambridge Analytica, its scope has grown as the investigators try to map out the circuitous route data sometimes takes as it moves between the academic, political, and commercial spaces. One key area of inquiry for the ICO is Cambridge University’s Psychometrics Centre, where the methodology that undergirds Cambridge Analytica’s approach to data targeting originated. As the director of the Centre recently told WIRED, researchers there had been collecting Facebook data for academic purposes, using personality profiling apps. That work fueled research that showed how much sensitive information could be gleaned from Facebook likes. Facebook supported the research—that is, until 2015, when news stories revealed that another Cambridge professor named Aleksandr Kogan was using a personality app to collect Facebook data, and then sold the data to Cambridge Analytica.

Facebook has since suspended all of the apps associated with the Centre, pending an investigation of its operations. Now, the ICO says it will conduct an audit of the department and investigate whether Cambridge University has “sufficient systems and processes in place” to ensure academic data is properly protected.

Meanwhile, the ICO continues to investigate the use of data in the UK’s vote to leave the European Union, a decision that is now causing disarray in the upper echelons of British government. In particular, the ICO is investigating a former SCL employee’s claims that the Leave.EU campaign received data from a company called Eldon Insurance and used Eldon’s call center staff to make calls on behalf of Leave.EU.

The ICO is also taking action against a Canadian firm called AggregateIQ, which worked with Senator Ted Cruz’s presidential campaign as well as the UK’s Vote Leave campaign. The ICO says it’s found that AggregateIQ has access to British citizen data that it “should not continue to hold.” It’s now investigating whether Vote Leave transferred voter data outside the country, and ordering AggregateIQ to cease processing that data.

What makes the ICO’s investigation more thorough than similar investigations in the United States is that it focuses not just on Cambridge Analytica but on the broader data marketplace. It plans on auditing credit reference companies in the UK and intends to take action against one data broker in particular called Emma’s Diary. It’s also issuing letters to political parties throughout the country, warning of the risks of working with data brokers who may not have received proper consent. Finally, the ICO has developed a list of 10 recommendations for the British government, including the creation of a code of practice under the Data Protection Act that dictates how data can be used in political campaigns.

“Fines and prosecutions punish the bad actors, but my real goal is to effect change and restore trust and confidence in our democratic system,” Denham said in a statement.

The fact that the UK already offers its citizens some core data protections gives the ICO’s investigation teeth. In the United States, no such protections exist.

AWS re:Invent 2018 is Coming – Are You Ready?

As I write this, there are just 138 days until re:Invent 2018. My colleagues on the events team are going all-out to make sure that you, our customer, will have the best possible experience in Las Vegas. After meeting with them, I decided to write this post so that you can have a better understanding of what we have in store, know what to expect, and have time to plan and to prepare.

Dealing with Scale
We started out by talking about some of the challenges that come with scale. Approximately 43,000 people (AWS customers, partners, members of the press, industry analysts, and AWS employees) attended in 2017 and we are expecting an even larger crowd this year. We are applying many of the scaling principles and best practices that apply to cloud architectures to the physical, logistical, and communication challenges that are part-and-parcel of an event that is this large and complex.

We want to make it easier for you to move from place to place, while also reducing the need for you to do so! Here’s what we are doing:

Campus Shuttle – In 2017, hundreds of buses traveled on routes that took them to a series of re:Invent venues. This added a lot of latency to the system and we were not happy about that. In 2018, we are expanding the fleet and replacing the multi-stop routes with a larger set of point-to-point connections, along with additional pick-up and drop-off points at each venue. You will be one hop away from wherever you need to go.

Ride Sharing – We are partnering with Lyft and Uber (both powered by AWS) to give you another transportation option (download the apps now to be prepared). We are partnering with the Las Vegas Monorail and the taxi companies, and are also working on a teleportation service, but do not expect it to be ready in time.

Session Access – We are setting up a robust overflow system that spans multiple re:Invent venues, and are also making sure that the most popular sessions are repeated in more than one venue.

Improved Mobile App – The re:Invent mobile app will be more lively and location-aware. It will help you to find sessions with open seats, tell you what is happening around you, and keep you informed of shuttle and other transportation options.

Something for Everyone
We want to make sure that re:Invent is a warm and welcoming place for every attendee, with business and social events that we hope are progressive and inclusive. Here’s just some of what we have in store:

You can also take advantage of our mother’s rooms, gender-neutral restrooms, and reflection rooms. Check out the community page to learn more!

Getting Ready
Now it is your turn! Here are some suggestions to help you to prepare for re:Invent:

  • Register – Registration is now open! Every year I get email from people I have not talked to in years, begging me for last-minute access after re:Invent sells out. While it is always good to hear from them, I cannot always help, even if we were in first grade together.
  • Watch – We’re producing a series of How to re:Invent webinars to help you get the most from re:Invent. Watch What’s New and Breakout Content Secret Sauce ASAP, and stay tuned for more.
  • Plan – The session catalog is now live! View the session catalog to see the initial list of technical sessions. Decide on the topics of interest to you and to your colleagues, and choose your breakout sessions, taking care to pay attention to the locations. There will be over 2,000 sessions so choose with care and make this a team effort.
  • Pay Attention – We are putting a lot of effort into preparatory content – this blog post, the webinars, and more. Watch, listen, and learn!
  • Train – Get to work on your cardio! You can easily walk 10 or more miles per day, so bring good shoes and arrive in peak condition.

Partners and Sponsors
Participating sponsors are a core part of the learning, networking, and after hours activities at re:Invent.

For APN Partners, re:Invent is the single largest opportunity to interact with AWS customers, delivering both business development and product differentiation. If you are interested in becoming a re:Invent sponsor, read the re:Invent Sponsorship Prospectus.

For re:Invent attendees, I urge you to take time to meet with Sponsoring APN Partners in both the Venetian and Aria Expo halls. Sponsors offer diverse skills, Competencies, services, and expertise to help attendees solve a variety of business challenges. Check out the list of re:Invent Sponsors to learn more.

See You There
Once you are on site, be sure to take advantage of all that re:Invent has to offer.

If you are not sure where to go or what to do next, we’ll have some specially trained content experts to guide you.

I am counting down the days, gearing up to crank out a ton of blog posts for re:Invent, and looking forward to saying hello to friends new and old.

— Jeff;

PS – We will be adding new sessions to the session catalog over the summer, so be sure to check back every week!

DeepLens Challenge #1 Starts Today – Use Machine Learning to Drive Inclusion

Are you ready to develop and show off your machine learning skills in a way that has a positive impact on the world? If so, get your hands on an AWS DeepLens video camera and join the AWS DeepLens Challenge!

About the Challenge
Working together with our friends at Intel, we are launching the first in a series of eight themed challenges today, all centered around improving the world in some way. Each challenge will run for two weeks and is designed to help you to get some hands-on experience with machine learning.

We will announce a fresh challenge every two weeks on the AWS Machine Learning Blog. Each challenge will have a real-world theme, a technical focus, a sample project, and a subject matter expert. You have 12 days to invent and implement a DeepLens project that resonates with the theme, and to submit a short, compelling video (four minutes or less) to represent and summarize your work.

We’re looking for cool submissions that resonate with the theme and that make great use of DeepLens. We will watch all of the videos and then share the most intriguing ones.

Challenge #1 – Inclusivity Challenge
The first challenge was inspired by the Special Olympics, which took place in Seattle last week. We invite you to use your DeepLens to create a project that drives inclusion, overcomes barriers, and strengthens the bonds between people of all abilities. You could gauge the physical accessibility of buildings, provide audio guidance using Polly for people with impaired sight, or create educational projects for children with learning disabilities. Any project that supports this theme is welcome.

For each project that meets the entry criteria we will make a donation of $249 (the retail price of an AWS DeepLens) to the Northwest Center, a non-profit organization based in Seattle. This organization works to advance equal opportunities for children and adults of all abilities and we are happy to be able to help them to further their mission. Your work will directly benefit this very worthwhile goal!

As an example of what we are looking for, ASLens is a project created by Chris Coombs of Melbourne, Australia. It recognizes and understands American Sign Language (ASL) and plays the audio for each letter. Chris used Amazon SageMaker and Polly to implement ASLens (you can watch the video, learn more and read the code).

To learn more, visit the DeepLens Challenge page. Entries for the first challenge are due by midnight (PT) on July 22nd and I can’t wait to see what you come up with!

— Jeff;

PS – The DeepLens Resources page is your gateway to tutorial videos, documentation, blog posts, and other helpful information.

Brett Kavanaugh on the Supreme Court Could Be Trouble for Tech

President Donald Trump has chosen Washington DC Circuit Court Judge Brett Kavanaugh to fill Justice Anthony Kennedy’s seat on the Supreme Court. The decision, which Trump announced Monday night, is likely to face opposition not only from Democrats in Congress but also from leaders within the tech industry who oppose Kavanaugh’s perspective on issues related to privacy and net neutrality.

A former clerk for Justice Kennedy, the 53-year-old judge also once worked under independent counsel Kenneth Starr, whose investigation led to the impeachment of President Bill Clinton. Later, Kavanaugh served as White House staff secretary under President George W. Bush. As predicted, he is a solidly conservative pick, whose nomination to the DC Circuit Appeals Court was put on hold for three years over concerns he was too partisan. But President Trump denied the inherently political nature of his pick. “What matters is not a judge’s political views,” he said, “but whether they can set aside those views to do what the law and the constitution require.”

Left-leaning groups including Planned Parenthood and the Democratic National Committee rushed to scrutinize Kavanaugh’s record of opposition to the Affordable Care Act and abortion rights, including a recent case in which Kavanaugh opposed an undocumented teenager’s request for an abortion while she was in detention. But it’s Judge Kavanaugh’s less discussed decisions that will likely rankle the tech industry.

In May of 2017, Kavanaugh argued that net neutrality violates internet service providers’ First Amendment rights in a dissent to a DC Circuit Court decision regarding the Federal Communication Commission’s 2015 order upholding net neutrality. The dissent hinges on a case from the 1990s called Turner Broadcasting v. FCC, which established that cable companies were protected by the First Amendment, just as newspaper publishers and pamphleteers were. “Just like cable operators, Internet service providers deliver content to consumers. Internet service providers may not necessarily generate much content of their own, but they may decide what content they will transmit, just as cable operators decide what content they will transmit,” Kavanaugh wrote. “Deciding whether and how to transmit ESPN and deciding whether and how to transmit ESPN.com are not meaningfully different for First Amendment purposes.”

‘Kavanaugh’s opposition to regulating internet service providers could close the book on net neutrality protections for a generation.’

Kavanaugh argued that just because internet service providers don’t currently make editorial decisions about what does and doesn’t flow over their pipes doesn’t mean they don’t have the right to. “That would be akin to arguing that people lose the right to vote if they sit out a few elections,” he wrote. “Or citizens lose the right to protest if they have not protested before.”

According to Gigi Sohn, who served as counselor to former FCC chairman Tom Wheeler and is now a distinguished fellow at the Georgetown Law Institute for Technology Law & Policy, this perspective represents the “fringe of First Amendment jurisprudence.”

“For 85 years, the First Amendment rights of network operators like ISPs, broadcasters, and cable operators have always been balanced with the rights of the public,” Sohn says. “Kavanaugh’s ascension to the bench could start the mainstreaming of a legal theory that would all but eviscerate the public’s rights with regard to networks that use public rights of way, and by law are required to serve the public.”

The FCC has already killed net neutrality for the time being, reversing Obama-era rules that would have prevented internet service providers from speeding up or slowing down service however they chose. But lawsuits both in support of net neutrality and in opposition to it are already making their way through the courts. If the Supreme Court took them up, Kavanaugh’s opposition to regulating internet service providers could close the book on net neutrality protections for a generation.

Despite his consistently conservative pedigree, Kavanaugh’s nomination could also run afoul of the libertarian wing of the Republican Party, which has opposed government surveillance programs. In September of 2010, he dissented from the DC court’s decision not to revisit a ruling that found that police violated a suspect’s Fourth Amendment rights by using a GPS device to track his car without a warrant. Kavanaugh argued that the decision ignored precedent laid out in a 1983 case called United States v. Knotts. That case found that the government did not violate a man’s Fourth Amendment rights by using a radio transmitter to track his movements, because “[a] person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”

Kavanaugh argued that the same should go for GPS trackers. “The reasonable expectation of privacy as to a person’s movements on the highway is, as concluded in Knotts, zero,” Kavanaugh wrote. The Supreme Court later upheld the DC Circuit’s ruling. In his opinion, Justice Antonin Scalia wrote that the government had violated the suspect’s Fourth Amendment rights because the police “physically occupied private property for the purpose of obtaining information.”

Kavanaugh also later defended the National Security Agency’s bulk collection of phone records in a concurring opinion in November of 2015, writing that “the Government’s metadata collection program is entirely consistent with the Fourth Amendment.” The opinion offered a broad interpretation of the state’s right to search and seizure. “The Fourth Amendment allows governmental searches and seizures without individualized suspicion when the Government demonstrates a sufficient ‘special need’ – that is, a need beyond the normal need for law enforcement – that outweighs the intrusion on individual liberty,” Kavanaugh wrote. “Examples include drug testing of students, roadblocks to detect drunk drivers, border checkpoints, and security screening at airports.”

Recently, the Supreme Court has appeared more eager to protect Americans’ digital property from unreasonable search, not just their physical property. In a 5-4 decision in Carpenter v. United States last month, the court ruled that warrantless search and seizure of cell-site records does violate the Fourth Amendment. And yet, Chief Justice John Roberts was careful to point out that the court’s opinion “does not consider other collection techniques involving foreign affairs or national security.”

At a time when the Trump administration has taken extreme measures to crack down on both illegal and legal immigration in the name of national security, the question of where Kavanaugh would draw the line on government surveillance warrants closer inspection during what is sure to be a knockdown, drag-out fight over his confirmation this fall.

More Great WIRED Stories

How Microsoft’s AI Could Help Prevent Natural Disasters

On May 27, a deluge dumped more than 6 inches of rain in less than three hours on Ellicott City, Maryland, killing one person and transforming Main Street into what looked like Class V river rapids, with cars tossed about like rubber ducks. The National Weather Service put the probability of such a storm at once in 1,000 years. Yet, “it’s the second time it’s happened in the last three years,” says Jeff Allenby, director of conservation technology for Chesapeake Conservancy, an environmental group.

Floods are nothing new in Ellicott City, located where two tributaries join the Patapsco River. But Allenby says the floods are getting worse, as development covers what used to be the “natural sponge of a forest” with paved surfaces, rooftops, and lawns. Just days before the May 27 flood, the US Department of Homeland Security selected Ellicott City—on the basis of its 2016 flood—for a pilot program to deliver better flood warnings to residents via automated sensors.

Recently, Allenby developed another tool to help predict, plan, and prepare for future floods: a first-of-its-kind, high-resolution map showing what’s on the ground—buildings, pavement, trees, lawns—across 100,000 square miles from upstate New York to southern Virginia that drain into Chesapeake Bay. The map, generated from aerial imagery with the help of artificial intelligence, shows objects as small as 3 feet square, roughly 1,000 times more precise than the maps that flood planners previously used. To understand the difference, imagine trying to identify an Uber driver on a crowded city street using a map that can only display objects the size of a Walmart.

Creating the map consumed a year and cost $3.5 million, with help from Microsoft and the University of Vermont. Allenby’s team pored over aerial imagery, road maps, and zoning charts to establish rules, classify objects, and scrub errors. “As soon as we finished the first data set,” Allenby says, “everyone started asking ‘when are you going to do it again?’” to keep the map fresh.

Enter AI. Microsoft helped Allenby’s team train its AI for Earth algorithms to identify objects on its own. Even with a robust data set, training the algorithms wasn’t easy. The effort required regular “pixel peeping”—manually zooming in on objects to verify and amend the automated results. With each pass, the algorithm improved its ability to recognize waterways, trees, fields, roads, and buildings. As relevant new data become available, Chesapeake Conservancy plans to use its AI to refresh the map more frequently and easily than the initial labor-intensive multi-million dollar effort.

Now, Microsoft is making the tool available more widely. For $42, anyone can run 200 million aerial images through Microsoft’s AI for Earth platform and generate a high-resolution land-cover map of the entire US in 10 minutes. The results won’t be as precise in other parts of the country where the algorithm has not been trained on local conditions—a redwood tree or saguaro cactus looks nothing like a willow oak.

A map of land use around Ellicott City, Maryland, built with the help of artificial intelligence (left) offers far more detail than its predecessor (right).

Chesapeake Conservancy

To a society obsessed with location and mapping services—where the physical world unfolds in the digital every day—the accomplishment may not seem groundbreaking. Until recently, though, neither the high-resolution data nor the AI smarts existed to make such maps cost-effective for environmental purposes, especially for nonprofit conservation organizations. With Microsoft’s offer, AI on a planetary scale is about to become a commodity.

Detailed, up-to-date information is paramount when it comes to designing stormwater management systems, Allenby says. “Looking at these systems with the power of AI can start to show when a watershed” is more likely to flood, he says. The Center for Watershed Protection, a nonprofit based in Ellicott City, reported in a 2001 study that when 10 percent of natural land gets developed, stream health declines and it begins to lose its ability to manage runoff. At 20 percent, runoff doubles, compared with undeveloped land. Allenby notes that paved surfaces and rooftops in Ellicott City reached 19 percent in recent years.

Allenby says the more detailed map will enable planners to keep up with land-use changes and plan drainage systems that can accommodate more water. Eventually, the map will offer “live dashboards” and automated alerts to serve as a warning system when new development threatens to overwhelm stormwater management capacity. The Urban Forestry Administration in Washington, DC, has used the new map to determine where to plant trees by searching the district for areas without tree cover where standing water accumulates. Earlier this year, Chesapeake Conservancy began working with conservation groups in Iowa and Arizona to develop training sets for the algorithms specific to those landscapes.

The combination of high-resolution imaging and sensor technologies, AI, and cloud computing is giving conservationists deeper insight into the health of the planet. The result is a near-real-time readout of Earth’s vital signs, firing off alerts and alarms whenever the ailing patient takes a turn for the worse.

Others are applying these techniques around the world. Global Forest Watch (GFW), a conservation organization established by World Resources Institute, began offering monthly and weekly deforestation alerts in 2016, powered by AI algorithms developed by Orbital Insight. The algorithms analyze satellite imagery as it’s refreshed to detect “patterns that may indicate impending deforestation,” according to the organization’s website. Using GFW’s mobile app, Forest Watcher, volunteers and forest rangers take to the trees to verify the automated alerts in places like the Leuser Ecosystem in Indonesia, which calls itself “the last place on Earth where orangutans, rhinos, elephants and tigers are found together in the wild.”

The new conservation formula is also spilling into the oceans. On June 4, Paul Allen Philanthropies revealed a partnership with the Carnegie Institution of Science, the University of Queensland, the Hawaii Institute of Marine Biology, and the private satellite company Planet to map all of the world’s coral reefs by 2020. As Andrew Zolli, a Planet vice president, explains: For the first time in history, “new tools are up to the [planetary] level of the problem.”

By the end of 2017, Planet had deployed nearly 200 satellites, forming a necklace around the globe that images the entire Earth every day down to 3-meter resolution. That’s trillions of pixels raining down daily, which could never be transformed into useful maps without AI algorithms trained to interpret them. The partnership leverages the Carnegie Institution’s computer-vision tools and the University of Queensland’s data on local conditions, including coral, algae, sand, and rocks.

“Today, we have no idea of the geography, rate, and frequency of global bleaching events,” explains Greg Asner, a scientist at Carnegie’s Department of Global Ecology. Based on what is known, scientists project that more than 90 percent of the world’s reefs, which sustain 25 percent of marine life, will be extinct by 2050. Lauren Kickham, impact director for Paul Allen Philanthropies, expects the partnership will bring the world’s coral crisis into clear view and enable scientists to track their health on a daily basis.

In a separate coral reef project, also being conducted with Planet and the Carnegie Institution, The Nature Conservancy is leveraging Carnegie’s computer vision AI to develop a high-resolution map of the shallow waters of the Caribbean basin. “By learning how these systems live and how they adapt, maybe not our generation, but maybe the next will be able to bring them back,” says Luis Solorzano, The Nature Conservancy’s Caribbean Coral Reef project lead.

Mapping services are hardly new to conservation. Geographic Information Systems have been a staple in the conservation toolkit for years, providing interactive maps to facilitate environmental monitoring, regulatory enforcement, and preservation planning. But, mapping services are only as good as the underlying data, which can be expensive to acquire and maintain. As a result, many conservationists resort to what’s freely available, like the 30-meter-resolution images supplied by the United States Geological Survey.

Ellicott City and the Chesapeake watershed demonstrate the challenges of responding to a changing climate and the impacts of human activity. Since the 1950s, the bay’s oyster reefs have declined by more than 80 percent. Biologists discovered one of the planet’s first marine dead zones in Chesapeake Bay in the 1970s. Blue crab populations plunged in the 1990s. The sea level has risen more than a foot since 1895, and, according to a 2017 National Oceanic and Atmospheric Administration (NOAA) report, may rise as much as 6 feet by the end of this century.

Allenby joined the Chesapeake Conservancy in 2012 when technology companies provided a grant to explore the ways in which technology could help inform conservation. Allenby sought ways to deploy technology to help land managers, like those in Ellicott City, improve upon the dated 30-meter-resolution images that FEMA also uses for flood planning and preparation.

In 2015, Allenby connected with the University of Vermont—nationally recognized experts in generating county-level high-resolution land-cover maps—seeking a partner on a bigger project. They secured funding from a consortium of state and local governments, and nonprofit groups in 2016. The year-long effort involved integrating data from such disparate sources as aerial imagery, road maps, and zoning charts. As the data set came together, a Conservancy board member introduced Allenby to Microsoft, which was eager to demonstrate how its AI and cloud computing could be leveraged to support conservation.

“It’s been the frustration of my life to see what we’re capable of, yet how far behind we are in understanding basic information about the health of our planet,” says Lucas Joppa, Microsoft’s chief environmental scientist, who oversees AI for Earth. “And to see that those individuals on the front line solving society’s problems, like environmental sustainability, are often in organizations with the least resources to take advantage of the technologies that are being put out there.”

The ultimate question, however, is whether the diagnoses offered by these AI-powered land-cover maps will arrive in time to help cure the problems caused by man.


How to Install MongoDB on Ubuntu 18.04

MongoDB is an open-source, modern document database management system designed for high-performance data persistence, high availability, and automatic scaling, built on NoSQL technology. In MongoDB, a record is a document: a data structure composed of field-and-value pairs (MongoDB documents are comparable to JSON objects).

Because it provides high performance and excellent scalability, it is widely used to build modern applications that require powerful, mission-critical, and highly available databases.

In this article, we will explain how to install MongoDB, manage its service, and set up basic authentication on Ubuntu 18.04.

Important: You should note that the developers of MongoDB only offer packages for 64-bit LTS (long-term support) Ubuntu releases such as 14.04 LTS (trusty), 16.04 LTS (xenial), and so on.


Step 1: Installing MongoDB on Ubuntu 18.04

1. Ubuntu’s official software package repositories include MongoDB, which can be easily installed using the APT package manager.

First, update the system software package cache so you have the latest version of the repository listings.

$ sudo apt update

2. Next, install the MongoDB package, which pulls in several other packages such as mongo-tools, mongodb-clients, mongodb-server, and mongodb-server-core.

$ sudo apt install mongodb

3. Once the installation completes, the MongoDB service starts automatically via systemd and the process listens on port 27017. You can verify its status using the systemctl command as shown.

$ sudo systemctl status mongodb
Check Mongodb Status


Step 2: Managing the MongoDB Service

4. MongoDB is installed as a systemd service and can therefore be managed using standard systemd commands, as shown.

To stop running MongoDB service, run the following command.

$ sudo systemctl stop mongodb 

To start a MongoDB service, type the following command.

$ sudo systemctl start mongodb

To restart a MongoDB service, type the following command.

$ sudo systemctl restart mongodb 

To prevent the MongoDB service from starting automatically at boot, type the following command.

$ sudo systemctl disable mongodb 

To re-enable the MongoDB service to start at boot, type the following command.

$ sudo systemctl enable mongodb 
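At any point you can also query systemd for the service’s current state; these are standard systemd queries, shown here as a quick sanity check.

```shell
# Check whether MongoDB is enabled to start at boot
$ systemctl is-enabled mongodb

# Check whether the service is currently running
$ systemctl is-active mongodb
```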

Step 3: Enable Remote MongoDB Access on Firewall

5. By default MongoDB runs on port 27017. To allow access from everywhere, you can run:

$ sudo ufw allow 27017

But enabling access to MongoDB from everywhere gives unrestricted access to the database data. It is better to grant access to MongoDB’s default port only from a specific IP address, using the following commands.

$ sudo ufw allow from your_server_IP/32 to any port 27017
$ sudo ufw status
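If you are unsure which rules are currently active (for example, if you added the broader allow-from-anywhere rule earlier), you can list them with their index numbers and delete any rule you no longer want; the rule number below is only an example.

```shell
# List firewall rules with index numbers
$ sudo ufw status numbered

# Delete a rule by its index (example: rule 1)
$ sudo ufw delete 1
```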

6. By default, MongoDB listens only on the local address (127.0.0.1). To allow remote MongoDB connections, you need to add your server’s IP address to the bind_ip directive in the /etc/mongodb.conf configuration file, as shown.

bind_ip = 127.0.0.1,your_server_ip
#port = 27017

Save the file, exit the editor, and restart MongoDB.

$ sudo systemctl restart mongodb
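To verify that MongoDB is now listening on the configured address in addition to localhost, you can inspect the listening sockets (the ss utility ships with Ubuntu 18.04); your_server_ip is the same placeholder used above.

```shell
# Show TCP sockets listening on MongoDB's default port
$ sudo ss -tlnp | grep 27017

# From a remote machine with the mongodb-clients package installed,
# test the connection (replace your_server_ip accordingly)
$ mongo --host your_server_ip --port 27017
```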

Step 4: Create MongoDB Database Root User and Password

7. By default, MongoDB comes with user authentication disabled; it is therefore started without access control. To launch the mongo shell, run the following command.

$ mongo 

8. Once you have connected to the mongo shell, you can list all available databases with the following command.

> show dbs

9. Next, enable access control on your MongoDB deployment to enforce authentication, so that users must identify themselves every time they connect to the database server.

MongoDB uses the Salted Challenge Response Authentication Mechanism (SCRAM) by default. With SCRAM, MongoDB verifies the supplied credentials against the user’s name, password, and authentication database (the database in which the user was created; together with the user’s name, it serves to identify the user).

You need to create a user administrator (analogous to the root user under MySQL/MariaDB) in the admin database. This user can administer users and roles: create users, grant or revoke roles from users, and create or modify custom roles.

First switch to the admin database, then create the root user using the following commands (replace the pwd value with a strong password of your own).

> use admin
> db.createUser({user:"root", pwd:"your_strong_password", roles:[{role:"root", db:"admin"}]})
Create MongoDB Root User
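While you are still in the mongo shell, you may also want to create a less-privileged account for applications to use, rather than connecting as root. The following is an illustrative sketch: the database name mydb, the user appuser, and the password are placeholders, not part of this tutorial’s setup.

```shell
> use mydb
> db.createUser({
    user: "appuser",
    pwd: "app_user_password",
    roles: [ { role: "readWrite", db: "mydb" } ]
  })
```

The readWrite role limits this user to reading and writing documents in mydb only, which is usually all an application needs.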


Now exit the mongo shell to enable authentication as explained next.

10. The MongoDB instance was started without the --auth command-line option. You need to enable user authentication by editing the /lib/systemd/system/mongodb.service unit file. First open the file for editing like so.

$ sudo vim /lib/systemd/system/mongodb.service 

Under the [Service] config section, find the parameter ExecStart.

ExecStart=/usr/bin/mongod --unixSocketPrefix=${SOCKETPATH} --config ${CONF} $DAEMON_OPTS

Change it to the following:

ExecStart=/usr/bin/mongod --auth --unixSocketPrefix=${SOCKETPATH} --config ${CONF} $DAEMON_OPTS
Enable Authentication in MongoDB


Save the file and exit.

11. After making changes to the unit file, run systemctl daemon-reload to reload units, then restart the MongoDB service and check its status as follows.

$ sudo systemctl daemon-reload
$ sudo systemctl restart mongodb
$ sudo systemctl status mongodb
Verify MongoDB Authentication


12. Now when you try to connect to MongoDB, you must authenticate as a MongoDB user. For example:

$ mongo -u "root" -p --authenticationDatabase "admin"
Connect to MongoDB as Root User


Note: It is not recommended to enter your password on the command-line because it will be stored in the shell history file and can be viewed later on by an attacker.
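Once connected, a quick way to confirm that access control and the administrative account are working (assuming the root user created in step 9) is to list the users defined in the admin database.

```shell
$ mongo -u "root" -p --authenticationDatabase "admin"
> use admin
> db.getUsers()
```

The output should include the "root" user along with its assigned roles.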

That’s all! MongoDB is an open-source, modern NoSQL database management system that provides high performance, high availability, and automatic scaling.

In this article, we have explained how to install and get started with MongoDB on Ubuntu 18.04. If you have any queries, use the comment form below to reach us.

YouTube Debuts Plan to Promote and Fund ‘Authoritative’ News

Following a year in which YouTube has repeatedly promoted conspiracy-theory videos during breaking news events like the shootings in Parkland, Florida, and Las Vegas, the company announced on Monday a slew of new features it hopes will make news on the platform more reliable and less susceptible to manipulation. The company is also investing $25 million in grants to news organizations looking to expand their video operations, as part of a larger, $300 million program sponsored by YouTube’s sister company, Google.

According to YouTube executives, the goal is to identify authoritative news sources, bring those videos to the top of users’ feeds, and support quality journalism with tools and funding that will help news organizations more effectively reach their audiences. The challenge is deciding what constitutes authority when the public seems more divided than ever on which news sources to trust—or whether to trust the traditional news industry at all.


Among the many changes YouTube announced Monday are substantive tweaks to the tools it uses to recommend news-related videos. In the coming weeks, YouTube will start to display an information panel above videos about developing stories, which will include a link to an article that Google News deems to be most relevant and authoritative on the subject. The move is meant to help prevent hastily recorded hoax videos from rising to the top of YouTube’s recommendations. And yet, Google News hardly has a spotless record when it comes to promoting authoritative content. Following the 2016 election, the tool surfaced a WordPress blog falsely claiming Donald Trump won the popular vote as one of the top results for the term “final election results.”

YouTube is also expanding a feature, currently available in 17 countries, that shows up on the homepage during breaking news events. This section of the homepage will only surface videos from sources YouTube considers authoritative. The same goes for the videos that YouTube recommends viewers watch next.

These changes attempt to address the problem of misinformation online without adding more human moderators. With some 450 hours of video going up on YouTube every minute, “human curation isn’t really a viable solution,” Neal Mohan, YouTube’s chief product officer, told reporters Monday.

Traditionally, YouTube’s algorithm has prioritized a user’s personal viewing history, as well as the context of the video that user is currently watching, when deciding what videos to surface next. That can be problematic because, as researchers have found, once you watch one conspiracy-theory video claiming that the student survivors of the Parkland shooting are crisis actors, YouTube may recommend you watch even more. With this change, the company is trying to interrupt that downward spiral. It’s important to note, though, that YouTube is applying that standard only to breaking news and developing stories. For all other videos that users find on YouTube, the recommendation engine will work the old-fashioned way, which, YouTube executives acknowledge, may well turn up content that people find objectionable.

“There are going to be counter points of view, and there’s going to be [videos] where people who have a conspiratorial opinion are going to express them,” Mohan says. “What I think we can do is, instead of telling users what to think, give them as much information as possible, so that they can make those decisions themselves.”

To that end, YouTube is also beginning to implement its previously announced partnerships with Wikipedia and Encyclopedia Britannica, which it will use to fact-check more evergreen conspiracy theories about, say, the moon landing or the Bermuda Triangle. Those videos will now feature an information panel with context from either Encyclopedia Britannica or Wikipedia. For the moment, though, these panels are being applied only to a small subset of videos that, Mohan says, “tend to be accompanied by misinformation,” meaning they’re hardly a cure-all for the vast quantities of new and less predictable misinformation being uploaded to YouTube every day.

Eradicating that content isn’t the goal for YouTube, anyway. After all, merely spreading falsehoods isn’t against the platform’s policies, unless those falsehoods are considered to be hate speech or harassment. That’s one reason why known propagandists like Alex Jones of Infowars have managed to build wildly successful channels on the back of conspiracy theories that carefully adhere to YouTube’s terms. As it walks the fine line between openness, profitability, and living up to its responsibility to the public, YouTube is less focused on getting rid of the hoaxers than it is on trying to elevate journalism it considers valuable.

That’s one reason it’s giving $25 million in grants to newsrooms that are investing in online video capabilities. That’s a small amount for the multibillion-dollar company, but YouTube’s executives say it could grow in time. The funding is part of the so-called Google News Initiative, a three-year, $300 million fund aimed at strengthening and lifting up quality journalism, which Google announced in March. The hope is that this funding can help news organizations build more robust video operations to compete with the amateurs who might like to mislead their audiences. YouTube has also formed a working group of newsrooms that will help the company develop new products for journalists. “We’re doing this because, while we see the news industry changing, the importance of news is not,” says Robert Kyncl, YouTube’s chief business officer.

Still, questions remain about how this experiment will play out in practice. Identifying which news outlets are authoritative is hard enough in the United States, where people can subsist on completely different media diets according to their politics. Among the news organizations that YouTube highlighted in the announcement as authoritative were CNN and Fox News; the former is routinely rejected by President Trump as “fake news,” while the latter is among the least trusted news sources among Democratic voters. This bifurcation of the media poses a challenge for all tech platforms, not just YouTube, that resist taking a stand on what constitutes truth. In attempting to satisfy people all across the political spectrum—and do it on a global scale—they risk landing themselves smack in the center of the same ideological battles they helped foment.
