Why Are We All Still Using Venmo?

From today’s Wired Magazine:

Venmo, the popular payment app owned by PayPal, has become the default way millions of Americans settle a check, pay a friend back for coffee, or buy a concert ticket off Craigslist. Writers have argued that Venmoing makes us petty, and that the app has nearly killed cash. Fewer have questioned whether it’s really the best service for exchanging money, or storing sensitive banking information.

The app has reigned supreme for over half a decade, but in 2018, there are more secure and easier-to-use payment options worth considering as replacements. Venmoing may be standard, but here’s why I’ve switched.

Most Venmo competitors, like Square’s Cash app, share the same core feature: You can send money with a few taps and swipes. Venmo is unique in that it has a social networking component. By default, all peer-to-peer Venmo transactions—aside from the payment amount—are public, to everyone in the world.

Creepy, right? Venmo does let users limit who can see transactions both before and after they’re sent, but many people never adjust their privacy settings. When I opened Venmo recently, the first payment in my news feed was from a friend whose privacy concerns have led him to delete both his Instagram and Facebook accounts. Even though he has taken drastic steps to limit his digital footprint, I know who he ate sushi with last night, thanks to Venmo.

Venmo’s insistence on mimicking a social networking app isn’t just weird—it can have unnerving consequences. In July, privacy advocate and designer Hang Do Thi Duc released Public by Default, a site that taps into Venmo’s API to highlight how much information can be gathered about you from your public activity on the app. She was able to trace the exact spending habits of a couple in California, documenting what stores they shopped at, when they took their dog to the vet, and when they made loan payments.
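Public by Default worked by pulling records from Venmo’s then-open public feed and stitching together a timeline per user. A minimal sketch of that idea in Python, using made-up records whose field names are an assumption for illustration, not Venmo’s actual schema:

```python
# Toy records shaped roughly like entries in a public payment feed.
# The field names ("actor", "target", "note") are illustrative only.
transactions = [
    {"actor": "alice", "target": "bob", "note": "sushi", "date": "2018-07-01"},
    {"actor": "alice", "target": "vet clinic", "note": "dog checkup", "date": "2018-07-03"},
    {"actor": "carol", "target": "alice", "note": "rent", "date": "2018-07-05"},
]

def profile_user(feed, user):
    """Collect every public transaction a given user appears in,
    in chronological order -- enough to reconstruct daily habits."""
    return sorted(
        (tx["date"], tx["note"], tx["actor"], tx["target"])
        for tx in feed
        if user in (tx["actor"], tx["target"])
    )

alice_activity = profile_user(transactions, "alice")
```

Even this trivial aggregation shows the problem: no single payment reveals much, but the accumulated notes and counterparties sketch out where someone eats, shops, and pays bills.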

Read the complete article here.

In #MeToo Era, Companies Embrace Rolling Background Checks at Work

From today’s Bloomberg News Service:

Jay Cradeur takes pride in his 4.9 driver rating on Uber Technologies Inc.’s five-star scale and the almost 19,000 rides he’s given in the capital of ride sharing, San Francisco. So he was puzzled — and more than a little annoyed — when Uber kicked him off its platform last December.

Little did he know that he had fallen victim to a growing practice among U.S. employers: regular background checks of existing workers in addition to the routine pre-employment screening. Uber’s post-hiring check had thrown up a red flag on Cradeur, an issue that took six weeks to resolve and which the company later attributed to a “technical error.”

The number of companies constantly monitoring employees isn’t known, but the screening industry itself has seen explosive growth in recent years. Membership in the National Association of Professional Background Screeners more than quadrupled to 917 last year from 195 members when it was formed in 2003, said Scott Hall, the organization’s chairman and also chief operating officer of the screening company FirstPoint.

“I think the concern is coming from a fear that either something was missed the first time around or a fear of, ‘Really do we know who’s working for us?’” said Jon Hyman, a Cleveland employment lawyer who has seen a pick-up in calls from manufacturers in the past six months inquiring about continuous checks.

“I think the MeToo movement plays into this, too, because they wonder, ‘Do we have people who might have the potential to harass?’” he added.

Companies are trying to balance privacy concerns with mounting pressure to do a better job in rooting out workers who might steal, harass or even commit violent acts in the workplace. Some high-profile incidents among Uber drivers are helping spook employers into taking action, including an Uber Eats driver in Atlanta who allegedly shot and killed a customer in February.

Healthcare and financial service workers have gone through extra screening for years, but the practice of running periodic or continuous checks has spread to other sectors, including manufacturing and retailing, within the past six to 12 months, said Tim Gordon, senior vice president of the background-screening company InfoMart Inc.

Read the complete article here.

The Cambridge Analytica-Facebook Scandal and the Coming Data Bust

From today’s New York Times:

The queasy truth at the heart of Facebook’s Cambridge Analytica scandal, which is so far the company’s defining disgrace of 2018, is that its genesis became scandalous only in retrospect. The series of events that now implicate Facebook began in 2014, in plain view, with a listing on Amazon’s Mechanical Turk service, where users can complete small tasks for commensurately modest sums of cash. In exchange for installing a Facebook app and completing a survey — in the process granting the app access to parts of your Facebook profile — you would get around a dollar. Maybe two.

This was a great deal, at least by the standards of the time. Facebook users were then accustomed to granting apps permission to see their personal data in exchange for much less. It was the tail end of a Facebook era defined by connected apps: games like FarmVille, Candy Crush and Words With Friends; apps that broadcast your extra-Facebook activities, like Spotify and Pinterest; and apps that were almost explicitly about gathering as much useful data as possible from users, like TripAdvisor’s Cities I’ve Visited app, which let you share a digital pushpin map with your friends.

Most of these apps, when installed, demanded permission to access “your profile info,” which could include things like your activity, birthday, relationship status, interests, religious and political views, likes, education and work history. They could also collect information about users’ friends, multiplying their reach. In providing a marketplace for such apps, Facebook made it easy for users to extend their extraordinarily intimate relationship with the site to thousands of third-party developers. One of them turned out to be connected to Cambridge Analytica, which was using the data for right-wing political campaigns — a fact that was lucidly and widely reported as early as 2015 but promptly lost in the roiling insanity of primary season. (As of Facebook’s most recent admission, data was collected on as many as 87 million users.)

Not that more exposure in the news cycle would have mattered much back then. It was self-evidently absurd to grant a virtual-farming game access to your religious views, but that’s just how the platform worked at the time, and so we got used to it, much in the same way we got used to conducting our private lives on any other corporate platform. (When Gmail first started in 2004, the fact that it placed ads based on the contents of users’ emails was considered invasive. That feeling passed; Google continued scanning consumer email until 2017, and Gmail now has more than a billion users.) Still, these individually trivial decisions never gave us cause to confront just how much we had come to trust Facebook.

Read the complete article here.

Facebook Says Cambridge Analytica Harvested Data of Up to 87 Million Users

From today’s New York Times:

Facebook on Wednesday said that the data of up to 87 million users may have been improperly shared with a political consulting firm connected to President Trump during the 2016 election — a figure far higher than the estimate of 50 million that had been widely cited since the leak was reported last month.

Mark Zuckerberg, the company’s chief executive, also announced that Facebook would offer all of its users the same tools and controls required under European privacy rules. The European rules, which go into effect next month, give people more control over how companies use their digital data.

Facebook had not previously disclosed how many accounts had been harvested by Cambridge Analytica, the firm connected to the Trump campaign. It has also been reluctant to disclose how it was used by Russian-backed actors to influence the 2016 presidential election.

Among Facebook’s acknowledgments on Wednesday was the disclosure of a vulnerability in its search and account recovery functions that it said could have exposed “most” of its 2 billion users to having their public profile information harvested.

The new effort to appear more transparent about the data leaks — including a rare question-and-answer session with Mr. Zuckerberg and reporters — came just before Mr. Zuckerberg’s expected testimony next week on Capitol Hill, where he will most likely face criticism over how the company collects and shares the personal data of its users. Sheryl Sandberg, Mr. Zuckerberg’s top deputy, has several national television interviews scheduled for later this week.

The company said that on Monday it would start telling users whether their information may have been shared with Cambridge Analytica.

Andy Stone, a spokesman for Facebook in Washington, said the 87 million figure was an estimate of the total number of users whose data could have been acquired by Cambridge Analytica. He said that the estimate was calculated by adding up all the friends of the people who had logged into the Facebook app from which Cambridge Analytica collected profile data.
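The estimate Mr. Stone describes is a simple reachability count: every direct user of the app, plus the union of those users’ friends, each person counted once. A toy sketch of that arithmetic, with made-up names (the data here is invented purely to illustrate the counting method):

```python
# Hypothetical friend lists; names are made up for illustration.
friends = {
    "ann": {"bob", "cat"},
    "bob": {"ann", "dan"},
}

# People who actually logged into the quiz app.
app_users = {"ann", "bob"}

# Everyone whose data could have been acquired: the direct users
# plus the union of their friends, de-duplicated by the set union.
exposed = set(app_users)
for user in app_users:
    exposed |= friends[user]

print(len(exposed))  # 4 distinct people: ann, bob, cat, dan
```

Because each app user exposes all of their friends, a few hundred thousand installs can balloon into tens of millions of affected accounts, which is how roughly 270,000 quiz-takers produced an estimate of 87 million users.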

Read the complete article here.