Colorado introduces tight consumer protection law on data privacy

From today’s Digital Journal:

Colorado is set to become the third state, after California and Virginia, to enact a comprehensive privacy law, marking another step towards consumer data protection in the U.S.

The new law will be known as the Colorado Privacy Act (CPA), and it is scheduled to go into effect in July 2023. As proposed, the Act will apply to companies that either collect personal data from 100,000 Colorado residents, or collect data from 25,000 Colorado residents and also derive a portion of their revenue from the sale of that data.

Businesses affected by the Act will need to prepare and put systems in place to ensure compliance. In addition, the Act will provide new rights for consumers, and more states may yet get on board with this form of legislation.

Looking at the changes for Digital Journal is Tyrone Jeffrees, Vice President of Engineering & US Information Security Officer at Mobiquity.

Jeffrees looks at the growing array of privacy bills appearing in the U.S.: “The news of Colorado joining Virginia and California in the passage of privacy acts is welcome as the nation moves towards ensuring these rights for residents and consumers. The law, while holding many similarities to Virginia’s privacy regulations, is expected to be more effective than others as it can be enforced by both the Colorado office of the Attorney General as well as local district attorney offices.”

He adds that the CPA is a little different from the earlier bills: “The CPA goes beyond California’s by requiring a blocking option for consumers to ‘opt out’ of having their personal information shared to create consumer profiles.”

This means new challenges for businesses, says Jeffrees. He recommends: “To ensure compliance with the CPA’s heavier guidelines, businesses and organizations must have a deeper understanding of how their data is collected and exactly what it is being used for when targeting new customers and sharing publicly.”

Jeffrees sees the legislation as something positive, noting: “I’m thrilled for the residents of Colorado. Ultimately, each new legislation is a win for U.S. consumers and privacy advocates. As more states introduce privacy regulation, U.S. consumers will be afforded increased agency and control over how their data can be collected and used.”

He sees the U.S. as moving towards stronger consumer rights: “Right now, we have a patchwork of privacy regulations that guarantee rights for some, but not all, U.S. consumers based on residency. Each state that adopts common privacy principles will slowly start to raise the bar, but it would be ideal for U.S. residents to have one single framework for data privacy that serves all Americans.”

Read the complete article here.

To Protect Consumer Data, Don’t Do Everything on the Cloud

From today’s Harvard Business Review:

When collecting consumer data, there is almost always a risk to consumer privacy. Sensitive information could be leaked unintentionally or breached by bad actors. For example, the Equifax data breach of 2017 compromised the personal information of 143 million U.S. consumers. Smaller breaches, which you may or may not hear about, happen all the time. As companies collect more data — and rely more heavily on its insights — the potential for data to be compromised will likely only grow.

With the appropriate data architecture and processes, however, these risks can be substantially mitigated by ensuring that private data is touched at as few points as possible. Specifically, companies should consider the potential of what is known as edge computing. Under this paradigm, computations are performed not in the cloud, but on devices that are on the edge of the network, close to where the data are generated. For example, the computations that make Apple’s Face ID work happen right on your iPhone. As researchers who study privacy in the context of business, computer science, and statistics, we think this approach is sensible — and should be used more — because edge computing minimizes the transmission and retention of sensitive information to the cloud, lowering the risk that it could land in the wrong hands.

But how does this tech actually work, and how can companies that don’t have Apple-sized resources deploy it?

Consider a hypothetical wine store that wants to capture the faces of consumers sampling a new wine to measure how they like it. The store’s owners are picking between two competing video technologies: The first system captures hours of video, sends the data to third-party servers, saves the content to a database, processes the footage using facial analysis algorithms, and reports the insight that 80% of consumers looked happy upon tasting the new wine. The second system runs facial analysis algorithms on the camera itself, does not store or transmit any video footage, and reports the same 80% aggregated insight to the wine retailer.

The second system uses edge computing to restrict the number of points at which private data are touched by humans, servers, databases, or interfaces. Therefore, it reduces the chances of a data breach or future unauthorized use. It only gathers sufficient data to make a business decision: Should the wine retailer invest in advertising the new wine?
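To make the contrast concrete, here is a minimal sketch of the second (edge) system’s data flow in Python. The camera read and the facial-analysis model below are simulated stand-ins, not real device APIs; what matters is the architecture: raw frames exist only briefly in local memory on the device, and only the aggregate insight ever leaves it.

```python
# Minimal sketch of the edge pipeline described above. The camera and the
# facial-analysis model are SIMULATED stand-ins so the flow runs end to end;
# in a real deployment these would be the device's camera feed and an
# on-device inference model.

import random


def capture_frame():
    # Simulated stand-in for reading one frame from the on-device camera.
    return [random.random() for _ in range(16)]


def looks_happy(frame) -> bool:
    # Simulated stand-in for an on-device facial-analysis model.
    return sum(frame) / len(frame) > 0.4


def run_tasting_session(num_frames: int = 1000) -> float:
    happy = 0
    for _ in range(num_frames):
        frame = capture_frame()   # raw video exists only in local memory
        if looks_happy(frame):    # inference happens on the device itself
            happy += 1
        # The frame is discarded after this iteration; nothing is stored
        # or transmitted.
    return happy / num_frames


if __name__ == "__main__":
    # Only this single aggregate number would be reported to the retailer;
    # no footage, faces, or per-person records ever leave the device.
    print(f"{run_tasting_session():.0%} of sampled frames looked happy")
```

The privacy benefit comes from the data flow rather than the model: because only the final percentage crosses the network, a breach of the retailer’s servers would expose one number instead of hours of identifiable footage.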

As companies work to protect their customers’ privacy, they will face situations similar to the one above. And in many cases, there will be an edge computing solution. Here’s what they need to know.

Read the complete article here.

With a Huge Victory, UK Uber Driver Moves on to Next Gig Worker Battlefront

From today’s Inequality.org:

In recent weeks, courts in multiple countries have delivered huge victories for gig workers by establishing the principle that these workers are, in fact, employed by digital platforms and are thus entitled to basic worker rights and protections.

The most stunning win was the UK Supreme Court’s recent scathing judgement against Uber. While lower courts had ruled again and again that UK-based drivers are in fact workers, the company had refused to comply with this classification until this final ruling.

James Farrar, a former Uber driver and a lead plaintiff in the case, is celebrating this huge victory, which means that gig workers will have the right to wage protections, holiday pay, and other basic benefits. But during six years of litigation against Uber, Farrar and his colleagues realized that gig workers would need to fight on additional fronts. Right now, these employees lack access to the data that their app-based employers gather about them.

To take on this critical battlefront for worker rights in the 21st Century, Farrar has founded Worker Info Exchange. I asked Farrar to explain why he started this new nonprofit organization and what it hopes to achieve.

How did you come to realize the need for a data rights strategy?

When we brought the employment case, Uber challenged me with my own data and they came to the tribunal with sheaves of paper that detailed every hour I worked, every job I did, how much I earned, whether I accepted or rejected jobs. And they tried to use all this against me. And I said we cannot survive and cannot sustain worker rights in a gig economy without some way to control our own data.

So I used Europe’s General Data Protection Regulation (GDPR) to try to extract my data from Uber. And it began by asking questions, what data do you have and what can you give me? And I began to understand that Uber was unwilling or unable or both to give it to me. And I needed an entity behind me to get that to happen.

How will access to their data help workers?

Gig workers need access to data to see how they are being managed and paid. Right now companies are using automated decision making. This means allocation of work, performance management, and dismissals are decided based on data that the app gathers and feeds into algorithms. We need to understand the code behind those because sometimes those decisions are unfair. When decisions are unfair we can’t just let company executives say it wasn’t intentional. We need to expose and challenge the logic fed into the algorithm. Very few people are doing this right now.

GDPR is useful because it doesn’t just give you the right to data, it’s access to logic of processing. I have a right to fairness of processing under GDPR. So data rights are more comprehensive than just simple access to raw information. What we have done so far is challenge Uber to disclosure — what data the app collects, things like GPS trace. But what we really want are inference data. What decisions has it made about me? How has it profiled me? How does that affect my earnings? This is what Uber has not given us.

Read the complete article here.

New Privacy Bills Aim to Protect Health Data During the Pandemic

From today’s Consumer Reports Online:

Tech companies are developing new contact-tracing apps, sharing people’s location information with health researchers, and taking other steps to put consumer data to work in the fight against the coronavirus pandemic. Now, lawmakers are writing laws to ensure the increased surveillance doesn’t also end up hurting consumers.

Over the past two weeks, legislators in the House and Senate proposed competing privacy bills that would establish safeguards.

The bills differ in some big ways, but both include rules mandating transparency and consent, and controlling the use of data for purposes other than public health. The first, the COVID-19 Consumer Data Protection Act, was introduced by Senate Republicans last week. Democrats introduced a counterproposal today, the Public Health Emergency Privacy Act. 

Tech companies are taking a variety of approaches to collecting and sharing consumer data in the wake of the pandemic.

A Facebook survey conducted by researchers at Carnegie Mellon University is tracking symptoms to look for new hot spots around the country. Ancestry and 23andMe are using their collections of DNA data to search for genetic clues that might predict how severely a patient will react to a coronavirus infection. Apple and Google joined forces to build a contact tracing technology that uses Bluetooth signals from cell phones to identify and notify people who have been exposed to someone infected with the coronavirus. And businesses from location data brokers to smart thermometer companies are repurposing other kinds of data for public health research.
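At its core, the Bluetooth technique described above is an exchange of rotating random tokens. The sketch below is a deliberately simplified illustration of that idea, not Apple and Google’s actual protocol: each phone broadcasts short-lived random identifiers, logs identifiers it overhears from nearby phones, and later checks its local log against identifiers published by users who report testing positive.

```python
# Simplified sketch of decentralized Bluetooth contact tracing, in the spirit
# of the Apple/Google design but NOT their actual protocol. Phones broadcast
# rotating random tokens, log tokens they overhear, and compare their local
# log against tokens voluntarily published by infected users.

import secrets


class Phone:
    def __init__(self) -> None:
        self.my_tokens: list[bytes] = []        # tokens this phone broadcast
        self.heard_tokens: set[bytes] = set()   # tokens overheard nearby

    def broadcast_token(self) -> bytes:
        token = secrets.token_bytes(16)  # fresh random identifier, rotated often
        self.my_tokens.append(token)
        return token

    def hear(self, token: bytes) -> None:
        self.heard_tokens.add(token)  # stored locally, never uploaded

    def check_exposure(self, published_tokens: set[bytes]) -> bool:
        # Matching happens on the device; no server learns who met whom.
        return bool(self.heard_tokens & published_tokens)


# Usage: Alice and Bob spend time near each other; Bob later tests positive.
alice, bob = Phone(), Phone()
alice.hear(bob.broadcast_token())
bob.hear(alice.broadcast_token())

published = set(bob.my_tokens)            # Bob consents to publish his tokens
print(alice.check_exposure(published))    # True: Alice gets an exposure alert
print(Phone().check_exposure(published))  # False: strangers are not flagged
```

Because the matching happens on each phone, a central server only ever sees the tokens of users who choose to report an infection, never a map of who met whom.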

Public health experts have mixed opinions on whether the efforts will provide useful tools for containing the pandemic. But even tools that do help can also introduce serious privacy concerns.

“It’s all very well intentioned, but there is a huge risk here that there could be some really pernicious discrimination, especially when you think about how this virus is disproportionately affecting African Americans, Hispanics, older Americans, and other marginalized communities,” says David Brody, counsel and senior fellow for privacy and technology at the Lawyers’ Committee for Civil Rights Under Law, an advocacy group that endorsed the Democrats’ Public Health Emergency Privacy Act.

Read the complete article here.

Data Privacy: What Californians can do about creepy data collection in 2020

From today’s The Mercury News:

Starting New Year’s Day, Californians creeped out by the trove of personal data companies collect on their online shopping, searching and social media habits will get sweeping new privacy rights that will let them opt out of having their information sold or shared and let them demand that it be deleted.

“This is really a watershed moment for consumers,” said Scott W. Pink, a Menlo Park lawyer who advises companies on cybersecurity and privacy. “It’s the first law in the United States outside specialized industries like health care that provides consumers some degree of control and access over data collected on them.”

The California Consumer Privacy Act approved in June 2018 was inspired by public outrage over data breaches at major companies such as Facebook, Yahoo and Equifax that exposed consumers to potential fraud and misuse of their personal information, and by the European Union’s General Data Protection Regulation.

The new law requires that businesses disclose their data gathering and sharing practices and allows consumers to opt out of it and to demand that businesses delete collected information on them. It prohibits companies from penalizing consumers with higher rates or fewer services for exercising their privacy rights and from selling information about children under age 16 without their explicit consent.

But questions continue to swirl as companies scramble to comply. The state attorney general is still finalizing proposed regulations intended to guide consumers and businesses in order to meet a July deadline when enforcement is expected to begin.

And both consumer and business advocates continue to spar over whether the new privacy provisions go too far or not far enough, with proposed state and federal substitutes in the works.

Read the complete article here.

A brutal year: how ‘techlash’ caught up with Facebook, Google and Amazon

From The Guardian Online:

What goes up must come down, and in 2019, gravity reasserted itself for the tech industry.

After years of relatively unchecked growth, the tech industry found itself on the receiving end of increased scrutiny from lawmakers and the public and attacks from its own employees.

Facebook and Instagram ads were linked to a Russian effort to disrupt the American political process, and social media and fake news were blamed for the hijacking of democracy by reactionary forces at home and abroad.

“The whole year has been brutal for tech companies,” said Peter Yared, chief executive officer and founder of data compliance firm InCountry. “The techlash we have seen in the rest of the world is just now catching up in the US – it’s been a long time coming.”

From new privacy legislation to internal strife, here are some of the major hurdles the tech industry has faced in the past year.

As the 2020 presidential race intensified, tech companies faced a growing backlash over the campaign-related content they allow on their platforms.

In October, Facebook quietly revised its policy banning false claims in advertising to exempt politicians, drawing fierce criticism from users, misinformation watchdogs, and politicians. Following the change in policy, presidential candidate Elizabeth Warren took out advertisements on Facebook purposely making false statements to draw attention to the policy.

Democratic lawmaker Alexandria Ocasio-Cortez grilled Facebook’s chief executive, Mark Zuckerberg, over the policy change in a congressional hearing in October. “Do you see a potential problem here with a complete lack of factchecking on political advertisements?” Ocasio-Cortez asked, as Zuckerberg struggled to answer. “So, you won’t take down lies or you will take down lies?”

Meanwhile, other tech companies took the opposite stance. TikTok, whose reported 500 million users make it one of Facebook’s largest rivals, made clear in a blogpost in October that it would not be hosting any political advertisements.

And Facebook rival Twitter banned almost all political advertising in October. Google stated in November it would no longer allow political advertisers to target voters based on their political affiliations.

Read the complete article here.

Opinion: One Man Can Bring Equifax to Justice (and Get You Your Money)

From today’s New York Times:

On Dec. 19, District Judge Thomas Thrash of Atlanta will hold a final approval hearing for the Equifax 2017 data breach settlement. There’s a lot at stake. If the settlement is approved, the $31 million pool earmarked for claims will be paid out to some victims. Others will get free credit monitoring (because the cash reward set aside for victims was so small, if all 147 million people affected by the breach filed a claim, everyone would get just 21 cents).

There’s another option. As I wrote in a September column, victims could file a formal, legal objection, which would nullify the settlement. If Judge Thrash finds those objections convincing, Equifax’s class-action counsel wouldn’t receive their $77.5 million fee and Equifax would once again be liable for a substantial penalty for the breach. I’m happy to report quite a few people — maybe even a record number — did just that.

Over the past month Reuben Metcalfe, the founder of Class Action Inc., helped 911 individuals object (another 294 objected but did not provide signatures by the Nov. 19 deadline) by creating a chatbot tool that allowed victims to file objections automatically for the Equifax settlement at no cost (Class Action Inc. waived its 5 percent fee for Equifax). Theodore H. Frank, a lawyer who specializes in class-action suits, has jumped in the ring himself along with another victim, David Watkins. Frank’s objections, which are more formal and detailed than Metcalfe’s many automated ones, argue that the settlement is too broad and doesn’t take into account state-by-state protections for data breaches (in Utah, where Watkins lives, victims could claim damages up to $2,000).

Now it’s up to Judge Thrash to sift through the settlement and its objections and decide. Thanks to Metcalfe and Frank, he’s likely to be feeling some pressure. Back in September a class-action lawyer told me that even if only 1,000 people object, it can send a powerful message. Frank is hopeful the settlement will look weak on its own merits. “If the judge gives an honest look, he’ll realize it doesn’t meet muster,” he told me recently.

I’d argue there’s even more resting on Judge Thrash’s shoulders, including whether companies can get away with abusing our data in the future. Metcalfe, who has steeped himself in the world of class-action suits, suggested that the settlements, initially a method for accountability, have become a mechanism for companies to knowingly skirt liability for not protecting consumers. “It’s becoming cheaper to say sorry after the fact than to obey the law in the first place,” he told me.

This feels especially true in the world of data privacy, where breaches are so frequent that a discovery last week of an open database containing the personal information of 1.2 billion people hardly made news. We seem locked in a vicious cycle: Companies that gather and trade data have few checks or regulations. This allows them to collect more, which means more money. And deeper pockets make it harder to impose meaningful penalties that might deter repeat and future offenders (see: the Federal Trade Commission’s $5 billion slap on the wrist of Facebook). Judge Thrash, then, has a unique opportunity to make a statement by rejecting the settlement.

Read the complete article here.

Facebook Halts Advertising Targeting Cited in Bias Complaints and Lawsuits

From today’s New York Times:

After years of criticism, Facebook announced on Tuesday that it would stop allowing advertisers in key categories to show their messages only to people of a certain race, gender or age group.

The company said that anyone advertising housing, jobs or credit — three areas where federal law prohibits discrimination in ads — would no longer have the option of explicitly aiming ads at people on the basis of those characteristics.

The changes are part of a settlement with groups that have sued Facebook over these practices in recent years, including the American Civil Liberties Union, the National Fair Housing Alliance and the Communications Workers of America. They also cover advertising on Instagram and Messenger, which Facebook owns.

“We think this settlement is historic and will go a long way toward making sure that these types of discriminatory practices can’t happen,” Sheryl Sandberg, the company’s chief operating officer, said in an interview.

The company said it planned to carry out the changes by the end of the year and would pay less than $5 million to settle five lawsuits brought by the groups.

Read the complete article here.

Why Are We All Still Using Venmo?

From today’s Wired Magazine:

Venmo, the popular payment app owned by PayPal, has become the default way millions of Americans settle a check, pay a friend back for coffee, or buy a concert ticket off Craigslist. Writers have argued that Venmoing makes us petty, and that the app has nearly killed cash. Fewer have questioned whether it’s really the best service for exchanging money, or storing sensitive banking information.

The app has reigned supreme for over half a decade, but in 2018, there are more secure and easier-to-use payment options worth considering as replacements. Venmoing may be standard, but here’s why I’ve switched.

Most Venmo competitors, like Square’s Cash app, share the same core feature: You can send money with a few taps and swipes. Venmo is unique in that it has a social networking component. By default, all peer-to-peer Venmo transactions—aside from the payment amount—are public, to everyone in the world.

Creepy, right? Venmo does give users the ability to limit who can see transactions both before and after they’re sent, but many people don’t choose to adjust their privacy settings. When I opened Venmo recently, the first payment on my news feed was from a friend whose concerns about privacy have led him to delete both his Instagram and Facebook accounts. Despite taking drastic steps to limit his digital footprint, I know who he ate sushi with last night, thanks to Venmo.

Venmo’s insistence on mimicking a social networking app isn’t just weird—it can have unnerving consequences. In July, privacy advocate and designer Hang Do Thi Duc released Public by Default, a site that taps into Venmo’s API to highlight how much information can be gathered about you from your public activity on the app. She was able to trace the exact spending habits of a couple in California, documenting what stores they shopped at, when they took their dog to the vet, and when they made loan payments.

Read the complete article here.

In #MeToo Era Companies Embrace Rolling Background Checks at Work

From today’s Bloomberg News Service:

Jay Cradeur takes pride in his 4.9 driver rating on Uber Technologies Inc.’s five-star scale and the almost 19,000 rides he’s given in the capital of ride sharing, San Francisco. So he was puzzled — and more than a little annoyed — when Uber kicked him off its platform last December.

Little did he know that he had fallen victim to a growing practice among U.S. employers: regular background checks of existing workers in addition to routine pre-employment screening. Uber’s post-hiring check had thrown up a red flag on Cradeur, an issue that took six weeks to resolve and which the company later attributed to a “technical error.”

The number of companies constantly monitoring employees isn’t known, but the screening industry itself has seen explosive growth in recent years. Membership in the National Association of Professional Background Screeners more than quadrupled to 917 last year from 195 members when it was formed in 2003, said Scott Hall, the organization’s chairman and also chief operating officer of the screening company, FirstPoint.

“I think the concern is coming from a fear that either something was missed the first time around or a fear of, ‘Really do we know who’s working for us?’” said Jon Hyman, a Cleveland employment lawyer who has seen a pick-up in calls from manufacturers in the past six months inquiring about continuous checks.

“I think the MeToo movement plays into this, too, because they wonder, ‘Do we have people who might have the potential to harass?’” he added.

Companies are trying to balance privacy concerns with mounting pressure to do a better job in rooting out workers who might steal, harass or even commit violent acts in the workplace. Some high-profile incidents among Uber drivers are helping spook employers into taking action, including an Uber Eats driver in Atlanta who allegedly shot and killed a customer in February.

Healthcare and financial service workers have gone through extra screening for years, but the practice of running periodic checks or continuous checks is spreading to other sectors including manufacturing and retailing within the past six to 12 months, said Tim Gordon, senior vice president of background-screening company, InfoMart Inc.

Read the complete article here.