First of all, apologies to subscribers for not providing more context about why we suddenly had two newsletters in a row about protecting democracy by fighting disinformation! As we say on the About page, The Nexus of Privacy looks at the connections between technology, policy, strategy, and justice, so disinformation and strategies for responding to it are certainly within scope. And with the very real and growing threats to democracy from disinformation on social media in the US midterm elections, I wanted to get something out to prepare people for the likely increase in disinfo over the next few weeks.
But what I didn't highlight in those posts is that there are direct ties between privacy and the ways disinfo is spread on social media. Two examples –
- There's still disagreement about how much impact Cambridge Analytica's illegally-acquired data had on Brexit, the 2016 US election, and all the other countries they worked in ... but there's no debate at all that they acquired the data without consent. And back in July, as I discussed in ADPPA Advances. But what about the elephant?, Cobun Zweifel-Keegan of IAPP pointed out to me that one of the changes the House Energy & Commerce Committee made to the proposed American Data Privacy and Protection Act (ADPPA) could exempt businesses like Cambridge Analytica from some audit and governance requirements. What could possibly go wrong?
- In 2022, Facebook and TikTok aren't removing ads with election disinformation. Just as with other ads, data acquired without consent from all kinds of sources is used to target them at the people most likely to be "persuaded". And another change the committee made was to remove ADPPA's requirement for third-party algorithmic impact assessments (AIAs). So now Facebook gets to do its own assessment of whether there are any risks or biases in its targeting of disinfo. Call me skeptical, but I think that undercuts ADPPA’s ability to protect civil rights ... unless of course you trust Facebook to do these analyses. Yeah, right.
So I wasn't just changing the subject with those disinfo posts – even though it might have looked that way!
Sarah Lamdan on WIRED (wired.com)
In an excerpt from her new book Data Cartels: The Companies That Control and Monopolize Our Information, Lamdan looks at Reed Elsevier LexisNexis (RELX), which she describes as "a Frankensteinian amalgam of publishers and data brokers, stitched together into a single information giant." Indeed.
Police have abused LexisNexis systems to spy on exes and even to blackmail women using the personal information the company’s policing products provide. Using RELX products for data surveillance is problematic because the company funnels a deluge of unfiltered, unvetted data through biased data-processing algorithms. The combination of bad data and bad algorithms leads to government systems that bake historically racist, xenophobic policing practices and outcomes into a Minority Report-like digital policing dystopia.
The companies’ error-riddled data has prevented people from accessing their own bank accounts and getting insurance, and from being able to rent homes. The mistakes in RELX’s data make it all the more worrisome. RELX is growing its list of data analytics products, and is even developing technology that makes predictions about your health based on your private medical records, assessing your health risks for insurers and your doctors. Imagine what could happen to your health care access if you were wrongly tagged as at risk for opioid abuse or as having a certain chronic illness.
And as Alfred Ng describes in Privacy bill triggers lobbying surge by data brokers, RELX and data brokers have been lobbying heavily to weaken ADPPA.
The brokers, including U.K.-based data giant RELX and credit reporting agency TransUnion, want changes to the bill — such as an easing of data-sharing restrictions that RELX says would hamper investigations of crimes. Some data brokers also want clearer permission to use third-party data for advertising purposes.
Thomas Germain on Gizmodo (gizmodo.com)
An independent test suggests Apple collects data about you and your phone when its own settings promise to “disable the sharing of Device Analytics altogether.”
The iPhone Analytics setting makes an explicit promise. Turn it off, and Apple says that it will “disable the sharing of Device Analytics altogether.” However, Tommy Mysk and Talal Haj Bakry, two app developers and security researchers at the software company Mysk, took a look at the data collected by a number of Apple iPhone apps—the App Store, Apple Music, Apple TV, Books, and Stocks. They found the analytics control and other privacy settings had no obvious effect on Apple’s data collection—the tracking remained the same whether iPhone Analytics was switched on or off.
Chloe Xiang on VICE (vice.com)
Tenants get to throw away their checkbooks. Landlords learn how much to raise your rent by. What could possibly go wrong?
Our Response to the FTC’s Advanced Notice of Proposed Rulemaking (ANPR) on Commercial Surveillance and Data Security
Data & Society (datasociety.net)
Comments for the FTC's ANPR have to be submitted by November 21. Data & Society's comments make some important points – and at 19 pages long, show how much effort some organizations put into comments. That said, if you're doing comments yourself, they don't have to be anywhere near this elaborate. As I say in How and why to submit FTC comments on Commercial Surveillance and Data Security:
Comments can be simple and short; even just saying something like "My personal data shouldn't be exploited, the FTC needs to take action" is useful. Even better is if you can tell a story about yourself or somebody you know. Or, you can respond to one or more of the 95 (!) questions in the ANPR (also available in a PDF).
State privacy legislation
Steve Britt and Sarah Hutchins on WRAL TechWire (wraltechwire.com)
The expansive powers of the California Privacy Protection Agency (CPPA) should not be overlooked, especially since CCPA has already been the subject of four rounds of Attorney General regulations, in some cases imposing rules beyond what was provided in the statute.
Caitlin Dewey on Pew Trusts' Stateline News Service (pewtrusts.org)
California's new Age Appropriate Design Code imposes sweeping restrictions on internet companies that serve minors, requiring that they design their platforms with children’s “well-being” in mind and barring eight common data-collection practices. A broad coalition of privacy groups supported it – although there are also a lot of concerns about unintended consequences. Tech industry groups strongly opposed it, and could still sue to block it. As Dewey reports, "Even advocates have acknowledged that portions of the act are overly vague, leaving major questions about how companies will comply when it goes into effect." But it's also likely to be used as a model for other bills around the country.
Mastodon (the social network not the extinct elephant)
Mastodon's the decentralized open-source Twitter alternative that's getting a lot of attention in the aftermath of the Twitter acquisition. There's a lot of buzz about Mastodon as Twitter turns into a hellscape, a lot of interesting privacy issues, and a lot of connections between technology, policy, strategy, and justice. So expect to see some Mastodon news here from time to time!
Danielle Navarro's Everything I know about Mastodon is a great introduction -- it's written for data science people but useful for everybody! Sunny Singh's Leaving Twitter says more about you than Elon Musk and Twitter threads from @ISASaxonists and @IBJIYONGI have good racial justice perspectives; Black Twitter's not giving up without a fight has several more links.
Graham Cluley on grahamcluley.com
Cluley has some solid suggestions for the security basics: choose a strong password, enable two-factor authentication, be alert to impersonation because it's easy to set up a fake account, and be aware that direct messages aren't encrypted, so administrators and moderators on your instance, and on the instances of the people you're talking to, can all read them. It's disappointing, though, that he doesn't talk about how to keep yourself secure against harassment, which, as Caroline Sinders points out in I’m @Sinders on Mastodon but I’m not giving up on Twitter, yet, is a big problem.
Martin Husovec on Verfassungsblog (verfassungsblog.de)
The Digital Services Act (DSA) is an ambitious project. It constrains private power to protect the freedom of individuals. Arguably, it is based on ordoliberal thinking that if competition does not discipline private power enough to facilitate individual freedoms, the state must intervene to prescribe basic rules of the game to constrain it; competition can do the rest. The DSA has many components but, in its essence, it is a digital due process regulation bundled with risk-management tools. But will these tools work?
Joseph Menn on The Washington Post (washingtonpost.com)
TrustCor Systems, which vouches for the legitimacy of websites, has connections to contractors for U.S. intelligence agencies and law enforcement, according to security researchers, documents and interviews.
J. Nathan Matias on Citizens and Technology Lab (citizensandtech.org)
How can independent researchers reliably detect bias, discrimination, and other systematic errors in software-based decision-making systems?
Michael Birnhack on SSRN (papers.ssrn.com)
From the abstract:
This article proposes a critical temporal analysis of surveillance. Along with identifying temporal aspects of a given surveillance setting, we should search for the temporal elements in the discourse about surveillance. This inquiry may enlighten how a particular surveillance apparatus was justified or rejected.
This article illustrates the relevance of critical temporal inquiry for surveillance studies through a case study from Israel, where mass state surveillance was implemented for contact tracing during the Covid-19 pandemic. I examine three Supreme Court cases that scrutinized this apparatus, exposing how the judicial portrayal of the different time vectors affected its legitimacy.
Victor Dey on VentureBeat (venturebeat.com)
While AR/VR can create next-gen experiences for users, without stringent data privacy, it can also cause harmful manipulation.
on ideas.repec.org
This short analysis aims to provide an overview of the anticipated costs caused by the EU’s proposed AI regulation, the AI Act (AIA), to impacted organisations: both providers and deployers of systems.
robin on DutchNews.nl (dutchnews.nl)
Three Dutch foundations which are pursuing legal claims against social media company TikTok can continue to fight their cases in the Dutch court system, judges in Amsterdam ruled on Wednesday. TikTok had argued the cases should not be heard in the Netherlands because the claims were made against Tik…
Carly Page on TechCrunch (techcrunch.com)
Russian-speaking cybercriminals are leaking sensitive personal and health data stolen from Australia’s largest health insurance firm.
Future of Privacy Forum (fpf.org)
In May 2022, the Future of Privacy Forum (FPF) launched a comprehensive Report analyzing case-law under the General Data Protection Regulation (GDPR) applied to real-life cases involving Automated Decision-Making (ADM). Our research highlighted that the GDPR’s protections for individuals against for…
Vincent Manancourt on POLITICO (politico.eu)
MEPs were in Britain to scrutinize the country’s GDPR reform plans.
Privacy regulators call for respect of privacy, transparency rights in creating digital ID ecosystem
Katrina Eñano on Canadian Lawyer (canadianlawyermag.com)
Regulators set out list of conditions to include in design and operation of ID ecosystem
Paul Karp on The Guardian (theguardian.com)
Business Council of Australia warns of ‘unintended drafting error’ in privacy bill
WRAL on WRAL.com (wral.com)
Duke University Hospital has asked a court to throw out a lawsuit over data tracking on its website.
Emma Hatton on Newsroom (newsroom.co.nz)
Members of Facebook groups may hold more legal liability than they realise, as new privacy laws being tested through the courts decide where the lines will be drawn.
Vanessa Lim on CNA (channelnewsasia.com)
The Online Safety (Miscellaneous Amendments) Bill will allow IMDA to deal with harmful online content accessible to Singapore users, including those that originate outside the country.
on NL Times (nltimes.nl)
The Netherlands Data Protection Foundation is preparing a mass claim against Twitter on behalf of 11 million Netherlands residents. According to the foundation, Twitter collected and sold their privacy-sensitive data without permission through the advertising company MoPub, Trouw reports.
Golriz Chrostowski on Bloomberg Law (news.bloomberglaw.com)
New York lawyers will need to complete one CLE credit hour of cybersecurity, privacy, and data protection as part of their biennial learning requirement next year, making New York the first jurisdiction to implement such a requirement.
Serina Sandhu on inews.co.uk (inews.co.uk)
The prosecutor said: ‘The legal change they have introduced clearly undermines victim privacy and anybody who has experience of prosecuting these cases understands that’