
Privacy News: August 22

[Image: a road sign shaped like an arrow with the word "Privacy" on it]

A range of stories to start the week off ...

Birthing Predictions of Premature Death

J. Khadijah Abdurahman on Logic Magazine

Logic Magazine's incoming editor takes a powerful in-depth look at how family policing (aka "child welfare") became data-driven – and the horrific implications.

Every aspect of interacting with the various institutions that monitored and managed my kids—ACS, the foster care agency, Medicaid clinics—produced new data streams....  Documentation and data collection was not something that existed outside of analog, obscene forms of violence, like having your kids torn away. Rather, it’s deeply tied to real-life harm.

I had no pre-existing interest in the banal details of data collection, but when I read the political scientist Virginia Eubanks’ Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor in 2018, I finally had a framework to understand how our experience was so deeply driven by, and entangled with, the production of data. Eubanks’ notion of the “digital poorhouse”—a complex set of computational geographies for disciplining poor people—was viscerally real for me.


A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal

Kashmir Hill on NYTimes

Mark's son's penis was swollen and painful, so (at the doctor's request) he took photos ahead of their emergency video consultation. Google's scanning for child sexual abuse material flagged the pictures, so Google deleted Mark's account and reported him to the authorities. He was cleared after an investigation – but still doesn't have his account back. The same thing happened to somebody in Texas, and who knows how many others.

Dr. Suzanne Haney, chair of the American Academy of Pediatrics’ Council on Child Abuse and Neglect, advised parents against taking photos of their children’s genitals, even when directed by a doctor.

“The last thing you want is for a child to get comfortable with someone photographing their genitalia,” Dr. Haney said. “If you absolutely have to, avoid uploading to the cloud and delete them immediately.”

Class action against Oracle’s worldwide surveillance machine

Irish Council for Civil Liberties (ICCL)

Oracle claims to have amassed dossiers on 5 billion people and generates $42.4 billion in annual revenue. ICCL’s Dr Johnny Ryan is a lead plaintiff in a new U.S. class action to stop its global surveillance machine. Ryan's complaint against Internet Advertising Bureau Europe eventually led to its RTB system being found in violation of the GDPR, so this is a big deal.

But the legal environment is much trickier in the US; California's privacy law provides a private right of action only for data breaches. So the suit instead alleges violations of the California Constitution and Invasion of Privacy Act, as well as competition law, the common law, and the Electronic Communications Privacy Act (ECPA).


Bad Data “For Good”: How Data Brokers Try to Hide in Academic Research

Gennie Gebhart on Electronic Frontier Foundation

When data broker SafeGraph got caught selling location information on Planned Parenthood visitors, their CEO claimed that this harvesting and sharing of sensitive data was, in fact, an engine for beneficial research on abortion access. Other data brokers also have "data for good" programs.  But ...

Location data brokers do not come close to meeting human subjects research standards. This starts with the fact that meaningful opt-in consent is consistently missing from their business practices. In fact, Google concluded that SafeGraph’s practices were so out of line that it banned any apps using the company’s code from its Play Store, and both Apple and Google banned X-Mode from their respective app stores.

And ...

Fears over China’s access to genetic data of UK citizens, Shanti Das on The Guardian. Biobank urged to review transfer of information for medical research.

How to Stop Robots From Becoming Racist, Khari Johnson on WIRED. Algorithms can amplify patterns of discrimination. Robotics researchers are calling for new ways to prevent mechanical bodies from acting out those biases.

Regulating the Risks of AI, Margot E. Kaminski. Lawmakers in both the United States and the European Union (EU) have turned to the tools of risk regulation for governing AI systems. But risk regulation comes with its own policy baggage: a set of tools and troubles that have emerged in other fields.

Why AI regulation will resemble privacy regulation, Bobby Napiltonia, Okera, on VentureBeat. Concerns over data privacy have sparked regulations in the U.S. and Europe. How will AI regulations look as they come to fruition?

Hitting the Books: How can privacy survive in a world that never forgets? An excerpt from Dr. Toby Walsh's Machines Behaving Badly: The Morality of AI, on Engadget.

A Lesson from Google: Can AI Bias be Monitored Internally?, Cold Call podcast on Harvard Business Review. A star AI researcher was forced out of Google when she raised concerns about bias in the company’s large language models. Now tech companies must rethink their AI ethics.

LexisNexis sued by immigration advocates over data practices, Sebastian Klovig Skelton. Four immigration advocacy groups launch a lawsuit in Illinois alleging the data broker’s collection, aggregation and sale of people’s personal data, including non-public information, to corporations and government bodies.

New Pentagon Budget Could Force Military to Disclose Data Purchases, Dell Cameron on Gizmodo. The House approved a little-discussed budget amendment requiring the Pentagon to make its data purchases public. Now the Senate will decide its fate.

TikTok to tackle midterm misinformation, Rebecca Klar on The Hill. TikTok released its plan to tackle midterm-related content, including how it will crack down on paid influencer political ads. Meanwhile, the Department of Energy announced on …

There are two factions working to prevent AI dangers. Here’s why they’re deeply divided, Kelsey Piper on Vox. AI poses present risks and future ones. Why don’t the teams that work on them get along?

EFF & ACLU Brief: SFPD Violated Surveillance Law by Spying on Protests for Black Lives, Jason Kelley on Electronic Frontier Foundation. San Francisco police violated the city’s surveillance technology law by tapping into a private surveillance camera network to spy on demonstrators protesting the 2020 police murder of George Floyd, the Electronic Frontier Foundation (EFF) and American Civil Liberties Union Foundation (ACLU) told a state appeals court in a brief filed Monday.

Mozilla slaps 18 period and pregnancy tracking apps and devices with a ‘Privacy Not Included’ warning label, Jordan Parker Erb on Insider. Popular reproductive health apps including Flo, Glow, and Ovia were given a privacy warning label from Mozilla following the reversal of Roe v. Wade.

The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: what can they learn from each other?, Jakob Mökander on SpringerLink. On the whole, the US Algorithmic Accountability Act of 2022 (US AAA) is a pragmatic approach to balancing the benefits and risks of automated decision systems. Yet there is still room for improvement.

Privacy Concerns Ground Torrington, Conn., PD Drone Purchase, Thad Rueter on GovTech. The Torrington City Council has tabled a vote to approve the purchase of two drones for police use after citizen privacy concerns were raised. The vote has been postponed until the September meeting.

Why America should not adopt Europe’s model for tech regulation, Frances Townsend on The Hill. The author (a former Homeland Security Advisor) argues that the European Union is "barreling toward the final stages of enacting sweeping new rules in antitrust and speech that are gerrymandered to target only America’s most successful technology companies."

Seven Questions To Ask When Choosing A Privacy Management Software, Jodi Daniels on Forbes. There are a few ways to make an off-the-shelf program highly effective.

FTC weighs in on customer data privacy, Kim Davis on MarTech. Addressing data privacy concerns, the FTC has announced proposed rule-making on consumer surveillance and data security.

New report warns about privacy of period-tracking, pregnancy apps, on Good Morning America. The Mozilla Foundation looked at the privacy policies of 25 reproductive health apps.

Meet MetaGuard, an ‘incognito mode’ for the metaverse, Thomas Claburn on The Register. As if the VR giants will let that stand – worth a try, though.

Image Credit: Privacy by Nick Youngson CC BY-SA 3.0 Alpha Stock Images via Picpedia.