“This is the Biden-Harris administration really saying that we need to work together, not only just across government, but across all sectors, to really put equity at the center and civil rights at the center of the ways that we make and use and govern technologies. We can and should expect better and demand better from our technologies.”
– Dr. Alondra Nelson, deputy director for science and society at the White House Office of Science and Technology Policy (OSTP), quoted in Garance Burke's White House unveils artificial intelligence ‘Bill of Rights’ on Associated Press
The Blueprint for an AI Bill of Rights announced Tuesday by the Biden administration is an important step toward dealing with harms caused by the rise of artificial intelligence systems. These systems can have benefits, but as the Blueprint points out:
Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.
These outcomes are deeply harmful—but they are not inevitable.
The Blueprint proposes five core principles to forestall these harmful outcomes, each with an associated "Principles to Practice" page that goes into more detail and includes real-life examples of how the principles can become reality:
- Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
- Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
- Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
- Notice and Explanation: You should know when an automated system is being used and understand how and why it contributes to outcomes that impact you.
- Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
Initial responses from algorithmic justice and privacy experts are mostly positive:
- Algorithmic Justice League describes it as "an encouraging step in the right direction in the fight toward algorithmic justice," noting that the Blueprint affirms the importance of addressing algorithmic discrimination and provides a roadmap that should be leveraged for greater consent and equity.
- Willmary Escoto of Access Now (quoted in Melissa Heikkilä's The White House just unveiled a new AI Bill of Rights on MIT Technology Review) notes that guidelines name and address the diverse harms people experience from AI-enabled technologies, and adds that "the AI Bill of Rights could have a monumental impact on fundamental civil liberties for Black and Latino people across the nation.”
- Upturn applauds it, and notes "This kind of critical analysis of tech needs to permeate across the policymaking spectrum — from federal agencies to city councils, and across issues like housing, employment, policing, health, education, financial services … you name it. These aren’t just “tech” issues anymore."
- ReNika Moore, director of the American Civil Liberties Union’s Racial Justice Program (quoted in Rachel Metz' The White House released an ‘AI Bill of Rights’ on CNN), called the principles “an important step in addressing the harms of AI” and added that “there should be no loopholes or carve-outs for these protections.”
- The Leadership Conference on Civil and Human Rights commends the Biden administration for a major effort.
- Center for Democracy and Technology welcomes it.
- Electronic Privacy Information Center calls it a significant step, although notes that "the Administration must also ensure that systems used by the federal government meet the goals set out in today’s Blueprint."
Update, October 13: The US Chamber of Commerce has some "concerns." Who could have predicted? Ben Winters of EPIC has the details, and some quick responses, on Twitter.
As AJL points out, though, the Blueprint isn't everything that algorithmic justice advocates want. For example, as Meredith Whittaker, Sarah T. Hamid, Albert Fox-Cahn, and Jackie Singh all highlight, the Blueprint's focus on the dangers of "continuous" and "unchecked" surveillance normalizes surveillance in general. Think about "gunshot detection" technologies like ShotSpotter, which (supposedly) only kick in when shots are fired and (supposedly) have checks that reduce the number of people falsely arrested or killed by police responding to calls. You can certainly imagine law enforcement and vendors arguing that this isn't continuous or unchecked surveillance ...
There are other issues as well. Unlike the EU's AI Act, it doesn't prohibit the use of technologies that cause "unacceptable risk." Upturn is concerned by the legal disclaimer up front that "appears to suggest that law enforcement and national security should operate under a different, softer set of rules and principles." And I'm sure more detailed analysis will reveal other areas for improvement.
Still, overall, the Blueprint is stronger than I had expected it to be.
The next step is to go from principles to changing policy. In a fact sheet, the White House listed over a dozen upcoming agency actions that begin to make the principles concrete. For example, the Department of Labor has released “What the Blueprint for an AI Bill of Rights Means for Workers” and is ramping up enforcement of required surveillance reporting to protect worker organizing. The Department of Education will release recommendations on the use of AI for teaching and learning by early 2023. And this one in particular could have a big impact on the federal government's use of AI:
To guide federal procurement, the Administration will work across agencies to develop new policies and guidance for using and buying AI products and services that are based on effective and promising practices to prevent and address bias and algorithmic discrimination resulting from the use of AI and other advanced technologies.
As well as having an impact at the federal level, the Blueprint could also give a boost to states looking to regulate AI-based systems. One of the topics we discussed on the Washington State Automated Decision-making Systems Workgroup last year was the potential risk that if Washington's government procurement guidelines were far more rigorous than elsewhere, vendors might just ignore us. Aligning with the Blueprint's recommendations is a good way to reduce that risk – and convince legislators that we're not just doing our own thing, but are part of a broader movement.
What about applying these principles to the private sector? As Theodora Lau, co-founder of Unconventional Ventures, says (in Penny Crosman's What the White House’s blueprint for an AI bill of rights means for banks on American Banker), "at least it sends a signal to the industry: Hey, we will be watching." And who knows, companies like Facebook, Palantir, and Clearview AI could voluntarily adhere to the Blueprint's principles ... hahahahaha. Yeah, right.
As Lau says, a nonbinding bill with no enforcement measures is "like a toothless tiger." Khari Johnson goes into more detail in Wired with similar framing: Biden’s AI Bill of Rights Is Toothless Against Big Tech
The White House’s blueprint for AI rights is primarily aimed at the federal government. It will change how algorithms are used only if it steers how government agencies acquire and deploy AI technology, or helps parents, workers, policymakers, or designers ask tough questions about AI systems. It has no power over the large tech companies that arguably have the most power in shaping the deployment of machine learning and AI technology.
Still, everybody agrees the Blueprint is just a first step. Potential next steps to applying the Blueprint in the private sector include:
- States continuing to take the lead as laboratories of democracy; for example, the California Privacy Rights Act has some equity-focused AI regulation, including the ability to opt out of automated profiling, and their new Age-Appropriate Design Code also has some algorithmic regulations.
- The FTC's proposed commercial surveillance rulemaking, whose scope is likely to include AI and other automated decision systems.
- New federal legislation, perhaps passing the Algorithmic Accountability Act of 2022 or strengthening the extremely weak algorithmic sections of the American Data Privacy and Protection Act (ADPPA), which currently fall far short of the AI Bill of Rights' principles.
A more pessimistic scenario is if the current version of ADPPA passes. ADPPA doesn't require notice that AI-based systems are being used, it doesn't require companies to allow people to opt out, and its algorithmic impact assessment requirements don't require third-party evaluation ... so without even digging into the details, it's already struck out on three of the five Blueprint principles. Not only that, ADPPA has some major data privacy loopholes that put pregnant people at risk, it doesn't protect LGBTQIA2S+ people, and it preempts states from passing stronger legislation. In fact, since ADPPA has exceptions for government contractors acting as service providers, it could even undercut progress within the federal government. Let's hope that doesn't happen.
More positively, though, the principles and practices in the Blueprint provide insights for how to improve the ADPPA and other legislation. The backing from the Biden administration is at the very least an important acknowledgement of the breadth and importance of the issues. And the detailed analyses are potentially also a roadmap that could lead to meaningful change.
So kudos to all the advocates and organizations who have been pushing for this – and also to Dr. Nelson and the team that developed it. As Nelson, Dr. Sorelle Friedler (OSTP Assistant Director for Data and Democracy), and Ami Fields-Meyer (Chief of Staff, OSTP Science and Society Division) say in Blueprint for an AI Bill of Rights: A Vision for Protecting Our Civil Rights in the Algorithmic Age:
What could it look like for industry developers and academic researchers to think about equity at the start of a design process, and not only after issues of discrimination emerged downstream? What kind of society could we have if all innovation began with ethical forethought? How do we ensure that the guardrails to which we are entitled in our day-to-day lives carry over into our digital lives?
The Blueprint for an AI Bill of Rights begins to answer these questions. It offers a vision for a society where protections are embedded from the beginning, where marginalized communities have a voice in the development process, and designers work hard to ensure the benefits of technology reach all people.
Image Credit: Willmary Escoto via Twitter, used with permission