A recently released report from the Connecticut Advisory Committee to the U.S. Commission on Civil Rights focuses on the civil rights implications of algorithms currently used by government.
The report documents the ways in which algorithms can be discriminatory, sometimes because they rely on discriminatory data sets and sometimes because they mischaracterize or misinterpret data, and how government decision makers use them.
“While an algorithm’s potential to perpetuate discrimination is troubling in the private sector, it is all the more concerning when used by government,” the report states, citing the example of artificial intelligence used by police to predict areas of high crime and to concentrate police presence there as a deterrent to criminal behavior. The result is more arrests in the targeted area, leading the algorithm to predict an even higher crime rate there when the new arrest data is fed back into it.
“This feedback loop could thus create more problems of over-policing and put more people of color into the criminal justice system,” the report states. It further cites New York City’s stop-and-frisk policy, which allowed police to stop and interrogate New Yorkers on the basis of reasonable suspicion and resulted in racial profiling, as evidence that this kind of algorithm-driven bias is more than hypothetical.
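The dynamic the report describes can be illustrated with a small simulation. The sketch below is not drawn from the report and uses entirely hypothetical numbers: two areas are assumed to have identical underlying crime rates, extra patrols are sent wherever recorded arrests are highest, and recorded arrests rise with patrol presence, so a small initial disparity in the arrest data becomes a lasting one.

```python
# Purely illustrative sketch of the feedback loop described above (not taken
# from the report). Two areas have the same underlying crime rate, but extra
# patrols go to whichever area had more recorded arrests, and more patrols
# produce more recorded arrests, so the initial disparity is locked in.
# All numbers are hypothetical.

def simulate_feedback(recorded_arrests, true_rate=0.5, base_patrols=10,
                      extra_patrols=10, rounds=5):
    """Return per-round recorded arrests when the extra patrols always go to
    the area with the most arrests in the previous round's data."""
    history = [list(recorded_arrests)]
    for _ in range(rounds):
        # The "predicted hot spot" is simply the area with the most recorded arrests.
        hotspot = recorded_arrests.index(max(recorded_arrests))
        patrols = [base_patrols] * len(recorded_arrests)
        patrols[hotspot] += extra_patrols  # concentrate presence in that area
        # More officers in an area means more offenses are observed and recorded
        # there, even though true_rate is assumed identical everywhere.
        recorded_arrests = [true_rate * p for p in patrols]
        history.append(list(recorded_arrests))
    return history

# Area B starts with slightly more recorded arrests (e.g., from historical
# over-policing); after one round it is recorded at twice area A's level,
# and the gap never closes.
for round_data in simulate_feedback([49.0, 51.0]):
    print(round_data)
```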
The report also highlights a lack of transparency in the government’s use of algorithms. According to testimony cited in the report, much of the general public is unaware of how algorithms are used in government decision-making. The report states this makes it “difficult for the public to take any meaningful steps to hold the government accountable.”
Further complicating the issue are limits to the state’s freedom of information law. “Algorithms are not specifically named in most Freedom of Information legislation, leading agencies to believe they are not covered by those laws. Some agencies have said they believe algorithms are covered by FOIA requests but do not believe they have any responsive data as the algorithms are run by private corporations,” the report states.
Another issue is the trade secret exemption to freedom of information laws. Connecticut’s Freedom of Information Act (FOIA) roughly defines trade secrets as information that derives independent economic value from not being generally known and that is the subject of reasonable efforts to maintain its secrecy, or as commercial information given in confidence.
According to the report, Connecticut state agencies have cited this statute to claim “that providing the source code or supporting documentation for an algorithm would violate the developer’s rights to keep information from the public.”
The difficulty of obtaining data on algorithms through FOIA requests is also the subject of a January 2022 study from Yale Law School’s Media Freedom and Information Access Clinic (MFIA). As part of its Algorithmic Accountability Project, the clinic filed FOIA requests with the state’s Department of Children and Families (DCF), Department of Education (DOE), and Department of Administrative Services (DAS).
The requests sought information about algorithms known to be used by the three departments, including what the algorithms did, how they were acquired, and how they were tested. The study also aimed to gauge “the extent to which answers to these questions can be obtained under current disclosure laws.”
According to MFIA, the agencies’ responses were “generally deficient across all four metrics.” The clinic did not receive a complete response to any of the three requests and received no records from the DAS, which invoked the trade secret exemption to withhold most of the documents the clinic had requested.
Further, MFIA received responses from all three agencies stating their algorithms were proprietary and not subject to disclosure.
“The use of algorithms by the three agencies we tested remains opaque in several ways: the lack of information in the possession of the agency concerning the operation of algorithms used, any assessments of effectiveness or bias, and the manner of its procurement,” the MFIA white paper concludes.
Data from the MFIA study was incorporated into some of the committee’s findings and recommendations. In total, the committee’s report makes four findings, each of which includes a set of recommendations for the state’s legislature and its agencies.
Among its recommendations is that the government include people from the protected classes most adversely affected by algorithms in its efforts to monitor and assess those systems. The report also recommends the state create a public education campaign to raise awareness of the government’s use of automated decision-making. Other recommendations include a publicly available dashboard listing which government agencies use automated decision-making, frequent independent audits of algorithms, and revisions to the state’s freedom of information laws to require disclosure of the publicly available data sources used by government algorithms.
According to David McGuire, the committee’s chairperson, government action is a crucial next step in managing the public sector’s use of algorithms.
“I think the legislature stepping in is crucial, and our committee members heard a lot of testimony that made us understand how big of a role algorithms are going to play in our country moving forward. It’s now a question of the government stepping in to regulate its use of this technology,” said McGuire.
McGuire added that it’s “unclear whether the state even could if they needed to right now name all of the agencies that use automated decision making and what it’s used for.” He said he was unaware if there is a central government hub that documents all its uses of algorithms, which he called “troubling.”
But McGuire expressed optimism that the legislature is taking the government’s use of algorithms seriously. He referenced legislation mentioned by Sen. James Maroney, D-Milford, during an April 25 press conference about the release of the commission’s report.
He said the bill, SB 1103, “does have many of the facets we’re calling on the states to enshrine in law and figure out, mainly around the transparency and also mandating that algorithms be assessed and evaluated and audited regularly. I think there seems to be a general thought across the legislature that that makes sense.”
SB 1103, which was advanced out of the Committee on General Law by a unanimous vote and now awaits a vote in the Senate, is focused on artificial intelligence, automated decision-making, and personal data privacy.
If passed, the bill would require the Office of Policy and Management to designate an artificial intelligence (AI) officer to adopt procedures for the use of automated systems. It would also require DAS to designate an AI implementation officer to inventory automated systems.
The bill also would establish the Connecticut Artificial Intelligence Advisory Board within the legislative branch and establish a task force to study AI and make recommendations on adoption of an AI bill of rights. Further, it would prohibit state agencies from entering into a contract without a provision requiring service providers to comply with consumer data privacy law.
Maroney said the report’s recommendations about transparency will be addressed in SB 1103 by a provision that will “have DAS create an inventory of all the agencies that are currently using automated decision tools and that will be posted on the open data portal.” Additionally, Maroney said the bill would create “policies and procedures and agencies will not be able to put automated decision tools into effect using algorithms for what we’re going to call critical decisions unless they follow those policies and procedures.”
If implemented, the bill would also require the AI officer to conduct an assessment of the impact automated decision-making tools have on residents of the state.
“Exactly what that impact assessment will look like will be determined in the policies and procedures, but in the law we’re going to put that after January 1, 2025, you cannot put any automated decision tools in effect without first doing an impact assessment. And if any of those impact assessments undergo a significant update, they will have to have a new impact assessment done,” said Maroney.