Making AI Work for Labor: Transparency, Accountability, and Human Oversight in the Workplace
By Acacia Rodriguez
Artificial Intelligence (AI) has become the go-to solution for “fast and cheap” labor. It’s implemented to answer calls, run reports, write essays, create art, and supplement social media campaigns. However, DC 37 draws the line when AI threatens to replace union jobs or severely impact workers’ job duties.
“AI can be used to bring benefits to a workplace and support employees,” said Brittany Stinson, Deputy Director of DC 37’s Political Action & Legislation Department. “Unfortunately, it’s also being used as a cost-saving measure that displaces employees. We need to be at the decision-making table and emphasize that there must be proper implementation and regulations around AI to make sure our members are able to use it as a supportive tool and not as a mechanism to eventually eliminate their jobs.”

AI systems are fed data from a variety of sources. If an employer determines which sources are valuable and relies on machines to carry out decision-making, worker expertise and discretion are filtered out. Machine-based decision-making often prioritizes employer profits over human needs.
Unions can protect workers by establishing boundaries around AI usage when bargaining collectively. Additionally, clear legislative policies around how AI is used in the workplace limit the displacement of workers and the erosion of human input. That’s why DC 37 is working in conjunction with the AFL-CIO and other labor groups to support legislation aimed at AI oversight.
On Jan. 15, labor leaders submitted testimony to the New York State Senate Standing Committee on Internet and Technology. Backed by members of Local 1549 NYC Clerical Administrative Employees, Executive Director Henry Garrido’s testimony underscored the deskilling and dehumanization of the workforce.
“City agencies such as Health + Hospitals and the Department of Social Services are aggressively introducing AI in work that deals directly with the health and welfare of the public sector and of New Yorkers,” Garrido’s testimony read. “A recent audit revealed that an AI tool that’s being used at the Administration for Children’s Services was trained on data that’s a decade old, and the programs used factors for determining eligibility that are clear proxies for race and socioeconomic status. This algorithm does not empower social workers. It undermines their professional expertise with a biased system.”
DC 37 is monitoring AI programs that pose fundamental challenges to the rights, security, and dignity of union members and the public. In the case of Local 1549’s workers who determine eligibility for SNAP and Medicaid applications, having a person oversee the process can mean the difference between approval of benefits and erroneous denials that strangle an already overwhelmed system.
“Maximus, a government technology company with which the state has held several contracts dating back to former Governor Andrew Cuomo’s administration, incorrectly processed over 30,000 cases using their AI program, affecting countless individuals in need of help and causing our members to work overtime to fix those errors,” Garrido stated. “As a union, we must step in before workers see themselves being replaced due to shortsighted AI adoption.”
Members can support these endeavors by remaining watchful and informed, and by keeping their shop stewards, union representatives, and division heads apprised of any changes in the workplace. While audits of AI usage are helpful for statistics and facts, Garrido encourages members to speak up and share their stories so that bad actors can be held accountable.
Elected officials have access to data, but what they need to hear are individual stories of people impacted by AI. Has AI been implemented in your workplace or changed your job duties? Your story is critical to ensuring AI policies are crafted with worker input.
To submit your testimony about the usage of AI at your agency, click HERE.
DC 37-Supported AI & Worker Protection Legislation

Boundaries on Technology (BOT Act)
In Committee – AFL-CIO Priority
- Restricts workplace electronic monitoring tools that gather employee data.
- Requires employers to give written notice to employees about the use of electronic monitoring and to allow employee access to the data collected.
- Prohibits the use of such tools for bias, discrimination, and exploitation in the workplace.
- Regulates AI tools that evaluate work and make decisions regarding hiring, promotion, and other labor relations issues.
- Requires human oversight of the operations and output of AI systems.
LOADING Act – Part 1
Signed into Law – 2025
- Prohibits state agencies from using automated decision-making systems to deliver public services, materially impact public services, or affect statutory or constitutional rights, unless explicitly authorized by law.
LOADING Act – Part 2
Awaiting Governor’s Signature – 2025
- Expands the LOADING Act Part 1 by extending AI requirements to state, county, and municipal agencies and defining meaningful human oversight.
- Requires any automated systems used by government agencies to undergo regular impact assessments and include meaningful human oversight.
- Requires government agencies to disclose existing automated decision-making systems to the legislature.
This article originally appeared in the January-March 2026 issue of PEPTalk Magazine.