AI pervades everyday life with almost no oversight. States scramble to catch up
DENVER (AP) — While artificial intelligence made headlines with ChatGPT, behind the scenes the technology has quietly pervaded everyday life — screening job resumes, rental apartment applications, and even determining medical care in some cases.
While a number of AI systems have been found to discriminate, tipping the scales in favor of certain races, genders or incomes, there’s scant government oversight.
Lawmakers in at least seven states are taking big legislative swings to regulate bias in artificial intelligence, filling a void left by Congress’ inaction. These proposals are some of the first steps in a decades-long discussion over balancing the benefits of this nebulous new technology with the widely documented risks.
“AI does in fact affect every part of your life whether you know it or not,” said Suresh Venkatasubramanian, a Brown University professor who co-authored the White House’s Blueprint for an AI Bill of Rights.
“Now, you wouldn’t care if they all worked fine. But they don’t.”
Success or failure will depend on lawmakers working through complex problems while negotiating with an industry worth hundreds of billions of dollars and growing at breakneck speed.
Last year, only about a dozen of the nearly 200 AI-related bills introduced in statehouses were passed into law, according to BSA The Software Alliance, which advocates on behalf of software companies.
Those bills, along with the over 400 AI-related bills being debated this year, were largely aimed at regulating smaller slices of AI. That includes nearly 200 targeting deepfakes, including proposals to bar pornographic deepfakes, like those of Taylor Swift that flooded social media. Others are trying to rein in chatbots, such as ChatGPT, to ensure they don’t cough up instructions to make a bomb, for example.
Those are separate from the seven state bills that would apply across industries to regulate AI discrimination — one of the technology’s most perverse and complex problems — being debated from California to Connecticut.
Those who study AI’s penchant to discriminate say states are already behind in establishing guardrails. The use of AI to make consequential decisions — what the bills call “automated decision tools” — is pervasive but largely hidden.
It’s estimated as many as 83% of employers use algorithms to help in hiring; that’s 99% for Fortune 500 companies, according to the Equal Employment Opportunity Commission.
Yet the majority of Americans are unaware that these tools are being used, polling from Pew Research shows, let alone whether the systems are biased.
An AI can learn bias through the data it’s trained on, typically historical data that can hold a Trojan Horse of past discrimination.
Nearly a decade ago, Amazon scuttled a hiring algorithm project after it was found to favor male applicants. The AI was trained to assess new resumes by learning from past resumes — largely those of male applicants. While the algorithm didn’t know the applicants’ genders, it still downgraded resumes containing the word “women’s” or listing women’s colleges, in part because they were not represented in the historical data it learned from.
“If you are letting the AI learn from decisions that existing managers have historically made, and if those decisions have historically favored some people and disfavored others, then that’s what the technology will learn,” said Christine Webber, the attorney in a class-action lawsuit alleging that an AI system scoring rental applicants discriminated against those who were Black or Hispanic.
Court documents describe how one of the lawsuit’s plaintiffs, Mary Louis, a Black woman, applied to rent an apartment in Massachusetts and received a cryptic response: “The third-party service we utilize to screen all prospective tenants has denied your tenancy.”
When Louis submitted two landlord references to show she’d paid rent early or on time for 16 years, court records say, she received another reply: “Unfortunately, we do not accept appeals and cannot override the outcome of the Tenant Screening.”
That lack of transparency and accountability is, in part, what the bills are targeting, following the lead of California’s failed proposal last year — the first comprehensive attempt at regulating AI bias in the private sector.
Under the bills, companies using these automated decision tools would have to do “impact assessments,” including descriptions of how AI figures into a decision, the data collected and an analysis of the risks of discrimination, along with an explanation of the company’s safeguards. Depending on the bill, those assessments would be submitted to the state or regulators could request them.
Some of the bills would also require companies to tell customers that an AI will be used in making a decision, and allow them to opt out, with certain caveats.
Craig Albright, senior vice president of U.S. government relations at BSA, the industry lobbying group, said its members are generally in favor of some steps being proposed, such as impact assessments.
“The technology moves faster than the law, but there are actually benefits for the law catching up. Because then (companies) understand what their responsibilities are, consumers can have greater trust in the technology,” Albright said.
But it’s been a lackluster start for the legislation. A bill in Washington state has already foundered in committee, and a California proposal introduced in 2023, on which many of the current proposals are modeled, also died.
California Assembly member Rebecca Bauer-Kahan has revamped her legislation that failed last year with the support of some tech companies, such as Workday and Microsoft, after dropping a requirement that companies routinely submit their impact assessments. Other states where bills are, or are expected to be, introduced are Colorado, Rhode Island, Illinois, Connecticut, Virginia and Vermont.
While these bills are a step in the right direction, said Venkatasubramanian of Brown University, the impact assessments and their ability to catch bias remain vague. Without greater access to the reports — which many of the bills limit — it’s also hard to know whether a person has been discriminated against by an AI.
A more intensive but accurate way to identify discrimination would be to require bias audits — tests to determine whether an AI is discriminating or not — and to make the results public. That’s where the industry pushes back, arguing that would expose trade secrets.
Requirements to routinely test an AI system aren’t in most of the legislative proposals, nearly all of which still have a long road ahead. Still, it’s the start of lawmakers and voters wrestling with what’s becoming, and will remain, an ever-present technology.
“It covers everything in your life. Just by virtue of that you should care,” said Venkatasubramanian.
——-
Associated Press reporter Trân Nguyễn in Sacramento, California, contributed.