If you applied for a job in California recently, there is a good chance a computer helped decide whether a human ever looked at your resume. Automated tools now screen applications, rank candidates, score video interviews, and flag employees for promotion or discipline. California has new rules that tell employers what they can and cannot do with those tools. The rules took effect on October 1, 2025, and they apply to nearly every employer in the state with five or more employees.
This post walks through the new regulations in plain English. It covers what the rules actually require, what it means for a worker who thinks an algorithm may have treated them unfairly, and what employers should be doing right now to stay on the right side of the law.
What the New AI Rules Actually Cover
In June 2025, the California Civil Rights Council finalized new regulations that add an artificial intelligence layer on top of the Fair Employment and Housing Act, or FEHA. The rules sit in the California Code of Regulations, Title 2, starting at section 11008, and they became enforceable on October 1, 2025.
The rules regulate what they call "Automated Decision Systems," or ADS. That is a broad term. It includes traditional machine learning, generative AI, statistical models, and even simpler rule-based scoring tools, so long as the tool is used to help make an employment decision. Resume screeners, chatbot interviewers, video analysis tools that grade tone of voice or facial expressions, skill assessments, personality tests, and promotion recommendation engines all fall inside the definition.
Two things are worth underlining. First, the rule applies whether the tool makes the final decision or simply "facilitates" a human decision. A hiring manager who rubber-stamps an algorithm's ranking is still subject to the rule. Second, the rule applies to the employer even if the tool was built and run by a third-party vendor. The regulations expressly define "agent" to include vendors performing traditional employer functions like recruiting or screening, which means a company generally cannot hide behind its software provider.
What Employers Are Now Required to Do
The core obligation is familiar: do not discriminate on the basis of a FEHA-protected category, including race, color, national origin, ancestry, religion, sex, gender identity, sexual orientation, age, disability, medical condition, genetic information, marital status, military and veteran status, and others. What is new is how the rules expect employers to prove they are not discriminating when an algorithm is in the mix.
Anti-Bias Testing That Is Actually Ongoing
The regulations make clear that a one-time pre-launch validation is not enough. Employers using an ADS are expected to conduct anti-bias testing or independent audits that are timely, repeatable, and transparent. In practice, that means periodic fairness checks across protected categories, not a single certificate sitting in a drawer. The more the tool is used and the more consequential the decisions, the more rigorous the testing should be.
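The regulations do not prescribe a particular statistical method, but one widely used first-pass screen comes from the EEOC's Uniform Guidelines: the "four-fifths rule," under which a selection rate for any group below 80 percent of the highest group's rate is treated as evidence of possible adverse impact. Here is a minimal sketch of that check in Python; the group labels and counts are hypothetical:

```python
# Minimal four-fifths (adverse impact) screen for an ADS.
# Group labels and counts are hypothetical; a real audit would use
# actual applicant-flow data and formal significance testing as well.

selections = {
    # group: (candidates_advanced_by_tool, total_candidates)
    "group_a": (48, 100),
    "group_b": (30, 100),
    "group_c": (22, 100),
}

# Selection rate for each group.
rates = {g: passed / total for g, (passed, total) in selections.items()}

# Compare every group's rate to the highest-rate group.
benchmark = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={impact_ratio:.2f} [{flag}]")
```

A ratio below 0.8 is a signal to investigate, not proof of discrimination. The point of the regulations is that checks like this run on a recurring schedule and that the results are kept.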
Four-Year Record Retention
Employers must keep ADS-related records for at least four years. That includes the data fed into the system, the outputs such as scores and rankings, the criteria applied to candidates or employees, and the results of any bias testing. If a candidate later challenges a decision, the employer will be expected to produce this record. Companies that cannot do so are in a weaker position on the merits and may face adverse inferences.
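The regulations describe categories of records rather than a file format, so the sketch below is only one illustrative way to structure a per-decision record. Every field name here is a hypothetical choice, not a regulatory requirement:

```python
# Illustrative per-decision ADS record kept for the four-year
# retention period. The regulations require the substance (inputs,
# outputs, criteria, bias-test results), not this particular shape.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ADSDecisionRecord:
    candidate_id: str                # internal identifier
    tool_name: str                   # which ADS produced the output
    tool_version: str                # model or ruleset version in use
    inputs_reference: str            # pointer to the data fed to the tool
    output_score: float              # score or ranking the tool produced
    criteria_applied: list[str]      # selection criteria in effect
    human_reviewer: str | None       # who reviewed the output, if anyone
    bias_test_reference: str | None  # most recent audit covering this tool
    decided_at: datetime = field(default_factory=datetime.now)

record = ADSDecisionRecord(
    candidate_id="cand-0042",
    tool_name="resume-screener",
    tool_version="2025.09",
    inputs_reference="hr-records/inputs/cand-0042.json",
    output_score=0.71,
    criteria_applied=["required_skills_match", "experience_years"],
    human_reviewer="recruiter-117",
    bias_test_reference="audit-2025-q3",
)
```

Whatever the format, the record should make it possible to reconstruct, years later, what the tool saw, what it produced, and who reviewed the result.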
Meaningful Human Oversight
The rules require genuine human participation, not a token sign-off. Humans need to understand how the tool affects candidate and employee outcomes and when to intervene. If a company cannot explain how its tool reaches a result, or if no human ever reviews the outputs, the company is carrying a large legal risk.
Notice to Applicants and Employees
Applicants and employees are generally entitled to pre-use and post-use notice when ADS tools are used in decisions affecting them. That includes what the tool does, what data it uses, and how a person can request human review or raise concerns about accuracy.
Common Problem Areas Workers Should Know About
The regulations do not exist in a vacuum. They codify concerns that have been showing up in real cases for years. A few patterns to watch for:
- Resume screeners that penalize gaps. Tools that downgrade candidates for employment gaps may have a disparate impact on people who took caregiving leave, dealt with a medical condition, or served in the military. FEHA prohibits facially neutral practices that produce a discriminatory effect without a sufficient business justification.
- Video interview scoring. Algorithms that rate tone, word choice, or facial expressions may systematically downgrade candidates with disabilities, accents, or cultural differences. These tools are often marketed as "objective," but that framing does not insulate them from FEHA.
- Personality and cognitive tests. Assessments that screen for traits correlated with disability or neurodivergence raise risks under both FEHA and the Americans with Disabilities Act. Employers should be able to show that the test is job-related and consistent with business necessity.
- Promotion and performance tools. Internal tools that predict which employees to promote or terminate can reflect and amplify patterns of past bias. The new regulations apply to these tools just as much as to external hiring tools.
A worker who believes an automated tool affected a hiring, promotion, or discipline decision generally has the same FEHA remedies available for any discrimination claim. That includes filing a complaint with the California Civil Rights Department within three years of the alleged violation and, once a right-to-sue notice issues, potentially pursuing a civil lawsuit.
What This Means for California Employees
If you are a job applicant or current employee and you suspect an algorithm played a role in an adverse decision, a few practical steps may help preserve your position:
- Document the process. Save job postings, emails, application portal screenshots, and any notices the employer provided about automated tools. The new regulations require employers to keep records, but you should keep your own as well.
- Ask questions in writing. You may ask the employer how the decision was made and whether automated tools were involved. A written record of your inquiry, and any response or silence, can matter later.
- Note the timeline. Discrimination claims under FEHA generally must be filed with the California Civil Rights Department within three years of the adverse action. Do not assume there is unlimited time.
- Consider whether a pattern exists. Automated tools often affect many applicants or employees similarly. Talking to others who went through the same process, when appropriate, can help identify class or systemic issues.
None of this is legal advice, and every situation is different. What an individual worker can do depends on the specific facts and the jurisdiction where the work occurred.
What This Means for California Employers
Employers have real exposure here, and vendor contracts alone may not be enough to shift that exposure. Practical steps that generally reduce risk include:
- Inventory every ADS in use. Recruiting, screening, interview scoring, performance management, scheduling, promotion, and discipline tools are all on the list. Many employers underestimate how many algorithms touch their workforce; a simple sketch of what an inventory can look like follows this list.
- Demand audit evidence from vendors. Ask for current fairness testing results, including results broken out by protected categories. Bake audit rights and cooperation into vendor contracts.
- Build your own records. Retain inputs, outputs, criteria, and audit results for at least four years. A vendor's records are not a substitute for the employer's own.
- Train the humans in the loop. A human reviewer who does not understand the tool is a legal liability rather than a safeguard. Document training and review procedures.
- Provide required notices. Clear, accessible pre-use and post-use notices help satisfy the regulations and reduce the odds of a dispute later.
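Because the inventory step is where many employers stumble, here is a deliberately simple sketch of what an ADS inventory can capture. The tools, vendors, and fields are hypothetical examples, not a prescribed format:

```python
# Hypothetical ADS inventory: one entry per tool that touches an
# employment decision. The goal is completeness, not sophistication.
ads_inventory = [
    {
        "tool": "resume-screener",
        "vendor": "ExampleVendor Inc.",          # hypothetical vendor
        "decision_affected": "initial screening",
        "data_used": ["resume text", "application answers"],
        "last_bias_audit": "2025-09-15",
        "human_reviewer_role": "recruiter",
        "notice_provided": True,
    },
    {
        "tool": "video-interview-scoring",
        "vendor": "ExampleVendor Inc.",
        "decision_affected": "interview ranking",
        "data_used": ["speech-to-text transcript"],
        "last_bias_audit": None,                 # a gap worth flagging
        "human_reviewer_role": "hiring manager",
        "notice_provided": False,
    },
]

# Surface the gaps: tools with no audit on file or no notice given.
for entry in ads_inventory:
    if entry["last_bias_audit"] is None or not entry["notice_provided"]:
        print(f"Compliance gap: {entry['tool']}")
```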
For small and mid-sized employers, the most common failure mode is assuming that "we just use the vendor's default settings" is a defense. Under these regulations, using an off-the-shelf tool without any internal validation or oversight is often where liability sits.
How the New Rules Fit With Existing Law
The regulations do not replace existing FEHA case law; they layer on top of it. Disparate treatment and disparate impact theories still apply. Claims connected to AI tools may also intersect with the Americans with Disabilities Act, the Age Discrimination in Employment Act, the California Consumer Privacy Act, and the new California AI Transparency Act. Employers that handle health, biometric, or behavioral data through these tools should also be thinking about privacy obligations, not only about discrimination.
Employers are not the only parties exposed. The rules expand potential liability to "agents," which can include vendors that make or influence employment decisions. That could support shared or indirect claims against the companies that sell these tools, not only the companies that deploy them. For more background on related topics, see our guides on workplace privacy rights in California and requesting a reasonable accommodation.
How Wiser Workplace Can Help
Many AI hiring disputes can be resolved faster and more affordably through structured mediation than through a drawn-out litigation process. A neutral mediator can help both sides get on the same page about what a tool actually does, what data exists, and what a reasonable resolution looks like. That is often especially useful where the employer wants to preserve the business relationship or correct a flawed process rather than fight.
Wiser Workplace is a California-based resolution platform that connects workers and employers with experienced neutrals who understand both sides of these disputes. The platform is confidential, protected by California's mediation confidentiality rules, and designed to be accessible to workers without a lawyer and to employers that want to resolve issues efficiently.
The Bottom Line
California's new FEHA regulations treat AI hiring tools for what they are: powerful systems that can replicate old forms of discrimination at a new scale. The rules do not ban these tools. They require employers to test them, document them, oversee them, and tell workers when they are being used. Workers who suspect an algorithm played a role in an unfair decision now have a clearer framework for pushing back, and employers who take the rules seriously have a clearer roadmap for reducing risk.