Wiser Workplace

California AI Hiring Rules: FEHA Automated Decision System Regulations Explained

Wiser Workplace is not a law firm and does not provide legal representation. This article is general educational information about California employment law, not legal advice, and does not create an attorney-client relationship. For advice about your specific situation, consult a licensed California attorney. Prior results do not guarantee a similar outcome.

Wiser Workplace Editorial Team

If you applied for a job in California recently, there is a good chance a computer helped decide whether a human ever looked at your resume. Automated tools now screen applications, rank candidates, score video interviews, and flag employees for promotion or discipline. California has new rules that tell employers what they can and cannot do with those tools. The rules took effect on October 1, 2025, and they apply to nearly every employer in the state with five or more employees.

This post walks through the new regulations in plain English. It covers what the rules actually require, what it means for a worker who thinks an algorithm may have treated them unfairly, and what employers should be doing right now to stay on the right side of the law.

What the New AI Rules Actually Cover

In June 2025, the California Civil Rights Council finalized new regulations that add an artificial intelligence layer on top of the Fair Employment and Housing Act, or FEHA. The rules sit in the California Code of Regulations, Title 2, starting at section 11008, and they became enforceable on October 1, 2025.

The rules regulate what they call "Automated Decision Systems," or ADS. That is a broad term. It includes traditional machine learning, generative AI, statistical models, and even simpler rule-based scoring tools, so long as the tool is used to help make an employment decision. Resume screeners, chatbot interviewers, video analysis tools that grade tone of voice or facial expressions, skill assessments, personality tests, and promotion recommendation engines all fall inside the definition.

Two things are worth underlining. First, the rule applies whether the tool makes the final decision or simply "facilitates" a human decision. A hiring manager who rubber-stamps an algorithm's ranking is still subject to the rule. Second, the rule applies to the employer even if the tool was built and run by a third-party vendor. The regulations expressly define "agent" to include vendors performing traditional employer functions like recruiting or screening, which means a company generally cannot hide behind its software provider.

What Employers Are Now Required to Do

The core obligation is familiar: do not discriminate on the basis of a FEHA-protected category, including race, color, national origin, ancestry, religion, sex, gender identity, sexual orientation, age, disability, medical condition, genetic information, marital status, veteran status, and others. What is new is how the rules expect employers to prove they are not discriminating when an algorithm is in the mix.

Anti-Bias Testing That Is Actually Ongoing

The regulations make clear that a one-time pre-launch validation is not enough. Employers using an ADS are expected to conduct anti-bias testing or independent audits that are timely, repeatable, and transparent. In practice, that means periodic fairness checks across protected categories, not a single certificate sitting in a drawer. The more the tool is used and the more consequential the decisions, the more rigorous the testing should be.
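For readers curious what a repeatable fairness check can look like in practice, here is a minimal illustrative sketch using the EEOC's traditional "four-fifths rule" heuristic, which compares selection rates across groups. The FEHA ADS regulations do not prescribe this specific test, and the numbers below are entirely hypothetical; real audits are more sophisticated and should be designed with counsel and qualified analysts.

```python
# Illustrative sketch only: a basic adverse-impact check using the
# EEOC "four-fifths rule" heuristic. The FEHA ADS regulations do not
# mandate this particular test; it is shown as one example of a
# simple, repeatable fairness check. All figures are hypothetical.

# Hypothetical screening outcomes: applicants vs. candidates advanced
# by an automated resume screener, broken out by demographic group.
outcomes = {
    "group_a": {"applied": 200, "advanced": 120},
    "group_b": {"applied": 180, "advanced": 72},
}

def selection_rate(group):
    """Fraction of applicants in a group the tool advanced."""
    return group["advanced"] / group["applied"]

rates = {name: selection_rate(g) for name, g in outcomes.items()}
highest = max(rates.values())

# Compare each group's selection rate to the highest group's rate.
# A ratio below 0.8 is a traditional red flag warranting deeper review.
for name, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{name}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")
```

Running a check like this on a schedule, and keeping the results, is the kind of "timely, repeatable, and transparent" practice the regulations point toward, though passing a four-fifths screen alone does not establish compliance.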

Four-Year Record Retention

Employers must keep ADS-related records for at least four years. That includes the data fed into the system, the outputs such as scores and rankings, the criteria applied to candidates or employees, and the results of any bias testing. If a candidate later challenges a decision, the employer will be expected to produce this record. Companies that cannot do so are in a weaker position on the merits and may face adverse inferences.

Meaningful Human Oversight

The rules require genuine human participation, not a token sign-off. Humans need to understand how the tool affects candidate and employee outcomes and when to intervene. If a company cannot explain how its tool reaches a result, or if no human ever reviews the outputs, the company is carrying a large legal risk.

Notice to Applicants and Employees

Applicants and employees are generally entitled to pre-use and post-use notice when ADS tools are used in decisions affecting them. That includes what the tool does, what data it uses, and how a person can request human review or raise concerns about accuracy.

Common Problem Areas Workers Should Know About

The regulations do not exist in a vacuum; they codify concerns that have been showing up in real cases for years.

A worker who believes an automated tool affected a hiring, promotion, or discipline decision generally has the same FEHA remedies available for any discrimination claim: a complaint with the California Civil Rights Department, filed within three years of the alleged violation, potentially followed by a civil lawsuit if a right-to-sue notice is issued.

What This Means for California Employees

If you are a job applicant or current employee and you suspect an algorithm played a role in an adverse decision, a few practical steps may help preserve your position:

  1. Document the process. Save job postings, emails, application portal screenshots, and any notices the employer provided about automated tools. The new regulations require employers to keep records, but you should keep your own as well.
  2. Ask questions in writing. You may ask the employer how the decision was made and whether automated tools were involved. A written record of your inquiry, and any response or silence, can matter later.
  3. Note the timeline. Discrimination claims under FEHA generally must be filed with the California Civil Rights Department within three years of the adverse action. Do not assume there is unlimited time.
  4. Consider whether a pattern exists. Automated tools often affect many applicants or employees similarly. Talking to others who went through the same process, when appropriate, can help identify class or systemic issues.

None of this is legal advice, and every situation is different. What an individual worker can do depends on the specific facts and the jurisdiction where the work occurred.

What This Means for California Employers

Employers have real exposure here, and vendor contracts alone may not be enough to shift it. The practical steps that reduce risk track the obligations above: ongoing bias testing, four-year record retention, genuine human oversight, and clear notice to applicants and employees.

For small and mid-sized employers, the most common failure mode is assuming that "we just use the vendor's default settings" is a defense. Under these regulations, using an off-the-shelf tool without any internal validation or oversight is often where liability sits.

How the New Rules Fit With Existing Law

The regulations do not replace existing FEHA case law; they layer on top of it. Disparate treatment and disparate impact theories still apply. Claims connected to AI tools may also intersect with the Americans with Disabilities Act, the Age Discrimination in Employment Act, the California Consumer Privacy Act, and the new California AI Transparency Act. Employers that handle health, biometric, or behavioral data through these tools should also be thinking about privacy obligations, not only about discrimination.

Agencies are not the only enforcers. The rules expand potential liability to "agents," which can include vendors that make or influence employment decisions. That could support shared or indirect claims against the companies that sell these tools, not only the companies that deploy them. For more background on related topics, see our guides on workplace privacy rights in California and requesting a reasonable accommodation.

How Wiser Workplace Can Help

Many AI hiring disputes can be resolved faster and more affordably through structured mediation than through a drawn-out litigation process. A neutral mediator can help both sides get on the same page about what a tool actually does, what data exists, and what a reasonable resolution looks like. That is often especially useful where the employer wants to preserve the business relationship or correct a flawed process rather than fight.

Wiser Workplace is a California-based resolution platform that connects workers and employers with experienced neutrals who understand both sides of these disputes. The platform is confidential, protected by California's mediation confidentiality rules, and designed to be accessible to workers without a lawyer and to employers that want to resolve issues efficiently.

The Bottom Line

California's new FEHA regulations treat AI hiring tools for what they are: powerful systems that can replicate old forms of discrimination at a new scale. The rules do not ban these tools. They require employers to test them, document them, oversee them, and tell workers when they are being used. Workers who suspect an algorithm played a role in an unfair decision now have a clearer framework for pushing back, and employers who take the rules seriously have a clearer roadmap for reducing risk.

Legal Disclaimer: This article is for general informational purposes only and does not constitute legal advice. While we aim to provide accurate information about California employment law, the law in this area is new and evolving. Every situation is unique. Wiser Workplace is not a law firm, does not provide legal advice, and does not create an attorney-client relationship.