NYC Law 144 & EU AI Act: The Compliance Trap Catching Thousands of Companies
Discover NYC Law 144 & the EU AI Act compliance trap. Avoid fines, lawsuits, and penalties with explainable AI and proper AEDT audits.
On July 5, 2023, New York City began enforcing Local Law 144, the first U.S. statute to impose operational requirements on automated hiring systems. According to the NYC Department of Consumer and Worker Protection (DCWP), employers and employment agencies that use Automated Employment Decision Tools (AEDTs) must have each tool independently bias-audited within the previous 12 months, post audit results publicly, and give NYC applicants at least 10 business days’ notice before the tool is used. Failure to meet these requirements can trigger civil penalties assessed per violation: up to $500 for a first violation and up to $1,500 for each subsequent violation, with each day a non-compliant tool remains in use counting as a separate violation. These penalties rapidly add up into thousands or even millions for employers that process many NYC candidates without the required notices. (See DCWP guidance and legal summaries.)
What makes this a real compliance trap is scope and execution. The DCWP’s guidance and industry legal briefings underline that Local Law 144 applies where AEDTs are used “in the City” — a definition that can reach remote roles that are based in New York City or otherwise target NYC applicants, and it can therefore capture organizations with distributed or offshore hiring models. Independent compliance reviews and academic audits show that a large share of employers are not yet meeting the public-posting and notice obligations: one empirical study that surveyed employer postings found audit reports and transparency notices to be rare, highlighting a substantial compliance gap. Combine that gap with high candidate volumes and the per-violation penalty structure, and the math becomes simple and stark: a screening workflow that touches 100 NYC applicants in a week without proper notice could generate $1,500 × 100 = $150,000 in weekly penalties — roughly $7.8 million if repeated over a year — not counting parallel litigation exposure that Part 1 of this series warned may total into the billions.
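The penalty arithmetic above can be sketched in a few lines. The applicant volume is the article's hypothetical, and actual penalties are assessed per violation by the DCWP, so treat this as an exposure estimate, not a legal calculation:

```python
# Sketch of the penalty-exposure arithmetic above. The applicant volume is
# a hypothetical; actual penalties are assessed per violation by the DCWP.
MAX_PENALTY_PER_VIOLATION = 1_500   # Local Law 144 cap for subsequent violations
applicants_per_week = 100           # hypothetical weekly NYC applicant volume
weeks_per_year = 52

weekly_exposure = MAX_PENALTY_PER_VIOLATION * applicants_per_week
annual_exposure = weekly_exposure * weeks_per_year

print(f"Weekly exposure: ${weekly_exposure:,}")   # $150,000
print(f"Annual exposure: ${annual_exposure:,}")   # $7,800,000
```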
What Is NYC Local Law 144? (And Why You Should Care)
NYC Local Law 144 regulates “Automated Employment Decision Tools” (AEDTs)—any AI system used to screen candidates or employees for hiring or promotion decisions. Understanding what counts as an AEDT is crucial for avoiding costly fines and legal exposure.
What Counts as an AEDT? According to Deloitte’s legal analysis, tools considered AEDTs include:
- ✅ AI resume screening tools
- ✅ Video interview analysis platforms (e.g., HireVue, Spark Hire)
- ✅ Candidate assessment algorithms (e.g., Pymetrics, Criteria)
- ✅ Automated reference checking tools
- ✅ Chatbots that pre-screen candidates
- ✅ Skills matching algorithms
Tools not covered under the law include:
- ❌ Applicant tracking systems that only store or organize data without AI scoring
- ❌ Recruiting outreach tools (used only for sourcing)
- ❌ Background check services
The Gray Zone: Most modern ATS platforms like Workday, Greenhouse, or Lever now include AI features. If your ATS performs automated scoring, ranking, or candidate recommendations, it likely qualifies as an AEDT. Failing to recognize this can put your organization in violation.
For official guidance on NYC AEDTs, see the NYC DCWP overview and the AEDT FAQ (PDF).
The Three Mandatory Requirements of NYC Local Law 144 (Get One Wrong = Violation)
Complying with NYC Local Law 144 means meeting three critical requirements for any Automated Employment Decision Tool (AEDT) you use. Missing even one can result in substantial fines.
Requirement 1: Annual Bias Audit (Publicly Posted)
Your AEDT must undergo an independent bias audit within the past 12 months. The audit must:
Test for Disparate Impact:
- Selection rates by race/ethnicity
- Selection rates by sex
- Impact ratios comparing protected groups to the most-selected group
Be Publicly Available:
- Posted on your company website
- No password protection or access barriers
- Include methodology, data sources, and results
Be Conducted by an Independent Auditor:
- The auditor cannot be the AEDT vendor, cannot have been involved in using or developing the tool, and cannot have a financial interest in it
Audit Cost: Typically $15,000–$30,000 per tool, per year
The Trap: Dorf Nelson & Zauderer LLP warns that using multiple AI tools (e.g., resume screening + video interviews + skills tests) requires separate audits for each.
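As a rough sketch, the disparate-impact numbers a bias audit reports can be computed as each category's selection rate divided by the rate of the most-selected category. The group labels and counts below are hypothetical illustrations, not figures from any real audit:

```python
# Minimal sketch of the disparate-impact calculation a Law 144 bias audit
# reports. Group labels and counts are hypothetical illustrations.
selected = {"Group A": 182, "Group B": 89}
applicants = {"Group A": 1000, "Group B": 500}

selection_rates = {g: selected[g] / applicants[g] for g in selected}
top_rate = max(selection_rates.values())
impact_ratios = {g: rate / top_rate for g, rate in selection_rates.items()}

for g in selection_rates:
    print(f"{g}: selection rate {selection_rates[g]:.1%}, "
          f"impact ratio {impact_ratios[g]:.2f}")
```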
Requirement 2: Candidate Notification (10 Days Before Screening)
All NYC resident candidates must receive clear notification at least 10 business days before an AEDT is used. According to Norton Rose Fulbright, the notice must include:
Required Elements:
- That an automated tool will be used
- The job qualifications and characteristics the AEDT will assess
- Instructions for requesting an alternative selection process or accommodation
- Data retention policy for information collected through the AEDT
Sample Compliant Notice:
AUTOMATED HIRING TOOL NOTICE
[Company Name] uses an AI-powered tool to evaluate applications for this role.
WHAT IT DOES:
The tool analyzes resumes for skills, experience, and qualifications, such as Python programming, SQL, project management experience, years of experience, and degree requirements.
YOUR RIGHTS:
- Request a human review of your application
- Request accommodation if you have a disability
- Contact: hiring@company.com or (555) 123-4567
DATA RETENTION:
Application data retained for 3 years per company policy.
For bias audit results, see [Link to public audit results].
The Trap: Notification must occur before screening, not after rejection. Auto-rejecting a candidate before sending notice violates the law.
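One way to keep the notice window honest in an ATS integration is to compute the earliest permissible screening date from the notice date. The sketch below assumes "business days" means weekdays and ignores public holidays for simplicity:

```python
# Sketch of the 10-business-day notice window check. Assumes "business days"
# means weekdays (Mon-Fri) and ignores public holidays for simplicity.
from datetime import date, timedelta

def earliest_screening_date(notice_sent: date, business_days: int = 10) -> date:
    """Return the first date the AEDT may be used after notice is sent."""
    d = notice_sent
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# Notice sent Monday 2025-11-03: screening may not start before 2025-11-17.
print(earliest_screening_date(date(2025, 11, 3)))
```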
Requirement 3: Alternative Evaluation Process
Candidates must have the option to request an alternative to the AI evaluation. According to Fairly AI’s implementation guide, compliant alternatives include:
Acceptable Options:
- ✅ Human recruiter review instead of AI screening
- ✅ Phone screening instead of video AI analysis
- ✅ Portfolio submission in place of automated skills tests
Non-Compliant Practices:
- ❌ “You can’t opt out, but we’ll have a human review the AI’s decision”
- ❌ “We don’t offer alternatives”
Providing a true alternative ensures candidates’ rights while keeping your organization compliant.
The EU AI Act: Global Compliance or Global Liability
While NYC Local Law 144 governs hiring practices in New York City, the EU AI Act establishes compliance obligations for any multinational company using AI in recruitment. The Act entered into force in August 2024, and its obligations for high-risk systems, which include AI hiring tools, phase in through 2026 and 2027. Failure to comply can trigger substantial penalties and global operational implications.
Transparency Obligations
The EU AI Act emphasizes full transparency for AI-driven HR systems:
- Candidates must be informed whenever an AI system is used in hiring or promotion decisions.
- Companies must explain how the AI works, ensuring there are no “black box” decisions.
- An audit trail is required for every AI decision, documenting how candidate data influenced outcomes.
These measures ensure applicants can understand and challenge automated decisions, promoting fairness and accountability in hiring.
High-Risk System Classification
AI hiring tools fall under the “high-risk” category according to the EU AI Act. Obligations include:
- Pre-deployment conformity assessments to verify compliance with legal and ethical standards.
- Ongoing monitoring for bias, accuracy, and effectiveness throughout the AI system’s lifecycle (per EY Global’s analysis).
This classification means even small errors or overlooked bias in HR AI can trigger regulatory scrutiny and reputational risk across all company operations.
Penalties for Non-Compliance
The EU AI Act imposes strict penalties for violations:
- Up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations; lower tiers (e.g., up to €15 million or 3%) apply to other breaches.
- Fines are calculated on worldwide revenue, so if your company employs staff or conducts hiring in EU countries, exposure is not limited to the scale of your EU operations.
The Global Trap: Any company with employees or operations in the EU must comply with the EU AI Act across its entire global hiring process—not just for EU-based hires—creating a potential global liability risk for non-compliance.
Real‑World Enforcement: Why AEDT Compliance Matters
Since NYC Local Law 144 came into effect on July 5, 2023, the New York City Department of Consumer and Worker Protection (DCWP) has accepted complaints and can investigate employers that use Automated Employment Decision Tools (AEDTs) without meeting the law’s requirements: conducting bias audits, publicly posting results, notifying candidates before AI screening, and providing alternative evaluation options. According to the official AEDT FAQ, failure to comply can trigger civil penalties and enforcement actions, making adherence mandatory rather than merely recommended.
Research indicates that compliance gaps are widespread. A 2024 empirical study found that very few employers had posted bias audit summaries or provided transparency notices as required. Only a small fraction of organizations made these disclosures publicly accessible — suggesting that many companies remain non-compliant, whether knowingly or inadvertently (arXiv, 2024). Experts warn that failing to address these gaps leaves organizations exposed to fines, enforcement actions, and reputational risk (arXiv, 2024).
The Compliance Checklist: Are You Violating Right Now?
🚨 High-Risk Violations (Fix Immediately)
☑ Using AI/ATS to screen NYC candidates without a bias audit
☑ No candidate notification sent before AI screening
☑ Bias audit results not publicly posted
☑ No alternative evaluation process offered
☑ Audit is older than 12 months
☑ Using multiple AI tools but only audited one
If any box is checked, you are in violation and immediate action is required (Norton Rose Fulbright, 2025).
⚠️ Medium-Risk Issues (Fix Within 30 Days)
- Notification missing required elements
- Bias audit conducted by AI vendor, not independent
- Audit doesn’t test for both race/ethnicity and sex
- Data retention policy not disclosed
- Alternative process is unclear or burdensome
Dorf Nelson & Zauderer LLP warns that ignoring these medium-risk issues can escalate compliance risk.
✅ Compliant Profile
- Independent bias audit within last 12 months
- Audit results publicly posted without barriers
- Candidates notified 10+ days before AI screening
- Notification includes all required elements
- Alternative evaluation process clearly offered
- Data retention policy disclosed
- Separate audits for each AI tool used
How to Get Compliant: 5-Step Action Plan
Step 1: Audit Your AI Tools (This Week)
- Make a list of all AI tools used in hiring:
- Resume screening (Workday, Greenhouse AI features)
- Video interviews (HireVue, Spark Hire)
- Skills assessments (Codility, HackerRank)
- Personality tests (Pymetrics, Criteria)
- Checklist:
- Does it automatically screen, score, or rank candidates? (AEDT)
- When was the last bias audit? (<12 months)
- Are NYC candidates being screened? (If yes, Law 144 applies)
(BABL AI, 2024 provides guidance on identifying AEDTs.)
Step 2: Conduct Bias Audit (Weeks 2–4)
- Choose Independent Auditor:
- Fairly AI – $15K–25K/tool
- BABL AI – $20K–30K/tool
- Holistic AI – custom pricing
- Timeline: 3–4 weeks
- Deliverables: Selection rate analysis by race/sex, impact ratios, compliance certification, public audit summary
(Fairly AI, 2025 explains audit methodology for NYC compliance.)
Step 3: Update Candidate Notification (Week 3)
- Template: See “Sample Compliant Notice” in Part 2
- Where to Post:
- Job application page (before Submit)
- Email confirmation
- Careers site FAQ
(Littler, 2023 emphasizes that timely notification is legally required.)
Step 4: Publish Audit Results (Week 4)
- Public Page: yourcompany.com/ai-hiring-audit
- No password protection
- Include audit date, methodology, results, auditor name
- Update annually
Sample Page Content:
AI HIRING BIAS AUDIT RESULTS
Last Updated: November 2025
Auditor: Fairly AI (Independent)
TOOLS AUDITED:
- Resume Screening AI
  - Selection Rate (White): 18.2%
  - Selection Rate (Black): 17.8%
  - Impact Ratio: 0.98 (COMPLIANT)
- Video Interview AI
  - Selection Rate (Male): 24.1%
  - Selection Rate (Female): 23.6%
  - Impact Ratio: 0.98 (COMPLIANT)
Full methodology: [Download PDF]
Next audit scheduled: November 2026
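For reference, the 0.98 impact ratios in the sample page follow directly from the posted selection rates. A short sketch reproducing them, with the percentages copied from the sample content above:

```python
# Reproduces the impact ratios shown in the sample audit page above from
# the posted selection rates (percentages copied from the sample content).
tools = {
    "Resume Screening AI": {"White": 18.2, "Black": 17.8},
    "Video Interview AI": {"Male": 24.1, "Female": 23.6},
}

for tool, rates in tools.items():
    top = max(rates.values())
    for group, rate in rates.items():
        print(f"{tool} / {group}: impact ratio {rate / top:.2f}")
```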
Step 5: Establish Alternative Process (Week 4)
- Human Review Option:
- Checkbox: “Request human review instead of AI screening”
- Train HR team (2–3 hours/week capacity)
- Respond within 5 business days
- Cost: $20–30K/year
(Deloitte, 2023 highlights importance of alternative evaluation to comply with Law 144.)
💰 The Hidden Cost: What Compliance Actually Takes
| Activity | Frequency | Cost | Annual Total |
|---|---|---|---|
| Bias Audit | Annual | $15K–30K/tool | $15K–$90K |
| Auditor Retainer | Ongoing | $5K/quarter | $20K |
| Legal Review | Annual | $10K–20K | $15K |
| Alternative Process | Ongoing | $2K/month | $24K |
| Candidate Notifications | Automated | $1K setup | $1K |
| Staff Training | Quarterly | $3K | $12K |
| **TOTAL** | – | – | $87K–$162K |
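The annual totals can be cross-checked by summing the line items. The sketch below uses the low and high ends of each range from the table, in $K:

```python
# Cross-check of the annual totals in the compliance-cost table above
# (figures in $K; low/high ends of each line item's range).
line_items = {
    "Bias Audit": (15, 90),
    "Auditor Retainer": (20, 20),
    "Legal Review": (15, 15),
    "Alternative Process": (24, 24),
    "Candidate Notifications": (1, 1),
    "Staff Training": (12, 12),
}
low = sum(lo for lo, _ in line_items.values())
high = sum(hi for _, hi in line_items.values())
print(f"Annual total: ${low}K–${high}K")  # $87K–$162K
```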
- Non-Compliance Cost:
- NYC Law 144 fines: up to $1,500 per violation, accruing daily while a non-compliant tool remains in use (roughly $10,000+ per week per tool)
- Class action exposure: $500K–$5M per lawsuit
- EU AI Act fines: up to €35M or 7% global revenue
ROI: Avoid $1M+ in fines/lawsuits for ~$100K/year investment (According to Norton Rose Fulbright, 2025)
🌎 What’s Coming Next: More Regulations, More States
- State Legislation: 10+ states drafting AI hiring laws modeled on NYC Law 144
- California: stricter version likely 2026
- Illinois: AI hiring transparency bill introduced
- Massachusetts: “lie detector” law covers some AI
- Federal Proposal: “AI Accountability Act”
- Nationwide bias audits
- Private right of action
- Timeline: Federal law expected by 2027–2028
(The American Bar Association, 2024, notes that early adoption trends indicate rapid expansion of state-level AI hiring regulation.)
⚡ The Only Real Solution: Explainable AI
Bias audits show past discrimination but don’t prevent future violations.
Only explainable AI can prove, in real-time, that decisions are based on skills, not demographics.
In Part 3, we will show how explainable AI is the only legal defense.
🚀 Take Action: Start Your Compliance Journey with AIRA
📖 Read the Full Series
Part 1: The $50 Billion Lawsuit Wave: Why AI Hiring Is the New Asbestos
Part 2: You are here
Part 3: Explainable AI: The Only Legal Defense Against $50 Billion in Discrimination Lawsuits
Who AIRA Helps — At Each Step of the Talent Lifecycle
👩💼 For HR Managers & Talent Leaders
AIRA transforms AI-powered recruitment from a legal risk into a strategic advantage. Our explainable AI platform provides:
- ✔ Explainable scoring with clear decision rationale
- ✔ Full audit trails for compliance with NYC Local Law 144 & EU AI Act
- ✔ Bias reduction through standardized evaluation frameworks
- ✔ Faster, fairer hiring with automated yet transparent screening
Transform your applicant tracking system into a defensible recruitment tool that accelerates hiring while mitigating AI discrimination liability.
🏢 For Outplacement Firms & Career Transition Services
Leverage AIRA’s Career Transition AI to modernize your offering and deliver measurable outcomes:
- ✔ Personalized reskilling recommendations based on skill-gap analysis
- ✔ AI-powered career pathing for displaced workers
- ✔ Accelerated re-employment via intelligent job matching
- ✔ Scalable workforce transition solutions
Provide cutting-edge career transition tools that differentiate your services and improve client success rates.
🧑💻 For Job Seekers
Access AIRA’s free AI resume analysis to navigate today’s AI-driven hiring landscape:
✔ Create ATS-friendly CVs that pass automated screening systems
✔ Get personalized role-fit assessments and career insights
✔ Receive actionable feedback to optimize resumes for AI
✔ Explore tailored career paths, especially valuable for career changers or workforce re-entry
Turn AI-powered applicant tracking into an advantage with transparent AI scoring and personalized guidance.
⚡ Get Started Today
Learn More & Start for free → https://www.edligo.net/aira/

