
AI in HR: The Risks and Opportunities Every Tech Founder Needs to Understand




You've already got AI somewhere in your product. You're probably using it in your operations. And there's a reasonable chance it's crept into your hiring process without anyone formally deciding that it should.

That's the reality of AI in HR for most UK tech businesses right now: adoption is outpacing governance. Tools are being used before policies exist to cover them. Hiring decisions are being shaped by algorithms nobody has audited. And a regulatory environment that was playing catch-up is now moving fast to close the gap.

This isn't a post about whether AI in HR is good or bad. It's a practical guide for busy founders who need to understand what AI is genuinely useful for in people management, where the real risks sit, and what good governance looks like before it becomes a legal or reputational problem.



The State of AI in HR UK: Where Things Stand in 2026


AI adoption in UK HR functions is growing, but it's still concentrated in larger organisations. According to the ICO's 2025 AI and biometrics strategy, just 8% of UK organisations reported using AI decision-making tools when processing personal information in 2024 — up only marginally from the year before. For tech scale-ups specifically, the picture is more nuanced: AI is often present in individual tools (applicant tracking systems, engagement platforms, performance software) without being recognised or governed as AI at the organisational level.


ICO AI and Biometrics Strategy 2025

In 2024, just 8% of UK organisations reported using AI decision-making tools when processing personal information.


Source: ico.org.uk — Preventing harm, promoting trust: our AI and biometrics strategy



The CIPD's research tells a revealing story about the gap between adoption and impact. Their Good Work Index 2025 found that 16% of employees reported tasks being automated using AI — and of those, 85% said it had improved their performance. That's a strong signal. But the same CIPD research consistently flags that HR and people teams are among the least likely functions to be involved when AI tools are introduced into organisations — which means the governance gap often starts in the people function itself.



CIPD Good Work Index 2025

16% of employees report tasks automated by AI. Of those, 85% say it improved their performance. Yet HR is among the least likely functions involved in AI decision-making.




Where AI in HR Creates Real Value for Tech Scale-Ups


Recruitment and Candidate Screening:



For a tech scale-up hiring multiple roles simultaneously, AI-assisted screening is the most immediately useful application. Tools that parse CVs, rank candidates against defined criteria and surface relevant profiles can compress weeks of manual review into hours. The operational gain is real — and for a founder or COO managing hiring alongside a dozen other priorities, it matters.

The caveat — and it's an important one — is covered in detail in the risk section below. But used well, with human review of outputs and defined criteria that are regularly audited, AI screening is a genuine efficiency lever.



People Analytics and Attrition Risk:

One of the highest-value applications of AI in HR for growing tech businesses is predicting who might leave before they hand in their notice. Platforms including Workday, HiBob and Lattice use behavioural signals — engagement survey responses, one-to-one frequency, performance trend data — to flag attrition risk at the individual level. For a business where losing a senior engineer or product manager carries significant cost and disruption, early warning is operationally valuable. Where a senior leader is at risk of leaving, that warning creates a window to re-engage them or to ensure a succession plan is in place to mitigate the business impact.

Similarly, AI-powered analysis of employee sentiment across pulse surveys can surface department-level engagement concerns or cultural signals faster and more reliably than manual review, giving people leads a data-driven basis for intervention before problems become crises.


HR Administration and Self-Service:

The most unglamorous but often most impactful AI application in HR is administrative: intelligent chatbots handling policy queries, leave request processing, onboarding checklists and benefits information. For a scale-up where the HR function is lean — or where HR responsibility sits with a founder or operations lead — automating the routine frees up human time for the work that actually requires human judgement.


Learning and Development:

The CIPD's analysis of generative AI's impact on HR functions identifies training and development specialists as having the highest potential for AI augmentation — 68% of their role could be enhanced by AI tools. Personalised learning pathways, AI-curated content recommendations and automated skills gap analysis are increasingly accessible to businesses outside the enterprise tier. For tech scale-ups investing in manager capability or technical upskilling, this is an area worth exploring now.



Quantifying the Impact of Generative AI on HR

Training and development specialists have the highest potential for AI augmentation (68%) due to abstract reasoning and problem-solving requirements in their roles.




The AI in HR Risks UK Tech Founders Cannot Afford to Ignore




Algorithmic Bias and Your Equality Act Exposure:

This is the risk that has the clearest, most immediate legal exposure for UK employers. In November 2024, the ICO published its AI in Recruitment Outcomes Report following a series of consensual audits with AI recruitment tool providers. The findings were stark: some tools were filtering candidates based on protected characteristics including gender, race and sexual orientation — in direct conflict with the Equality Act 2010.



ICO AI Tools in Recruitment Report — November 2024

ICO audits found that some AI recruitment tools allowed recruiters to filter out candidates based on protected characteristics, and some lacked accuracy testing entirely.




The Equality and Human Rights Commission (EHRC) has made AI bias a regulatory priority for 2024–25, explicitly including recruitment practices in its enforcement focus. The EHRC's position is clear: organisations may be breaking equality law without knowing it, because the bias is embedded in the tool rather than the human making the decision. That is not a defence — it is a liability.



EHRC — Update on AI Regulation Approach

The EHRC's priorities include 'the use of AI in Recruitment Practices, developing solutions to address bias and discrimination in AI systems.'


Source: equalityhumanrights.com — An update on our approach to regulating artificial intelligence



"A useful test for any AI hiring tool you're currently using: ask the vendor to demonstrate how their system was tested for bias across protected characteristics. If they can't show you the results, treat that as a material risk."

UK GDPR and the Right to Explanation:

Article 22 of the UK GDPR gives individuals the right not to be subject to decisions based solely on automated processing that significantly affects them — which includes hiring and dismissal decisions. If your AI tool is making or substantially influencing decisions without a human meaningfully reviewing the output, you may be in breach.

The ICO has made clear that a Data Protection Impact Assessment (DPIA) is required before deploying AI systems in HR contexts. For most tech scale-ups, this step is either being skipped or is not on the radar at all. The GOV.UK guidance on Responsible AI in Recruitment, published in March 2024 with contributions from both the ICO and EHRC, sets out the due diligence framework employers should follow.



GOV.UK — Responsible AI in Recruitment Guidance (March 2024)

Developed with ICO and EHRC input, this guidance covers impact assessments, monitoring, performance testing and procedures for contesting AI-based decisions.




The Governance Gap — and the Employment Rights Act 2025 Dimension:

From January 2027, the Employment Rights Act 2025 reduces the qualifying period for unfair dismissal protection from two years to six months, and removes the cap on compensation where a tribunal finds a dismissal unfair. This creates a direct intersection with AI in HR: if an AI-assisted performance management tool contributes to a dismissal decision, and that tool's outputs cannot be explained, audited or challenged, the legal exposure is significant.


Employment tribunals are increasingly scrutinising the role of algorithms in workplace decisions. The principle being established in case law is that AI involvement in a decision does not remove employer liability — it extends it. Employers must be able to demonstrate that human judgement was meaningfully applied, and that the AI tool operated fairly.



CIPD Labour Market Outlook — Autumn 2025

One in six employers say AI will reduce headcount. Among those, 62% believe clerical, junior managerial and administrative roles are most at risk.




One in six UK employers now say AI will shrink their headcount in the next 12 months, according to CIPD's Autumn 2025 Labour Market Outlook. As AI-driven restructuring accelerates, the governance framework around those decisions becomes a critical legal and reputational asset.


A Practical AI in HR Governance Framework for Tech Scale-Ups


1. Audit What You Already Have:

Most tech scale-ups are surprised by how many AI-adjacent tools are already in use across their HR stack. Start by listing every tool involved in hiring, performance management and employee data — and for each one, identify whether AI is being used to make, influence or inform decisions about people.


2. Check Every Vendor for Bias Testing:

For any tool involved in hiring or performance assessment, ask your vendor directly: how was this system tested for bias? What protected characteristics were included in the testing? What corrective actions have been taken? The ICO's November 2024 report found that many providers monitored bias but some lacked accuracy testing entirely. Vendor capability varies enormously.


3. Ensure Humans Make the Final Call:

AI should inform decisions, not make them. Every hiring decision, performance rating and disciplinary outcome should have a named human who reviewed the AI's output and made the final call. Document this. The UK GDPR right to explanation and the Employment Rights Act 2025 both make this documentation valuable — and potentially essential.


4. Complete DPIAs Before Deployment:

A Data Protection Impact Assessment is not optional for high-risk AI processing — and the ICO considers AI in recruitment and performance management to be high-risk. If you haven't completed a DPIA for your AI HR tools, this is a compliance gap worth addressing. The ICO's DPIA template is freely available at ico.org.uk.


5. Brief Your HR Partner:

If you're working with an outsourced or fractional HR partner, ensure they have full visibility of every AI tool in use across the employee lifecycle. HR governance cannot outpace the tools it doesn't know about.


What's Coming: The UK Regulatory Direction of Travel

The UK's current approach to AI regulation is principles-based rather than prescriptive — but that is changing. The ICO is preparing a statutory code of practice on AI and automated decision-making, expected in autumn 2025. The AI Regulation Bill was reintroduced to the House of Lords in March 2025, proposing a dedicated AI Authority and codified principles that would bring the UK closer to the EU's AI Act framework.

For tech founders, the direction of travel is clear: the regulatory environment around AI in HR is tightening, the enforcement focus of both the ICO and EHRC is sharpening, and the legal risk of ungoverned AI in people decisions is growing. The businesses that build governance frameworks now will be ahead of compliance requirements, not scrambling to catch up.


The Bottom Line for Tech Founders

AI in HR UK is not a future consideration — it's a present reality with present legal implications. The opportunities are real: faster hiring, smarter retention, more efficient HR operations. But so are the risks: algorithmic bias, UK GDPR exposure, and employment tribunal liability under a strengthening legal framework.


The founders who get the most from AI in HR are those who adopt it deliberately, govern it carefully, and ensure that the human judgement at the heart of every significant people decision is visible, documented and defensible.

• Audit every AI tool currently used in hiring, performance and people data

• Demand bias testing evidence from every HR AI vendor

• Ensure humans make and document final decisions on people matters

• Complete DPIAs before deploying AI in HR contexts

• Stay ahead of the ICO's forthcoming statutory code of practice on AI and ADM

• Brief your HR partner on all AI tools across the employee lifecycle

• Ensure all data fed into, processed by and generated by AI tools is securely transmitted and stored


Is your HR approach keeping pace with your AI adoption?

Book a free 30-minute HR Audit Call with a senior M923 consultant.


No pitch. No obligation. Just clarity -> m923consulting.com/contact-us




© 2025 by The M923 Group
