Deepfakes are becoming an HR incident, not just a cybersecurity story

By Dmitry Zaytsev, founder of Dandelion Civilization

Most organisations still treat identity as a given.

If someone is on a video call, looks right, sounds right, uses the right phrases and references the right projects, the conversation moves forward. HR processes are built on that assumption. Hiring screens, onboarding steps, policy exceptions, payroll changes, internal approvals. They all rely on a simple idea: a human can recognise a human.

Deepfakes broke that idea.

Not in some futuristic way. In a boring operational way. The kind that shows up as one Slack message, one Teams call, one “quick favour” that lands in the wrong inbox at the right time.

The reason this becomes an HR story is simple. HR sits on the highest trust surfaces in the company. Personal data, contracts, bank details, access provisioning, internal status. When identity becomes spoofable, trust becomes a target. That target is HR.

The new attack is not a fake document. It is a fake person.

Phishing trained people to distrust links. Fraud training taught employees to question invoices. Compliance taught managers to follow approval chains.

Deepfakes bypass all of that because the payload is not a file. The payload is a person.

A familiar face asking for something “urgent”. A voice note that sounds like a leader. A candidate who interviews well and says the right things. A new hire who has a plausible story and a clean LinkedIn.

Most organisations are still defending the old perimeter while the new perimeter is behavioural. Who is allowed to request what, through which channel, with what verification.

If you do not define those rules, your culture will define them for you. Culture usually defines them as “be helpful” and “do not slow things down”. That is how expensive mistakes happen.

Where this hits first inside HR

Deepfakes will not arrive as a dramatic incident. They will arrive as exceptions.

Payroll and bank changes
Someone requests a last-minute update to payment details. The message is polite. The reason is plausible. The urgency is designed to make verification feel rude.

Offer and contract manipulation
A candidate claims they accepted a different number. A manager says they approved an exception. A call happens. The voice matches. HR is asked to “just update the letter”.

Onboarding and access provisioning
A new hire needs access early. A laptop must be sent to a new address. A background check “is delayed”. The request is framed as business continuity.

Reference checks and offboarding
A reference call becomes a data extraction exercise. An exit process becomes a chance to obtain system details, naming conventions, internal contacts and what the company actually monitors.

None of these require hacking. They require permission, pressure and a believable identity.

Why most AI upskilling programs miss the point

Many companies are telling employees to use AI for drafting, summarising, analysis and automation. That is fine. It is also incomplete.

The real risk is not that employees will use AI poorly. The risk is that employees will trust AI-shaped inputs too easily. A meeting recap that no one verifies. A recommendation that sounds confident. A “recording” that becomes evidence. A screenshot that becomes truth.

When fluency is cheap, a clean narrative is no longer a sign of ownership. It is a sign that someone knows how to produce clean narratives.

HR leaders should assume that the next wave of workplace disputes will include synthetic artefacts. Messages that were never sent. Calls that never happened. Approvals that were never given.

This changes how you run investigations, how you document decisions and how you protect managers who are about to be accused of saying something they did not say.

The core shift: trust needs infrastructure

Most organisations run trust as a social norm. It works until it does not.

If you want trust to survive deepfakes, you need a verification layer that feels normal, not exceptional. The goal is not paranoia. The goal is repeatable behaviour.

Here is what that looks like in practice.

Create protected workflows for high risk HR actions
Bank detail changes, offer exceptions, early access provisioning, vendor payment approvals, severance details. Anything that moves money, data, access or status should have a “protected lane” with fixed steps.
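
To make that concrete, here is a minimal sketch of a protected lane encoded as data rather than habit. The Python below is illustrative, not a reference to any real HR system; the action names, steps and channels are assumptions.

    # Illustrative sketch only: a "protected lane" encoded as data,
    # so the steps are fixed and auditable rather than improvised.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ProtectedWorkflow:
        action: str              # the high-risk HR action this lane covers
        required_steps: tuple    # fixed verification steps, in order
        allowed_channels: tuple  # channels the request may arrive through

    PROTECTED_LANES = {
        "bank_detail_change": ProtectedWorkflow(
            action="bank_detail_change",
            required_steps=(
                "employee_confirms_in_hr_system",
                "callback_via_directory_number",
                "second_approver_signs_off",
            ),
            allowed_channels=("hr_portal",),  # never chat or email alone
        ),
    }

    def route(action: str, channel: str) -> ProtectedWorkflow:
        """Refuse any high-risk action that arrives outside its lane."""
        lane = PROTECTED_LANES.get(action)
        if lane is None:
            raise ValueError(f"no protected lane defined for: {action}")
        if channel not in lane.allowed_channels:
            raise PermissionError(f"{action} cannot be initiated via {channel}")
        return lane

The point is not the tooling. It is that the lane exists before the request does, so refusing a chat-based approval is system behaviour, not a personal judgement.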

Make verification boring and standard
If verification only happens when someone feels suspicious, it becomes political. People worry about optics. They hesitate. Build it into the process so it is not personal.

Use channel authentication, not human recognition
A familiar voice is no longer proof. A familiar face is no longer proof. Proof comes from controlled channels: known internal numbers, verified meeting links, callbacks through directories, secondary confirmation in a different system.
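
A minimal sketch of that principle, assuming a hypothetical internal directory. The key property is that the callback number comes from the record, never from the request.

    # Illustrative sketch: verification uses a channel the organisation
    # controls, not the channel the request arrived on.
    DIRECTORY = {
        # hypothetical directory: employee id -> number on record
        "e-1042": "+44 20 7946 0000",
    }

    def callback_number(employee_id: str, claimed_number: str) -> str:
        """Return the number on record, ignoring any number supplied
        in the request itself (an attacker controls that one)."""
        known = DIRECTORY.get(employee_id)
        if known is None:
            raise LookupError(f"{employee_id} not in directory; escalate")
        # Deliberately ignore claimed_number, even if the caller insists
        # their number has changed. A changed number is its own protected
        # workflow, not an inline exception.
        return known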

Separate urgency from authority
Deepfake attacks use urgency to hijack good behaviour. Teach employees a simple rule: urgency is not a reason to skip verification. If it is truly urgent, it will survive a two-minute check.

Build decision logs that capture intent, not just outcome
HR documentation often records what happened. It rarely records why the decision was authorised, by whom, through which path. That gap is where disputes grow. When synthetic evidence enters the workplace, you will want clear records of real authorisation.
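
A sketch of such a record follows. The field names are assumptions; the shape is the point. Every outcome is tied to who authorised it, through which verified channel, and why.

    # Illustrative sketch: a decision log entry that captures intent
    # and authorisation path, not just the outcome.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class DecisionRecord:
        action: str          # e.g. "offer_exception"
        outcome: str         # what was done
        authorised_by: str   # who actually approved it
        verified_via: str    # the controlled channel used to confirm them
        rationale: str       # why the decision was authorised
        logged_at: datetime

    record = DecisionRecord(
        action="offer_exception",
        outcome="offer letter reissued with adjusted base salary",
        authorised_by="e-1042",
        verified_via="callback_via_directory_number",
        rationale="approved counter-offer; confirmed with hiring manager",
        logged_at=datetime.now(timezone.utc),
    )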

What changes for managers

Managers are now part of the verification system whether they like it or not.

They will be impersonated. Their teams will receive fake requests. Their words can be simulated. That means two things.

First, managers need scripts. Not training slides. Scripts. Short default language they can use without thinking.
“I cannot approve this through chat. Use the protected workflow.”
“I will call you back via the directory.”
“I will confirm in the HR system before we proceed.”

Second, managers need protection from performance theatre. In many workplaces, fast responsiveness is treated as competence. That incentive is dangerous now. If speed is rewarded and verification is seen as friction, you are training people to become victims.

The question HR should ask this quarter

Not “how do we get people to use AI more?”

Ask “which decisions in our organisation can be triggered by a believable fake?”

Then map them.

Which actions can be initiated by message. Which can be completed by message. Which require a second channel. Which have a protected workflow. Which rely on a human recognising a human.

That map is your real AI readiness score.
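
If it helps to see the shape, here is a toy version of that map in Python. The actions and flags are illustrative assumptions; the check at the end is the readiness question made executable.

    # Illustrative sketch: map actions against how they can be triggered,
    # then surface the ones a believable fake could complete by message.
    ACTIONS = [
        # (action, initiable_by_message, second_channel, protected_lane)
        ("bank_detail_change",        True,  True,  True),
        ("offer_letter_update",       True,  False, False),
        ("early_access_provisioning", True,  False, False),
        ("laptop_reshipment",         True,  True,  False),
    ]

    def exposed(actions):
        """Actions that can be completed end-to-end on a single channel."""
        return [
            name for name, by_message, second_channel, lane in actions
            if by_message and not second_channel and not lane
        ]

    print(exposed(ACTIONS))
    # -> ['offer_letter_update', 'early_access_provisioning']

Anything that check returns is reachable by a believable fake.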

Deepfakes do not require you to distrust people. They require you to stop treating trust as a feeling. Trust has to become a system. If you do not build it deliberately, someone else will test it for you.
