
Deepfake Scams & North Korea’s AI-Powered Job Fraud: Can You Trust What You See?

Securify

In 2019, recruiters asked for resumes. By 2021, cameras had to stay on during interviews. By 2023, coding tests became the norm. And by 2025? Some interviewers are asking candidates to wave their hand in front of their face or even joke about Kim Jong Un to expose AI-driven impostors. Funny? Yes. But also a real warning sign of where identity verification is heading.

Deepfake Scams Are Getting Smarter

AI-powered deepfakes can swap faces, clone voices, and create polished fake profiles in minutes. With tools like FaceSwap or Deep-Live-Cam, even someone with no technical background can build a convincing synthetic identity ready for video interviews.

These systems still struggle with details like sudden lighting changes, quick head turns, or hands covering the face, but they’re improving fast.
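
These weaknesses can even be probed mechanically. Here is a minimal sketch of the idea, assuming OpenCV and a recorded clip of a hand-wave challenge; the file name and threshold are made up for illustration, and this is a heuristic, not a production deepfake detector:

```python
# Minimal sketch: measure face-detection "flicker" during a hand-wave challenge.
# Assumes opencv-python is installed; "challenge.mp4" and the flip threshold
# are illustrative placeholders.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("challenge.mp4")
face_present = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    face_present.append(len(faces) > 0)
cap.release()

# A real face covered by a hand disappears and reappears once; a face-swap
# overlay tends to flicker on and off as the model loses and regains tracking.
flips = sum(a != b for a, b in zip(face_present, face_present[1:]))
print(f"detection state changed {flips} times across {len(face_present)} frames")
if flips > 4:  # illustrative threshold
    print("unstable face tracking -- worth a closer look")
```

In practice you would pair a check like this with the live-gesture prompts described later, since any single heuristic is easy to evade.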

North Korea’s Remote IT Fraud

Microsoft Threat Intelligence recently uncovered how North Korea is exploiting this technology at scale. Tracked as the threat cluster Jasper Sleet, its operatives have infiltrated 300+ U.S. companies by posing as remote IT workers.

Figure 1. The North Korean IT worker ecosystem

Their playbook includes:

  • AI-enhanced resumes and LinkedIn/GitHub profiles
  • Voice-changing software for interviews
  • Laptop farms run by domestic facilitators
  • Fake LLCs and bank accounts to move money back to the DPRK

Some even become trusted employees with access to codebases and sensitive systems.

How They Do It

  1. Craft Personas: Fake or stolen identities matched to job locations.
  2. Build Digital Footprints: Social media, GitHub repos, polished portfolios.
  3. AI Assistance: Face-swaps, voice filters, multiple resume versions.
  4. Facilitators: Middlemen who validate documents, forward hardware, and run remote-access tools like TeamViewer or PiKVM.

Spotting AI Imposters

Recruiters are already sharing tricks to fight back:

  • Ask candidates to turn sideways or wave a hand over their face.
  • Push them to disable filters/backgrounds.
  • Throw curveball questions.

One curveball in particular has gone viral: asking, “How ugly is Kim Jong Un?” It is often enough to make an operative disconnect on the spot. The question works because North Korean agents are prohibited from saying anything negative about their leader; rather than risk punishment for appearing disloyal, they typically end the interview immediately. Of course, the trick won’t always work, and it should never be the only check.


Other red flags include avoiding video calls, blaming “broken cameras,” or abruptly dropping off calls when challenged.

Real-World Evidence

  • Cutout.pro Breach (2024): Palo Alto Networks’ Unit 42 uncovered email addresses tied to DPRK operatives experimenting with deepfake headshots.
  • Fake LinkedIn Profiles: Accounts like “Joshua Desire” posed as U.S.-based engineers with AI-generated work histories.
  • Freelancer Sites: North Korean workers even bid for short-term projects on Upwork and Fiverr as an additional revenue stream.

How Companies Can Protect Themselves

Organizations need multi-layered defenses against deepfake job fraud:

1. Stronger Video Verification

  • Require live movements, gestures, and profile views during interviews.
  • Watch for visual artifacts during lighting changes or occlusions.

2. Cross-Check Digital Footprints

  • Verify LinkedIn profiles against GitHub contributions, email accounts, and phone numbers (a quick sanity check is sketched after this list).
  • Look for inconsistencies like multiple resumes with the same photo.
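
To make the GitHub check concrete, here is a rough sketch using the public GitHub REST API; the username and the two-year tolerance are illustrative assumptions, not a vetted policy:

```python
# Minimal sketch: sanity-check a candidate's GitHub account against their resume.
# Uses the public GitHub REST API (https://api.github.com/users/{username}).
from datetime import datetime, timezone
import requests

def github_age_check(username: str, claimed_years: int) -> None:
    resp = requests.get(f"https://api.github.com/users/{username}", timeout=10)
    resp.raise_for_status()
    user = resp.json()
    created = datetime.fromisoformat(user["created_at"].replace("Z", "+00:00"))
    age_years = (datetime.now(timezone.utc) - created).days / 365
    print(f"{username}: account is {age_years:.1f} years old, "
          f"{user['public_repos']} public repos, {user['followers']} followers")
    # A "ten-year veteran" whose account appeared last month deserves scrutiny.
    if claimed_years - age_years > 2:  # illustrative tolerance
        print("account is much younger than the claimed experience -- verify manually")

github_age_check("example-candidate", claimed_years=10)  # hypothetical username
```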

3. Restrict Remote Management Tools

  • Block unapproved apps like TeamViewer, TinyPilot, and PiKVM.
  • Monitor for unusual RMM connections (a basic endpoint sweep is sketched below).
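
A basic endpoint sweep might look like the sketch below, which assumes the psutil library; the blocklist is illustrative and should come from your own approved-software policy:

```python
# Minimal sketch: flag known remote-management tools running on an endpoint.
# Assumes psutil (pip install psutil); the blocklist names are illustrative.
import psutil

RMM_BLOCKLIST = {"teamviewer", "anydesk", "rustdesk", "tvnserver"}

def find_unapproved_rmm() -> list[str]:
    hits = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if any(tool in name for tool in RMM_BLOCKLIST):
            hits.append(name)
    return hits

if hits := find_unapproved_rmm():
    print("unapproved remote-access tools found:", sorted(set(hits)))
```

Hardware KVMs such as PiKVM or TinyPilot never appear as host processes, which is exactly why the network-level checks in the next section matter.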

4. Analyze Network Activity

  • Watch for suspicious VPN/VPS usage.
  • Flag repeated logins from unusual geo-locations (see the sketch after this list).
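
A simple version of that geo check is sketched below, assuming the geoip2 library and a local MaxMind GeoLite2 database; the events, allowlist, and database path are all placeholders:

```python
# Minimal sketch: flag logins from outside a worker's expected countries.
# Assumes geoip2 (pip install geoip2) and a local GeoLite2-Country.mmdb file.
import geoip2.database
import geoip2.errors

EXPECTED_COUNTRIES = {"US"}  # per-employee allowlist, illustrative

login_events = [  # in practice, parsed from your IdP or VPN logs
    {"user": "jdoe", "ip": "203.0.113.7"},
    {"user": "jdoe", "ip": "198.51.100.23"},
]

with geoip2.database.Reader("GeoLite2-Country.mmdb") as reader:
    for event in login_events:
        try:
            country = reader.country(event["ip"]).country.iso_code
        except geoip2.errors.AddressNotFoundError:
            country = "unknown"
        if country not in EXPECTED_COUNTRIES:
            print(f"{event['user']} logged in from {country} "
                  f"({event['ip']}) -- review")
```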

5. Educate Recruiters & Teams

  • Train HR and IT staff to recognize deepfake artifacts.
  • Share common red flags: “broken cameras,” refusal to do live gestures, sudden call drop-offs.

Protecting Individuals

Deepfakes aren’t just a company risk; scammers can misuse your photos and identity too. Here’s how to stay safe:

  • Limit what you share publicly online
  • Keep accounts private where possible
  • Use reverse image search to track misuse (a hash-based variant is sketched after this list)
  • Avoid random face apps or filters that harvest your facial and biometric data
  • Enable 2FA and biometric protection
  • Educate friends and colleagues about deepfake risks
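
Alongside manual reverse image search, perceptual hashing is a lightweight way to check whether a suspect photo was derived from one of yours. The sketch below assumes Pillow and the imagehash library; the file names and distance threshold are illustrative:

```python
# Minimal sketch: compare a suspect profile photo against your own photo
# using a perceptual hash (pip install pillow imagehash).
from PIL import Image
import imagehash

my_photo = imagehash.phash(Image.open("my_profile_photo.jpg"))
suspect = imagehash.phash(Image.open("suspicious_listing.jpg"))

# Perceptual hashes survive resizing and mild edits; a small Hamming
# distance suggests the suspect image was derived from yours.
distance = my_photo - suspect
print(f"Hamming distance: {distance}")
if distance <= 8:  # illustrative threshold
    print("likely the same underlying photo -- investigate further")
```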

Conclusion

What once felt like sci-fi is now part of everyday hiring. Deepfakes are no longer just about viral videos; they’re being weaponized for cybercrime and corporate fraud. Organizations must rethink identity checks, and individuals must protect their digital presence.

Because in today’s world, you really can’t trust your eyes and ears alone.

