Photo credit: www.cnbc.com
An illustration from Pindrop Security depicts a fictitious job applicant dubbed “Ivan X,” a scammer using deepfake AI technology to mask his identity, according to Pindrop CEO Vijay Balasubramaniyan.
When voice authentication startup Pindrop Security advertised a senior engineering position, one applicant drew particular attention among hundreds of hopefuls.
This candidate, identified as Ivan, presented a strong resume filled with qualifications. But during a recent video interview, a Pindrop recruiter noticed an inconsistency: Ivan’s facial movements were out of sync with his words.
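The cue the recruiter spotted, mouth movement that does not track the audio, is also the signal automated deepfake checks look for. The sketch below is purely illustrative and not Pindrop’s tooling: it correlates a mouth-openness series taken from MediaPipe face landmarks with the recording’s loudness envelope, on the assumption that genuine speech shows the two rising and falling together. The file names, landmark indices, and 0.3 threshold are placeholders for the example.

```python
# Illustrative audio-visual sync check: compare mouth openness per video
# frame against the audio's loudness envelope. A low correlation *may*
# indicate the kind of lip-sync mismatch described above.
import cv2                # pip install opencv-python
import librosa            # pip install librosa
import mediapipe as mp    # pip install mediapipe
import numpy as np

def mouth_openness(frame_bgr, face_mesh):
    """Vertical gap between inner lips (normalized units), or None if no face."""
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    lm = results.multi_face_landmarks[0].landmark
    # MediaPipe face mesh: index 13 = upper inner lip, 14 = lower inner lip
    return abs(lm[13].y - lm[14].y)

def lipsync_score(video_path, audio_path):
    """Correlate per-frame mouth openness with per-frame audio energy."""
    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    openness = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gap = mouth_openness(frame, face_mesh)
        openness.append(0.0 if gap is None else gap)
    cap.release()

    # RMS energy, resampled to roughly one value per video frame.
    y, sr = librosa.load(audio_path, sr=None)
    energy = librosa.feature.rms(y=y, hop_length=int(sr / fps))[0]
    n = min(len(openness), len(energy))
    return float(np.corrcoef(openness[:n], energy[:n])[0, 1])

if __name__ == "__main__":
    # "interview.mp4" / "interview.wav" are placeholder file names.
    score = lipsync_score("interview.mp4", "interview.wav")
    print(f"audio-visual correlation: {score:.2f}")
    if score < 0.3:  # illustrative threshold, not a calibrated value
        print("warning: mouth movement tracks the audio poorly")
```

A single correlation like this is easily confounded by head motion, video compression, or an off-camera speaker; production detectors combine many such signals.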
The truth emerged soon enough: “Ivan X” was not a genuine candidate but a scammer using advanced deepfake software and other generative AI tools in a bid to land a job at the tech company, Balasubramaniyan explained.
“Generative AI has effectively blurred the distinctions between human and machine,” Balasubramaniyan remarked. “Individuals are now deploying artificial identities, including manipulated appearances and voices, to attain employment. In some cases, this even involves swapping faces with actual candidates who attend interviews.”
In recent years, organizations have wrestled with cyberattacks that exploit vulnerabilities in their software, their employees, or their suppliers. Now a new challenge has surfaced: candidates who misrepresent their identities, using AI tools to fabricate identification documents, engineer false employment histories, and supply scripted answers during interviews.
According to projections from research firm Gartner, by 2028, one in four job applicants globally will likely be fraudulent.
The potential risks of hiring a fake candidate vary widely depending on the person’s motives. Once inside a company, an impostor can install ransomware, or steal customer data, trade secrets, or funds, Balasubramaniyan noted. In many cases, though, the deceitful employee is simply looking to draw a salary they could not otherwise obtain.
Rising Threat
The proliferation of fake job seekers has particularly alarmed firms in the cybersecurity and cryptocurrency sectors, experts informed CNBC. These industries, which are often searching for remote employees, have become increasingly enticing targets for deceitful actors.
Ben Sesser, the CEO of BrightHire, reported that he first learned about this phenomenon a year ago but noted a “massive” increase in fraudulent candidates this year. BrightHire assists over 300 corporate clients in fields like finance, technology, and healthcare in evaluating job applicants through video interviews.
“Humans are typically the weak link in cybersecurity, and since the hiring process involves so many human interactions, it is inherently vulnerable,” remarked Sesser. “This has made it an easy target for exploitation.”
The problem is not limited to the technology sector. More than 300 U.S. companies inadvertently hired impostors tied to North Korea for information technology roles, including major television networks, defense contractors, automakers, and other Fortune 500 organizations, the Justice Department alleged in May.
These workers reportedly used stolen American identities to apply for remote jobs and employed various techniques to conceal their actual locations. Consequently, they channeled millions of dollars in wages toward North Korea’s weapons program, the DOJ alleged.
This case, which involved a network of alleged collaborators, including an American citizen, revealed a fraction of what U.S. authorities claim is an extensive overseas network of thousands of IT employees linked to North Korea. The DOJ has since pursued additional cases involving North Korean IT workers.
Emerging Challenge
The flood of fraudulent job candidates shows no signs of abating, according to Lili Infante, founder and CEO of CAT Labs. Her Florida-based startup sits at the intersection of cybersecurity and cryptocurrency, making it especially attractive to bad actors.
“Every time we post a job, we receive applications from approximately 100 suspected North Korean operatives,” Infante stated. “Their resumes appear exemplary, often containing all the relevant keywords we seek.”
To mitigate this risk, Infante’s company collaborates with an identity-verification service, reflecting a growing industry that comprises firms like iDenfy, Jumio, and Socure.
A wanted poster from the FBI displays suspects identified as IT workers from North Korea, officially known as the Democratic People’s Republic of Korea.
According to security expert Roger Grimes, the landscape of fraudulent applicants has extended beyond North Korean operatives to include criminal organizations based in Russia, China, Malaysia, and South Korea.
Ironically, many of these fraudulent workers perform at levels that would be considered exceptional in most corporate environments, Grimes noted.
“While some may underperform, I’ve encountered cases where they excelled to such a degree that I’ve had colleagues express regret when they had to be let go,” he commented.
Grimes’ own organization, cybersecurity firm KnowBe4, disclosed in October that it had unknowingly hired a North Korean software engineer.
The man used AI to alter a stock photograph and paired it with a valid but stolen U.S. identity, passing background checks and four video interviews before suspicions arose over unusual activity coming from his account.
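KnowBe4’s account here does not detail what the unusual activity was. One common heuristic for flagging such activity, offered purely as an illustration rather than as KnowBe4’s method, is “impossible travel”: consecutive logins whose geographic spread implies a speed no real traveler could achieve. The events, coordinates, and 900 km/h ceiling below are made up for the example.

```python
# Illustrative "impossible travel" heuristic: flag consecutive logins whose
# implied ground speed exceeds what any real traveler could manage.
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class Login:
    when: datetime
    lat: float
    lon: float

def km_between(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations, in kilometers."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(events: list[Login], max_kmh: float = 900.0):
    """Yield consecutive login pairs whose implied speed exceeds max_kmh."""
    for prev, curr in zip(events, events[1:]):
        hours = (curr.when - prev.when).total_seconds() / 3600
        if hours > 0 and km_between(prev, curr) / hours > max_kmh:
            yield prev, curr

if __name__ == "__main__":
    logins = [
        Login(datetime(2024, 10, 1, 9, 0), 38.9, -77.0),      # Washington, D.C.
        Login(datetime(2024, 10, 1, 11, 0), 39.03, 125.75),   # Pyongyang, 2h later
    ]
    for a, b in impossible_travel(logins):
        print(f"flag: {a.when} -> {b.when} implies an impossible travel speed")
```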
Combating Deepfakes
Despite the DOJ’s case and a limited number of high-profile instances, most hiring managers remain largely unaware of the dangers posed by fraudulent candidates, according to BrightHire’s Sesser.
As deepfake technology becomes more sophisticated, the challenges associated with it are expected to intensify, Sesser added.
As for “Ivan X,” Balasubramaniyan said Pindrop used a newly developed video authentication tool to confirm that the applicant was a fraud.
Although Ivan claimed to be based in western Ukraine, his IP address placed him thousands of miles to the east, near the North Korean border, possibly at a Russian military facility.
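Exactly how the IP analysis was performed is not detailed here. As a rough illustration of a claimed-versus-observed location check, the sketch below resolves an IP with MaxMind’s GeoLite2 database (via the geoip2 package) and measures the gap to the location the candidate claimed. The database path, IP address, and 500 km threshold are placeholders.

```python
# Illustrative check: how far is the location an IP resolves to from the
# location the candidate claims? Requires the free GeoLite2-City database
# from MaxMind, downloaded separately.
from math import asin, cos, radians, sin, sqrt

import geoip2.database  # pip install geoip2

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    h = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def location_gap_km(ip, claimed_lat, claimed_lon, db_path="GeoLite2-City.mmdb"):
    """Distance between the claimed location and where the IP resolves."""
    with geoip2.database.Reader(db_path) as reader:
        loc = reader.city(ip).location
        return haversine_km(claimed_lat, claimed_lon, loc.latitude, loc.longitude)

if __name__ == "__main__":
    # Claimed: Lviv, western Ukraine. The IP below is a documentation-range
    # placeholder; substitute the address observed during the interview.
    gap = location_gap_km("203.0.113.7", 49.84, 24.03)
    print(f"IP resolves {gap:.0f} km from the claimed location")
    if gap > 500:  # illustrative threshold
        print("flag: claimed and observed locations disagree")
```

A check like this is easily defeated by VPNs and proxies, so it can only be one signal among many.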
Pindrop, which has the backing of firms like Andreessen Horowitz and Citi Ventures, was founded over a decade ago to address fraud in voice communication but may soon shift its focus to video authentication. The company’s clients include several leading banks, insurance firms, and healthcare organizations in the United States.
“Trusting our visual and auditory senses is no longer sufficient,” Balasubramaniyan stated. “Without technological intervention, our chances of making effective decisions are akin to a random coin toss.”
Source
www.cnbc.com