China Reports AI-Powered Facial Recognition Fraud: Account Binding Information Altered, Funds Stolen

2025-07-22

The Public Security Bureau of Weihai, Shandong Province, China, recently reported a case of facial recognition fraud carried out with artificial intelligence (AI), drawing widespread public attention. The incident began when multiple users of a financial platform experienced unauthorized logins from unfamiliar locations. At least five of those accounts were then compromised through AI deepfake attacks: the attackers changed the phone numbers linked to the accounts, modified the payment passwords, and linked new bank cards, fraudulently spending tens of thousands of yuan.

One particularly notable case involved a victim identified as Ms. Zhu. After her account was accessed illegally from a remote location, the criminals used AI deepfake technology to generate a dynamic facial recognition video that mimicked her facial features and movements. The forged video passed the platform's facial authentication system. The attackers then replaced the phone number associated with the account, reset the payment password, and added a new bank card. With the account fully under their control, they immediately purchased two high-end smartphones on the platform, totaling 15,996 yuan.
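
Every step of that takeover (new phone number, new payment password, new bank card) happened in quick succession after a login from an unfamiliar location. The Python sketch below is a minimal, purely illustrative step-up policy that would have paused such a sequence; every name in it (Session, requires_step_up, the action strings, the regions) is hypothetical and stands in for whatever a real platform's risk engine uses.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Sensitive operations that, per the attack chain above, should trigger
# extra verification instead of being approved by face recognition alone.
SENSITIVE_ACTIONS = {"change_phone", "reset_payment_password", "add_bank_card"}

@dataclass
class Session:
    user_id: str
    login_region: str                  # region resolved from the login IP
    home_region: str                   # region the account usually logs in from
    login_time: datetime
    recent_actions: list = field(default_factory=list)

def requires_step_up(session: Session, action: str,
                     window: timedelta = timedelta(hours=24)) -> bool:
    """Return True if `action` should be held pending step-up verification
    (e.g., a code sent to the *original* phone number, or manual review)."""
    if action not in SENSITIVE_ACTIONS:
        return False
    unfamiliar_location = session.login_region != session.home_region
    fresh_login = datetime.now() - session.login_time < window
    already_changed_something = any(a in SENSITIVE_ACTIONS
                                    for a in session.recent_actions)
    # A sensitive change from an unfamiliar region, soon after login or
    # chained with other sensitive changes, is exactly this case's pattern.
    return unfamiliar_location and (fresh_login or already_changed_something)

# The reported sequence would be stopped at the first step, before the
# payment password or bank card could be touched.
s = Session("victim", login_region="elsewhere", home_region="home",
            login_time=datetime.now())
print(requires_step_up(s, "change_phone"))  # True -> demand extra verification
```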

According to police, the AI-generated deepfake video was realistic enough to deceive many of the facial recognition systems currently in commercial use. Notably, creating such a video did not require dynamic footage of the victim: a single clear, high-resolution frontal photo, of the kind routinely posted on social media and other online platforms, was enough to train an AI model that generates a convincing forgery.
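
One common countermeasure is challenge-response liveness detection: the system demands a random sequence of actions that a pre-rendered deepfake clip cannot anticipate. The sketch below (in Python, with invented names) shows only the challenge and matching logic; the vision model that would classify head pose and expression in each video segment is assumed, not implemented. As this case suggests, such checks raise the attacker's cost but are not a guarantee against real-time synthesis.

```python
import secrets

# Actions a user can be asked to perform on camera.
CHALLENGES = ["blink", "turn_head_left", "turn_head_right", "open_mouth", "nod"]

def issue_challenge(n: int = 3) -> list:
    # Cryptographically random order: a video forged in advance cannot
    # know which actions will be requested, or in what sequence.
    return [secrets.choice(CHALLENGES) for _ in range(n)]

def verify_liveness(challenge: list, observed: list) -> bool:
    # `observed` would come from a (not implemented) vision model that
    # labels the action performed in each segment of the live video feed.
    return observed == challenge

challenge = issue_challenge()
print(challenge)                                  # e.g. ['nod', 'blink', 'nod']
# A static photo or replayed clip produces actions that do not match:
print(verify_liveness(challenge, ["blink"] * 3))  # almost certainly False
```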

This case reveals serious vulnerabilities in current facial recognition systems when they face sophisticated AI attacks. Authorities warned the public not to casually share high-resolution frontal facial photos online, especially expressive, well-lit, or clearly angled images, as malicious actors can exploit them to create synthetic videos that bypass facial verification and enable financial theft.

The police also urged major platforms to strengthen their identity verification mechanisms. Relying solely on facial recognition is no longer sufficient; a more robust approach combines biometric checks with behavioral analytics and other multi-factor authentication layers to counter AI impersonation threats.
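
As a rough illustration of what "not relying on facial recognition alone" can look like, here is a hypothetical risk-scoring sketch. The signal names and weights are invented for the example; a production system would calibrate them against real fraud data.

```python
from typing import Mapping

# Hypothetical weights over independent signals, each scored in [0, 1],
# where higher means "more consistent with the legitimate owner".
WEIGHTS = {
    "face_match": 0.35,      # similarity score from the face recognition model
    "device_known": 0.25,    # device fingerprint previously seen on the account
    "behavior_match": 0.25,  # typing cadence, navigation patterns, etc.
    "otp_verified": 0.15,    # one-time code confirmed out of band
}

def authentication_score(signals: Mapping[str, float]) -> float:
    """Weighted combination of verification signals, so that even a perfect
    deepfake (face_match == 1.0) cannot alone clear a threshold that also
    expects a familiar device or an out-of-band code."""
    return sum(weight * signals.get(name, 0.0)
               for name, weight in WEIGHTS.items())

# The attack in this case: a flawless face video, but everything else wrong.
deepfake_attempt = {"face_match": 1.0, "device_known": 0.0,
                    "behavior_match": 0.1, "otp_verified": 0.0}
print(authentication_score(deepfake_attempt))  # ~0.375, below e.g. a 0.7 cutoff
```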

For individual users, adopting good cybersecurity practices such as regularly changing passwords, enabling two-factor authentication, and avoiding logins on untrusted websites can help reduce the risk of being targeted.
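
To make the two-factor recommendation concrete: the one-time codes shown by authenticator apps follow the standard TOTP algorithm (RFC 6238), which the Python standard library is enough to sketch. The base32 secret below is a common documentation placeholder, not a real credential.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter,
    dynamically truncated (RFC 4226) to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Prints the same 6-digit code an authenticator app would display
# for this shared secret at the current time.
print(totp("JBSWY3DPEHPK3PXP"))
```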

This incident highlights the double-edged nature of AI technology: while it brings convenience to daily life, it can also be weaponized as a tool for new types of cybercrime. It underscores the urgent need for heightened public awareness around data privacy and the growing threats posed by digital impersonation.