Fake AI Video Generators Deliver Rust-Based Malware via Malicious Ads

Analysis of UNC6032’s Facebook and LinkedIn ad blitz shows social-engineered ZIPs leading to multi-stage Python and DLL side-loading toolkits

the hype machine makes perfect bait

Generative-AI tools are exploding on social and in the press, so curiosity clicks come cheap. UNC6032 has spent the past year buying thousands of Facebook and LinkedIn ads that impersonate prompt-to-video services like Luma AI, Canva Dream Lab and Kling AI. The lure works across industries because almost everyone wants to test-drive “the next Sora.” (Google Cloud)

Those ads funnel victims to slick copy-cat sites that mimic real dashboards. A prompt box, a spinning “rendering” bar and a download button are enough to drop a ZIP file. No exploit chain, no watering hole, just pure social engineering wrapped in AI buzz.

who is UNC6032

Mandiant tags the crew as Vietnam-nexus cybercriminals rather than a nation-state team. The researchers traced payment trails, WHOIS records and reused payload infrastructure back to operators active in other fraud schemes. UNC6032’s goal is monetization, so they cast a wide net: ad targeting hit the United States most often, but impressions spanned Europe, Australia and Southeast Asia.

The group refreshes domains daily to dodge takedowns. Google counted 30+ unique sites advertised since mid-2024, each spun up, blasted with ads, then abandoned once flagged. Meta’s Ad Library shows several campaigns lasting less than 48 hours, yet the top five alone reached more than 1.1 million EU users.

infection chain in detail

stage 1: braille-stuffed droppers

The lure page always serves a ZIP that contains a single executable named like Lumalabs_1926326251082123689-626.mp4⠀⠀⠀⠀⠀⠀⠀⠀.exe. Thirteen to thirty-plus Braille Pattern Blank characters shove the .exe off the screen so Windows displays a harmless video icon.
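
A filename like that is easy to spot programmatically even though it looks harmless in Explorer. The sketch below is an illustrative triage helper (not from the report) that scans a downloaded ZIP for entries padded with Braille Pattern Blank or similar whitespace characters ahead of a trailing .exe; the character set and the .mp4/.exe heuristic are assumptions for demonstration.

    # Illustrative ZIP triage sketch: flag entries that hide a trailing .exe
    # behind Unicode "blank" padding such as U+2800 BRAILLE PATTERN BLANK.
    import sys
    import zipfile

    # Characters commonly abused to push the real extension out of view.
    PADDING_CHARS = {"\u2800", "\u00a0", "\u2000", "\u3000"}

    def suspicious_names(zip_path: str) -> list[str]:
        hits = []
        with zipfile.ZipFile(zip_path) as zf:
            for name in zf.namelist():
                has_padding = any(ch in PADDING_CHARS for ch in name)
                double_ext = name.lower().endswith(".exe") and ".mp4" in name.lower()
                if has_padding or double_ext:
                    hits.append(name)
        return hits

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            for name in suspicious_names(path):
                print(f"{path}: suspicious entry {name!r}")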

stage 2: starkveil double-run trick

The binary, STARKVEIL, is written in Rust and needs to run twice. The first execution quietly unpacks an archive to C:\winsystem\, dropping legitimate binaries (heif.exe, python.exe, ffplay.exe) alongside matching malicious DLLs. It then throws a bogus “file corrupted” dialog to prod the user into re-launching it.
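
The drop path itself is a usable hunting signal. Here is a minimal sketch, assuming the C:\winsystem\ location from the report, that lists watched helper EXEs sitting next to DLLs in the same directory, the layout that enables the side-loading described below; the EXE name list is illustrative.

    # Hunting sketch for the reported C:\winsystem\ drop directory:
    # list helper EXEs that sit beside DLLs in the same folder.
    from pathlib import Path

    DROP_DIR = Path(r"C:\winsystem")
    WATCHED_EXES = {"heif.exe", "python.exe", "py.exe", "pythonw.exe", "ffplay.exe"}

    def exe_dll_pairs(root: Path):
        if not root.exists():
            return
        for exe in root.rglob("*.exe"):
            if exe.name.lower() not in WATCHED_EXES:
                continue
            dlls = sorted(p.name for p in exe.parent.glob("*.dll"))
            if dlls:
                yield exe, dlls

    if __name__ == "__main__":
        for exe, dlls in exe_dll_pairs(DROP_DIR):
            print(f"{exe} sits beside DLLs: {', '.join(dlls)}")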

stage 3: coilhatch python loader

On the second run STARKVEIL spawns py.exe and feeds it an obfuscated one-liner. That snippet Base85-decodes, zlib-decompresses and marshal-loads the COILHATCH Python dropper, which chains RSA, AES, RC4 and XOR to decrypt a second Python stage. This script side-loads heif.dll into the signed heif.exe, kicking off the modular payloads.
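
The decode chain is ordinary Python plumbing; the obfuscation lies in the layering, not the primitives. Below is a benign reconstruction of that pattern (not the actual COILHATCH code) showing how a Base85 + zlib + marshal blob becomes an executable code object.

    # Benign reconstruction of the loader pattern described above.
    import base64, marshal, zlib

    # Build a harmless blob the same way an obfuscator would.
    inner = compile('print("stage two would run here")', "<stage2>", "exec")
    blob = base64.b85encode(zlib.compress(marshal.dumps(inner)))

    # What the one-liner fed to py.exe effectively does:
    # Base85-decode, zlib-decompress, marshal-load, then execute.
    exec(marshal.loads(zlib.decompress(base64.b85decode(blob))))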

stage 4: launcher and persistence

heif.dll moves the stash into %APPDATA%, sets an AutoRun key named “Dropbox”, and launches three legitimate processes (python.exe, pythonw.exe, ffplay.exe), each loading its own malicious DLL. The DLLs inject code via process replacement, so nothing materially touches disk after the first run.

Source: Google Threat Intelligence
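
Defenders can check for the persistence artifact directly. A Windows-only sketch, assuming the “Dropbox” Run-key value name reported above and flagging it when it points under %APPDATA%:

    # Check the current user's Run key for the reported "Dropbox" autorun
    # entry pointing into %APPDATA%.
    import os
    import winreg

    RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

    def check_run_key():
        appdata = os.environ.get("APPDATA", "").lower()
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
            i = 0
            while True:
                try:
                    name, value, _ = winreg.EnumValue(key, i)
                except OSError:
                    break  # no more values
                i += 1
                if name.lower() == "dropbox" and appdata and appdata in str(value).lower():
                    print(f"Suspicious autorun: {name} -> {value}")

    if __name__ == "__main__":
        check_run_key()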

Each component is delivered via DLL side-loading and process replacement, so the decrypted payloads never land on disk. If any one payload is caught, the others still run: a built-in fail-safe that the operators reinforce by rotating hashes and C2 ports across samples.

payload roster: grimpull, xworm, frostrift

GRIMPULL is a .NET downloader with heavy anti-analysis logic. It checks mutexes, BIOS strings, screen resolution and sandbox DLLs, then spins up a local Tor 13.0.9 bundle if one is absent. Traffic rides over Tor to strokes[.]zapto[.]org:7789, where TripleDES-encrypted .NET assemblies are fetched, reversed, and run in memory.
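
The Tor bundle gives hunters a network handle. The sketch below, which assumes the third-party psutil package, flags established outbound TCP connections to the default Tor SOCKS ports or to the C2 port quoted above; the port list is illustrative, not exhaustive.

    # Network triage sketch (requires: pip install psutil).
    import psutil

    # Tor SOCKS defaults plus the C2 port reported for GRIMPULL.
    SUSPECT_PORTS = {9050, 9150, 7789}

    def flag_connections():
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
                continue
            if conn.raddr.port in SUSPECT_PORTS:
                try:
                    proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
                except psutil.NoSuchProcess:
                    proc = "exited"
                print(f"{proc} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")

    if __name__ == "__main__":
        flag_connections()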

XWORM arrives through pythonw.exe. It surveys the host, captures keystrokes, files, and browser data, then uses a hard-coded Telegram bot token to exfiltrate the data. Operators get an instant ping with the machine ID so they can pivot manually if the target looks juicy.

FROSTRIFT comes via the ffplay.exe chain. It enumerates forty-eight browser extensions tied to password managers, 2FA helpers, and crypto wallets, lifts cookies, screen-grabs, and checks AV presence before downloading plugins for deeper theft.
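
An extension inventory makes that targeting concrete on the defensive side. A sketch, assuming a default Windows Chrome profile path, that lists installed extension IDs so they can be diffed against whichever password-manager, 2FA and wallet extensions your org actually tracks; the watchlist below is a placeholder.

    # List installed Chrome extension IDs from the default profile.
    import os
    from pathlib import Path

    EXT_DIR = Path(os.environ.get("LOCALAPPDATA", "")) / "Google/Chrome/User Data/Default/Extensions"
    WATCHLIST = {"replace-with-extension-ids-you-track"}  # placeholder set

    def installed_extension_ids():
        if not EXT_DIR.exists():
            return []
        return sorted(p.name for p in EXT_DIR.iterdir() if p.is_dir())

    if __name__ == "__main__":
        for ext_id in installed_extension_ids():
            marker = "WATCHLISTED" if ext_id in WATCHLIST else ""
            print(ext_id, marker)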

The redundancy is deliberate: any single module can fail without killing the breach. Each DLL is repacked for every campaign, which beats hash-based blocks and keeps EDR alerts lost in “grayware” noise.

Running all three gives the attacker passwords, cookies, credit cards, Facebook session tokens and a backdoor sturdy enough for follow-on fraud or resale on markets.

Two factors make the toolkit dangerous. First, modular architecture keeps the first-stage binary small, cutting sandbox detonation odds. Second, each module is commodity malware, so detections are drowned in the noise of “grayware” alerts many SOCs ignore.

scale and damage

Google estimates the Facebook ads alone clocked 2.3 million views in EU countries. Actual installs are harder to tally, but Mandiant incident-response teams already see the malware inside small businesses, marketing agencies and media outlets, exactly the workers most likely to experiment with video generators during a lunch break.

Credential theft cascades fast. Compromised Google and Microsoft logins have led to email forwarding rules, payroll-diversion attempts and access to ad accounts that then place more malicious AI ads, funding the next wave.

defense playbook

  1. Block or sandbox ZIP-delivered executables fetched from recently registered domains, especially when the referrer is a social-media ad redirect.

  2. Watch outbound Tor (port 9050) and Telegram API traffic that spins up minutes after a ZIP execution.

  3. Force extension visibility so users see .mp4.exe, not an MP4 icon; a registry sketch follows this list.

  4. Tune EDR for DLL side-loading into signed binaries (heif.exe, ffplay.exe) and for process replacement into AddInProcess32.exe.

  5. Harden the ad stack. Marketing teams should run new AI tools only inside disposable VMs or browser isolates.
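
For item 3, the Explorer setting is a single registry value that most fleets push by GPO; a minimal local sketch using the standard HideFileExt value looks like this. The change applies per user and takes effect once Explorer restarts.

    # Flip Explorer's HideFileExt value so full extensions (e.g. .mp4.exe) show.
    import winreg

    ADVANCED = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

    def show_file_extensions():
        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ADVANCED, 0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "HideFileExt", 0, winreg.REG_DWORD, 0)  # 0 = show extensions

    if __name__ == "__main__":
        show_file_extensions()
        print("File extensions will be visible after Explorer restarts.")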

the larger lesson

AI enthusiasm has widened the pool of technically savvy but impatient users. Unlike deep-fake crypto scams, these sites mimic legitimate developer tooling, so even seasoned engineers skip the usual caution. Social-engineering lures will track every hot AI release for the foreseeable future. Next up will likely be fake text-to-3D or synthetic voice generators.

Security teams must treat “try the new AI tool” the way we already treat “open this doc for e-sign.” Vet the vendor list, host sanctioned installers internally and warn employees that pop-up ads promising miraculous AI are malware until proven otherwise.

UNC6032 proves that a polished landing page and a trending AI buzzword can replace zero-days. The fastest patch is still user skepticism backed by tight download controls.