🎓️ Vulnerable U | #168

So many more supply chain worms, so many more Linux 0days, some Android 0days as a treat, and much more!

Read Time: 8 minutes


Howdy friends!

I’ve fallen into a new hobby by force. I’m keynoting a conference in a few weeks called Descent Cyber - and will be scuba diving with CISOs and other industry folks. So I’ve been very busy getting scuba certified for the first time. What a wild process! Will I see any of you down there? I think the other keynote is the CISO of the NFL and there are a ton of industry vets going. I’m excited.

If you’re reading this and enjoy a good livestream, I’m officially a few months into Tue, Wed, Thur streams on YouTube and Twitch. We have great convos, and the chat fills with practitioners living on the front lines. I love when I talk about something and someone in the stream has to go run an incident because that’s how they found out. Come check it out. I start in the 9am hour CST and run about 2-4 hours depending on what’s going on. This week I even had Andrew Peterson, former CEO of Signal Sciences and current Investor for Aviso, come in the studio and jam.

ICYMI

🖊️ Something I wrote: My ELI5 breakdown of the recent supply chain malware issue

🎧️ Something I heard: Primeagen’s rundown of recent Mythos news and capabilities

🎤 Something I said: The most work we’ve put into a YouTube video yet. I interviewed the teams that found the recent iOS 0days in the wild.

🔖 Something I read: This org made a honeypot GitHub to catch AI bots pushing BS PRs so they can block them from their real repos.

Vulnerable News

Another week, another NPM worm. This latest wave, “Mini Shai-Hulud”, hit packages tied to TanStack, Mistral AI, and a growing list of projects across NPM and PyPI. At this point, the playbook is becoming painfully familiar: compromise a developer, steal GitHub and NPM tokens, poison legitimate packages, spread to additional maintainers and repos, repeat. But this one feels bigger because the attackers abused legitimate GitHub Actions workflows and trusted release pipelines to publish malicious updates under real developer identities. That’s a very different problem than typo-squatted malware packages sitting in a registry corner waiting for someone to accidentally install them.

The malware itself was extremely focused on developers and CI/CD infrastructure. Once installed, it hunted for GitHub tokens, NPM credentials, AWS secrets, Kubernetes configs, SSH keys, and other high-value developer artifacts. If it found package publishing access, it attempted to spread itself further. That’s the worm behavior. And this is where the modern dependency ecosystem starts looking really fragile. Developers are now stuck in an impossible tradeoff: patch too slowly and you sit on known vulnerabilities; patch too quickly and you might automatically pull malware before researchers catch it. That’s why teams are now talking seriously about “minimum release age” policies, intentionally delaying dependency updates by several days so security vendors have time to identify malicious releases.
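For teams that want to try the delayed-update approach, some package managers already expose a knob for it. As a sketch (setting names here assume a recent pnpm release; check your tool’s docs before copying this):

```yaml
# pnpm-workspace.yaml — refuse to install versions published too recently,
# giving security vendors a window to flag malicious releases.
minimumReleaseAge: 10080   # minutes (~7 days)
minimumReleaseAgeExclude:
  - "@your-org/*"          # hypothetical: trust your own internal packages immediately
```

The tradeoff is explicit: you accept sitting a few days behind on legitimate patches in exchange for not auto-pulling a worm the hour it ships.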

The really uncomfortable part is how much of this is just the ecosystem functioning as designed. Trusted automation. Trusted publishers. CI/CD pipelines moving at internet speed. The attackers are using the same workflows developers use. And they’re evolving fast. Some variants reportedly included dead-man-switch behavior where revoking a stolen GitHub token could trigger destructive actions on the victim machine. Others experimented with geofenced wiper functionality. This is active supply-chain warfare happening inside the developer ecosystems that run modern software. (Read more here, here, here and here)

Intruder analyzed 3,000 organizations’ attack surfaces. Top finding: more teams should be asking ‘does this actually need to be on the internet?’

There’s no better time to ask it. AI can now find zero-days autonomously and time-to-exploit has shrunk to a single day. Anything on the internet that doesn't need to be is a target the moment a new CVE drops.

In the report:

  • What are the most common attack surface exposures?

  • How long are organizations taking to fix them?

  • How does your industry compare?

*Sponsored

Project Zero dropped a really interesting write-up showing how they went from a zero-click context to full root on Android using just two exploits chained together. The original work targeted the Pixel 9 earlier this year, and then they adapted the same general approach to the Pixel 10. The wildest part is how straightforward they said portions of it were. One of the bugs apparently stood out specifically because it was “exceptionally simple to exploit.” Zero-click to root on a mobile device is still one of the scariest classes of bugs out there.

The important context here is this was pre-Mythos. You can’t blame AI for this one. This is just elite vulnerability research and exploit development. But one encouraging thing was Project Zero actually praising Google’s response process, noting it was the first time the vendor patched within the 90-day disclosure window. It’s also kind of funny because even though both teams are technically Google, Project Zero still treats them like a third-party vendor relationship. (read more)

Another day, another Linux privilege escalation. Idk what’s going on. If you missed the last two, Copy Fail and DirtyFrag - you can catch up here. The latest, Fragnesia, reportedly emerged as an unintended side effect of one of the patches for DirtyFrag. Security is hard. Just like the npm issues, patching doesn’t mean you’ve negated all risk.

The good news is you can disable the vulnerable kernel modules (esp4, esp6, rxrpc) if you don't need them, or restrict unprivileged user namespaces. If you suspect you've been hit, a simple reboot or cache flush will clear the in-memory modifications. (read more)
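A sketch of what those mitigations look like on a typical systemd distro (module names come from the advisory above; only blacklist them if nothing on the host needs IPsec ESP or AF_RXRPC):

```shell
# Prevent the affected kernel modules from loading on next boot
cat <<'EOF' | sudo tee /etc/modprobe.d/disable-fragnesia.conf
blacklist esp4
blacklist esp6
blacklist rxrpc
EOF

# And/or restrict unprivileged user namespaces. The generic knob:
sudo sysctl -w user.max_user_namespaces=0
# Debian/Ubuntu kernels also ship a dedicated toggle:
# sudo sysctl -w kernel.unprivileged_userns_clone=0
```

Persist the sysctl in /etc/sysctl.d/ if you want it to survive the reboot that clears any in-memory tampering.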

Google’s new threat intelligence report is the strongest evidence yet that AI is already being operationalized by threat actors in real attacks. The report goes through attackers using AI for vulnerability research, exploit development, phishing, malware refinement, and attack automation, and they even call out Team PCP, the same group behind the Mini Shai-Hulud supply-chain worm.

What stands out to me is that attackers are getting faster and scaling harder. Google says they’ve now seen evidence of AI being used to discover and help weaponize a zero-day for the first time, including a 2FA bypass exploit. Honestly, I think this is just the beginning. We’re already seeing vibe-coded malware, automated payload generation, and increasingly autonomous attack workflows. I’m probably going to do a long YouTube video walking through this whole report because it feels like one of those “this is where the industry changes” moments. (read more)

AiTM phishing, ClickFix, device code phishing, ConsentFix, malicious browser extensions — two years ago, most of these were research curiosities. Today they're industrialized, available as PhaaS, and behind the majority of identity compromises. Push Security's Browser & Identity Attacks Matrix maps every technique in one open-source framework.

Explore the matrix from Push Security. (read more)

*Sponsored

OpenAI launched yet another cybersecurity AI initiative, this one called Daybreak, and my first reaction is: what the hell is going on? Because at this point we’ve had Trusted Access for Cyber, Codex Security, Expanded Trusted Access, GPT-5 Cyber, scaling trusted access with GPT-5 Cyber… and now Daybreak. The branding and positioning are getting impossible to follow, even for people who live in this space every day.

But underneath the marketing confusion, I do think there’s something important happening here. Daybreak feels less like “AI security tooling” and more like OpenAI acknowledging that frontier models are fundamentally changing vulnerability discovery and defensive operations. The Trail of Bits write-up framed it really well: frontier models are now finding bugs faster than maintainers can triage them.

That’s a pretty massive shift if you stop and think about it. We’re heading toward a world where AI systems are acting more autonomously inside security workflows: vulnerability research, SOC operations, triage, remediation, code review, all of it. The interesting part is whether enterprises are actually going to trust these systems enough to let them operate at scale inside production environments. (More here, here and here)

So this researcher is fed up with Microsoft’s disclosure process and decided: “f it, we’re doing painful disclosure.” Instead of quietly coordinating fixes, he started publicly dropping the research straight onto GitHub, including the latest, YellowKey and GreenPlasma.

The one that really stood out to me was YellowKey, which is a BitLocker bypass affecting Windows 11 and newer server builds. The way he describes it almost sounds like discovering some weird hidden debug or recovery functionality left inside WinRE. You boot into recovery, hit a specific key combo, and there’s behavior around BitLocker relocking that apparently should not exist. His whole point is basically: “why is this functionality even here?” And I’ll be honest, reading through it, I kind of get why he starts speculating about it being a purposefully created backdoor. (read more here, here and here)

The Forza Horizon 6 leak is one of those stories where I genuinely don’t know if it’s “cyber” or just an absolutely catastrophic operational screw-up. The initial reporting made it sound like somebody accidentally pushed the game live on Steam early with unencrypted files, which immediately led to the raw game assets leaking and pirates cracking it before launch.

But then they came out and said it wasn’t a preload issue, which honestly just makes the whole thing weirder. Because if it wasn’t some accidental publishing mistake… then what was it? That’s the part that has everybody speculating right now. Meanwhile, the downstream effects are already happening: cracked builds circulating before release, pirated copies spreading everywhere, and the publisher threatening franchise-wide bans for anyone involved. (read more here and here)

The mayor of Arcadia pled guilty to acting as an illegal agent for China tied to operating a website that pushed pro-PRC messaging into the local Chinese-American community. From the reporting, it wasn’t just “generally pro-China opinions.” The allegation is she was actually receiving directives from PRC officials about what content to publish and sometimes even seeking approval before circulating material.

The part that makes this way more serious is that this is an elected official. If this were just some random propaganda site, okay, that’s one thing. At the same time, this doesn’t sound like “spy movie” espionage stuff so much as an information operation: influence, messaging, narrative shaping, all of that. (read more)

The Canvas/Instructure hack got really wild because the ransomware group didn’t just ransom the company, they basically threatened every school using Canvas too. The attackers published a giant list of something like 9,000 schools and essentially said: “If you don’t want your school’s data released, contact us.” That’s a pretty massive escalation compared to the normal playbook.

ShinyHunters came out and basically said the matter was resolved and schools wouldn’t be further targeted. Then Instructure publicly confirmed they paid the ransom. Straight up. They said the data was returned and they received assurances it wouldn’t be further shared. FBI guidance is always “don’t pay.” But honestly I completely understand why companies do it, especially when the blast radius suddenly includes thousands of schools/children. (read more)

Mozilla used Claude Mythos Preview and other models to find 271 security vulnerabilities in Firefox - part of a massive 423 total bugs they squashed in April alone. The lesson that stands out to me is that we all better be building custom harnesses. Mozilla seems to be more successful than other Project Glasswing participants, and all evidence points to them running a very sophisticated harness on top of their legacy bug finding/fixing system so they can swap in models as they’re released.

The "agentic harness" can actually test and validate bugs instead of just spitting out false positives. They went from AI bug reports being mostly useless noise to finding legitimate sandbox escapes that would make any red teamer jealous. Mozilla's basically saying the AI security audit game has fundamentally shifted - they're encouraging everyone to start building similar pipelines now because the models are finally good enough to be worth the effort. Given how many of these were sandbox escapes and parent process UAFs, attackers are probably already doing this too. (read more)
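The core idea - don’t surface a model-generated finding until you’ve mechanically reproduced it - is simple to sketch. To be clear, Mozilla hasn’t published their harness; everything below is hypothetical, and `reproduce` is a stand-in for running a repro in a sandboxed build:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    title: str
    repro_script: str  # model-supplied steps to trigger the bug

def triage(findings: list[Finding],
           reproduce: Callable[[str], bool]) -> list[Finding]:
    """Keep only findings whose repro actually fires.

    In a real harness, `reproduce` would execute the repro against an
    instrumented build and check for a crash or sandbox violation;
    here it's just a callable so the filtering logic is visible.
    """
    return [f for f in findings if reproduce(f.repro_script)]

# Toy usage: a fake reproducer that only "confirms" one repro.
findings = [
    Finding("UAF in parent process", "trigger_uaf.sh"),
    Finding("hallucinated overflow", "nonsense.sh"),
]
confirmed = triage(findings, lambda s: s == "trigger_uaf.sh")
print([f.title for f in confirmed])
```

The whole value is in that filter step: the model can hallucinate all it wants, but only findings that survive a mechanical reproduction ever reach a human.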

Miscellaneous mattjay

How'd I do this edition?

It's hard doing this in a vacuum. Screaming into a void. Feedback is incredibly valuable to make sure I'm making a newsletter you love getting every week.


Parting Thoughts:

Community was foundational in launching and propelling my career. Community is the only reason I can stand being in Texas during the summer months. Community is the point. Today, I invite you to embrace discomfort on the road to a more vulnerable you.

Stay safe, Matt Johansen
@mattjay