Tech
Lazarus industrialises AI across the full attack kill chain
Expel’s report is the clearest documentation to date of a state actor
integrating AI across every phase of a campaign. North Korea’s Lazarus
Group uses ChatGPT and Cursor to generate convincing recruiter personas
targeting Web3 and crypto developers, to scan target codebases for
vulnerabilities, and to refine malware — BeaverTail, OtterCookie,
InvisibleFerret — with AI assistance. In Q1 2026 alone, private keys for
wallets holding up to $12M were exfiltrated across multiple blockchains. Expel
notified both AI vendors about the abuse. A new baseline for what
state-sponsored AI-assisted intrusion looks like.
Sources: Expel · Lobsters
Mozilla + Claude find 271 vulnerabilities in Firefox 150
Mozilla’s Firefox team collaborated with Anthropic’s Claude (Mythos
Preview) to ship patches for 271 vulnerabilities in Firefox 150 — an
unprecedented scale of AI-assisted discovery. Mozilla argues this
represents a turning point where defenders get a decisive edge in the
zero-day arms race. Whether AI-driven large-scale vulnerability
discovery becomes standard defensive practice — or a cat-and-mouse race
with the Lazarus side of the ledger — remains to be seen.
Sources: Mozilla · Lobsters
OpenAI’s macOS signing pipeline hit by Axios supply-chain compromise
On March 31, the widely used JavaScript library Axios was
compromised. A malicious version was pulled in by a GitHub Actions
workflow in OpenAI’s macOS app-signing process, giving the attacker’s
code access to code-signing certificates for ChatGPT Desktop and
Codex. OpenAI found no evidence of
successful certificate exfiltration or user data exposure due to timing,
and has rotated certificates and released updated app builds. A sharp
reminder of CI/CD supply-chain surface area.
Sources: OpenAI · Hacker News
Claude Code pricing scare: Anthropic accidentally published a $100/month restriction
On April 22, Anthropic’s pricing page briefly updated to restrict
Claude Code to $100+/month Max plans — up from the $20 Pro tier —
causing widespread developer-community alarm. The change was reversed
within hours; an Anthropic employee clarified it was an error:
public-facing pages were mistakenly updated while a small 2% test was
running for new signups. Simon Willison’s breakdown cuts through the
confusion.
Sources: Simon Willison · Lobsters
Reversing SynthID: Google’s AI watermark is extractable and forgeable
Hacker Factor’s analysis of Alosh Denny’s reverse-engineering of
SynthID finds the watermark pattern is consistent and extractable — and
argues this makes things worse, not better. It opens the door to
injecting or stripping the watermark to manipulate
Google’s training pipeline, enabling misattribution and data poisoning.
The post frames AI watermarking as a regulatory fig leaf rather than a
genuine safeguard.
Sources: Hacker Factor · Lobsters
Firefox/Tor cross-origin fingerprint via IndexedDB
Fingerprint.com researchers found non-deterministic IndexedDB entry
ordering in Firefox exposes a stable, process-scoped identifier that
leaks across origins — letting unrelated sites correlate a user’s
activity within a browser session. The vulnerability is especially
severe in Tor Browser, where it survives the “New Identity” reset.
Mozilla patched it in Firefox 150 and ESR 140.10.0 (bug 2024220) by
sorting results before returning them.
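The leak mechanism can be modelled in a few lines: if some per-process
random state fixes the enumeration order of stored entries, every origin
served by that process observes the same permutation, and the permutation
itself becomes an identifier; sorting before returning erases it. A
conceptual sketch only, not Firefox’s actual implementation:

```python
import random

# Hypothetical model: a per-process seed fixes enumeration order, the
# way an unstable internal data structure might.
PROCESS_SEED = random.randrange(2**32)

ENTRIES = [f"entry-{i}" for i in range(8)]

def enumerate_entries(origin: str, patched: bool = False) -> list[str]:
    # Every origin in the same browser process sees the same shuffle,
    # so the order itself is a cross-origin identifier.
    order = ENTRIES[:]
    random.Random(PROCESS_SEED).shuffle(order)
    if patched:
        order.sort()  # the fix: return a canonical order instead
    return order

# Two unrelated origins observe identical ordering -> correlatable.
assert enumerate_entries("site-a.example") == enumerate_entries("site-b.example")
# After the fix, the observable order carries no process-specific signal.
assert enumerate_entries("site-a.example", patched=True) == sorted(ENTRIES)
```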
Sources: Fingerprint.com · Hacker News
LLM over-editing: models rewrite more code than necessary
A write-up, with metrics, on the tendency of coding LLMs to rewrite
more code than necessary when fixing bugs — “over-editing.” The
behaviour is measured across frontier models (GPT, Claude, etc.), shown
to be widespread, and reducible through RL training that rewards
minimal, faithful edits without degrading general coding ability.
Practical reading for anyone using coding assistants for targeted fixes.
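One simple way to quantify over-editing (a sketch of the idea, not
necessarily the post’s exact metric): compare the lines a model actually
changed against the minimal diff the fix required.

```python
import difflib

def changed_lines(before: str, after: str) -> int:
    """Count lines inserted or deleted between two versions."""
    diff = difflib.ndiff(before.splitlines(), after.splitlines())
    return sum(1 for line in diff if line.startswith(("+ ", "- ")))

def over_edit_ratio(original: str, minimal_fix: str, model_fix: str) -> float:
    """1.0 = model matched the minimal fix; >1.0 = it touched extra lines."""
    minimal = changed_lines(original, minimal_fix)
    actual = changed_lines(original, model_fix)
    return actual / max(minimal, 1)

original = "def add(a, b):\n    return a - b\n"
minimal_fix = "def add(a, b):\n    return a + b\n"   # one-line fix
model_fix = (                                         # gratuitous rewrite
    "def add(x, y):\n    # Add two numbers.\n    return x + y\n"
)

assert over_edit_ratio(original, minimal_fix, minimal_fix) == 1.0
assert over_edit_ratio(original, minimal_fix, model_fix) > 1.0
```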
Sources: nrehiew.github.io · Hacker News
Using LLMs to find bugs in Python C extensions
LWN on using LLMs to systematically surface memory-safety and
correctness bugs in Python’s C-extension layer — reasoning about C
memory semantics at scale and finding bugs that traditional static
analysis and fuzzing miss.
Sources: LWN · Lobsters
Zed launches parallel agents with multi-worktree orchestration
Zed now supports orchestrating multiple AI agents in parallel within
a single editor window via a new Threads Sidebar — grouping threads by
project, mixing models per-thread, isolating worktrees per agent, and
monitoring all agents simultaneously. The release reflects Zed’s
“agentic engineering” philosophy: keeping the developer in control while
scaling AI-assisted work across independent tasks. Available in the
latest release, opt-in for existing users.
Sources: Zed · Hacker News
ChatGPT Workspace Agents — persistent, org-wide
OpenAI introduces Workspace Agents — persistent AI agents deployable
organisation-wide with admin-configured access to integrated tools (web,
code execution, files, connected services). Agents persist state and are
invokable by team members: OpenAI’s move toward organisation-level
agentic infrastructure rather than individual-use AI.
Sources: OpenAI · Hacker News
Forge: unified CLI for GitHub, GitLab, Gitea, Forgejo, Bitbucket
Andrew Nesbitt built Forge, a unified CLI and Go module providing a
consistent interface across multiple git forges. It targets the friction
of cross-platform work against incompatible APIs, and is designed for AI
coding agents as well as human users.
Sources: nesbitt.io · Lobsters
David Crawshaw’s new cloud
David Crawshaw (Tailscale co-founder) announces exe.dev, a new
cloud-infrastructure company targeting what he argues are fundamental
flaws in existing providers’ VM isolation, storage, and networking
costs. Thesis: current clouds were designed around 2006-era constraints
and are ill-suited for modern workloads — especially as AI agents drive
increased software development.
Sources: crawshaw.io · Hacker News
Apple patches iPhone bug used by law enforcement to recover deleted chats
Apple has patched a vulnerability that allowed law-enforcement
forensic tools to recover deleted chat messages from iPhones. The bug
was actively exploited by extraction tools used by police forces. The
fix lands in the latest iOS update.
Sources: TechCrunch · Hacker News
What async promised, and what it delivered
A critical retrospective tracing three generations of async
programming — callbacks, promises/futures, async/await — arguing each
solved the previous ergonomic problem while introducing new structural
costs (callback hell → promise type splits → function colouring and
“futurelocks”). The author contends async/await’s “sequential trap”
hides parallelism and accumulated ecosystem fragmentation outweighs the
gains, pointing to Go goroutines, Java virtual threads, and Zig’s
interface-based runtime selection as more principled alternatives that
avoided colouring entirely.
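The colouring complaint is concrete: an async function can only be
awaited from other async code, so the annotation propagates up the call
graph. A minimal Python illustration of the split:

```python
import asyncio

async def fetch_value() -> int:     # a "red" (async) function
    await asyncio.sleep(0)          # stand-in for real I/O
    return 42

def plain_caller() -> int:          # a "blue" (sync) function
    # Calling fetch_value() here just yields a coroutine object; to get
    # the result, sync code must drag in an event loop at the boundary.
    return asyncio.run(fetch_value())

async def async_caller() -> int:
    # Async callers can await directly, but must themselves be async,
    # which is exactly how the colour propagates up the call graph.
    return await fetch_value()

assert plain_caller() == 42
assert asyncio.run(async_caller()) == 42
```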
Sources: Causality · Lobsters
Borrow-checking without type-checking
A deep exploration of runtime borrow-checking in a dynamically-typed
language — using reference counting on the stack to track owned,
borrowed, and shared references. Enables interior pointers and explicit
stack allocation while preserving value semantics, with overhead limited
to refcount operations at reference creation/destruction. Shows
borrow-checking is a memory-safety mechanism orthogonal to static type
systems.
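The core idea, sketched loosely in Python (the article’s language and
implementation differ): each value carries a counter of live shared
borrows, and mutation is refused while any shared borrow is outstanding.

```python
class BorrowError(Exception):
    pass

class Cell:
    """A value with runtime-checked borrows: many readers XOR one writer."""
    def __init__(self, value):
        self._value = value
        self._shared = 0        # live shared (read-only) borrows

    def borrow(self):
        self._shared += 1
        return self._value

    def release(self):
        self._shared -= 1

    def set(self, value):
        if self._shared:
            raise BorrowError("cannot mutate while shared borrows are live")
        self._value = value

cell = Cell([1, 2, 3])
view = cell.borrow()            # shared borrow created at runtime
try:
    cell.set([9])               # refused: a reader still holds the value
except BorrowError:
    pass
cell.release()                  # borrow count drops to zero...
cell.set([9])                   # ...and mutation is allowed again
```

The overhead is exactly what the post describes: an increment and a
decrement at reference creation and destruction, with no static types
involved.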
Sources: scattered-thoughts.net · Hacker News
The edge of safe Rust: generativity-based GC with internal raw pointers
A write-up of a TokioConf 2026 talk on garbage collection with
circular references in safe Rust — progressing from Vec-based indexing
to a generativity-based design that keeps all unsafe code internal while
exposing a safe API. The pattern is underappreciated; real-world
examples live in Ruffle (the Flash emulator) and Fields of Mistria.
Sources: kyju.org · Lobsters
LemmaScript: TypeScript-to-Dafny formal verification
LemmaScript is a TypeScript-to-Dafny compiler that lets developers
add formal verification to TypeScript code without changing the
executable source — annotations live as comments, making it non-invasive
for existing codebases. Bridges practical TypeScript work with Dafny’s
correctness proofs, for both greenfield and brownfield projects.
Sources: Midspiral
· Lobsters
How Shazam’s audio fingerprinting actually works
A clear technical explainer of Shazam’s core algorithm: extracting
spectrogram peaks, building constellation maps, generating combinatorial
hash pairs, and matching against a database in constant time regardless
of database size. Walks through the specific design choices that make it
robust to noise, recording artifacts, and partial clips.
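The constant-time-lookup claim follows from the data structure: hash
pairs (two peak frequencies plus their time delta) index into a hash
table, so matching cost scales with the clip’s hash count, not the
database size. A toy sketch assuming spectrogram peaks are already
extracted; fan-out and the voting rule are simplified stand-ins:

```python
from collections import Counter, defaultdict

def hash_pairs(peaks, fan_out=3):
    """peaks: time-sorted list of (time, frequency) spectrogram peaks.
    Pair each anchor peak with a few nearby peaks; the hash encodes
    (anchor freq, target freq, time delta), independent of absolute time."""
    pairs = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            pairs.append(((f1, f2, t2 - t1), t1))
    return pairs

def build_index(tracks):
    index = defaultdict(list)   # hash -> [(track_id, anchor_time)]
    for track_id, peaks in tracks.items():
        for h, t in hash_pairs(peaks):
            index[h].append((track_id, t))
    return index

def match(index, clip_peaks):
    # Vote per track; the real system histograms time offsets so that
    # only temporally consistent hash hits count, which defeats noise.
    votes = Counter()
    for h, _ in hash_pairs(clip_peaks):
        for track_id, _ in index.get(h, ()):
            votes[track_id] += 1
    return votes.most_common(1)[0][0] if votes else None

songs = {
    "song-a": [(0, 100), (1, 220), (2, 150), (3, 330), (4, 90)],
    "song-b": [(0, 400), (1, 410), (2, 405), (3, 500), (4, 90)],
}
index = build_index(songs)
clip = [(10, 100), (11, 220), (12, 150), (13, 330)]  # song-a, time-shifted
assert match(index, clip) == "song-a"
```

Because only the relative time delta enters the hash, a clip recorded
from the middle of a song still hits the same table entries.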
Sources: Per Thirty-Six · Lobsters