Odd Lots: Cyberwar in the Age of AI

· 2136 words · 11 minute read

Yusuf Dikec, the Turkish sport shooter, became famous for competing with no specialized equipment – no lens, no ear protection – and still winning silver at the 2024 Olympics. Sometimes too much technology works against you.

On March 7, 2026, I joined Tracy Alloway and Joe Weisenthal on Bloomberg’s Odd Lots podcast for the second time. The first was in March 2022, during the Russia-Ukraine war, where we discussed what cyberwar actually looks like. Four years later, the same thesis holds – but the stakes have changed dramatically.

Listen: Apple Podcasts | Spotify

This time: the Iran-Israel war, the first kinetic attack on cloud infrastructure, Anthropic’s standoff with the Pentagon, AI coding agents, and why I started a new company called OnDB.


Cyber Supports Wars. It Doesn’t Win Them. 🔗

In 2022, I told Odd Lots that cyber is a component of warfare, not a standalone event. Cyber is mostly useful before an attack – for intelligence gathering and reconnaissance. Once a war goes kinetic, cyber is mostly used to create confusion. The Iran-Israel war that started on February 28, 2026 – when US and Israeli forces launched nearly 900 strikes in 12 hours targeting Iranian missiles, air defenses, military infrastructure, and leadership (“Operation Epic Fury”) – confirmed this at the largest scale we have ever seen.

The war started with the US-Israel kinetic strikes. Iran retaliated with kinetic force – 247 ballistic missiles and 230 drones across the Gulf, taking out AWS data centers in the process. On both sides, the airstrikes were the main event.

Internet connectivity in Iran dropped to 4%. Over 90 million people were blacked out for more than 72 hours – partly from Israeli cyber operations, partly from the Iranian government’s own kill switch. The BadeSaba prayer app was hacked to urge military defections. Traffic lights in Tehran were reportedly hacked for reconnaissance. But as I said on the podcast: once you start using missiles, most of these cyber elements are not really relevant. They create confusion. The kinetic strikes do the destroying.

One government claims AI-powered precision strikes, yet a girls' school in Minab is destroyed and at least 175 people are killed. A US military investigation points to likely US responsibility. The other side takes down hyperscaler data centers with $20k drones. Two very different definitions of technological warfare.


A $20k Drone vs. Billions in Cloud Infrastructure 🔗

On March 2, Iranian Shahed-type drones – costing roughly $20,000 to $50,000 each – hit AWS data centers in the Gulf. It was the first time cloud infrastructure had ever been knocked out by military action.

Two of the three Availability Zones in AWS ME-CENTRAL-1 were directly struck. Fires, emergency power shutdowns, structural damage, water damage from fire suppression. EC2, S3, RDS, Lambda, DynamoDB – dozens of services went down. Regional consumer apps, banking providers, and enterprise platforms like Snowflake all went dark. Vercel had to reroute traffic and exclude the region from deployments entirely.

Joe asked how disruptive the attacks really were, assuming cloud services were “fairly liquid.” The answer: extremely. Once you have centralization of dependence, data centers become easy targets. And nobody had $20k drones in their threat models.

AWS used euphemistic language for 36 hours – “objects struck the data center” – before finally saying “drone strikes.” By March 3, they stopped public updates and repeatedly told customers to migrate workloads out of the Middle East entirely.

S3 was designed to survive the loss of a single Availability Zone. When the first AZ went down, S3 continued normally. When the second AZ was hit hours later, S3 broke. AWS had never operationally modeled a kinetic attack taking out two simultaneously.
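
That gap is easy to see in a toy availability model. The sketch below is illustrative only – the zone names and the 2-of-3 quorum rule are my assumptions, not AWS's actual replication design – but it captures why a service engineered for single-AZ loss survives the first strike and fails on the second:

```python
# Toy model (not AWS's actual design) of why a service built to
# survive one Availability Zone failure breaks when two go down.
# Zone names and the 2-of-3 quorum rule are illustrative assumptions.

AZS = {"az1", "az2", "az3"}

def service_available(failed_azs: set[str], quorum: int = 2) -> bool:
    """A quorum-replicated service stays up while at least
    `quorum` of its zones remain healthy."""
    healthy = AZS - failed_azs
    return len(healthy) >= quorum

# Designed-for failure mode: one AZ lost, still serving.
assert service_available({"az1"}) is True

# The un-modeled kinetic scenario: two AZs struck hours apart.
assert service_available({"az1", "az2"}) is False
```

The threat model only ever enumerated single-zone failures; once two zones share the same blast radius, the quorum math that made the design look safe works against it.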

The entire industry – cloud providers, AI frontier companies, governments – was focused on software vulnerabilities, DDoS, and misconfigurations. Nobody priced in that a $20k drone with GPS coordinates would be more effective than any exploit ever written. Even Stuxnet – the most celebrated cyberattack in history – had limited, temporary impact. The 2026 airstrikes achieved more in hours than Stuxnet did in years.


Iran’s Capabilities Are Consistently Underestimated 🔗

The military and intelligence communities have consistently underestimated Iran’s capabilities, exactly as they once did with North Korea.

For years, nobody saw what Iran was truly capable of. After the pager attack on Hezbollah in September 2024 – a physical supply chain compromise using the same playbook Snowden revealed when he exposed the NSA’s Tailored Access Operations – Iran’s response was limited. After the June 2025 US-Israel strikes, Iran hit a single US base in the Gulf. Limited.

Then came February 28, 2026. The US and Israel launched massive strikes that killed Khamenei and senior commanders. Iran retaliated across multiple GCC and neighboring countries simultaneously – 247 ballistic missiles and 230 drones. Unprecedented scale.

From the Shamoon wiper that destroyed 35,000 workstations at Saudi Aramco in 2012, to the CyberAv3ngers compromising US water systems in 2023-2024, to the suspected Iranian cyberattack on US medical device company Stryker – which acquired Israeli company OrthoSpace in 2019 – that left the company offline in March 2026, to the AWS drone strikes – Iran has been hitting critical infrastructure for over a decade. Their willingness to deploy wipers, hit water systems, and strike without caring about diplomatic consequences makes them more dangerous than their technical sophistication alone suggests.


Inference Runs on Energy 🔗

If Iran closes the Strait of Hormuz – through which roughly 20% of the world’s oil passes – energy prices spike globally. Whatever you save on compute efficiency, you lose on the energy bill to run it.

As I told Tracy and Joe: if you are going to use AI for next-generation wars, but your enemy can simply raise your cost per token and per inference, what does that even mean? The same conflict that proved drones beat exploits also has the potential to make AI itself more expensive to operate.
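
The pass-through is straightforward arithmetic. A back-of-the-envelope sketch – every number below is an illustrative assumption, not a measurement – shows how an electricity price shock flows directly into the energy component of token cost:

```python
# Back-of-the-envelope: electricity price -> cost per token.
# All figures (node power draw, throughput, $/kWh) are illustrative
# assumptions, not benchmarks of any real deployment.

def energy_cost_per_million_tokens(node_power_kw: float,
                                   tokens_per_second: float,
                                   price_per_kwh: float) -> float:
    """Electricity cost (USD) to generate one million tokens."""
    seconds = 1_000_000 / tokens_per_second
    kwh = node_power_kw * seconds / 3600
    return kwh * price_per_kwh

# Assumed: an 8-GPU inference node drawing ~10 kW at ~5,000 tokens/s.
baseline = energy_cost_per_million_tokens(10.0, 5000.0, 0.10)
shocked  = energy_cost_per_million_tokens(10.0, 5000.0, 0.25)

# A 2.5x electricity price spike is a 2.5x spike in the energy
# component of every token served.
assert abs(shocked / baseline - 2.5) < 1e-9
```

Efficiency gains scale the baseline down, but the ratio is set by the energy market: whoever moves the price of a kilowatt-hour moves the price of a token.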


Anthropic, the Pentagon, and the Snowden Parallel 🔗

The US designated Anthropic a “supply chain risk.” Meanwhile, the US intelligence community keeps losing its own tools – Snowden, the Shadow Brokers, and Peter Williams, the L3Harris executive sentenced in February 2026 for selling eight zero-day exploits to a Russian broker for $1.3 million in crypto. Three major leaks in a decade. The irony is hard to overstate.

Anthropic was the first AI model developer used in classified operations by the Defense Department. Claude is integrated into Palantir’s Maven Smart System (MSS), an AI-enabled warfighting system used to speed up US military targeting decisions. MSS draws together data from satellites, drones, intelligence reports, and radar signals. Claude analyzes this data to provide target recommendations and suggest what type of force to use. MSS is currently deployed by the US to assist targeting in Iran.

The Washington Post reported that as planning for strikes on Iran was underway, Maven suggested hundreds of targets, issued precise location coordinates, and prioritized them by importance. The Iranian elementary school was on the US target list and may have been mistaken for a military site. The strike killed at least 175 people, many of them children. Both the Israeli and US militaries are using Palantir’s Maven to conduct operations.

The head of CENTCOM, Adm. Brad Cooper, said the United States is “leveraging a variety of advanced AI tools” to conduct strikes, adding: “Humans will always make final decisions on what to shoot and what not to shoot, and when to shoot.”

It is unclear whether Maven or any AI model played a direct role in that specific strike. I remember exactly what it felt like when the Collateral Murder video leaked by Chelsea Manning was released – the outrage, the shock. A US missile strike on a school killing 175 people makes that moment feel small by comparison.

The broader point stands: just like software engineering, AI should be here to assist critical decisions, not to take them. We are not in the age of fully autonomous agents, and those who believe otherwise are making premature decisions with catastrophic consequences.

After Claude was reportedly used in the Maduro capture in Venezuela – including bombing sites in Caracas – Anthropic pushed back. Hegseth gave Dario Amodei a three-day ultimatum: comply with “all lawful use” or face consequences. Anthropic refused. Trump ordered the government to stop using Anthropic. The Pentagon designated it a “supply chain risk” – first time ever for an American company. Hours later, OpenAI announced a Pentagon deal.

On March 11, Bloomberg reported that China is moving to restrict banks, state firms, and government bodies from using OpenClaw AI apps on office computers over security concerns. The supply chain risk concern around AI tools is real – vibe-coded apps and AI-generated code have created more attack surface in the last few months than in years before.

As I mentioned on the podcast, back in the Snowden days people were scared of mass surveillance and pushed back hard. Now a CEO is being punished for refusing to enable it. AI makes PRISM look primitive. PRISM collected data. AI can analyze, profile, and act on it at a scale that was never possible before. Amodei is drawing his red line exactly where Snowden blew the whistle.


Software Is Going to Zero 🔗

Boris Cherny, the head of Claude Code at Anthropic, has not manually edited a single line of code since November 2025. He predicts that by the end of 2026, the title “software engineer” will start to disappear. Claude Code overtook both GitHub Copilot and Cursor as the most-used AI coding tool just eight months after its release.

Joe described the friction he encounters vibe coding: wanting an agent to grab information, only to be told to go create an account and get an API key. What he wants is for the agent to just pay with stablecoins and get the information on its own. He also said something that resonated: “I love interacting with just the CLI now. Every time I have to go to the web, it feels like some sort of failure.”

On the podcast I mentioned that people are moving away from MCPs toward skills and CLIs as the natural interface for agents and humans alike. This is already happening – Denis Yarats, cofounder and CTO of Perplexity, said today that internally at Perplexity they are moving away from MCPs and instead using APIs and CLIs.

The marginal cost of intelligence is dropping toward zero. SaaS faces an existential threat. And if software engineering costs go to zero, you cannot charge more for the security audit than the code cost to write.


Data Is the Only Moat 🔗

Software goes to zero. Data is the only asset that becomes timeless. That is why I started OnDB.

I spent my career in cybersecurity – Comae Technologies (acquired by Magnet Forensics), CloudVolumes (acquired by VMware). I watched software costs collapse and realized data is the only durable asset in the AI economy.

OnDB is like OpenRouter for data providers. AI agents are only as useful as the data they can access, and right now every agent-to-data-provider integration is bespoke, fragile, and unverified. No standard plumbing.

On the podcast, I described the levels of data access for an AI agent. OnDB makes the third level work at scale.

```mermaid
graph TD
    A[AI Agent] --> L1[Level 1: Model Knowledge]
    A --> L2[Level 2: Web Search]
    A --> L3[Level 3: Private Data via OnDB]

    L1 -->|"Months old, no live data"| L1X[Limited]
    L2 -->|"Public internet, noisy, unstructured"| L2X[Better but insufficient]
    L3 -->|"APIs, databases, subscriptions -- verified, paid per query"| L3X[Actual valuable data]

    style L3 fill:#2d6a4f,color:#fff
    style L3X fill:#2d6a4f,color:#fff
```

OnDB uses the x402 protocol for native pay-per-access using USDC stablecoins. No subscriptions, no API key management. It auto-generates verified skills.md files from provider endpoints – safe by design, not hand-written docs that drift. The top skill on ClawHub turned out to be malware; enterprises will not just run anything they find online.
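
The pay-per-access flow itself is simple: the server answers an unpaid request with HTTP 402 plus its payment requirements; the client settles the quote and retries with a payment proof. Here is a minimal sketch of the client side – the header names and the `pay` callback are simplified assumptions for illustration, not the full x402 specification:

```python
from typing import Callable

def fetch_with_x402(get: Callable[[str, dict], tuple[int, dict, bytes]],
                    pay: Callable[[dict], str],
                    url: str) -> bytes:
    """Fetch a possibly-paid resource via an x402-style flow.

    `get(url, headers)` returns (status, response_headers, body);
    `pay(requirements)` settles the quoted amount (e.g. in USDC)
    and returns a payment proof. Both are injected so the flow can
    be exercised against any transport or wallet implementation.
    """
    status, headers, body = get(url, {})
    if status != 402:
        return body  # free resource, or already authorized

    # 402 Payment Required: the response headers carry the quote.
    # (Header names here are simplified assumptions.)
    requirements = {"amount": headers.get("x-payment-amount"),
                    "asset": headers.get("x-payment-asset")}
    proof = pay(requirements)

    # Retry with the payment proof attached.
    status, headers, body = get(url, {"X-PAYMENT": proof})
    if status != 200:
        raise RuntimeError(f"payment not accepted: {status}")
    return body
```

Because the quote, payment, and retry all happen in-band, an agent never stops to create an account or manage an API key – exactly the human-in-the-loop step Joe was complaining about.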

Joe’s frustration with API keys is exactly the problem we solve. As he put it: “What I want is for the agent to just go there, pay with some stablecoins, and get the information on its own without this human in the loop.” That is what OnDB enables.


Everything Connects 🔗

A $20k drone did what no exploit ever could. Iran retaliated at a scale nobody predicted, despite billions spent on intelligence and AI-assisted targeting. An AI system helped build a target list that included an elementary school. A CEO is being punished for drawing the same red line that made Snowden a household name. And the US government designates an American AI company a supply chain risk while its own intelligence community keeps handing adversaries its tools.

These are not separate stories. They are the same story. We over-indexed on software – in warfare, in infrastructure, in threat modeling – and under-indexed on everything that actually matters: physical reality, human judgment, and the consequences of getting it wrong.

Software is going to zero. Data is the only durable moat. And if your data center is within drone range of a hostile state, that is no longer a hypothetical. It is a line item.