The U.S. Deflects Blame for Killing ~165 Iranian Schoolgirls — 40 Years Ago, They Also Blamed Radar for Murdering 290 Iranian Civilians
The technology changes every few decades, but the impunity never does — and neither does Washington's refusal to apologize.
On February 28, 2026, a U.S. missile struck the Shajareh Tayyebeh girls’ primary school in Minab, Iran, while classes were underway. According to the United Nations Office of the High Commissioner for Human Rights, at least 165 girls between the ages of 7 and 12 were killed and large parts of the building were destroyed.
When the White House was asked whether U.S. forces hit the school, press secretary Karoline Leavitt replied: “Not that we know of.” She then added, firmly, “The United States of America does not target civilians.”
They never do. Not officially. And that is precisely the point.
If you read this far, you already know why this kind of history matters. Paid subscriptions are what make it possible for me to keep doing it.
Less than 5% of readers are paid subscribers. Please consider joining; it costs about as much as a cup of coffee.
The Pentagon’s Favorite Excuse Has an Algorithm Now
The Pentagon’s initial explanation for the Minab strike followed a familiar script: the school was near a military complex, a “pattern of life” analysis had flagged the area as a high-threat zone, and the targeting system identified a suspicious signature at the site.
Reporting from Al Jazeera’s Digital Investigations Unit shows that satellite imagery available since 2016 confirmed the Shajareh Tayyebeh school operated as an independent civilian facility with no access from military checkpoints. The imagery shows playground equipment, sports fields, and brightly colored murals on the walls.
None of that was classified information. And yet the school was struck. The nearby clinic, also on the same block as the military complex, was not — a detail the New Lines Institute noted as evidence that the attackers were capable of precise target discrimination.
According to the Associated Press, whose analysis was based on satellite imagery, expert assessments, and a U.S. official who spoke anonymously, the strike was likely carried out by the United States. Three independent experts told the AP that the damage pattern — strikes clustered within the walled compound, direct hits on buildings, no craters in the surrounding neighborhood — was consistent with multiple simultaneous air-to-surface munitions. Researcher Corey Scher, who uses satellite imagery to study landscape changes in conflict zones, said: “All the strikes are clustered within the walled-off compound. That’s one level of precision at the block level. And then most of the strikes are basically leading to direct hits on buildings.”
The strikes were precise enough to bypass a clinic, but somehow we are meant to believe they were not precise enough to avoid a school full of children. Legal analysis from the New Lines Institute concludes that the attack violated the foundational principles of international humanitarian law: distinction, proportionality, and military necessity. According to U.N. human rights experts, schools are expressly protected under treaty and customary international humanitarian law, and intentionally targeting educational buildings that are not military objectives constitutes a war crime under Article 8 of the Rome Statute. U.N. Secretary-General António Guterres said as much during an emergency Security Council meeting, stating that the U.S.-Israeli airstrikes violated international law.
The Machine That Chose the Target
Responsible Statecraft reports that the U.S. military used AI targeting tools to strike over 1,000 targets in Iran during the first 24 hours of the war. Homeland Security Today reports that the Pentagon deployed Anthropic’s Claude large language model as part of its operational support infrastructure during the Iran strikes. Claude is central to Palantir’s Maven Smart System, which provided real-time targeting for military operations against Iran — proposing hundreds of targets, prioritizing them in order of importance, and providing location coordinates, helping the U.S. carry out attacks at a speed and scale not seen since the opening hours of the Iraq War in 2003.
This is what “pattern of life” targeting looks like at scale. The Guardian writes that AI in this conflict was “identifying and prioritising targets, recommending weaponry and evaluating legal grounds for a strike.” One Israeli intelligence officer quoted by the Guardian in the context of Gaza — where the same targeting logic was applied — described spending 20 seconds assessing each AI-generated target: “I had zero added-value as a human, apart from being a stamp of approval.”
A stamp of approval. That is the sum total of human oversight in a system that struck over a thousand targets in a single day.
According to Farzin Nadimi, a senior fellow at the Washington Institute for Near East Policy who studies Iran’s military, the most likely explanation for the school strike is that U.S. forces “detected and tracked” activity in the area but “weren’t aware or didn’t have an up-to-date database that a girls’ school was there.” In other words: the algorithm flagged the zone, a human spent who-knows-how-long reviewing it, and the missile was fired.
Peter Asaro, associate professor of media studies at The New School and vice chair of the Stop Killer Robots campaign, told Japan Times: “You can rapidly produce long lists of targets much faster than humans can do it by automating that process. The ethical and legal question is: To what degree are those humans actually reviewing the specific targets that have been listed, verifying their legality and their value militarily before authorizing?” Brianna Rosen, a senior fellow at Just Security and the University of Oxford, says “Even with a human fully in the loop, there’s significant civilian harm because the human reviews of machine decisions are essentially perfunctory.”
There is one more detail that deserves its own paragraph. According to Responsible Statecraft, the operation that deployed Claude in Iran came one day after the U.S. government formally declared Anthropic a supply chain risk and a national security concern, and President Trump directed the government to cease working with the firm. The Pentagon used the AI anyway, and Claude won’t be phased out until the DoD finds a replacement. The political directive to stop using the technology was simply overridden by the operational reality that the military had become dependent on it. As Homeland Security Today reported, “AI dependence has become structurally embedded in American military operations, transcending political directives at the speed of war.”
The machine chose the target. The human stamped it. The girls died. And the company that built the machine had already told the Pentagon it didn’t want it used this way.
They’ve Been Doing This Since 1988
On July 3, 1988, the USS Vincennes shot down Iran Air Flight 655 over the Strait of Hormuz. Records compiled by the Iran Chamber Society show that all 290 people on board were killed, including 66 children. The ship was equipped with the most advanced radar system in the world. It still managed to “see” a commercial Airbus A300 as a military fighter jet.
The Navy’s own data tapes show that Flight 655 was squawking the correct civilian transponder code, climbing steadily, and communicating in English with air traffic control seconds before the missiles were fired. According to the Iran Chamber Society account, officers aboard the Vincennes had simply convinced themselves otherwise, consistently misreporting the plane’s transponder signal as military-coded. A warning that the contact might be a commercial aircraft was acknowledged by the commanding officer and, as the record shows, essentially ignored. The USS Sides, a nearby vessel, had already identified the aircraft as non-hostile and turned its attention elsewhere only seconds before the Vincennes launched its missiles.
A 1990 Washington Post report, cited by the Iran Chamber Society, says that the Legion of Merit was presented to Captain Rogers and Commander Lustig for their performance that day. Commander Lustig also received the Navy Commendation Medal for “heroic achievement,” praised specifically for his “ability to maintain poise and confidence under fire.” Neither citation so much as acknowledged Iran Air Flight 655.
One month after the shootdown, Vice President George H.W. Bush offered the country his philosophical position on the matter, as quoted in Newsweek: “I will never apologize for the United States of America, ever. I don’t care what the facts are.”
The Algorithm Was Always the Point
By the time the Obama administration formalized the “signature strike” program, the U.S. had spent two decades perfecting the art of killing by inference. According to classified military documents obtained and published by The Intercept in 2015, signature strikes targeted people not because of who they were, but because of what they appeared to be doing — metadata, behavioral patterns, proximity to known locations, threat scores assigned by analysts who often did not even know their targets’ names, referring to them instead as “selectors.”
According to those same documents, during one five-month stretch of Operation Haymaker in Afghanistan, nearly 90 percent of people killed in airstrikes were not the intended targets. The military classified all of them as “enemies killed in action” regardless. The internal Pentagon review underlying those documents stated plainly that kill operations “significantly reduce the intelligence available” — meaning the U.S. kept eliminating people who might have provided useful information, because execution was easier than certainty, and certainty required actually knowing who someone was.
The whistleblower who provided the documents to The Intercept said that the internal view within special operations was that targets “have no rights. They have no dignity. They have no humanity to themselves. They’re just a ‘selector’ to an analyst.”
The Minab strike is not a malfunction of this system. It is the system working as designed. An algorithm flagged a signature. Someone authorized the strike. The United Nations counted 165 dead girls.
The Doctrine of Never Being Sorry
The thread connecting 1988 to 2026 is not technology. It is impunity.
When the U.S. shot down Iran Air Flight 655, it eventually settled with Iran in 1996 for $61.8 million in compensation. According to Iran Chamber Society, the payment came with an explicit clause: it did not constitute an admission of legal liability or responsibility. The men who carried out the strike had already been decorated. When the Obama-era drone program killed hundreds of unintended targets, they were retroactively classified as combatants and the program moved on. The Intercept’s source states that official government statements minimizing civilian casualties were “exaggerating at best, if not outright lies.”
The machine is new. The response is not. In 1988 the technology was a radar. In 2015 it was a targeting algorithm. In 2026 it is a large language model. In every case, when the technology kills the wrong people, the U.S. investigates, offers condolences, pays no legal liability, and decorates the operators.
In a system where the entire architecture of targeting — from the Aegis radar in 1988 to the pattern-of-life algorithms in 2026 — is designed to convert human beings into data points and those data points into acceptable losses, apology is structurally impossible.
You cannot apologize to someone you decided, in advance, was not really a person.
🚨🚨🚨WAIT🚨🚨🚨
Did you enjoy this article? Here’s how you can help me make more 👇🏾
I’m a 25-year-old full-time creator, and I publish around 5–7 articles a week. As an independent journalist, I am not beholden to the motives or messaging of any donors or sponsors. At the same time, unlike most of the largest creators on this platform, I have vowed to keep every single article I publish completely free for anyone to read.
In order of impact, you can support me by:
Becoming a Paid Subscriber
Restacking this article with a note about why you enjoyed it
Sharing this article with a friend, family member, or group chat
And of course, liking this post
Right now, less than 5% of my followers are paid subscribers.
I’m going to be direct: You might think someone else will step up to contribute to this page, but that’s exactly what everyone else is thinking too. This only works if each one of us contributes what we can, which is just the cost of a coffee each month.
If you’re having trouble upgrading your subscription with the above link, visit historycanthide.substack.com/subscribe
Addition: Some of you preferred a one-time donation over a full subscription. To do that you can “Tip” me on Venmo (TheGenZHistorian) or Cashapp ($kahlilgreene00).
Sources
Kelley, Susanna. “The Legal Implications of Targeting a Girls’ Elementary School in Iran.” New Lines Institute, March 4, 2026.
“UN Experts Strongly Condemn Deadly Missile Strike on Girls’ School in Iran, Call for Independent Investigation.” Office of the High Commissioner for Human Rights, United Nations, March 6, 2026.
Reynolds, James. “White House Sheds Light on Whether US Strike Hit Girls’ School in Iran.” The Independent, March 4, 2026.
Ghasemi, Shapour. “Shooting Down Iran Air Flight 655 [IR655].” Iran Chamber Society, 2004.
Scahill, Jeremy. “The Assassination Complex.” The Intercept, October 15, 2015.
Mencini, Damian. “Blast from the Past: Using History to Shape Targeted Strikes Policy.” Georgetown Security Studies Review, June 10, 2014.
Kinsley, Michael. “Rally Round the Flag, Boys.” TIME, September 12, 1988.
Frankel, Julia, and Michael Biesecker. “Evidence Suggests the Deadly Blast at an Iranian School Was Likely a US Airstrike.” Washington Post, March 6, 2026.
Lemieux, Frédéric. “Algorithmic Warfare in the Iran Conflict: Operation Epic Fury and Dawn of the AI Battlefield.” Homeland Security Today, March 6, 2026.
Pabst, Stavroula. “US Used ‘Claude’ to Strike Over 1,000 Targets in First 24 Hours of War.” Responsible Statecraft, March 5, 2026.
“The Guardian View on AI in War: The Iran Conflict Shows That the Paradigm Shift Has Already Begun.” The Guardian, March 6, 2026.