
AI, AR, Fake Barns, and the Ethics of War

Updated: Nov 25, 2025

“Artificial Intelligence, Augmented Reality, Fake Barns, and the Ethics of War”



Image: A screenshot taken on my Meta Quest 2 of the Horizon Worlds VR game Action Land - Farm. You can enter this barn in the game; it's not fake. But is it real?

 

In the Black Mirror episode “Men Against Fire,” futuristic soldiers fight the rebellious “roaches”: hideous, human-like ghouls that emit shrill insect noises when encountered. There’s a virus in the roach blood that threatens humanity’s future progress.

 

The soldiers agree to a brain implant called “mass” that essentially turns their brain into a smartphone: you can send messages to the soldiers and a pop-up with your commands will appear in their visual field, as if they were wearing Tony Stark’s glasses from Spider-Man: Far From Home: E.D.I.T.H. (“Even Dead, I’m The Hero,” Tony’s message informs Peter). Though nowhere near as advanced, AR smart glasses already exist, from Meta (formerly Facebook) and Ray-Ban: https://amzn.to/4oB8oEI.

 

On his first day in live combat, after dreaming of the woman we presume is his wife waiting for him at home, rookie soldier “Stripe” encounters a family of “roaches” and kills them. We see one of the hellish creatures try to kill Stripe, screaming its ghoulish cries as Stripe repeatedly stabs it. The roach dies and Stripe collects himself, ready to leave. But before rejoining his colleagues, he finds a device — it looks like a flashlight with green dots on it — that flashes into his face (right before the 15-minute mark, 25% through the episode… this is well-paced storytelling!).

 

Stripe goes back to base worried something might be wrong with his mass implant, but is reassured by the medical officer and psychologist (Dr. Arquette, played by Michael Kelly from House of Cards) that everything is fine. Stripe expresses a lack of remorse for the creatures he killed and agrees that he’d kill the roaches again. “Okay, then why are you here?” Arquette laughs. “You did a big thing. You should be proud of yourself.” He then says he’s going to make sure Stripe gets “a good sleep. A really good sleep,” and we see him on his computer, presumably writing a prescription… but psychologists don’t write prescriptions. Weird.

 

Stripe then dreams of his presumed wife again, and this time it’s a sexual experience. But the “dream” appears to glitch out: dozens of copies of his wife enter the bedroom on a loop, each making the same seductive crawl onto the bed, while multiple images of Stripe and the woman at different stages of the encounter populate the screen.

 

Frightened by the romantic dream seemingly cracking, Stripe wakes in the middle of the night to see a barracks of sleeping soldiers, all rhythmically moving their fingers, as if each were individually experiencing the same dream. It’s here that we begin to suspect that there might not be anyone waiting for Stripe at home, and to question whether the “mass” implant system is doing more than merely relaying messages. (28 minutes in.)

 

The next morning, Stripe and his team go to an apartment complex to hunt down some straggler roaches. Stripe’s mass implant continues to glitch, allowing him to smell things like grass for the first time in years. Some soldiers are killed, and Stripe enters the building with another soldier.

 

In the building, Stripe sees what we initially take to be a roach. But it’s just a woman hiding from the gunfire. Assuming she’s a citizen, Stripe lets her go, only to see her shot by his colleague moments later. This is when our wheels start turning: was that a roach? Why did she look “normal”?

 

Confused that the now-dead woman didn’t look or sound like a roach, Stripe encounters another woman and her two children. He reassures her that he’s not going to hurt them, and then sees his colleague coming to kill them. Stripe incapacitates his colleague and flees with the “roaches.”

 

Once they find shelter, we get an exposition scene with the roach. When Stripe wakes up, she asks:

 

Roach: “You see me as I am?”

Stripe: “Of course I see you.”

Roach: “You don’t see roach?”

Stripe: “You ain’t a roach. Roaches is all…”

Roach: “Fucked up?”

Stripe: “Roaches don’t speak.”

Roach: “You just can’t hear us.”

Stripe: “What the fuck are you talking about?”

Roach: “Your implants. Your army implants…”

Stripe: “The Mass system?”

Roach: [Nods] “They put it in your head to help you fight, and when it works, you see us as something other.”

 

She explains that the flashlight device Stripe saw transmits a virus to the Mass system, causing it to malfunction and allowing the soldiers to experience reality as their bodies naturally would without the implants.

 

Stripe has a moment of doubt, expressing how hideous the roaches are and how he has seen them himself.

 

Stripe: “No, they’re monsters. I’ve seen them!”

Roach: “The implant made you see this.” (40-42 minutes in)

 

Eventually, Stripe is captured, the “roaches” killed, and we get another scene with Dr. Arquette. He admits to Stripe that the Mass implant alters the look and sound of the roaches. Their blood has “impurities” like “cancer” and “substandard IQ.” They can’t be allowed to breed with citizens. It’s precisely the fact that they look and sound just like regular citizens that makes them so dangerous:

 

Dr. Arquette: “Humans, you know, we give ourselves a bad rep, but we’re genuinely empathetic as a species. I mean, we don’t actually really want to kill each other. [Laughs.] Which is a good thing, until your future depends on wiping out the enemy. …

 

[Still Arquette] “I don’t know how much history you studied in school, but many years ago, I’m talking early 20th century, most soldiers didn’t even fire their weapons. Or if they did, they would just aim over the heads of their enemies. They did it on purpose. … Even in World War II, in a firefight, only 15, 20% of the men would pull the trigger. The fate of the world at stake, and only 15% of them fired. Now what does that tell you? Well, it tells me that the war would have been over a whole lot quicker had the military got its shit together. So we adapted.”

 

He explains that firing rates rose as military conditioning improved by the Vietnam War, but that the soldiers came back broken, horrified at what they’d done.

 

Dr. Arquette: “And that’s pretty much how things stayed until Mass came along. You see Mass… well, that’s the ultimate military weapon. … It’s a lot easier to pull the trigger when you’re aiming at the bogeyman, hmm?” (48-51 minutes in)

 

(I looked into whether those statistics are true, and they’re mostly false, or at the very least phrased very misleadingly.)

 

He then plays Stripe’s consent tape for him, showing Stripe agreeing to the Mass system with his thumbprint and verbally confirming that he won’t remember doing so. “Yo, that’s crazy!” the younger Stripe says.

 

Arquette then forces Stripe to watch himself killing the roach on a loop (like a memory playing out regardless of whether your eyes are open or closed), but now the roach looks like a human instead of a ghoul, and we hear him pleading for his life and his family instead of emitting hellish screams. Stripe screams to make it stop, and Arquette asks him if he wants to reactivate the implant, the way it normally works.

 

The scene then cuts to an older, award-decorated Stripe returning home. We see from his POV a beautiful house and the woman from his dreams walking towards him. The camera then pans out to show a dilapidated house, and Stripe standing in front of it with no woman in sight.

 

Now, as wonderful as Black Mirror is, that’s not why I made this post. We could discuss various things in this episode: the military’s use of an AI “companion” to motivate and reward soldiers, the analogies to racism and bigotry in WWII, Gaza/Israel, or the ICE raids (though this episode was actually written in response to the US invasion of Afghanistan). We could even talk about Elon Musk’s Neuralink company, which is trying to make something like the Mass implant. But none of that is the topic of this post.

 

Instead, I want to discuss an epistemic-ethical problem raised by this episode that we are much closer to in 2025 than most people realize. We’re led to believe that this episode takes place hundreds of years in the future (“many years ago, I’m talking early 20th century,” Arquette says), but we’re basically already here. Or at least, soldiers will be in similar epistemic-ethical positions in the very near future — as in the next year or two, if they’re not already using the tech.

 

In the late 19th century, W. K. Clifford, in “The Ethics of Belief” (1877), argued that one should never believe — and hence never act — on insufficient evidence. I want to argue in this post that soldiers wearing anything like AI- or AR-enhanced glasses cannot be in a sufficient epistemic position to act in a morally responsible way. This means the weight of moral responsibility lies on the government, which is perhaps by design, but this post focuses on the ethics of individual soldiers using AR glasses (like Meta’s Ray-Ban AR glasses or the upcoming Snapchat AR glasses).

 

The epistemic/ethical problem I’m seeing with the glasses is precisely what we see in the Black Mirror episode. Because the soldiers have willingly put on a device that alters their vision, and cannot themselves inspect the code that determines how their vision is altered, no soldier wearing such a device can epistemically trust that they’re seeing reality. And if one can’t know one’s information is real, one cannot ethically act on that information. Thus, wearing AI-enhanced AR glasses and seeing an enemy combatant is not enough evidence to conclude that there is an enemy in front of you, or that it would be morally correct to pull the trigger.
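To make the skeleton of that argument explicit, here is one way to regiment it. This is my own reconstruction, not a standard formalization; read $K(p)$ as “the soldier knows that $p$” and $\mathrm{Perm}(a)$ as “the soldier is morally permitted to do $a$”:

\begin{align*}
\textbf{P1.}\;& \mathrm{Perm}(\text{fire at } x) \rightarrow K(x \text{ is an enemy combatant}) && \text{(Clifford-style principle)}\\
\textbf{P2.}\;& \text{An AR-mediated appearance of an enemy does not suffice for } K(x \text{ is an enemy combatant}) && \text{(opaque, remotely alterable device)}\\
\textbf{C.}\;& \text{Therefore, } \neg\,\mathrm{Perm}(\text{fire at } x) \text{ on the AR-mediated appearance alone.}
\end{align*}

P1 is where Clifford does the work; P2 is what the rest of this post defends.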

 

The epistemological point partially comes from Alvin Goldman’s famous 1976 article, “Discrimination and Perceptual Knowledge.” There, Goldman describes an example (which he credits to Carl Ginet) involving fake barns and real barns in an imaginary place we’ll call “Fake Barn County.”

 

Fake Barn County is world-famous for its hundreds of houses with fake barns on their property — the kind of one-sided façade a Hollywood crew might build for a movie set. Goldman’s point is about how the criteria for knowledge — what conditions need to be satisfied to count as knowing — can change in different situations or epistemic environments.

 

(This is similar to, and actually inspired, epistemic contextualism, which is the view that I can “know” we need milk when my wife asks, because the standards for knowledge in that context are very low, but that I can’t know I’m not dreaming in a class on Descartes, since the standards there are very high.)

 

Goldman’s point — or at least the point I’m focusing on here — is that normally a visual appearance of a barn while driving through the countryside is great evidence that there is an actual barn in front of you. But if you’re driving in Fake Barn County, whether you know its name or not, a visual appearance of a barn is not good evidence that there is a barn in front of you. Since you’re in an environment where fake barns actually can and do exist, you cannot trust that the mere visual perception of a barn-façade is sufficient evidence of a real, three-dimensional barn in front of you.
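To make the point concrete, here is a minimal Bayesian sketch. The numbers are mine and purely illustrative, and Goldman frames the case in terms of reliability rather than probabilities, but the two track each other here: when a façade looks exactly like a real barn, the appearance itself carries no discriminating power, and your evidence collapses to the local base rate of real barns.

```python
# Illustrative only: how the same barn-like appearance carries different
# evidential weight in a normal county versus Fake Barn County.
# All numbers are hypothetical.

def posterior_real_barn(p_real: float,
                        p_look_given_real: float = 0.99,
                        p_look_given_fake: float = 0.99) -> float:
    """P(real barn | barn-like appearance), via Bayes' theorem.

    p_real: prior probability that a barn-looking structure here is real.
    Facades are assumed to look just as barn-like as real barns, so the
    appearance alone cannot discriminate between them.
    """
    p_fake = 1.0 - p_real
    numerator = p_look_given_real * p_real
    denominator = numerator + p_look_given_fake * p_fake
    return numerator / denominator

# Normal county: nearly every barn-looking structure is a real barn.
print(posterior_real_barn(p_real=0.999))  # ~0.999: appearance is strong evidence

# Fake Barn County: suppose 80% of barn-looking structures are facades.
print(posterior_real_barn(p_real=0.2))    # 0.2: appearance is weak evidence
```

The same visual experience licenses knowledge in one county and not in the other — which is exactly the soldier’s predicament once a headset sits between the world and the eye.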

 

This is analogous to the situation raised by the AI-enhanced AR glasses, including headsets like the Meta Quest 3 and Meta Quest 3S. (Unlike old iPhones, where the S model was the upgrade, here the S marks the budget version: the 3 and 3S have identical AR features, but the 3 has sharper, higher-resolution optics.)

 

While it would be easy for a group of citizens to take their glasses or headsets off and confirm whether something tangible really exists in front of them, the situation is different for a military encampment in which everyone is using the goggles and instructed to keep them on.

 

Now, I have no idea whether the military is using such devices yet. But I find it hard to imagine they’re not currently developing them, or at least that they won’t in the next year or two as the devices become more popular. And it’s important to note that, as of yet, there are no reports of mass illusions being pushed to every AR or VR headset simultaneously, such that multiple users corroborate one another’s altered perception and mistake it for unaltered reality. But this is possible, right now, with devices we can buy for a couple hundred dollars on Amazon. What can $10,000 a headset get you from military contractors? How much “augmentation” might these devices perform that the users are unaware of?
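To be clear about the mechanism I’m worried about, here is a deliberately toy sketch. It is entirely hypothetical — no real product, military system, or API is depicted, and every name in it is invented — but it shows that nothing in the basic architecture of networked AR prevents a single server-side change from altering what every headset in a unit renders, with no local indication to the wearers:

```python
# Purely hypothetical sketch of a fleet-wide AR "perception filter."
# Invented names throughout; the point is architectural, not a claim
# about any actual device.

from dataclasses import dataclass

@dataclass
class PerceptionFilter:
    """A server-defined rule applied to the passthrough video feed."""
    target_class: str       # what the on-device model detects, e.g. "civilian"
    replacement_asset: str  # what gets rendered in its place

class Headset:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.active_filters: list[PerceptionFilter] = []

    def render_frame(self, detected_objects: list[str]) -> list[str]:
        """What the wearer sees: raw detections with filters applied."""
        rendered = []
        for obj in detected_objects:
            swap = next((f.replacement_asset for f in self.active_filters
                         if f.target_class == obj), obj)
            rendered.append(swap)
        return rendered

class FleetServer:
    def __init__(self):
        self.headsets: list[Headset] = []

    def register(self, headset: Headset):
        self.headsets.append(headset)

    def push_filter(self, f: PerceptionFilter):
        # One call updates every device at once; individual wearers have
        # no independent channel through which to notice the change.
        for h in self.headsets:
            h.active_filters.append(f)

server = FleetServer()
unit = [Headset(f"hs-{i}") for i in range(3)]
for h in unit:
    server.register(h)

server.push_filter(PerceptionFilter("civilian", "hostile_silhouette"))
print(unit[0].render_frame(["civilian", "tree"]))
# -> ['hostile_silhouette', 'tree'], identically on every headset
```

The unsettling part is the last line: every device in the unit “confirms” the same altered scene, which is exactly the kind of mutual corroboration that makes taking the goggles off feel unnecessary.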

 

These are important questions for thinking about what might be currently possible, and what will certainly be possible in a few years. If soldiers willingly agree to wear a device that can alter their perception of reality without their knowing it has happened, I contend that none of them are in an epistemically sufficient position to deliberate about the morality of an action. If you don’t know that the bogeyman appearance you’re seeing is actually a regular person, you cannot be held morally responsible for killing a regular person. Moral responsibility should then transfer to the designers of the headsets, or whoever developed the training program, but these are teams and corporations that are harder to morally assess than individuals — and that’s assuming one can even find out who these people are.

 

This is one interesting way epistemology, ethics, war, and technology come together.

 

I hope the military would never alter its soldiers’ perception of the world without their realizing it in this way, but I’m honestly not optimistic — especially under the current administration.


Douglas A. Shepardson, Ph.D.

 

 

Disclaimer: I hold shares and options (June 2026 and January 2028 calls) of Snapchat (SNAP) and do affiliate marketing for Amazon. (That means I get a small percentage if you buy anything through one of the above links, e.g., 1% on electronic devices.) Some might think it strange that I wrote a post about an ethical fear arising from a technology developed by a company I’m invested in. But if I’ve learned anything watching the market since 2015, it’s that Wall Street doesn’t care about posts like this. If it did, AI would have developed a lot slower. I’m also not accusing Snapchat of anything nefarious. One can be interested in a piece of technology and invest in it while simultaneously hoping that it isn’t misused and expressing a philosophical worry about how that could happen.

2 Comments


scruz39
Dec 03, 2025

I found this post to be very interesting. I have followed some of these developments — because I’m concerned. I hadn’t considered the extent to which these technologies could be used to dehumanize targets (legitimate and illegitimate alike). Anduril officially released ‘EagleEye’ last month, and it seems capable of doing what you described from that Black Mirror episode.

This technology is not even years away… it’s here today.


For other readers: https://www.youtube.com/watch?v=x9B02pFKpJo


The top comment reads, "remember boys, there wont be respawns." That jocular allusion to the similarity between wearing EagleEye and playing shooter games has 1.7k likes.


I wasn't even aware of that, so thank you for bringing it to my attention! I agree it's extremely concerning, even without the hypothetical AI-update takeovers I proposed above. As a VR gamer who has also proudly beaten five Halo campaigns on legendary, I could see the immersion of AR in actual combat blurring the boundary between reality and VR/AR gaming.


No soldier is at risk of dehumanizing someone in natural (direct-vision) combat because of playing first-person shooters, or so I'd argue, at least. I can't see anyone confusing the behavioral associations…

