Product Update · 6 min read

Why We Moved From OpenAI to Anthropic as Our Default AI Vendor

In March 2026, the Pentagon banned Anthropic from its approved vendor list.

Not because Anthropic had done something wrong. Because Anthropic had refused to allow its models to be used for mass surveillance and fully autonomous weapons systems. The Department of Defense needed a vendor willing to cross those lines. Anthropic said no. OpenAI said yes.

That decision – and the public reporting around it – is what triggered our switch. We moved to Anthropic because of exactly the thing that got them banned.

The background: how we got here

The groundwork for this moment was laid in January 2024, when OpenAI quietly updated its usage policies. Until then, OpenAI's policies page specified that the company did not allow the use of its models for "activity that has high risk of physical harm," including weapons development and "military and warfare." OpenAI removed the specific reference to the military, although its policy still states that users should not "use our service to harm yourself or others," including to "develop or use weapons."

The revised policy replaced that blanket prohibition with a more permissive framework, one that explicitly carved out space for national security and defence work in partnership with the US government. At the time, we noted the change. We did not act immediately.

The March 2026 news was the confirmation we had been watching for. OpenAI did not merely permit dual-use applications in principle – they actively stepped in to fill the gap left by a company that refused to cross ethical lines. That told us everything we needed to know about how these two organisations would behave when the pressure was real, not hypothetical.

Why being banned by the Pentagon is a signal, not a problem

There is a version of this story where you read "the Pentagon banned Anthropic" and conclude that Anthropic is unreliable or too restrictive for commercial use. We read it the opposite way.

Anthropic drew a line at mass surveillance and fully autonomous weapons. When pushed, they held it – at real commercial cost. That is not a governance document or a responsible AI policy PDF. That is evidence of values under pressure.

For Companion, that distinction matters enormously. Our users share things with us they do not share with anyone else. The AI infrastructure sitting under those conversations has to be built by people who, when it counts, will say no to the wrong use.

What Companion does – and why vendor values matter

Companion is a mental fitness tool. We are not building weapons. We are not running surveillance infrastructure. But the question of whose values sit underneath our product is not abstract – it shapes every architectural decision, every capability we adopt, every guardrail we build.

When we choose an AI vendor, we are choosing a partner whose ethical posture will be reflected in every conversation our users have. OpenAI's decision to become the Pentagon's AI vendor after Anthropic was removed for refusing that role is a clear signal about how they weigh commercial opportunity against ethical constraint.

We weigh those things differently. So does Anthropic – demonstrably, at cost.

What this means in practice

For users, the change is invisible. Companion's conversational quality has remained high: in several internal benchmarks, Claude outperforms GPT-4o on empathetic listening tasks and on maintaining coherent, supportive multi-turn dialogue, and response speed is comparable.
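
For readers curious how a vendor swap can be invisible at the product level: the usual pattern is to route every model call through a thin, provider-agnostic interface, so the application never talks to a vendor SDK directly. The sketch below is a minimal illustration of that pattern using the public anthropic and openai Python SDKs. The class names, model ids, and interface shape are our own assumptions for this post, not Companion's actual code.

```python
# Illustrative sketch only, not Companion's production code. Class names,
# model ids, and the interface shape are assumptions for this post.
from typing import Protocol

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the env
import openai     # pip install openai; reads OPENAI_API_KEY from the env


class ChatProvider(Protocol):
    """Anything that turns a system prompt plus chat history into a reply."""

    def reply(self, system: str, history: list[dict[str, str]]) -> str: ...


class AnthropicProvider:
    def __init__(self, model: str) -> None:
        self.model = model
        self.client = anthropic.Anthropic()

    def reply(self, system: str, history: list[dict[str, str]]) -> str:
        # history entries look like {"role": "user" | "assistant", "content": "..."}
        message = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            system=system,
            messages=history,
        )
        return message.content[0].text


class OpenAIProvider:
    def __init__(self, model: str) -> None:
        self.model = model
        self.client = openai.OpenAI()

    def reply(self, system: str, history: list[dict[str, str]]) -> str:
        # OpenAI expects the system prompt inline as the first message.
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "system", "content": system}, *history],
        )
        return response.choices[0].message.content


# Swapping the default vendor is then a configuration change, not a rewrite:
default_provider: ChatProvider = AnthropicProvider("claude-3-5-sonnet-20241022")
```

With a shape like this in place, changing vendors means changing one constructor call; prompts, history handling, and guardrails stay where they are.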

The transition also gave us an opportunity to refine our prompting architecture, which now builds on the HAICEF evaluation framework. We cover that in a separate post.

On transparency

We debated how much to say about this publicly. Vendor decisions are generally treated as operational details, not something companies announce.

We disagree. The people who use Companion have a right to know what infrastructure sits beneath their conversations, and what the values of that infrastructure provider look like when tested. The March 2026 events were a test. The outcomes were public. We think our users should know where we stand.

We moved to Anthropic because they got banned for doing the right thing.


If you have questions about this decision or our infrastructure choices, you can reach us at [email protected].

Try Companion for free

Your personal AI for mental fitness – available 24/7, in your language.

Get started free →
