Should I Opt Out of Meta’s AI Training?

Opting out sounds easy. But what if it costs visibility—or influence? I’m sharing the tension, not the answer. Curious what others are doing.

What is Meta coming up with? What will they do with your data and mine?

I’m still not sure

Meta is now asking permission to use our public data to train its AI models. That alone is worth pausing over. For a company with a long history of taking rather than asking, this shift feels less like a moral turn and more like a legal necessity—probably tied to the upcoming EU AI Act.

But whatever the reason, the moment we’re in is real. They’re asking. You can opt out. And I’ve found myself circling around a single question: Should I?

(Screenshot: Instagram Help Center)

My hesitation is human

On the one hand, it seems like a small and obvious act of resistance. Why would I feed a system I don’t fully understand—especially one likely built for profit, not people?

But on the other hand, I hesitate. Not for principled reasons, but because I’m afraid.

What if opting out quietly hurts my reach on Instagram? What if it excludes me from something emerging—something I don’t even know I’m missing? Will my content be deprioritised? Will my presence be weakened?

These are selfish fears, maybe. But they’re not irrational ones. Because Meta doesn’t tell us what the consequences of opting out actually are. That opacity—intentional or not—turns a simple decision into a strange ethical gamble.

What exactly are we giving?

And still I wonder: what kind of contribution are we being asked to make?

Is this just about the content I post—photos, captions, stories—or is it also about the network I inhabit? Are they training AI not just on what I say, but on who I know, what I like, how I respond?

There’s a finality to this that makes it different from earlier data collection. Once your data has been used to train a model, it can’t be pulled back out. It’s baked into the neural fabric of the system. That makes this not just a technical or legal decision—it’s a cultural one. A form of surrender, or contribution, depending on how you look at it.

💡
Should I opt out? I wasn’t even sure what I’d be opting out of.
Meta’s vague language didn’t clarify what “AI training” really meant.
But I realised: once your data is absorbed, you can’t take it back. So I opted out.

The ethics aren’t abstract

What’s frustrating is that I don’t know what Meta plans to do with the AI it trains on us. I’d like to think it might bring something good into the world—more accessible tools, more creative collaboration, better interfaces. But if I’m honest, I suspect the gains will be theirs, and the risks will be ours.

And this matters, because social platforms aren’t neutral pipes. They’re cultural spaces. What we create, share, celebrate, and discover on them shapes the tone of our time. If those spaces become dominated by synthetic content—optimised for engagement, but devoid of origin—we lose something. Not everything. But something real.

And if they’re using our authentic contributions to build that synthetic future, shouldn’t we at least know what that trade-off looks like?

This is me thinking out loud

I’m writing this because I don’t have a clear answer.

I haven’t (yet) opted out. But I might. I’ve been moving between suspicion and curiosity, frustration and optimism. I’d rather not become cynical. I’d rather stay engaged. But I don’t want to drift into passive complicity either.

So I’m asking:
Have you opted out? Will you?
Do you know more than I do?
Can we still have this conversation—before the training is done and the inputs are locked forever?

Let’s talk. Not to panic. Not to point fingers.
Just to stay human in the loop.

In the meantime, I opted out.