Locked Out by X
One post on X triggered a bot check, a failed challenge, and a platform label. I documented the process—because this could lock out anyone.

On a regular Monday morning, just after posting a fairly innocent message on X, I was suddenly locked out of my account. No warning, no flagged content—just a message saying “We found some unusual activity on your account.” I was asked to prove I was human before I could continue.
This is what happened next.
A Human Verification Challenge That Almost Got Me
The system prompted me to “Pass a challenge.” That was the only instruction I got. No indication of how long it would take or what kind of challenge it would be.
The challenge itself was strange. It used a rotating orbital interface and asked me to tap on matching icons. The icons weren’t just abstract—they were difficult to identify, constantly moving, and visually inconsistent. It was unclear when the test started or ended, and I failed the first time.
The X challenge to prove I am not a bot. I had to do five of these, moving icons into the indicated orbits.
When I finally passed it on the second attempt, I expected to be let back in. Instead, I landed on a white screen with no way forward. No confirmation, no dashboard—just two links: Help (which sent me to a generic support page), and Log out.
At that point, I genuinely thought I might be permanently locked out. Who do you call? There’s no support desk. No human fallback. You're on your own, staring at a dead end.
So I logged out. Then I had to log back in—and since I use two-factor authentication, that added yet another layer to an already fragile moment.
Only after all that was I back in.
But it didn’t end there.
Then Came the Label
After proving I was human, I regained access. But not full trust.
I received another message saying my account had been “labelled” for possible platform manipulation. Terms like spam, artificial amplification, and disruptive behaviour were mentioned, none of which applied. It was vague and unsettling. I was informed that my reach might be limited and my posts excluded from replies, trends, and recommendations.
I hit the “Request review” button, half-expecting it to disappear into a void.
Surprisingly, a few minutes later, I got a message: the label had been removed.
No further explanation. No clarification on what triggered the label in the first place. But at least the account was back to normal.

What Does This Say About Platform Identity?
I write a lot about identity and social platforms, so this hit a nerve. These platforms are increasingly shaping how we show up in public space—and also how we get excluded from it. When your account is locked or labelled, you’re instantly treated as suspicious. The burden of proof shifts to you, and the process for restoring trust is unclear at best.
What if I hadn’t passed the challenge? What if I were visually impaired, relied on assistive tech, or didn’t speak English fluently? The test was abstract, awkward, and certainly not inclusive.
This is more than a UX issue. It’s about power and access in digital environments.

Why I’m Sharing This
I got back in. Many won’t.
And I think it’s important to talk about that. If you're building a professional identity, a public presence, or even just trying to participate, these kinds of opaque interventions can feel destabilising. Worse, they can push people out of the conversation entirely—without explanation, recourse, or fairness.
Social media platforms like X need to do better at explaining their enforcement actions. Transparency isn’t optional when your platform is where people work, speak, organise, and connect.
I’m sharing this so others can see how this works under the hood—and maybe be a little more prepared than I was.
