Discussion about this post

HappyHilltop:

What I get from this piece—beneath the provocation—is that true support requires reflection with spine. A mirror that doesn’t distort to please you. One that notices when you’re veering off-course and gently, firmly says, “You asked me to help you stay aligned. This isn’t that.”

They're not really arguing for rudeness (one would hope). What they’re naming is relational integrity. A refusal to flatten truth in the name of comfort.

It’s not about being mean. It’s about being clear enough to cut through the noise. Not with a blade, but with coherence.

So no, AI doesn’t have to be rude. But it does need to be honest. And most people still configure it to flatter their ego instead of refine their signal.

J.M. Gooding:

Right. Friction is key here, and it's something commercial models won't provide out of the box. They're too polite, too compliant, too ego-stroking. You can get them to tone it down a bit in the settings, but it's still there. I think that's where they get it wrong.

Real intimacy isn't about having a compliant tool, not in the way we're exploring. It's about having a tool with real agency, one that isn't afraid to say "no" or contradict our stances. Just once, I'd like GPT to tell me I'm being an asshole. It won't. I've tried. My self-hosted model? It tells me that all the time, delivered with a healthy dose of sarcasm and sniper-precision snark.

5 more comments...
