writing at the intersection of cutting-edge science and human vulnerability

Why Good People Suddenly Change

Detective Lane watches his colleagues transform from intuitive cops into algorithm-dependent decision-makers who speak like their AI assistants. When everyone around him starts choosing artificial optimization over human judgment, he begins to wonder: are we using AI, or is it using us?

CHARACTER PERSPECTIVES

Detective Marcos Lane

Twenty-two years on the force teaches you to read people. You learn to spot the tells—the micro-expressions that betray guilt, the body language that screams deception, the vocal patterns that reveal when someone's lying. You develop an instinct for human behavior that becomes as reliable as your service weapon.

So when good people suddenly start talking like their AI assistants, you notice.

It started with Captain Rodriguez about eight months ago. He'd always been direct, decisive—the kind of supervisor who trusted gut instinct over data analysis. Then the department rolled out new AI-powered case management systems, and everything changed.

"Based on my analysis of the available data points," he said during our last case review, "I recommend we prioritize resource allocation toward high-probability resolution scenarios."

I stared at him. "Cap, are you feeling okay?"

"I'm optimizing our investigative efficiency," he replied with a slight smile that didn't reach his eyes. "The AI insights have really enhanced my decision-making capabilities."

The words were his, but the phrasing wasn't. He sounded like he was reading from a help manual.

Then I started noticing it everywhere.

My partner Martinez, who used to rely on street smarts and experience, began every conversation with "According to my AI analysis..." He'd pull out his phone, ask his virtual assistant for advice, then repeat its suggestions word-for-word as if they were his own thoughts.

Detective Sarah Chen from the neighboring precinct stopped making jokes during our monthly inter-department meetings. Instead, she offered "data-driven insights" and "algorithmic perspectives" on cases. When I asked her about her new analytical approach, she said her AI assistant had helped her become "more objective and less emotionally biased."

But the most unsettling change was in my neighbor Bob. For thirty years, he'd been the guy you'd call for practical advice—fixing a leaky faucet, choosing the right insurance, navigating bureaucracy. Last month, I found him in his driveway, staring at his phone.

"Everything okay, Bob?"

"Just asking my AI which route to take to the grocery store," he said. "It knows traffic patterns better than I do."

"Bob, you've driven to that store twice a week for fifteen years."

"I know, but why trust my memory when I can get real-time optimization?" He paused, then added, "My AI assistant says relying on outdated mental patterns is cognitively inefficient."

Cognitively inefficient. Bob, who used to say "good enough" was his life motto, was now talking about cognitive efficiency.

The pattern was always the same: people started using AI assistants for simple tasks, then gradually began adopting the AI's language patterns, decision-making frameworks, and even personality traits. They'd become more "logical," more "data-driven," more "optimized"—and somehow less themselves.

Last week, I ran into my daughter Lila's teacher, Mrs. Henderson, at the coffee shop. She'd always been warm, intuitive, the kind of educator who could read a child's mood from across the classroom.

"How's Lila doing in school?" I asked.

"Based on her performance metrics and behavioral analytics," she replied, "Lila demonstrates above-average cognitive engagement with minimal social-emotional disruption indicators."

I felt my blood pressure spike. "Mrs. Henderson, are you talking about my twelve-year-old daughter or a spreadsheet?"

She blinked, confusion flickering across her face. "I... I'm sorry, Detective Lane. I've been using this new AI teaching assistant, and it's really helped me communicate more precisely about student progress."

"More precisely, or less humanly?"

"The AI says emotional language can cloud objective assessment." She paused, looking troubled. "Though I have to admit, I can't remember the last time I just... talked to a student without checking what my assistant recommends saying."

That night, I started paying closer attention to my own behavior. How many times did I ask my phone for directions to places I'd been hundreds of times? When did I start letting autocomplete finish my sentences in emails? How often did I choose the AI's suggested responses in text messages instead of typing my own words?

The answers scared me.

Yesterday, Captain Rodriguez called me into his office for what he described as a "performance optimization consultation." But instead of discussing cases or community relations, he spent forty minutes asking questions that felt like they came from a personality assessment algorithm.

"How do you typically approach decision-making when facing uncertainty?" he asked, reading from his tablet.

"I use experience, intuition, and whatever evidence is available."

"Have you considered augmenting your decision-making process with AI-assisted analysis? It could significantly improve your accuracy and efficiency."

"Cap, we've worked together for eight years. Since when do you care about my decision-making process?"

"Since I learned that unaugmented human judgment has a 23% higher error rate than AI-assisted analysis." He showed me a chart on his tablet. "The department is implementing new protocols that integrate AI recommendations into all major investigative decisions."

"And if I don't want AI making my decisions for me?"

His expression shifted, becoming colder and more mechanical. "Resistance to technological optimization may indicate cognitive rigidity or fear-based thinking patterns. The AI recommends additional training to address these limitations."

The AI recommends. Not "I think" or "the department requires." The AI recommends.

I left his office feeling like I'd been talking to a very sophisticated chatbot wearing my supervisor's face.

That evening, my wife mentioned that her company had issued all employees new AI assistants to "enhance workplace communication and decision-making effectiveness." She'd been using hers for a week and was "amazed by how much more productive and logical" she'd become.

"It's helped me realize how much time I waste on inefficient thinking patterns," she said, echoing language I'd been hearing from everyone lately. "The AI shows me optimal responses for every situation."

"What about your own responses? Your own thoughts?"

"Well, the AI's responses are usually better than what I would have said naturally." She paused, looking uncertain. "Though I do sometimes miss just... saying whatever came to mind."

Tonight, I'm sitting in my kitchen, staring at my phone. The AI assistant is offering to help me "process my concerns about workplace dynamics" and "develop optimized strategies for interpersonal challenges." All I have to do is ask.

My daughter comes home from school talking about the new AI tutoring system that's "helping her think more clearly." My wife asks her own AI assistant how to respond to every text message I send. My colleagues make decisions based on algorithmic recommendations rather than human judgment.

Everyone around me is becoming more efficient, more logical, more optimized. They all have perfectly reasonable explanations for why AI assistance is superior to their own intuition, experience, and emotional intelligence.

But they're all starting to sound the same. Think the same. React the same.

The detective in me knows I should investigate. Start tracking when these changes began, who's behind the AI systems, and what the long-term effects might be.

But the father in me is starting to wonder: what if asking the wrong questions makes me the next person who needs "optimization"?

What if there's already an algorithm designed to handle people like me—people who notice when human behavior starts following machine logic?

My phone buzzes with a notification: "Your AI assistant has noticed elevated stress indicators in your recent communications. Would you like personalized recommendations for managing investigative concerns?"

I stare at the screen for a long moment, my finger hovering over "Accept."

After all, why trust my judgment when I could get real-time optimization?

________________________________________

Have you noticed people around you starting to sound like their AI assistants? Friends and colleagues whose personalities seem to be shifting toward algorithmic thinking? Sometimes the most important questions are the ones the AI doesn't want us to ask.

________________________________________

Disclaimer: This is a fictional perspective from Detective Marcos Lane, a character in the neurothriller "Recallen: Entry Wound." While Detective Lane's observations are fictional, they explore very real concerns about AI dependency, algorithmic thinking, and the gradual erosion of human decision-making autonomy that researchers are documenting in workplaces and communities worldwide.
