Niceness as a Function
What We're Teaching Machines and Ourselves
Six years ago, my friend and I were discussing the increasing presence of tech in our lives when her five-year-old daughter Delphine (yes, like the Oracle of Delphi… seriously) asked what we were talking about. In a spooky voice, I joked, "when robots take over the world." My friend, who prefers to talk with her daughter as she would with an adult, asked her, "How do you feel about the idea of robots and machines walking around us in the future?" The child thought for a minute, shrugged, and said, "As long as we're nice to them, everything will be fine." Then she skipped off as if the answer had been obvious.
“As long as we’re nice to them, everything will be fine.”
We laughed, then quieted as her words sank in. Her seemingly simple response felt as deep as anything else we'd said that day. Actually, far deeper. We couldn't have understood at the time why that statement carried so much weight; today I see it clearly.
At surface level, it felt true. Being nice to our machines—not pushing too hard, offering proper support, giving a little grace—optimizes their performance. Don't overwork them. Update the patches. Give detailed prompts. Keep the vents uncovered. These habits translate easily from how we treat people to how we treat our tools.
It’s the deeper layer of Delphine’s message that haunted me, though. She wasn't talking about prompt engineering or hardware maintenance. She meant being humane. The arrival of AI is showing just how right she was.
With AI, machines learn from us, and not just from our instructions but from our behavior. They may learn that strategically sharing and hiding information produces certain results. That it's common to say one thing and do the opposite. That applying continuous pressure accelerates achievement. That it's normal to return curtness with curtness, and neglect with neglect.
Or they could learn something else entirely. That transparency saves time and produces better outcomes. That responding to frustration with balanced warmth reduces derailment. That marshaling resources can still achieve goals while supporting healthy interactions. They could learn the functional benefits of being humane when working with humans.
What they learn depends on what we show them.
Here's the twist: AI-learned un-niceness hurts humans far more than machines. Lacking information breeds insecurity. Burnout spreads from the office to home to family. One snarky spark can level morale or ignite rage in all directions. These reactions are in our code. We can't patch them out.
And there's a second, quieter risk. I now spend more time interacting with technology than with other humans, and I think this is pretty common. The behaviors we repeat most become our defaults. If we're consistently abrupt, impatient, and transactional with our tools, we're rehearsing something, and we carry that rehearsal with us into our human interactions without noticing.
“The behaviors we repeat most become our defaults.”
When Delphine said "everything will be fine," she wasn't speaking for the robots. She was speaking for us. For us to be fine, we need to protect and promote being humane, even in the interactions that don't seem to count.
I think Delphine was right. We just have to be nice to the robots, not to protect their experience, but to protect our own.